\section{Introduction} \label{sec:intro} One of the fundamental issues in the study of genomes is their primary structure, that is, the distribution of nucleotides along DNA sequences. The identification of statistical patterns in the primary structure of DNA sequences has revealed several underlying patterns in genomes \cite{Cattani12,Li_92,Lobry96-2,sobottka&hart2011} and has enabled scientists to propose models for evolutionary pressures and mutational mechanisms that might act on organisms \cite{AlbrechtBuehler2006,hart&martinez&olmos2012,sobottka&hart2011}, as well as to construct bioinformatics tools. For example, in \cite{Felsenstein81}, a maximum likelihood approach was used to analyse DNA sequences in order to estimate evolutionary trees, while in \cite{Yu_et_al2000}, a measure of the long-range correlation between the nucleotide bases of DNA sequences was used to classify bacteria. In addition, strand compositional asymmetry (SCA) was used to detect replication origins in bacteria \cite{FrankLobry00}, \cite{Salzberg_et_al98} used interpolated Markov models to identify genes in bacteria, \cite{hart&martinez&videla2006} proposed a maximization model to describe the organization and distribution of genes in bacterial DNA, and \cite{martinez2016} presented a stationary stochastic process for modeling the placement of coding and non-coding regions within a genome that incorporates the phenomenon of start codons appearing within coding regions. The aim of this work is to provide a rigorous formalization of a stochastic concatenation model, presented in \cite{sobottka&hart2011}, for capturing the primary structure of bacterial DNA sequences. The model, henceforth referred to as the S-H model, allowed novel statistical symmetries in the mononucleotide and dinucleotide distributions of a collection of bacterial chromosomes to be observed. 
A key feature of the model is a persymmetric matrix of probabilities which plays a role in determining the nucleic acids seen along a DNA sequence. The persymmetric matrices constitute a special class of matrices which has been employed in models from various fields (see for example \cite{Nian1997, Nian&Chu1994,Nield1994}) and which has been widely studied (see for example \cite{Gutierrez2014,Huang&Cline1972,Reid1997,Xie&Sheng2003}). A genome is a duplex of DNA strands, each strand consisting of a sequence of nucleotides. The nucleotides are of four types: adenine ($A$), cytosine ($C$), guanine ($G$) and thymine ($T$). Of these types, adenine is complementary to thymine while cytosine is complementary to guanine. Each nucleotide on one DNA strand pairs with its complement on the opposite strand. This chemically induced pairing between the two strands causes the strands to assume a ladder-like arrangement which is then twisted to attain the famous helix. The chemical composition of DNA molecules endows a strand with an intrinsic reading direction: each strand can only be read in one direction by the genetic machinery of the cell. Furthermore, the way strands combine to form a duplex means that the two strands are read in opposite directions: they are said to be antiparallel. We shall identify each nucleotide type with a number in $N:=\{1,2,3,4\}$ ($A\equiv 1$, $C\equiv 2$, $G\equiv 3$ and $T\equiv 4$). Let $\alpha:N\rightarrow N$ be the involution which maps each nucleotide to its complement, that is, $\alpha(i)=5-i$. The S-H model is a concatenation model which has at its core a first-order Markov chain whose one-step transition matrix $P=\bigl(P_{ij}\bigr)_{i,j\in N}$ is derived from a positive parameter $m$ and a positive persymmetric matrix $\A=\bigl(L_{ij}\bigr)_{i,j\in N}$: \begin{equation} \label{p.form} P_{ij}=\frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}, \end{equation} where $M_1=M_4:=m/(2m+2)$ and $M_2=M_3:=1/(2m+2)$. 
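To make the construction concrete, here is a minimal sketch (ours, not from the paper) of \eqref{p.form} in plain Python. Indices are 0-based, so the complement involution $\alpha(i)=5-i$ becomes $i\mapsto 3-i$, and the example matrix $L_{ij}=1+i+\alpha(j)$ is an arbitrary positive persymmetric choice used only for illustration:

```python
# Minimal sketch (ours): build P of eq. (p.form) from a positive persymmetric
# matrix L and a parameter m > 0.  0-based indices: alpha(i) = 5 - i becomes 3 - i.

def is_persymmetric(L, tol=1e-12):
    """L_ij = L_{alpha(j) alpha(i)} for all i, j (0-based: L[i][j] == L[3-j][3-i])."""
    return all(abs(L[i][j] - L[3 - j][3 - i]) < tol
               for i in range(4) for j in range(4))

def transition_matrix(L, m):
    """P_ij = L_ij M_j / sum_k L_ik M_k, with M = (m, 1, 1, m)/(2m + 2)."""
    M = [m / (2 * m + 2), 1 / (2 * m + 2), 1 / (2 * m + 2), m / (2 * m + 2)]
    P = []
    for i in range(4):
        row = [L[i][j] * M[j] for j in range(4)]
        s = sum(row)
        P.append([r / s for r in row])
    return P

# Example: L[i][j] = 1 + i + (3 - j) is positive and persymmetric, since
# i + (3 - j) is unchanged by the substitution (i, j) -> (3 - j, 3 - i).
L = [[1 + i + (3 - j) for j in range(4)] for i in range(4)]
P = transition_matrix(L, 2.0)
```

Each row of $P$ sums to one by construction, and the normalizing constant $2m+2$ in $M$ cancels row-wise, so only the relative weights $(m,1,1,m)$ matter.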
The Markov chain governs how the DNA sequence grows in both directions from an initial nucleotide called the origin by appending nucleotides in three steps. {\bf Step 1.}\ A nucleotide of type $j$ is randomly selected with probability $M_j$. {\bf Step 2.}\ With probability $1/2$, the nucleotide tries to join the end (consonant with the DNA reading direction) or the beginning (contrary to the reading direction) of the sequence. {\bf Step 3.}\ In the first case, the nucleotide is appended to the sequence with probability $L_{ij}$, where $i$ is the type of the last nucleotide in the sequence; in the second, the nucleotide is prepended to the sequence with probability $L_{\alpha(k)\alpha(j)}$, where $k$ is the type of the initial nucleotide. This scheme is illustrated in Figure \ref{fig:nucleotide_aggregation}. Provided nucleotides accumulate evenly at the ends of the DNA strand, after a long time one would obtain (with probability~$1$) a sequence with the initial nucleotide at its midpoint. One half would be generated by the stationary Markov chain $(P,\pi)$, where the transition matrix~$P$ is given by~\eqref{p.form} and the chain's stationary distribution~$\pi$ is the left eigenvector of~$P$ corresponding to the eigenvalue~$1$, normalised to sum to~$1$. The other half would have distribution given by the stationary Markov chain $(\tilde P,\tilde \pi)$, where $\tilde \pi_i=\pi_{\alpha(i)}$ and $\tilde P_{ij}=\frac{\pi_{\alpha(j)}}{\pi_{\alpha(i)}}P_{\alpha(j)\alpha(i)}$, for $i,j\in N$. The model is consistent with the observation reported by geneticists that bacterial DNA sequences are usually composed of two distinct segments called chirochores (see \cite{FrankLobry00}). Furthermore, if one estimates the transition matrices~$\tilde P$ and~$P$ for the segments prior to and following the origin nucleotide respectively, one usually finds that $\tilde P_{ij}\approx\frac{\pi_{\alpha(j)}}{\pi_{\alpha(i)}}P_{\alpha(j)\alpha(i)}$ (see \cite{sobottka&hart2011}). 
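The relation between $(P,\pi)$ and the reverse complement chain $(\tilde P,\tilde\pi)$ can be checked numerically. The following sketch (ours; plain Python, 0-based indices with $\alpha(i)$ rendered as $3-i$, and an arbitrary persymmetric example matrix $L$) computes $\pi$ by power iteration and builds $\tilde P$:

```python
# Sketch (ours): compute the stationary distribution pi of P by power iteration,
# then form the reverse-complement chain Ptilde_ij = pi_{a(j)} P_{a(j)a(i)} / pi_{a(i)}
# with a(i) = 3 - i (the 0-based complement involution).

def make_P(L, m):
    """Row-normalize L weighted by M proportional to (m, 1, 1, m)."""
    M = [m, 1.0, 1.0, m]          # the factor 1/(2m+2) cancels row-wise
    P = []
    for i in range(4):
        row = [L[i][j] * M[j] for j in range(4)]
        s = sum(row)
        P.append([r / s for r in row])
    return P

def stationary(P, iters=5000):
    """Left eigenvector of P for eigenvalue 1, normalized to sum to 1."""
    pi = [0.25] * 4
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
    return pi

L = [[1 + i + (3 - j) for j in range(4)] for i in range(4)]   # persymmetric example
P = make_P(L, 2.0)
pi = stationary(P)
a = lambda i: 3 - i
Pt = [[pi[a(j)] * P[a(j)][a(i)] / pi[a(i)] for j in range(4)] for i in range(4)]
pit = [pi[a(i)] for i in range(4)]
```

Each row of $\tilde P$ sums to one precisely because $\pi$ is stationary for $P$, and $\tilde\pi$ is then stationary for $\tilde P$; both facts can be confirmed from the arrays above.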
\begin{figure}[!ht] \centering \includegraphics[width=.7\linewidth]{nucleotide_aggregation_2.eps} \caption{A schematic presentation of the S-H model for constructing bacterial DNA sequences. Assuming the reading sense of the sequence is from left to right, a new nucleotide of type $C$ is selected with probability $1/(2m+2)$ and is appended to the end of the sequence with probability $L_{32}$, while a nucleotide of type $T$ is selected with probability $m/(2m+2)$ and will be attached to the beginning of the sequence with probability $L_{\alpha(3)\alpha(4)}$. The final DNA sequence obtained is the concatenation of two Markovian processes: one starting at position zero and extending to the right, whose estimated transition matrix is $P$; and the other terminating at zero, whose estimated transition matrix is $\tilde P$.}\label{fig:nucleotide_aggregation} \end{figure} The paper is organized as follows. Section~\ref{sec:interp} discusses the probabilistic interpretation of the form~\eqref{p.form} of the matrix~$P$ in greater depth than~\cite{sobottka&hart2011}. Two different probabilistic constructions are presented, the first of which provides the justification for the description of DNA sequence growth given above. Section~\ref{sec:aleph.generated} introduces the set of $\aleph$-generated matrices as matrices of the form \eqref{p.form}, where $\aleph$ is the set of positive persymmetric matrices, and establishes several algebraic characterizations of such matrices. The non-uniqueness of the persymmetric matrix~$\A$ and positive parameter~$m$ that define an $\aleph$-generated matrix is then considered in Section~\ref{sec:families}, where a couple of equivalence relations on~$\aleph$ are introduced. This leads to an examination of various properties of $\aleph$-generated matrices as used in the S-H model in Section~\ref{sec:properties}. 
Finally, we discuss some measures for determining how closely a DNA sequence conforms to the S-H model and make concluding remarks in Section~\ref{sec:conclusion}. \section{Probabilistic interpretation of~$P$} \label{sec:interp} In~\cite{sobottka&hart2011}, a formal description of the way nucleotides are appended to a DNA sequence using the persymmetric matrix $\A$ and the parameter $m$ was presented, but the explicit connection with stochastic matrices of the form~\eqref{p.form} was left for the reader to deduce. Here, we more rigorously discuss how the form \eqref{p.form} of the stochastic matrix~$P$ arises from the DNA-sequence growth mechanism described above. In addition, we shall present an alternative probabilistic interpretation of the growth mechanism. \subsection{Interpretation} To begin, consider the growth of a DNA sequence whose initial nucleotide is taken to be of type~$i$. Let $(\beta_t,\ t\geq0)$ be a Bernoulli scheme on~$N$ with common distribution $M=(M_1, M_2, M_3, M_4)=(m, 1, 1, m)\big/(2m+2)$, that is, an independent and identically distributed sequence of random variables on $N$ with $\beta_t\sim M$. Consider two coupled stochastic processes $(V_t,\ t\geq0)$, which evolves on the state space~$N$, and $(W_t,\ t\geq0)$, which is a Bernoulli $\{0,1\}$-process where $W_t$ is~$1$ with probability $L_{V_t\beta_t}$ (that is, $W_t \sim \textrm{B}\left(L_{V_t\beta_t}\right)$). By setting $V_0:=i$ as the type of the initial nucleotide from which the DNA sequence grows, the process $(V_t,\ t\geq0)$ evolves as a deterministic function of $(\beta_t,\ t\geq0)$ and $(W_t,\ t\geq0)$ as follows: \begin{equation*} V_{t+1} := \beta_tW_t + V_t(1-W_t) = \begin{cases} \beta_t, & \text{if } W_t=1 \\ V_t, & \text{if } W_t=0 \end{cases}\qquad \forall t\geq 0. 
\end{equation*} Note that, while $V_t$ denotes the type of the last nucleotide appended to the sequence, $\beta_t$ corresponds to the mechanism responsible for proposing the type, say~$j$, of the next nucleotide to concatenate to the sequence, and $W_t$ corresponds to the mechanism responsible for accepting or rejecting the new nucleotide in the sequence. If $\beta_t=j$, then~$j$ is accepted as the type of the next nucleotide provided that $W_t=1$, in which case $V_{t+1}$ is set to~$j$. Otherwise, the nucleotide of type~$j$ is discarded and no nucleotide is appended. In that case, $V_{t+1}$ takes the value of~$V_t$. In this way,~$t$ counts the number of nucleotides proposed rather than the length of the DNA sequence while the number of acceptances, given by $\sum_{u=1}^tW_u$, is one less than the length of the DNA sequence, since it doesn't count the initial nucleotide. For all $i\in N$ and $t\geq0$, we define \begin{align*} \gamma_i &:= \Pr(W_t=1 \given V_t=i) = \sum_{j\in N} \Pr(W_t=1, \beta_t=j \given V_t=i) \\ &= \sum_{j\in N} \Pr(W_t=1\given \beta_t=j, V_t=i) \Pr(\beta_t=j \given V_t=i) \\ &= \sum_{j\in N} \Pr(W_t=1\given \beta_t=j, V_t=i) \Pr(\beta_t=j) = \sum_{j\in N} L_{ij}M_j. \end{align*} Next, define a sequence $(\tau_s,\ s\geq0)$ of stopping times by $\tau_0:=0$ and $$ \tau_{s+1} := \min\left\{t>\tau_s \suchthat W_{t-1}=1\right\}. $$ The $\tau_s$'s mark the nucleotide type proposals that were accepted. By construction, they constitute a series of renewal times. Note that $(V_t,\ t\geq0)$ is a discrete step function which transitions to a new nucleotide whenever $t\in\{\tau_s,\ s\geq0\}$. More precisely, for all $s\geq0$, $V_t=V_{\tau_s}$ for $t=\tau_s, \tau_s+1, \ldots, \tau_{s+1}-1$. Let $i\in N$ and $w\in\{0,1\}$. The random variable $\beta_t$ is independent of $W_u$ for $u<t$ and the distribution of $W_t$ is completely determined by the value of $\beta_t$ and $V_t$. 
Consequently, the event $\{W_t=w\}$ is conditionally independent of $\{W_u=0\}$ for all $u<t$ given $V_t=i$. For $i\in N$ and $t> u\geq0$, we have \begin{align*} \Pr(W_t=1, W_{t-1}=0, \ldots, W_u=0 \given V_u=i) &= \Pr(W_t=1 \given W_{t-1}=0, \ldots, W_u=0, V_u=i) \cdot \\ &\qquad \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \\ &= \Pr(W_t=1 \given V_t=i, W_{t-1}=0, \ldots, W_u=0, V_u=i) \cdot \\ & \qquad \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \\ &= \Pr(W_t=1 \given V_t=i) \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \\ &= \gamma_i \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \end{align*} and $$ \Pr(W_t=0, \ldots, W_u=0 \given V_u=i) = (1-\gamma_i)\Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) . $$ Hence, for $s\geq0$, $t\geq1$ and $i\in N$, we obtain \begin{align*} \Pr(\tau_{s+1}-\tau_s=t \given V_{\tau_s}=i) &= \Pr(W_{\tau_s+t-1}=1, W_{\tau_s+t-2}=0, \ldots, W_{\tau_s}=0 \given V_{\tau_s}=i) \\ &= \Pr(W_{\tau_s+t-2}=0, \ldots, W_{\tau_s}=0 \given V_{\tau_s}=i) \gamma_i \\ &= \Pr(W_{\tau_s+t-3}=0, \ldots, W_{\tau_s}=0 \given V_{\tau_s}=i) (1-\gamma_i)\gamma_i \\ &= \cdots \\ &= (1-\gamma_i)^{t-1}\gamma_i. \end{align*} Conditional on $V_{\tau_s}=i$, $\tau_{s+1}-\tau_s$ is thus a geometric random variable taking values on the positive integers: $$ \tau_{s+1}-\tau_s \given V_{\tau_s}=i \sim \textrm{geom}\bigl( \gamma_i\bigr), \quad s\geq0,\ i\in N. $$ Observe that the distribution of $\tau_{s+1}-\tau_s$ is completely determined by the value of $V_{\tau_s}$ and is independent of any events prior to $\tau_s$ if $V_{\tau_s}$ is given. Furthermore, $\tau_{s+1}-\tau_s \given V_{\tau_s}=i$ is identically distributed as $\tau_1 \given V_0=i$, for all $s>0$. Next, define the process $(U_s,\ s\geq0)$ by $U_s := V_{\tau_s}$. Suppose that $V_{\tau_s}=i$ for some fixed $s\geq0$. Then $V_{\tau_{s+1}}$ is determined by $\beta_{\tau_{s+1}-1}$ and $V_{\tau_s}=\beta_{\tau_s-1}$, which are independent of all $\beta_t$, $V_t$ and $W_t$ for all~$t$ prior to $\tau_s-1$. 
Consequently, $(U_s,\ s\geq0)$ has the Markov property: $$ \Pr(U_{s+1}=j \given U_s=i, U_{s-1}=i_1, \ldots, U_0=i_s) = \Pr(U_{s+1}=j \given U_s=i), $$ for all $i_1,i_2,\ldots, i_s\in N$ and $s\geq0$. Finally, since each $\tau_s$ essentially marks a point at which the process $\bigl((\beta_t, V_t, W_t),t\geq0\bigr)$ is restarted, we have $$ \Pr(U_{s+1}=j \given U_s=i) = \Pr(V_{\tau_{s+1}}=j \given V_{\tau_s}=i) = \Pr(V_{\tau_1}=j \given V_{\tau_0}=i) = \Pr(U_1=j \given U_0=i)=:P_{ij}, $$ for all $s\geq0$. Therefore, $(U_s,\ s\geq0)$ is a time-homogeneous Markov chain on the finite state space~$N$. The following theorem gives the form of the one-step transition matrix $P=\bigl( P_{ij} \bigr)_{i,j\in N}$ in terms of~$\A$ and~$M$. \begin{theo} \label{thm:interp1} The one-step transition matrix $P=\bigl(P_{ij}\bigr)_{i,j\in N}$ of the Markov chain $(U_s,\ s\geq0)$ is given by $$ P_{ij}:=\frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}. $$ \end{theo} \begin{proof} Let $\tau:=\tau_1$. Now, \begin{align} \nonumber P_{ij} &= \Pr( U_1=j \given U_0=i) \\ \nonumber &= \Pr(V_\tau=j \given V_0=i) \\ \nonumber &= \sum_{t=1}^\infty \Pr(V_t=j, \tau=t \given V_0=i) \\ \nonumber &= \sum_{t=1}^\infty \frac{\Pr(V_t=j, \tau=t \given V_0=i)}{\Pr(\tau=t \given V_0=i)}\Pr(\tau=t \given V_0=i) \\ \label{eqn:p.coin} &= \sum_{t=1}^\infty \frac{\Pr(V_t=j, \tau=t \given V_0=i)}{\sum_{k\in N} \Pr(V_t=k, \tau=t \given V_0=i)}\Pr(\tau=t \given V_0=i). 
\end{align} However, \begin{align*} \Pr(V_t=j, \tau=t & \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1, \tau=t \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1, W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1 \given V_0=i, W_u=0, u=1,\ldots,t-2) \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1 \given V_{t-1}=i) \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= \Pr(W_{t-1}=1 \given \beta_{t-1}=j, V_{t-1}=i) \Pr(\beta_{t-1}=j \given V_{t-1}=i) \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= L_{ij}M_j \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \end{align*} and substituting this into \eqref{eqn:p.coin} yields \begin{align*} P_{ij} &= \sum_{t=1}^\infty \frac{\Pr(V_t=j, \tau=t \given V_0=i)}{\sum_{k\in N} \Pr(V_t=k, \tau=t \given V_0=i)}\Pr(\tau=t \given V_0=i) \\ &= \sum_{t=1}^\infty \frac{L_{ij}M_j \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i)}{\sum_{k\in N} L_{ik}M_k \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i)} \Pr(\tau=t \given V_0=i) \\ &= \sum_{t=1}^\infty \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k} \Pr(\tau=t \given V_0=i) \\ &= \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}. \qedhere \end{align*} \end{proof} Clearly, the matrix~$P$ is invariant to rescaling~$\A$. The only effect of rescaling~$\A$ by some constant, say~$h$, is to multiply the mean $1/\gamma_i$ of the distribution of $\tau_{s+1}-\tau_s \given V_{\tau_s}=i$ by a factor of $1/h$. Of course, while such scaling preserves the persymmetry of~$\A$, it only makes sense if $0<h< \min\{1/\gamma_i \suchthat i\in N\}$. \subsection{Alternative interpretation} There is another way to represent how new nucleotides are added to a DNA sequence which provides an alternative derivation of the Markov chain on~$N$ with one-step transition matrix~$P$ of the form \eqref{p.form}. Let $(Y_s,\ s\geq0)$ be a Markov chain on the set of nucleotides~$N$ with transition matrix $K=(K_{ij})_{i,j\in N}$ given by $K_{ij}=L_{ij}\big/\sum_{k\in N}L_{ik}$. 
Thus, the one-step transition matrix of $(Y_s,\ s\geq0)$ is obtained by converting the positive persymmetric~$\A$ into a stochastic matrix by normalizing its rows to sum to unity. Next, let $(B_s,\ s\geq0)$ be a Bernoulli scheme on~$N$ with common distribution $M$. Since $(Y_s,\ s\geq0)$ is a positive recurrent Markov chain on the finite state space~$N$ and $(B_s,\ s\geq0)$ is an i.i.d. sequence also on~$N$ that is independent of $(Y_s,\ s\geq0)$, the joint process $\left(\bigl(Y_s,B_s\bigr),\ s\geq0\right)$ is a positive recurrent Markov chain on the state space $N\times N$ with one-step transition matrix $\left(R_{(i,k),(j,l)} \right)_{(i,k), (j,l)\in N^2}$ given by $R_{(i,k),(j,l)} = K_{ij}M_l$. We shall assume without loss of generality that $Y_0=B_0$. Define a sequence of stopping times $(T_s,\ s\geq0)$ by $T_0:=0$ and $$ T_{s+1}:=\min\{t>T_s+1 \suchthat Y_{t-1}=B_{t-1}=Y_{T_s} \text{ and } Y_t=B_t\}, $$ for $s\geq0$. By definition, $Y_{T_s}=B_{T_s}$ for all $s\geq0$ and $Y_{T_s-1}=B_{T_s-1}$ for all $s\geq1$. Observe that if $Y_{T_s}$ and $B_{T_s}$ are given, for example, $Y_{T_s}=B_{T_s}=i$, then \begin{align*} T_{s+1}-T_s &=\min\{t>T_s+1 \suchthat Y_{t-1}=B_{t-1}=Y_{T_s} \text{ and } Y_t=B_t\} - T_s \\ &=\min\{t>1 \suchthat Y_{t-1}=B_{t-1}=i \text{ and } Y_t=B_t\}. \end{align*} Thus, $T_{s+1}-T_s$ is independent of $T_s$ if $Y_{T_s}$ is given. Furthermore, $T_{s+1}-T_s \given Y_{T_s}=i$ has the same distribution as $T_1\given Y_0=i$. Thus, each $T_s$ is a renewal time at which the Markov chain $\left(\bigl(Y_s,B_s\bigr),\ s\geq0\right)$ is restarted. Next, define the stochastic process $(X_s,\ s\geq0)$ by $X_s:=Y_{T_s}$. Since $\left( \bigl(Y_s,B_s\bigr),\ s\geq0\right)$ is a Markov chain and $(T_s,\ s\geq0)$ is a sequence of stopping times at which it renews, one may employ the strong Markov property to deduce that $(X_s,\ s\geq0)$ is also a Markov chain. It only remains to compute its one-step transition matrix. 
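Before doing so analytically, the construction can be checked by a seeded simulation (ours, in plain Python with 0-based indices, an illustrative persymmetric $L$ and $m=2$, none of which come from the paper): the renewal times $T_s$ are detected on the fly and the empirical transition frequencies of $X_s=Y_{T_s}$ are compared with the matrix of \eqref{p.form}.

```python
# Seeded simulation sketch (ours): run the coupled chains (Y_s, B_s), detect the
# renewal times T_s (simultaneous return to the current state followed by a
# coincident jump), and tabulate the transitions of X_s = Y_{T_s}.
import random

random.seed(2)
m = 2.0
M = [m, 1.0, 1.0, m]
tot = sum(M)
M = [x / tot for x in M]                                      # (m,1,1,m)/(2m+2)
L = [[1 + i + (3 - j) for j in range(4)] for i in range(4)]   # persymmetric example
K = [[L[i][j] / sum(L[i]) for j in range(4)] for i in range(4)]
P = [[L[i][j] * M[j] / sum(L[i][k] * M[k] for k in range(4)) for j in range(4)]
     for i in range(4)]

Kcum = [[sum(K[i][:j + 1]) for j in range(4)] for i in range(4)]
Mcum = [sum(M[:j + 1]) for j in range(4)]

def draw(cum):
    """Inverse-CDF sampling from a cumulative distribution over {0,1,2,3}."""
    r = random.random()
    for k in range(4):
        if r < cum[k]:
            return k
    return 3

def next_X(i):
    """Advance (Y, B) from the renewal state (i, i) to the next renewal."""
    y, b, first = i, i, True
    while True:
        ynew, bnew = draw(Kcum[y]), draw(Mcum)
        # the definition requires t > T_s + 1, so skip the very first step
        if not first and y == i and b == i and ynew == bnew:
            return ynew
        y, b, first = ynew, bnew, False

counts = [[0] * 4 for _ in range(4)]
x = 0
for _ in range(6000):
    xn = next_X(x)
    counts[x][xn] += 1
    x = xn
freq = [[c / max(sum(row), 1) for c in row] for row in counts]
```

With the seed fixed, the entrywise deviation between `freq` and $P$ should lie within ordinary sampling error for a few thousand renewals, anticipating Theorem~\ref{thm:interp2}.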
\begin{theo} \label{thm:interp2} The Markov chain $(X_s,\ s\geq0)$ has one-step transition matrix $P=\left(P_{ij}\right)_{i,j\in N}$, where $$ P_{ij}:=\frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}. $$ \end{theo} \begin{proof} Fix $X_0=B_0=i$ and let $T:=T_1$. Then, \begin{align*} P_{ij} &= \Pr(X_1=j\given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_T=j, T=t \given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_t=j \given T=t, X_0=i)\Pr(T=t \given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_t=j, B_t=j \given Y_t=B_t, Y_{t-1}=i, B_{t-1}=i, Y_{t-2}\neq B_{t-2}, \ldots, Y_2\neq B_2, Y_1\neq B_1, Y_0=i, B_0=i) \cdot \\ & \quad \Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_t=j, B_t=j \given Y_t=B_t, Y_{t-1}=i)\Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \frac{\Pr(Y_t=j, B_t=j \given Y_{t-1}=i)}{\Pr(Y_t=B_t \given Y_{t-1}=i)}\Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \frac{\Pr(Y_t=j, B_t=j \given Y_{t-1}=i)}{\sum_{k\in N}\Pr(Y_t=k, B_t=k \given Y_{t-1}=i)}\Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \frac{K_{ij}M_j}{\sum_{k\in N} K_{ik}M_k}\Pr(T=t\given X_0=i) \\ &= \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k} \sum_{t=2}^\infty \Pr(T=t\given X_0=i) \\ &= \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}, \end{align*} since $$ \sum_{t=2}^\infty \Pr(T=t\given X_0=i)=1 $$ and \[ \Pr(Y_t=j, B_t=j \given Y_{t-1}=i) =\Pr(Y_t=j \given Y_{t-1}=i)\Pr(B_t=j) =K_{ij}M_j. \qedhere \] \end{proof} Thus, the mechanism by which nucleotides are appended to a DNA sequence according to a Markov chain with transition matrix~$P$ may also be described as follows. Suppose that the last nucleotide in the sequence is of type~$i$. Then, one simply waits until both the Markov chain $(Y_s)$ and the i.i.d. sequence $(B_s)$ simultaneously return to state~$i$ and both immediately jump to the same state, say~$j$. When such a consecutive pair of concordant events occurs, a nucleotide of type~$j$ is appended to the sequence. 
At this point, this scheme is repeated, but using~$j$ as the initial state, so that one waits for a coincident return of the two processes to state~$j$ followed by simultaneous transitions to a new state, say~$k$, and so on. The Markov chain $Y_s$ transitions from~$i$ to~$j$ with probability $K_{ij}$ while $B_s$ selects~$j$ with probability~$M_j$. In contrast to the original description given in~\cite{sobottka&hart2011} and in Section~\ref{sec:intro}, two nucleotides of types~$j$ and~$k$ are selected with probabilities~$M_j$ and~$K_{ik}$ respectively, and a nucleotide of type~$j$ is then appended to the end of the sequence if and only if $j=k$. In essence, the mechanism by which nucleotides are appended to the DNA sequence can be thought of as carrying out acceptance-rejection sampling, by repeatedly drawing independent sample nucleotides from the distributions $(K_{ij},\ j\in N)$ and~$M$ until they agree (assuming~$i$ is the type of the nucleotide at the end of the sequence). In this case, the number of draws needed in order to obtain a suitable nucleotide is a geometric random variable with mean $1\big/\sum_{j\in N}K_{ij}M_j$. The first interpretation also amounts to performing acceptance-rejection sampling, but with a two-step procedure in which a nucleotide type~$j$ is first proposed by sampling it from the distribution~$M$ and then is added to the DNA sequence according to an unfair coin toss with probability~$L_{ij}$. Finally, we note that if the matrix~$\A$ is rescaled so that $\sum_{i,j\in N}L_{ij}=1$, it admits the natural interpretation as the stationary dinucleotide probability distribution, that is, $$ L_{ij} = \Pr(Y_t=i, Y_{t+1}=j), \qquad i,j\in N,\ t\geq0. $$ As noted above, $\A$ remains persymmetric under this kind of rescaling. 
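The acceptance-rejection reading can itself be checked by a seeded simulation. In this sketch (ours; plain Python, 0-based indices, the same arbitrary persymmetric example $L$ and $m=2$ as before, none taken from the paper), pairs of draws from $(K_{ij},\ j\in N)$ and $M$ are repeated until they agree; the number of paired draws should average $1\big/\sum_j K_{ij}M_j$ and the accepted types should follow row~$i$ of~$P$:

```python
# Seeded sketch (ours): acceptance-rejection sampling of the next nucleotide.
import random

random.seed(3)
m = 2.0
M = [m, 1.0, 1.0, m]
tot = sum(M)
M = [x / tot for x in M]                                      # (m,1,1,m)/(2m+2)
L = [[1 + i + (3 - j) for j in range(4)] for i in range(4)]   # persymmetric example
K = [[L[i][j] / sum(L[i]) for j in range(4)] for i in range(4)]

def draw_next(i):
    """Draw from K[i] and from M until the two proposals coincide.
    Returns (accepted type, number of paired draws)."""
    n = 0
    while True:
        n += 1
        j1 = random.choices(range(4), weights=K[i])[0]
        j2 = random.choices(range(4), weights=M)[0]
        if j1 == j2:
            return j1, n

trials = [draw_next(0) for _ in range(20000)]
mean_draws = sum(n for _, n in trials) / len(trials)
p_hat = [sum(1 for j, _ in trials if j == k) / len(trials) for k in range(4)]

# exact values for comparison
agree = sum(K[0][j] * M[j] for j in range(4))
P_row0 = [K[0][j] * M[j] / agree for j in range(4)]
```

With this particular $L$ and $m$, $\sum_j K_{0j}M_j = 1/4$, so roughly four paired draws are needed on average before a nucleotide is accepted.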
\section{$\aleph$-generated matrices} \label{sec:aleph.generated} Let $\sS_4$ be the set of all $4\times4$ stochastic matrices, and let $\aleph$ be the cone of positive persymmetric matrices (matrices $\A=(L_{ij})_{i,j\in N}$ with positive entries such that $L_{ij}=L_{\alpha(j)\alpha(i)}$ for all $i,j\in N$). Given $P\in \sS_4$, we will say that $(P,\pi)$ is a stationary Markov chain if the vector $\pi=(\pi_i)_{i\in N}$ is such that $\pi P=\pi$. Let $\digamma:\aleph\times (0,+\infty)\to\sS_4$ be the map which takes $(\A,m)$ to the matrix $\digamma(\A,m)$ given for all $i,j\in N$ by \begin{equation} \label{digamma} \left(\digamma(\A,m)\right)_{ij} := \frac{L_{ij}M_j}{\sum_{k=1}^4 L_{ik}M_k}, \qquad \text{where}\qquad M_\ell=\left\{\begin{array}{ll} m/(2m+2), & \text{if } \ell=1,4, \\ 1/(2m+2), & \text{if } \ell=2,3. \end{array}\right. \end{equation} Since $\A$ is a positive matrix and $m>0$, the matrix $\digamma(\A,m)$ is primitive, that is, irreducible and aperiodic. \begin{defn} \label{defn:aleph.generated} We say that $P\in\sS_4$ is $\aleph$-generated if there exists $(\A,m)\in \aleph\times (0,+\infty)$ such that $P=\digamma(\A,m)$. 
\end{defn} Let $\Phi:\sS_4\times (0,+\infty)\times (0,+\infty)\to\aleph$ be the map defined for all stochastic matrices $P$, $\tilde{m}>0$ and $\tilde{s}>0$ by \begin{equation} \label{AlephEstimated} \Phi(P,\tilde{m},\tilde{s}):= \tilde{s} \begin{pmatrix} a^{11}_{_P}\kappa_{_P}/\tilde{m} & a^{12}_{_P} & 1 & \kappa_{_P}/\tilde{m} \\\\ a^{21}_{_P} & a^{22}_{_P}\epsilon_{_P} \tilde{m} & \epsilon_{_P} \tilde{m} & 1 \\\\ a^{21}_{_P}a^{42}_{_P} & a^{22}_{_P}\epsilon_{_P} a^{32}_{_P} \tilde{m} & a^{22}_{_P}\epsilon_{_P} \tilde{m} & a^{12}_{_P} \\\\ a^{11}_{_P}a^{41}_{_P}\kappa_{_P}/\tilde{m} & a^{21}_{_P}a^{42}_{_P} & a^{21}_{_P} & a^{11}_{_P}\kappa_{_P}/\tilde{m} \end{pmatrix}, \qquad\text{where}\qquad \begin{array}{lcl} a^{ij}_{_P} & := & P_{ij}/P_{i\alpha(j)};\\\\ \kappa_{_P} & := & P_{14}\big/P_{13};\\\\ \epsilon_{_P} & := & P_{23}\big/P_{24}. \end{array} \end{equation} From \eqref{digamma} it follows that if $P$ is an $\aleph$-generated matrix for some $\A=\bigl(L_{ij}\bigr)_{i,j\in N}\in\aleph$ and $m\in(0,+\infty)$, then the nine ratios that appear in \eqref{AlephEstimated} become: \begin{equation}\label{nine_ratios} a^{ij}_{_P} = L_{ij}/L_{i\alpha(j)},\qquad \kappa_{_P} = L_{14}m/L_{13} \qquad \text{and}\qquad \epsilon_{_P} = L_{23}/L_{24}m. \end{equation} \begin{theo} \label{digamma_inv} For any $\aleph$-generated matrix $P$, $$ \digamma^{-1}(P) = \left\{\bigl(\Phi(P,\tilde{m},\tilde{s}),\tilde{m}\bigr):\ \tilde{m}>0,\ \tilde{s}>0\right\}. $$ \end{theo} \begin{proof} Let $P=\digamma(\A,m)$ for some fixed $\A\in\aleph$ and $m>0$. Given $\tilde{\A}:=\Phi(P,\tilde{m},\tilde{s})$ for any choice of $\tilde{m},\tilde{s}>0$, it is straightforward to check that $\digamma(\tilde{\A},\tilde{m})=\digamma(\A,m)=P$. Therefore, $\left\{\big(\Phi(P,\tilde{m},\tilde{s}),\tilde{m}\big):\ \tilde{m}>0,\ \tilde{s}>0\right\}\subseteq \digamma^{-1}(P)$. On the other hand, suppose $\A'=(L'_{ij})_{i,j\in N}\in\aleph$ and $m'>0$ are such that $(\A',m')\in\digamma^{-1}(P)$. 
Note that, since $P=\digamma(\A,m)=\digamma(\A',m')$, it follows from \eqref{nine_ratios} that \begin{equation*} a^{ij}_{_P} = L_{ij}/L_{i\alpha(j)} = L'_{ij}/L'_{i\alpha(j)}, \qquad \kappa_{_P} = L_{14}m\big/L_{13} = L'_{14}m'\big/L'_{13}, \qquad \epsilon_{_P} = L_{23}\big/L_{24}m = L'_{23}\big/L'_{24}m'. \end{equation*} Hence, $\A'=\Phi(P,m',L'_{13})$ and so $\digamma^{-1}(P)\subseteq\left\{\bigl(\Phi(P,\tilde{m},\tilde{s}),\tilde{m}\bigr):\ \tilde{m}>0,\ \tilde{s}>0\right\}$, which completes the proof.\qedhere \end{proof} Since $\Phi$ is linear in $\tilde{s}$, instead of working with $\Phi$ we can work with the map $\varphi:\sS_4\times(0,+\infty)\to\aleph$ defined by \begin{equation}\label{varphi} \varphi(P,\tilde{m}):=\Phi(P,\tilde{m},1). \end{equation} Then, $\digamma^{-1}(P)=\left\{\bigl(\tilde s\varphi(P,\tilde{m}),\tilde{m}\bigr):\ \tilde{m}>0,\ \tilde{s}>0\right\}$. The next corollary is a simple consequence of \eqref{varphi} and Theorem~\ref{digamma_inv}. \begin{cor} \label{P-aleph1} A stochastic matrix $P$ is $\aleph$-generated if and only if $P$ is obtained by probability-normalizing the rows of the matrix $\A:=\varphi(P,1)$.\qed \end{cor} Given a vector $\mathbf{a}=(a_1,a_2,a_3,a_4)\in \R^4$, let $D(\mathbf{a})$ be the $4\times 4$ diagonal matrix with $\mathbf{a}$ on its diagonal. \begin{cor}\label{P-aleph2} A stochastic matrix $P$ is $\aleph$-generated if and only if there exists a strictly positive vector $\mathbf{x}=(x_i)_{i\in N}\in\R^4$ such that $D(\mathbf{x})P\in\aleph$. \end{cor} \begin{proof} Suppose that $P$ is $\aleph$-generated and define $\mathbf{x}$ to be the vector with elements given by $ x_i:=\sum_{k=1}^4\Bigl(\varphi(P,1)\Bigr)_{ik} $.\sloppy\ Then, from Corollary~\ref{P-aleph1} we have that $D(\mathbf{x})P=\varphi(P,1)\in\aleph$. 
Conversely, if $D(\mathbf{x})P=\A\in\aleph$ for some $\mathbf x\in\R^4$, then $P=\digamma(\A,1)$ and $\mathbf x$ contains the row sums of $\A$.\qedhere \end{proof} Note that given an $\aleph$-generated matrix $P$, there exist infinitely many vectors $\mathbf{x}$ that satisfy the stated property, all of which are collinear. Because of this, we can decide whether or not a stochastic matrix is $\aleph$-generated by setting $$ \mathbf{x}_P = \left(\frac{P_{j\alpha(i)}}{P_{i\alpha(j)}} \right)_{i\in N} = \frac1{\sum_{k=1}^4\bigl(\varphi(P,1)\bigr)_{jk}} \left( \sum_{k=1}^4\bigl(\varphi(P,1)\bigr)_{ik} \right)_{i\in N}, $$ for a fixed $j\in\{1,2,3,4\}$, and checking whether or not $D(\mathbf{x}_P)P$ belongs to $\aleph$. Observe that $\mathbf{x}_P$ is expressed in terms of elements of~$P$. In particular, we can choose \begin{equation*} \mathbf{x}_P=\left(\frac{P_{44}}{P_{11}},\frac{P_{43}}{P_{21}},\frac{P_{42}}{P_{31}},1\right). \end{equation*} \section{$\aleph$-families and generators} \label{sec:families} From the preceding discussion, it is evident that a given $\aleph$-generated stochastic matrix can be generated using any one of a multitude of persymmetric matrices. We proceed to examine this non-uniqueness in greater detail. \begin{defn} The $\aleph$-family of an $\aleph$-generated matrix $P$ is the set $$ \aleph(P):=\left\{\varphi(P,\tilde{m}):\ \tilde{m}>0\right\}. $$ The family of generators of an $\aleph$-generated matrix $P$ is the set $$ \aleph_G(P):=\left\{\bigl(\varphi(P,\tildem),\tildem\bigr):\ \tildem>0\right\}. $$ \end{defn} The import of the next theorem is that $\aleph$ can be partitioned into equivalence classes. Firstly, any persymmetric matrix can be used to generate a whole host of $\aleph$-generated matrices simply by varying the value of the parameter~$m$. 
Thus, there are families of persymmetric matrices that give rise to disjoint collections of $\aleph$-generated matrices and these families are mutually exclusive, partitioning the space~$\aleph$ into equivalence classes. Secondly, for each $\aleph$-generated matrix~$P$, there is a set of persymmetric matrices, each of which generates~$P$ when combined with the appropriate value of~$m$. This leads to an equivalence relation on the set $\aleph\times (0,\infty)$. \begin{theo} Suppose $P$ and $Q$ are two $\aleph$-generated matrices. Then: \begin{enumerate} \item Either $\aleph(P)\cap\aleph(Q)=\emptyset$ or $\aleph(P)=\aleph(Q)$. \item Either \begin{enumerate} \item $\aleph_G(P)\cap\aleph_G(Q)=\emptyset$ and $P\neq Q$; or \item $\aleph_G(P)=\aleph_G(Q)$ and $P=Q$. \end{enumerate} \end{enumerate} \end{theo} \begin{proof} \ \par\nobreak \begin{enumerate} \item Suppose $\aleph(P)\cap\aleph(Q)\neq\emptyset$ and choose an $\A=(L_{ij})_{i,j\in N}\in\aleph(P)\cap\aleph(Q)$. Let $m^{(1)},m^{(2)}\in(0,+\infty)$ be such that $P=\digamma(\A,m^{(1)})$ and $Q=\digamma(\A,m^{(2)})$. We begin by proving that $\aleph(P)\subseteq\aleph(Q)$. Let $\mathfrak{B}=(B_{ij})_{i,j\in N}\in\aleph(P)$ and let $m^{(3)}>0$ be such that $P=\digamma(\mathfrak{B},m^{(3)})$. Since $P$ and $Q$ can be generated by the same $\A$, they share the same ratios $a^{ij}_{\cdot}$ listed in~\eqref{nine_ratios}, that is, \begin{equation} \label{a-h} a^{ij}_{_P} = a^{ij}_{_Q} \end{equation} The other two ratios for $P$ will satisfy the following equalities: \begin{gather*} \kappa_{_P} := P_{14}/P_{13} = B_{14}m^{(3)}/B_{13} = L_{14}m^{(1)}/L_{13}, \\ \epsilon_{_P} := P_{23}/P_{24} = B_{23}/B_{24}m^{(3)} = L_{23}/L_{24}m^{(1)}, \end{gather*} which means that \begin{equation} \label{LB} B_{14}m^{(3)}\big/B_{13}m^{(1)} = L_{14}\big/L_{13},\qquad\text{and}\qquad B_{23}m^{(1)}\big/B_{24}m^{(3)} = L_{23}\big/L_{24}. 
\end{equation} On the other hand, the last two ratios for $Q$ are: \begin{equation} \label{k-e} \begin{gathered} \kappa_{_Q} := Q_{14}/Q_{13} = L_{14}m^{(2)}/L_{13} = B_{14}m^{(3)}m^{(2)}/B_{13}m^{(1)} = \frac{m^{(2)}}{m^{(1)}}\kappa_{_P} ,\\ \epsilon_{_Q} := Q_{23}/Q_{24} = L_{23}/L_{24}m^{(2)} = B_{23}m^{(1)}/B_{24}m^{(3)}m^{(2)} = \frac{m^{(1)}}{m^{(2)}}\epsilon_{_P}, \end{gathered} \end{equation} where the last equality in each line follows from \eqref{LB}. Setting $\tildem:= m^{(2)}m^{(3)}/m^{(1)}$, and taking \eqref{a-h} and \eqref{k-e} together with the last line in the proof of Theorem \ref{digamma_inv} yields $\varphi(Q,\tildem) =\varphi(P, m^{(3)}) = \mathfrak{B}$. Therefore, $\mathfrak{B}\in\aleph(Q)$ and so $\aleph(P)\subseteq\aleph(Q)$. Next, let $\mathfrak B\in\aleph(Q)$. By symmetry, another application of the above argument allows us to conclude that $\mathfrak B\in\aleph(P)$ and hence $\aleph(Q)\subseteq\aleph(P)$. Therefore, $\aleph(P)=\aleph(Q)$. \item By definition, either $\digamma^{-1}(P)=\digamma^{-1}(Q)$, in which case $P=Q$, or $\digamma^{-1}(P)\cap\digamma^{-1}(Q)=\emptyset$ and $P\neq Q$. Now, $\aleph_G(P) \subset \digamma^{-1}(P)$ since $\digamma^{-1}(P) = \left\{ \bigl(\tildes\A,\tildem\bigr) \suchthat \tildes>0,\ \bigl(\A,\tildem\bigr)\in\aleph_G(P) \right\}$, and the result follows. \qedhere \end{enumerate} \end{proof} \begin{defn} Given an $\aleph$-generated matrix $P$ and $\tilde{m}\in (0,+\infty)$, we define the {\em $\tilde{m}$-canonical representative} of $\aleph(P)$ to be the matrix $\A_{P,\tilde{m}}:=\varphi(P,\tilde{m}/\epsilon_{_P})$. \end{defn} Note that $\bigl(\A_{P,\tilde{m}}, \tilde{m}/\epsilon_{_P}\bigr)$ is a generator of $P$. Furthermore, from \eqref{nine_ratios}, if $P$ and $Q$ are two $\aleph$-generated matrices with $\aleph(P)=\aleph(Q)$, then $a^{ij}_{_P}=a^{ij}_{_Q}$, for all $i,j$, and $\kappa_{_P}\epsilon_{_P}=\kappa_{_Q}\epsilon_{_Q}$. 
This gives \begin{cor}\label{cor canonical_representative} Two $\aleph$-generated matrices belong to the same $\aleph$-family if and only if they have identical canonical representatives, that is, if $P$ and $Q$ are $\aleph$-generated, then $$ \aleph(P)=\aleph(Q)\qquad\Longleftrightarrow\qquad \A_{P,1}=\A_{Q,1}\quad \Longleftrightarrow \quad \A_{P,\tilde{m}}=\A_{Q,\tilde{m}}, \text{ for all } \tilde{m}>0. \qed $$ \end{cor} \section{Properties of $\aleph$-generated matrices} \label{sec:properties} Given the stationary Markov chain $(P,\pi)$, consider the following related stationary Markov chains: $(P^\alpha,\pi^\alpha)$ is the complement Markov chain of $(P,\pi)$, where $P_{ij}^\alpha := P_{\alpha(i)\alpha(j)}$ and $\pi^\alpha_i:=\pi_{\alpha(i)}$; $(P^*,\pi^*)$ denotes the reverse Markov chain of $(P,\pi)$, where $P_{ij}^*:=\pi_jP_{ji}\big/\pi_i$ and $\pi^*_i:=\pi_{i}$; and $(\tilde P,\tilde \pi)$ is the reverse complement Markov chain of $(P,\pi)$, where $\tilde P_{ij} = \pi_{\alpha(j)}P_{\alpha(j)\alpha(i)}\big/\pi_{\alpha(i)}$ and $\tilde\pi_i=\pi_{\alpha(i)}$. Note that $\tilde P=(P^\alpha)^*= (P^*)^\alpha$ and $\tilde \pi=(\pi^\alpha)^*=(\pi^*)^\alpha$. The names complement, reverse and reverse complement come from the genetics and Markov chain literature, where they denote analogous relationships between nucleotide sequences and between Markov chains, respectively. \begin{theo}\label{all_or_none} The matrices~$P$, $P^\alpha$, $P^*$ and $\tilde P$ are either all $\aleph$-generated or none of them are. \end{theo} \begin{proof} Assume $P$ is $\aleph$-generated and take $\A:=\varphi(P,1)=\bigl(L_{ij}\bigr)_{i,j\in N}$. Define $\A^\alpha=(L^\alpha_{ij})_{i,j\in N} \in \aleph$, where $L_{ij}^\alpha:=L_{\alpha(i)\alpha(j)}$.
Then, \begin{align*} P_{ij}^\alpha &= P_{\alpha(i)\alpha(j)} = \frac{L_{\alpha(i)\alpha(j)}}{\sum_{k=1}^4L_{\alpha(i)k}} = \frac{L_{ij}^\alpha}{\sum_{k=1}^4L_{ik}^\alpha}, \quad i,j\in N \end{align*} and $P^\alpha$ is $\aleph$-generated with $P^\alpha=\digamma\bigl(\A^\alpha, 1\bigr)$. To check that $P$ is $\aleph$-generated implies that $P^*$ is also $\aleph$-generated, it suffices by Corollary \ref{P-aleph2} to set $\ds \mathbf{x}_{P^*}=\left(\frac{P_{44}^*}{P_{11}^*},\frac{P_{43}^*}{P_{21}^*},\frac{P_{42}^*}{P_{31}^*},1\right)$ and prove that $D\bigl(\mathbf{x}_{P^*}\bigr)P^*\in\aleph$. In fact, $\left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{ij}=\left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{\alpha(j)\alpha(i)}$ because \begin{align*} \left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{ij} &= (\mathbf{x}_{P^*})_i P_{ij}^* = \frac{P_{4\alpha(i)}^*}{P_{i1}^*}P_{ij}^* = \frac{\frac{\pi_{\alpha(i)}}{\pi_4}\ P_{\alpha(i)4}}{\frac{\pi_1}{\pi_i}\ P_{1i}}\frac{\pi_j}{\pi_i}\ P_{ji} =\frac{\pi_{\alpha(i)}\pi_j}{\pi_1\pi_4}\frac{P_{\alpha(i)4}}{P_{1i}} P_{ji} \end{align*} and similarly $$ \left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{\alpha(j)\alpha(i)} = \frac{\pi_{j}\pi_{\alpha(i)}}{\pi_1\pi_4}\frac{P_{j4}}{P_{1\alpha(j)}} P_{\alpha(i)\alpha(j)}, $$ while \begin{align*} \frac{P_{\alpha(i)4}}{P_{1i}}P_{ji} &=\frac{L_{\alpha(i)4}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{L_{ji}}{L_{1i}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} =\frac{L_{\alpha(i)4}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{L_{\alpha(i)\alpha(j)}}{L_{\alpha(i)4}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} \\ &=\frac{L_{\alpha(i)\alpha(j)}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} =\frac{L_{\alpha(i)\alpha(j)}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{L_{j4}}{L_{1\alpha(j)}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} =\frac{P_{j4}}{P_{1\alpha(j)}} P_{\alpha(i)\alpha(j)}.
\end{align*} Next, to check that $\tilde P$ is $\aleph$-generated given that $P$ is $\aleph$-generated, we need only note that $\tilde P=(P^\alpha)^*$ and apply the above two results one after the other. The proof is completed by realising that $P=(P^\alpha)^\alpha=(P^*)^*=\tilde{\tilde P}$ and hence being $\aleph$-generated is a solidarity property of the four matrices. \qedhere \end{proof} Most bacterial DNA sequences can be segmented into two halves called chirochores~\cite{FrankLobry00} and the two stationary Markov chains that empirically approximate their first-order structure are reverse complements of each other \cite{sobottka&hart2011}. If the DNA sequence conforms to the S-H model then the dinucleotide distribution in one of the chirochores is approximated by $(P,\pi)$ with $P$ being $\aleph$-generated. However, it was an open question as to whether or not the other chirochore would also be approximated by an $\aleph$-generated Markov chain. Theorem \ref{all_or_none} above answers this question in the affirmative. Furthermore, it is common to find that the stationary Markov chain $(W,\omega)$ that approximates the first-order structure of an entire DNA sequence satisfies {\em intra-strand parity} \cite{AlbrechtBuehler2006,hart&martinez2011}, that is, $\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}=\tilde\omega_i\tilde W_{ij}$ for all $i,j\in N$. Intra-strand parity has been observed in the DNA sequences of many organisms such as bacteria, archaea, plants and animals, but not in other sequences such as those from single-stranded viruses and organelles. The next theorem relates intra-strand parity of dinucleotides to the $\aleph$-generated matrices (cf. the direct characterization in~\cite[Proposition 1]{hart&martinez2011}) and shows that $\aleph$-generated matrices satisfy a weaker property than intra-strand parity. \begin{theo}\label{ISP<->aleph-generated} Let $(W,\omega)$ be a stationary Markov chain.
Then $(W,\omega)$ satisfies $\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}$ for all $i,j\in N$ if and only if it is $\aleph$-generated and the matrix $\A:=\varphi(W,1)=\bigl(L_{ij}\bigr)_{i,j\in N}$ that generates it satisfies $S_i=S_{\alpha(i)}$ for $i\in N$, where $S_i:=\sum_{k=1}^4L_{ik}$. Furthermore, if $W$ complies with intra-strand parity, then its stationary distribution~$\omega$ can be explicitly expressed as $\omega=\frac1{2(S_1+S_2)}(S_1,S_2,S_2, S_1)$. \end{theo} \begin{proof} \ \par\nobreak \noindent [$(\Longrightarrow)$] It can be seen that $W$ is $\aleph$-generated by observing that $\bigl(D(\omega)W\bigr)_{ij}=\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}=\bigl(D(\omega)W\bigr)_{\alpha(j)\alpha(i)}$. Next, let $\A=\varphi(W,1)$. One can easily check that $\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}$, for $i,j\in N$, implies $\omega_i=\omega_{\alpha(i)}$ for all $i\in N$. Therefore, $ \omega_i\frac{L_{ii}}{\sum_{k=1}^4L_{ik}} = \omega_{\alpha(i)}\frac{L_{\alpha(i)\alpha(i)}}{\sum_{k=1}^4L_{\alpha(i) k}} = \omega_i\frac{L_{ii}}{\sum_{k=1}^4L_{\alpha(i) k}}, $ for all $i\in N$, and hence $\sum_{k=1}^4L_{ik}=\sum_{k=1}^4L_{\alpha(i)k}$.\sloppy \noindent [$(\Longleftarrow)$] Suppose $W$ is obtained by normalizing the rows of a matrix $\A=(L_{ij})_{i,j\in N}\in\aleph$, that is, $ W=\left(\frac{L_{ij}}{S_i}\right)_{i,j\in N} $, where $S_i:=\sum_{k=1}^4L_{ik}$. Suppose that $\A$ satisfies $S_i=S_{\alpha(i)}$ for $i=1,2$. It is easy to check that $\omega:=\frac1{2(S_1+S_2)} (S_1,S_2,S_2,S_1)$ is the stationary distribution of~$W$. Hence, it follows that for all $i,j\in N$, \begin{equation*} \omega_iW_{ij}=\frac{S_i}{2(S_1+S_2)}\frac{L_{ij}}{S_i} =\frac{L_{ij}}{2(S_1+S_2)} =\frac{S_{\alpha(j)}}{2(S_1+S_2)}\frac{L_{\alpha(j)\alpha(i)}}{S_{\alpha(j)}} =\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}. 
\qedhere \end{equation*} \end{proof} \section{Applications and final remarks} \label{sec:conclusion} This article has given a mathematical analysis of the S-H model and elucidated its properties. We conclude with some remarks about the application of the results that have been presented here. Corollary~\ref{cor canonical_representative} provides a way of deciding whether or not two or more $\aleph$-generated matrices can be generated from a single persymmetric matrix $\A$ in conjunction with different values of the parameter~$m$. Meanwhile, Theorem \ref{ISP<->aleph-generated} shows that intra-strand parity in dinucleotides is a special case of $\aleph$-generated matrices. Since $\aleph$-generated matrices possess a weaker structure than that encapsulated by intra-strand parity, they may be useful for capturing the dinucleotide structure in genomic sequences that do not exhibit intra-strand parity. For the purposes of applications, Corollaries~\ref{P-aleph1} and~\ref{P-aleph2} are useful for constructing measures of how close the estimated stationary Markov chain of a bacterial DNA sequence is to being $\aleph$-generated. Given $P\in \sS_4$, we can define the following two examples of such measures: \noindent{\bf Measure 1:} Let $\proj(Q)$ be the orthogonal projection of a $4\times 4$ positive matrix~$Q$ onto~$\aleph$, and define $$\delta_1(P):=\min_{\mathbf{x}=(x_1,x_2,x_3,1)} \norm{D(\mathbf{x})P-\proj\Bigl(D(\mathbf{x})P\Bigr)}. $$ The quantity $\delta_1(P)=0$ if and only if $P$ is $\aleph$-generated. Otherwise, $\delta_1(P)$ gives the minimal distance between some matrix $D(\mathbf{x})P$ which generates $P$ according to the model (but which does not belong to~$\aleph$) and the space~$\aleph$. Note that $\delta_1(P)$ can be computed analytically.
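Since the persymmetric matrices are precisely the fixed points of $Q\mapsto JQ^{T}J$ (with $J$ the exchange matrix), the orthogonal Frobenius projection onto them is the average of $Q$ with $JQ^{T}J$. Under that assumption, $\delta_1$ can be sketched numerically as follows (our code; SciPy's Nelder--Mead minimization over $(x_1,x_2,x_3)$ is used here in place of the analytic solution):

```python
import numpy as np
from scipy.optimize import minimize

J = np.fliplr(np.eye(4))   # exchange matrix

def proj_persym(Q):
    """Orthogonal (Frobenius) projection onto the persymmetric matrices."""
    return (Q + J @ Q.T @ J) / 2

def delta1(P):
    """Numerical sketch of Measure 1: minimize ||D(x)P - proj(D(x)P)||_F
    over x = (x1, x2, x3, 1)."""
    def objective(x3):
        D = np.diag(np.append(x3, 1.0))
        R = D @ P
        return np.linalg.norm(R - proj_persym(R))
    res = minimize(objective, x0=np.ones(3), method="Nelder-Mead",
                   options={"xatol": 1e-12, "fatol": 1e-14, "maxiter": 5000})
    return res.fun
```

For an $\aleph$-generated $P$ the returned value is numerically zero; for a generic stochastic matrix it is strictly positive.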
The minimum in the expression for $\delta_1(P)$ is attained at the point $\mathbf{x}=(x_1,x_2,x_3,1)$, where $$ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} p_{11}^2+p_{12}^2+p_{13}^2 & -p_{24}p_{13} & -p_{34}p_{12} \\ -p_{13}p_{24} & p_{21}^2+p_{22}^2+p_{24}^2 & -p_{33}p_{22} \\ -p_{12}p_{34} & -p_{22}p_{33} & p_{31}^2+p_{33}^2+p_{34}^2 \end{pmatrix}^{-1} \begin{pmatrix} p_{44}p_{11} \\ p_{43}p_{21} \\ p_{42}p_{31} \end{pmatrix}. $$ \noindent{\bf Measure 2:} Let $\epsilon$ be a $4\times 4$ matrix, $P(\epsilon):=P+\epsilon$, and $\mathbf{x}=(x_1,x_2,x_3,1)$ be a positive vector. Define $\delta_2(P)$ as the solution of the following optimization problem: $$ \text{min } \sum_{i,j\in N} \epsilon_{i,j}^2 \qquad\text{subject to } \left\{\begin{array}{l} P(\epsilon) \in\sS_4;\\ D(\mathbf{x})P(\epsilon)-\proj\Bigl(D(\mathbf{x})P(\epsilon)\Bigr)=\mathbf{0}. \end{array}\right. $$ As was the case with $\delta_1(P)$, we have $\delta_2(P)=0$ if and only if $P$ is $\aleph$-generated; otherwise, $\delta_2(P)$ gives the shortest squared Frobenius distance between $P$ and some $\aleph$-generated stochastic matrix. There being no closed-form solution to the optimization problem, the computation of $\delta_2(P)$ would need to be implemented using numerical methods. Finally, the development of statistical hypothesis tests based on these measures together with further statistical analyses and their application to real bacterial genomes are planned for future publication. \section*{Acknowledgments} This work was supported by the Center for Mathematical Modeling CONICYT Project/Grant PIA AFB 170001, Fondecyt Regular Grant 1070344 and CNPq-Brazil grants 308575/2015-6 and 301445/2018-4. M. Sobottka was partially supported by CNPq-Brazil grant 54091/2017-6. Part of this work was carried out while M. Sobottka was visiting the Center for Mathematical Modeling at the University of Chile. \small \bibliographystyle{plain}
\section{Introduction} Discovering and understanding nonequilibrium scaling behaviors near the quantum critical point (QCP) is one of the most interesting arenas in condensed matter physics and statistical physics. Continuous quantum phase transitions (QPTs) occur when the control parameter in a Hamiltonian is tuned across a QCP at zero temperature \cite{sachdev2011quantum}. In a continuous phase transition, the order parameter vanishes smoothly as the critical point is approached. The existence of a QCP is usually accompanied by nonanalyticity in the ground state energy, and it usually connects two quantum phases with different symmetries. Strong quantum fluctuations near a QCP lead to symmetry breaking and the subsequent build-up of a macroscopic order. The emergence of an order parameter and the nonanalyticity in the ground state energy are related by the Hellmann-Feynman theorem. Universality, which originates from the scale invariance near a critical point, is a remarkable feature of continuous phase transitions \cite{cardy1996scaling,Stanley1999}. As is known from equilibrium critical phenomena in classical systems, universal behaviors emerge in the vicinity of a critical point where a large number of degrees of freedom are strongly correlated. A set of critical exponents associated with the critical point can be used to describe the scaling behaviors of relevant quantities near the transition. Moreover, the classical notion of universality in thermal phase transitions has been extended successfully to describe quantum critical phenomena driven by quantum fluctuations at zero temperature \cite{sachdev2011quantum}. Cold atom experiments facilitate the study of quantum phases and their associated QPTs in a closed quantum many-body system \cite{Bloch2008,Polkovnikov2011, stamperkurn2013spinor,Langen2015}.
A wide variety of dynamical properties can be monitored because the relevant energy scales in cold atom systems are much smaller than in conventional condensed matter systems; the relaxation or response time is therefore longer and easier to follow experimentally. The equilibrium relaxation time $t_\text{eq}$ of a quantum system, which is typically measured by the inverse of the excitation gap ($\Delta$), diverges in the thermodynamic limit (TDL) because of the gap closing at the QCP. Consequently, any driving of the control parameter at a finite rate would cause nonequilibrium effects. An effective approach for the description of such nonequilibrium effects is the celebrated Kibble-Zurek (KZ) mechanism \cite{kibble1976topology,zurek1985cosmological,zurek1996cosmological}, which was first proposed in cosmology by Kibble and then extended by Zurek to condensed matter physics. \par The KZ mechanism has been extensively studied in both classical and quantum systems, in theory \cite{ Damski2005,Zurek2005,Damski2006,Damski2007,Lamacraft2007,Saito2007,Uwe2007,Cucchietti2007, DelCampo2010,Sabbatini2011,Saito2013a,Huang2014,Lee2015, Jaschke2016} as well as in experiments \cite{chuang1991cosmology,bauerle1996laboratory,ruutu1996vortex, Chen2011,Baumann2011,Lamporesi2013,Corman2014, Navon2015,Clark2016,Anquez2016,Aidelsburger2017a}. A signature scaling relation between the number of defects or excitations and the driving rate is predicted when the system is driven across a continuous phase transition. The key enabling element lies in the possibility of combining the equilibrium critical exponents and the driving rate to characterize the nonequilibrium effects of a finite driving rate. The main idea involves separating the whole dynamics of such a driven process into an adiabatic plus an impulse region.
When the driving parameter is far from the critical point, the dynamics is approximately adiabatic owing to the small equilibrium relaxation time; when the critical point is approached, the so-called critical slowing down means the system dynamics can be regarded as frozen and describable by the impulse approximation, and nonadiabatic effects appear. The instant separating the two regions is obtained by equating the time remaining to reach the QCP, denoted as $t_{\rm KZ}$, to the equilibrium relaxation time $t_{\rm eq}$, {\it i.e.,} $t_{\rm KZ}\simeq t_\text{eq}\simeq {1}/{\Delta}$. The different dynamic regions then originate from the competition between two time (length) scales \cite{Huang2014}: the time (length) scale set by the external driving and the intrinsic relaxation time $t_\text{eq}$ (correlation length $\xi$). A spinor atomic Bose-Einstein condensate (BEC) exhibits rich magnetic phases in the presence of an external magnetic field, which makes it a suitable platform for studying the dynamics of QPTs. In this work, we focus on a spin-1 BEC with ferromagnetic interactions such as for $\Rb87$ atoms \cite{Stenger1998,Barrett2001,Chang2004,Sadler2006,Luo2017}. Invariably, current atomic BEC systems are trapped in a finite volume by magnetic or optical means with a finite number of atoms, although the total atom number can be varied to some degree from experiment to experiment. In the pioneering experimental work of Ref.\,\cite{Anquez2016}, aimed at checking the predictions of the KZ mechanism, the scaling behavior of the impulse stage duration was confirmed. But the deviation of the scaling exponent from the mean-field critical exponent is evident, especially in the long ramp time limit. This is presumably due to the neglect of the finite-size effect, which enters by opening a gap at the QCP and smooths out the relevant phase transition observables.
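For a linear ramp with reduced distance $\epsilon(t)=t/\tau$ to the QCP, the freeze-out condition just described can be written out explicitly (a standard KZ estimate, stated here under the assumption of the power laws $t_\text{eq}\sim|\epsilon|^{-\nu z}$ and $\xi\sim|\epsilon|^{-\nu}$):

```latex
\hat t \;\sim\; t_\text{eq}(\hat\epsilon) \;\sim\; |\hat\epsilon|^{-\nu z},
\qquad \hat\epsilon = \hat t/\tau
\quad\Longrightarrow\quad
\hat t \sim \tau^{\frac{\nu z}{1+\nu z}}, \qquad
\hat\xi \sim |\hat\epsilon|^{-\nu} \sim \tau^{\frac{\nu}{1+\nu z}} .
```

With the mean-field value $\nu z=1/2$ relevant for the model below, this gives $\hat t\sim\tau^{1/3}$ for the impulse-stage duration, the scaling probed in Ref.\,\cite{Anquez2016}.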
The finite-size effect cannot be ignored, especially when the gap opening at the QCP is comparable to the energy scale associated with the dynamics under investigation. Besides, a finite gap enables near-adiabatic preparation of metrologically meaningful quantum states \cite{Luo2017}. The equilibrium and dynamical properties are studied in this work when the quadratic Zeeman shift is tuned through a continuous QCP as in recent experiments \cite{Anquez2016,Luo2017}. We combine the KZ mechanism with finite-size scaling theory to obtain universal dynamical scaling functions for relevant phase transition observables and successfully verify their scaling collapse in finite systems by using the mean-field critical exponents. We cover the whole range of the driving rate and find that the dynamics in a finite system can be described by adiabatic perturbation theory \cite{Polkovnikov2008a,DeGrandi2010} in the very slow driving limit, and becomes far-from-equilibrium and non-universal in the fast driving limit. \par This paper is organized as follows. We first discuss the QPT for our model in Sect.\,\ref{subsec:modelHam} and extract the critical exponents from mean-field results in Sect.\,\ref{subsec:exponents}. In Sect.\,\ref{subsec:FSSeq}, we study the finite-size scaling for equilibrium observables. Section\,\ref{sec:dynamic} is devoted to a study of the dynamical properties for a linear driving protocol, where three distinct dynamical regions are analyzed. The consequent predictions can be tested in existing experimental setups. Finally, in Sect.\,\ref{sec:conclusion}, we conclude with discussions.\\ \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Fig1.pdf} \caption{{\bf Mean-field phase diagram.} The mean-field phase diagram for our model at $q > 0$ in the subspace of zero longitudinal magnetization $F_z = 0$. The broken-axisymmetry phase (BA phase) and the polar phase are separated at the quantum critical point (QCP) $q_c = 2$.
The mean-field values for ground state observables: fractional population $\mathcal{N}$ (black solid line) and transverse magnetization $\mathcal{M}$ (red solid line). The mean-field critical exponents can be obtained from the scaling behaviors near the QCP for $\mathcal{N}$ and $\mathcal{M}$ (see the main text).}\label{fig:phasediag} \end{figure} \section{Model Hamiltonian and the Critical exponents}\label{sec:modelHam} \subsection{Spin-1 BEC Hamiltonian and its QPT}\label{subsec:modelHam} {\it Model.}---For a spin-1 BEC of $\Rb87$ or $\Na23$ atoms, the spin-dependent interaction strength is usually much weaker than the density-density interaction, so it is reasonable to make the single-mode approximation (SMA) by assuming that all spin states share the same spatial wavefunction $\phi(\br)$, which is unit normalized according to $\int|\phi(\br)|^2d\br =1$ \cite{Law1998}. The SMA decouples the spatial mode from the spin. The equations of motion at low energies then reduce to those for the internal spin degrees of freedom. The Hamiltonian under the SMA becomes \cite{Law1998,Pu1999} \begin{widetext} \begin{eqnarray} \hat H &= &\frac{c_2}{2N}\left[\left(2\hat N_0 -1\right)\left(\hat N_1+\hat N_{-1}\right)+2\left(\hat a_1^\dag\hat a_{-1}^\dag\hat a_0\hat a_0+\text{h.c.}\right)\right] -p \left(\hat N_1 - \hat N_{-1}\right) + \,q\,\left(\hat N_1 + \hat N_{-1}\right)\,,\label{eq:Hamil0} \end{eqnarray} \end{widetext} where $\hat a_{m_f} (m_f=0,\pm1)$ is the annihilation operator of the ground state manifold $|f=1, m_f\rangle$, with number operator $\hat N_{m_f} = \hat a_{m_f}^\dag\hat a_{m_f}$, and the total particle number operator $\hat N = \hat N_{1}+\hat N_0+\hat N_{-1}$ is conserved. $p$ and $q$ are the linear and quadratic Zeeman shifts, which can be tuned independently in experiments.
The spinor dynamic rate $c_2$, which sets the spin-dependent interaction energy scale, is defined as $c_2 = N\int |\phi(\br)|^4 d\br\times\frac{4\pi (a_2-a_0)}{3m_\text{a}}\,,$ with $m_\text{a}$ being the atomic mass and $a_F$ the $s$-wave scattering length in the channel of total spin angular momentum $F=f_1+f_2$ for the two atoms. Atomic interactions naturally give $c_2<0$ for $\Rb87$ atoms and $c_2>0$ for $\Na23$ atoms, corresponding to ferromagnetic and anti-ferromagnetic spin-dependent interactions, respectively. \par The collective spin operators for this spin-1 boson system are defined by $\hat F_+ = \sqrt{2}\,(\hat a_1^\dag \hat a_0+\hat a_0^\dag\hat a_{-1}),\, \hat F_-= \hat F_+^\dag,\, \hat F_z =\hat a_1^\dag \hat a_1-\hat a^\dag_{-1}\hat a_{-1}, $ where $\,\hat F_\pm \equiv \hat F_x \pm i\hat F_y\,$ are the raising and lowering operators, and $[\hat F_z, \hat H]=0$, making the longitudinal magnetization $F_z$ a good quantum number. Hereafter we restrict to the $F_z = 0$ subspace, in which the linear Zeeman shift can effectively be set to $p=0$. {\it Phase diagram.}---In the following discussions, we shall focus on the QPT physics in the ferromagnetic condensate with $c_2<0$ and nonnegative (effective) quadratic Zeeman energy $q \geq 0$. As we can see from Eq.\,(\ref{eq:Hamil0}), in the limit of $q/|c_2|\rightarrow +\infty$, all atoms stay in the single-particle state $|1,0\rangle$, but in the limit of $q/|c_2|\rightarrow 0$, the ferromagnetic interaction term dominates. There must exist a critical point where these two terms are comparable. The competition between the ferromagnetic interaction and the quadratic Zeeman energy gives rise to two phases with different symmetries, revealed by their collective spin magnetization: the polar phase for $q/|c_2| > 2$ and the broken-axisymmetry (BA) phase for $0\leq q/|c_2|\leq 2$ (see Fig.\,\ref{fig:phasediag} for the phase diagram).
In order to identify the QCP explicitly, we assume a homogeneous density profile $\phi(\br)=\frac{1}{\sqrt{V}}$ for the condensate, which is a good approximation if the atoms are loaded into a flat trap \cite{Gaunt2013,Chomaz2015,Beugnon2016,Mukherjee2017,Hueck2018}. Therefore $c_2 \propto N\int |\phi(\br)|^4 d\br\propto\frac{N}{V}$. Strictly speaking, phase transitions occur only in the thermodynamic limit $ \lim\limits_{N,V\rightarrow\infty}\frac{N}{V} = \text{const.}\,,\,$ so $c_2$ is intensive and fixed when we take the TDL. From now on we take $|c_2|=1$ as the energy unit. If the system is inhomogeneous in space, such as in a 3D harmonic trap \cite{Anquez2016,Luo2017}, under the Thomas-Fermi approximation one must take $c_2(N)\propto N^{2/5}$ into consideration to keep the interaction energy per atom fixed when the TDL is taken \cite{Anquez2016}. \par For a continuous transition associated with spontaneously broken symmetry, order parameters can be defined to identify the QPT. The following two order parameters \cite{Damski2007,Lamacraft2007,Anquez2016} $$ \mathcal{N} = \frac{\langle \hat N_1 +\hat N_{-1}\rangle}{N}, \quad \mathcal{M} =\frac{\sqrt{\langle\hat F_x^2\rangle+\langle\hat F_y^2\rangle}}{N} \,,$$ are adopted, wherein $\mathcal{N}$ denotes the fractional atomic population in the magnetic states $|1,1\rangle$ and $|1,-1\rangle\,$, and $\mathcal{M}$ is the magnitude of the transverse magnetization of the collective spin. Let $E_n(q)$ denote the $n$-th ($n\in\mathbb{N}$) eigenvalue of $\hat H(q)$, and $e_n(q)\equiv E_n(q)/N$ the energy per particle. By the Hellmann-Feynman theorem, the fractional population satisfies $\mathcal{N}(q) \equiv \frac{1}{N}\left\langle\frac{\partial \hat H(q)}{\partial q}\right\rangle= \frac{\partial e_0(q)}{\partial q}$ with $e_0$ the ground state energy per particle.
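These definitions translate directly into a small numerical experiment. In the $F_z=0$ subspace the Hamiltonian of Eq.\,(\ref{eq:Hamil0}) is tridiagonal in the pair basis $|k\rangle=|n_1{=}k,\,n_0{=}N{-}2k,\,n_{-1}{=}k\rangle$, $k=0,\dots,N/2$. The sketch below is ours (even $N$, $c_2=-1$ as the energy unit); it diagonalizes the Hamiltonian and evaluates the ground-state fractional population $\mathcal{N}(q)$:

```python
import numpy as np

def hamiltonian(N, q, c2=-1.0):
    """Spin-1 SMA Hamiltonian in the F_z = 0 subspace, pair basis
    |k> = |n_1 = k, n_0 = N - 2k, n_-1 = k>, k = 0..N//2 (N even)."""
    ks = np.arange(N // 2 + 1)
    # diagonal: (c2/2N)(2 N0 - 1)(N1 + N_-1) + q (N1 + N_-1)
    diag = (c2 / (2 * N)) * (2 * (N - 2 * ks) - 1) * (2 * ks) + q * 2 * ks
    # off-diagonal: (c2/N) <k+1| a1+ a-1+ a0 a0 |k>
    kk = ks[:-1]
    off = (c2 / N) * (kk + 1) * np.sqrt((N - 2 * kk) * (N - 2 * kk - 1.0))
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

def order_parameter_N(N, q):
    """Ground-state fractional population (N_1 + N_-1)/N = <2k>/N."""
    w, v = np.linalg.eigh(hamiltonian(N, q))
    g = v[:, 0]                       # ground state
    ks = np.arange(N // 2 + 1)
    return 2 * np.sum(ks * g**2) / N
```

For large $N$ the resulting $\mathcal{N}(q)$ approaches the mean-field curve of Fig.\,\ref{fig:phasediag}: nonzero in the BA phase and essentially zero in the polar phase.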
From Fig.\,\ref{fig:phasediag}, it is clear that the QCP at $q=2\,$ marks a second-order transition, since the derivative of $e_0$ with respect to $q\,$, namely $\mathcal{N}(q)\,$, is continuous but the higher order derivatives are discontinuous. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Fig2.pdf}\\ \caption{{\bf The precursor to QPT in a finite system.} $\frac{\partial^2 e_0}{\partial q^2}$ approaches a discontinuous step with increasing $N$, which implies a second order (continuous) QPT according to Ehrenfest's classification. Inset: the pseudo-critical point $q_c(N)$ (location of the minimal $e_1-e_0$) for different finite sizes $N$. In the log-log plot, the difference $q_c-q_c(N)$ is seen to vanish as $N\rightarrow\infty$ according to a power law, wherein $q_c=2$ is the mean-field critical point. This indicates that the mean-field critical point is exact. }\label{fig:quantumQCP} \end{figure} \par Besides the mean-field results, in Fig.\,\ref{fig:quantumQCP} we also show numerical results for $\frac{\partial^2 e_0}{\partial q^2}$ obtained from exact diagonalization of the Hamiltonian of Eq.\,(\ref{eq:Hamil0}) for different total atom numbers $N$. The increasingly sharp jump from zero to a negative value of $\frac{\partial^2 e_0}{\partial q^2}$ with increasing $N$ serves as a precursor to the QPT in a finite system. The inset of Fig.\,\ref{fig:quantumQCP} shows the locations of the minimal $e_1-e_0$ for different $N$, {\it i.e.,} the pseudo-critical points [$q_c(N)$] for a finite system. It is clear that $q_c(N)$ converges to $q_c=2$ in the TDL, consistent with the mean-field critical point.
\begin{figure*}[!hbt] \centering \includegraphics[width = 1.75\columnwidth]{Fig3.pdf}\\ \caption{{\bf Finite-size scaling at equilibrium.} (a)-(c) In the vicinity of the QCP, exact diagonalization of the Hamiltonian of Eq.\,(\ref{eq:Hamil0}) gives the gap $\Delta(q)\,$, fractional population $\mathcal{N}(q)$ and the transverse magnetization $\mathcal{M}(q)$ for the ground state. (d)-(f) show the corresponding data rescaled according to Eqs.\,(\ref{eq:Gapfss})-(\ref{eq:OPfss}) by using the critical exponents in Table \ref{tab:exponents}. Finite-size scaling is clearly verified. Different system sizes $N = 500, 1000 \text{ and } 5000$ are used in the calculations.}\label{fig:rescale_eq} \end{figure*} \subsection{Static critical properties}\label{subsec:exponents} The Bogoliubov analysis in Ref.\,\cite{Murata2007} for our model system shows that there exist three excitation modes in the long-wavelength limit in the BA phase. One is gapful and the other two are gapless Goldstone modes associated with the broken U(1) and SO(2) symmetries. The gapful mode, denoted as $E_\alpha$ in Ref.\,\cite{Murata2007}, is directly relevant for the following discussions, \begin{eqnarray*} E_\alpha^2 &=& \Delta^2 + 4|c_2|\epsilon_{\mathbf k} + O(\epsilon_\mathbf{k}^2)\;,\\ \Delta^2 &=& \left(q_c-q\right) \left(q_c+q\right)\;, \end{eqnarray*} where $\epsilon_\mathbf{k} = \frac{\hbar^2{\mathbf k}^2}{2m}$ and $\Delta$ are the free-particle dispersion and the excitation gap, respectively. \par Therefore, the excitation is gapless with a spectrum $E_\alpha\sim\epsilon^{1/2}_\mathbf{k}\sim k^z$ at the QCP $q=q_c$, so we must have the dynamical critical exponent $z=1$. Furthermore, the behavior of the gap approaching the QCP from the BA phase, $\Delta({q\rightarrow q_c^{-}})\sim |q-q_c|^{\nu z}$, yields \,$\nu z = 1/2$, thus the correlation length critical exponent $\nu=1/2$.
\par The mean-field results for the order parameters $\mathcal{N}$ and $\mathcal{M}$ near the QCP in the BA phase are respectively given by\,\cite{Murata2007,Hoang2016a}, \begin{eqnarray*} \mathcal{N}^\text{(BA)} &\propto&{q_c-q}\,,\qquad \mathcal{M}^\text{(BA)} \propto \sqrt{q_c-q}\;, \end{eqnarray*} as shown in Fig.\,\ref{fig:phasediag}, and both are zero in the polar phase. We thus obtain the order parameter exponents $\beta_\mathcal{N} = 1$ and $\beta_{\mathcal{M}} = 1/2\,$ from the behavior $\mathcal{O}\sim |q-q_c|^{\beta_\mathcal{O}}$ (where $\mathcal{O} = \mathcal{N}, \mathcal{M}$) in the vicinity of the QCP. \par The Hamiltonian in Eq.\,(\ref{eq:Hamil0}) actually describes $N$ spin-1 bosons, each interacting equally with all other spins. For such a system, mean-field theory gives exact results for the QPT. Because of the infinitely long-range nature of the interaction, the concepts of ``dimensionality'' or ``length'' are not well-defined \cite{Botet1982,Botet1983}. The correlation length of a generic short-range model must be substituted by an effective quantity $N_\xi$. Following the arguments of Botet and Jullien \cite{Botet1982,Botet1983}, we can define a length scale $\xi$ which links to the upper critical dimensionality $d_c$ of the corresponding finite-range model according to $N_\xi\sim \xi^{d_c}$. The finite-range spin model has an upper critical dimension $d_c = 4\,$ for the classical phase transition, and since a QPT in $d$ dimensions has the same critical behaviors as the classical transition in $(d+z)$ dimensions, the upper critical dimensionality is $d = 4-z=3$ for the QPT we discuss. This dimensionality is consistent with what we have in the approximated Hamiltonian\,(\ref{eq:Hamil0}) under the SMA.
If the coherence number $N_\xi$ is used as an effective correlation length, we find critical exponents $\nu^\ast z^\ast=1/2$ but with $\nu^* = \nu d = 3/2, \, z^*= z/d=1/3$, which implies that the information concerning dimensionality is encapsulated in the critical exponents. We list the critical exponents in Table \ref{tab:exponents} for later use. \begin{table}[!htbp] \tabcolsep 2pt \caption{The critical exponents and dimensionality.}\label{tab:exponents} \vspace*{-12pt} \begin{center} \def0.3\textwidth{0.3\textwidth} {\rule{0.3\textwidth}{1pt}} \begin{tabular*}{0.3\textwidth}{@{\extracolsep{\fill}}ccccc} $\nu$ & $\beta_\mathcal{N}$ & $\beta_{\mathcal{M}}$ & $z$ & $d$ \\ \hline $1/2$ & $1$ & $1/2$ & $1$ & $3$ \end{tabular*} {\rule{0.3\textwidth}{1pt}} \end{center} \end{table} \subsection{Finite-size scaling in the equilibrium state}\label{subsec:FSSeq} In the vicinity of the QCP with $N \rightarrow \infty$, one has \begin{eqnarray*} \xi &\sim& |q-q_c|^{-\nu}, \quad N_\xi \sim |q-q_c|^{-\nu d}\,,\\ \Delta^{-1}&\sim& \xi^{z}\sim N_\xi^{z/d}\sim |q-q_c|^{-\nu z}\,, \end{eqnarray*} which shows the power-law divergence of the characteristic length and time at the critical point. At any finite $N$, the singularity at the QCP is rounded: the characteristic length $\xi$ remains finite and a nonvanishing gap persists at the critical point. The ``rounding off'' can be introduced through a regular scaling function $g_\Delta(x)$, such that for the inverse gap \begin{eqnarray} \Delta^{-1}(q,N)& \sim & \Delta^{-1}(q,N=\infty)\cdot g_\Delta\left({N}/{N_\xi}\right)\,,\label{eq:gapFSS} \end{eqnarray} with $g_\Delta(x)\rightarrow\text{const.}$ for $x\gg 1$, which recovers the nominal TDL, and $g_\Delta(x)\rightarrow x^{\omega_\Delta}$ for $x\ll 1$. The exponent $\omega_\Delta = z/d$ is obtained by requiring that $\Delta^{-1}$ be regular at $q_c$ for any finite $N$.
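The finite-size rounding of the gap can be probed directly: at the pseudo-critical point the minimal gap should close as $\Delta\sim N^{-z/d}=N^{-1/3}$. A sketch (our code; SciPy's tridiagonal eigensolver, with the $F_z=0$ pair basis $|k\rangle=|k,N{-}2k,k\rangle$ and $c_2=-1$):

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

def min_gap(N, qs, c2=-1.0):
    """Minimal gap E1 - E0 over a grid of q values for the spin-1 SMA
    Hamiltonian restricted to F_z = 0 (tridiagonal in the pair basis)."""
    ks = np.arange(N // 2 + 1)
    kk = ks[:-1]
    off = (c2 / N) * (kk + 1) * np.sqrt((N - 2 * kk) * (N - 2 * kk - 1.0))
    gaps = []
    for q in qs:
        diag = (c2 / (2 * N)) * (2 * (N - 2 * ks) - 1) * (2 * ks) + q * 2 * ks
        # only the two lowest eigenvalues are needed
        w = eigh_tridiagonal(diag, off, eigvals_only=True,
                             select="i", select_range=(0, 1))
        gaps.append(w[1] - w[0])
    return min(gaps)
```

Comparing two sizes differing by a factor of $8$ should then give a minimal-gap ratio close to $8^{1/3}=2$, up to finite-size corrections.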
By using $z=1$ and $d=3$ obtained in the last section, we find $\Delta\sim N^{-z/d}\sim N^{-1/3}$ at the pseudo-critical point, because the finite $N$ takes over the role of $N_\xi$ as a length scale cutoff. Such a scaling was already revealed by fitting numerically calculated values in Refs.\,\cite{Zhang2013,Hoang2016a}. This is the same finite-size behavior at the QCP as in the Dicke model \cite{jvidalDickeModel} and the Lipkin-Meshkov-Glick model \cite{Dusuel2004,Leyvraz2005}. \par Based on the above discussions, the finite-size scaling hypotheses for the gap and order parameters can be generally chosen as \begin{eqnarray} \Delta(\epsilon, N) &\sim& N^{-z/d}g_1(\epsilon N^{1/\nu d})\,,\label{eq:Gapfss}\\ \mathcal{O}(\epsilon,N) &\sim& N^{-\beta_\mathcal{O}/{\nu d}}g_\mathcal{O}(\epsilon N^{1/\nu d})\,,\label{eq:OPfss} \end{eqnarray} where $\epsilon = (q-q_c)/{q_c}$ is the reduced control parameter which measures the distance to the QCP. The exponent $\beta_\mathcal{O}$ is the corresponding scaling dimension for the observable $\mathcal{O}\,(\mathcal{O}=\mathcal{N},\mathcal{M})$, and $g_{1},\, g_\mathcal{O}$ are the scaling functions. \par We numerically diagonalize the Hamiltonian in the $F_z = 0$ subspace for different sizes $N$ to obtain the gap $\Delta(q)=E_1(q)-E_0(q)$, the ground state fractional population $\mathcal{N}(q)$ and the transverse magnetization $\mathcal{M}(q)$. In Fig.\,\ref{fig:rescale_eq}, we show the data collapse obtained by using the mean-field critical exponents in Table \ref{tab:exponents}. The scaling hypotheses in Eqs.\,(\ref{eq:Gapfss})-(\ref{eq:OPfss}) are thus well verified near the QCP for the spin mixing model we discuss. \begin{figure*}[!hbt] \centering \includegraphics[width=0.62\columnwidth]{Fig4a.pdf} \quad \includegraphics[width=0.94\columnwidth]{Fig4b.pdf} \caption{{\bf Driven dynamics.} (a) and (b) show the general structures of the excitation probability $\mathcal{P}(q)$ and the heat density $\mathcal{Q}(q)$ at different driving rates, for $N = 1000$ as an example.
(c)-(d) The excitation probability $\mathcal{P}(\tau)$ and the heat density $\mathcal{Q}(\tau)$ at the end of the driving for different system sizes $N$. The driving parameters are taken as $q_i =0 $ and $ q_f = 6$. Three distinct dynamical regions are revealed according to the behaviors of $\mathcal{P}(\tau)$ and $\mathcal{Q}(\tau)$. The black dashed and dash-dotted lines indicate the $\tau^{-2}$ and $\tau^{-1}$ power laws, respectively. In the inset of (c), we rescale the $\tau$-axis by $N$ and show that the crossover between the adiabatic and non-adiabatic regions occurs at $\tau_c\propto N$ (see main text). }\label{fig:PexQ} \end{figure*} \section{Dynamic behaviors across the QCP}\label{sec:dynamic} The equilibrium criticality established above allows us to study the universal behaviors in the driven dynamics across the QCP, which we discuss in this section for our model. \par We consider a linear driving protocol, with the quadratic Zeeman shift in Eq.\,(\ref{eq:Hamil0}) taking the form \begin{equation} q(t) = q_i + (q_f - q_i)\cdot t/\tau,\quad\text{for}\quad t\in[0,\tau] \,,\label{eq:protocol} \end{equation} where $q_i\equiv q(0)$ and $q_f\equiv q(\tau)$ are the initial and final shifts, $\tau$ is the total driving duration, and the driving rate is $v = \frac{q_f - q_i}{\tau}\propto\tau^{-1}$. For $\tau\rightarrow 0$, such a driving protocol reduces to a sudden quench, while $\tau\rightarrow \infty$ corresponds to the adiabatic limit. The initial state $|\Psi(t=0)\rangle$ is always taken to be the ground state of the Hamiltonian $\hat H(q_i)$. The dynamical state $|\Psi(t)\rangle$ is obtained numerically by evolving the Schr\"odinger equation $i\partial_t|\Psi(t)\rangle = \hat H(t)|\Psi(t)\rangle\,$, with $\hat H(t)\equiv\hat H[q(t)]$ following the driving protocol of Eq.\,(\ref{eq:protocol}).
Since only two of the three parameters $(t,\, q,\, \tau)$ are independent, we can use either $(t,\tau)$ or $(q, \tau)$ to denote the same driving process in the following discussion, {\it i.e.}, $\mathcal{O}(q)\equiv \mathcal{O}[q(t)]$ for any time-dependent observable $\mathcal{O}$. \par One can always expand the state $|\Psi(q)\rangle$ as $ |\Psi(q)\rangle = \sum_{n=0}^{\mathcal{D}-1} a_n(q) e^{-i\Theta_n(q)} |\psi_n(q)\rangle \,, $ in terms of the instantaneous eigenstates $ |\psi_n(q)\rangle\,(n=0,\ldots,\mathcal{D}-1)$ of $\hat H(q)$ satisfying $\hat H(q)|\psi_n(q)\rangle=E_n(q)|\psi_n(q)\rangle$. Here $\{a_n\}$ are the superposition coefficients and $\mathcal{D}$ is the dimension of the Hilbert space. The time-dependent Schr\"odinger equation then reduces to \begin{eqnarray*} \partial_t a_n(t)& = &- \sum_{m=0}^{\mathcal{D}-1} a_m(t) e^{i\left[\Theta_n(t)-\Theta_m(t)\right]} \langle\psi_n(t)|\partial_t|\psi_m(t)\rangle\,, \end{eqnarray*} where the dynamical phase takes the explicit form $\Theta_n(q)=\int_{q_i}^q \frac{E_n(q^\prime)}{\dot{q}}dq^\prime=\frac{1}{v}\int_{q_i}^q {E_n(q^\prime)}dq^\prime$. \par We characterize the loss of adiabaticity employing the following two quantities: the excitation probability $\mathcal{P}(t)=1-|\langle\Psi(t)|\psi_0(t)\rangle|^2$, which measures the infidelity of the dynamical state $|\Psi(t)\rangle$ with respect to the adiabatically connected ground state $|\psi_0(t)\rangle$, and the excess heat density $\mathcal{Q}(t)=[\langle\Psi(t)|\hat H(t)|\Psi(t)\rangle-E_0(t)]/N$, which measures the overall net energy gain over $E_0(t)\equiv\langle\psi_0(t)|\hat H(t)|\psi_0(t)\rangle$. Starting from the ground state, with $\mathcal{P}(q_i)=0$ and $\mathcal{Q}(q_i)=0$, we have $0\leq \mathcal{P}(t)\leq 1$ and $ \mathcal{Q}(t)\geq 0$. \par This study is focused on driving the system from the BA phase ($q_i = 0$) to deep in the polar phase ($q_f=6$).
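As a concrete illustration of the ramp protocol and the two indicators, the following minimal Python sketch evolves a toy Hamiltonian $H(q)=H_0+qH_1$ through a linear ramp and evaluates $\mathcal{P}$ and $\mathcal{Q}$ at the end of the drive. The Hermitian pair $(H_0,H_1)$ is a generic stand-in, not the actual spin-mixing Hamiltonian, and all names and sizes here are illustrative assumptions:

```python
import numpy as np

# Toy stand-in: H(q) = H0 + q*H1 with a generic Hermitian pair (H0, H1).
# This is NOT the spin-mixing Hamiltonian of the text, only an illustration
# of the ramp protocol q(t) and of the indicators P and Q.
rng = np.random.default_rng(0)

def rand_herm(n):
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (a + a.conj().T) / 2

n_dim = 6
H0, H1 = rand_herm(n_dim), rand_herm(n_dim)
H = lambda q: H0 + q * H1

def drive(q_i, q_f, tau, steps=2000):
    """Linear ramp q(t) = q_i + (q_f - q_i) t/tau; returns P(tau), Q(tau)."""
    w, v = np.linalg.eigh(H(q_i))
    psi = v[:, 0]                       # start in the instantaneous ground state
    dt = tau / steps
    for k in range(steps):
        q = q_i + (q_f - q_i) * (k + 0.5) / steps   # midpoint of each step
        w, v = np.linalg.eigh(H(q))
        # exact propagator of the step Hamiltonian, frozen within the step
        psi = v @ (np.exp(-1j * w * dt) * (v.conj().T @ psi))
    w, v = np.linalg.eigh(H(q_f))
    P = 1.0 - abs(v[:, 0].conj() @ psi) ** 2        # excitation probability
    Q = (psi.conj() @ H(q_f) @ psi).real - w[0]     # excess heat (no 1/N here)
    return P, Q

P_fast, Q_fast = drive(0.0, 6.0, tau=0.5)   # nearly sudden quench
P_slow, Q_slow = drive(0.0, 6.0, tau=50.0)  # nearly adiabatic ramp
```

For the slow ramp the final excitation probability comes out much smaller than for the fast one, as expected from adiabaticity; the overall $1/N$ normalization of $\mathcal{Q}$ is omitted in this toy setting.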
When the system is driven across the QCP, the vanishing gap at the critical field makes non-adiabatic effects unavoidable even for driving velocity $v\rightarrow 0$. For a finite-size system, however, the gap remains finite, and the dynamics shows quite different behaviors in the limit $v\rightarrow 0$. This constitutes an important topic to be addressed in the following. \par Based on numerical simulations, we find that there exist three distinct regions according to the driving rate, which we call the adiabatic, non-adiabatic, and far-from-equilibrium regions, corresponding respectively to long, intermediate, and short $\tau$. The non-adiabatic indicators show quite different scaling behaviors in these regions, essentially determined by the dominant time or length scales and the corresponding low-energy excitations in the driven processes. {\it The adiabatic region for large $\tau$.}---For a large but finite $N$, a finite gap exists, and adiabatic passage through the pseudo-critical point is possible in the adiabatic perturbation limit $v\rightarrow 0$, where the system can only be excited by the so-called Landau-Zener mechanism. The adiabatic perturbation theory \cite{DeGrandi2010} gives \begin{widetext} \begin{eqnarray} |a_n(q)|^2 &\approx& v^2 \left\{ \left[\frac{|\langle\psi_n|\partial_{q_i}|\psi_0\rangle|^2}{(E_n(q_i)-E_0(q_i))^2} +\frac{|\langle\psi_n|\partial_{q}|\psi_0\rangle|^2}{(E_n(q)-E_0(q))^2}\right] -2 \frac{\langle\psi_n|\partial_{q_i}|\psi_0\rangle}{E_n(q_i)-E_0(q_i)} \frac{\langle\psi_n|\partial_{q}|\psi_0\rangle}{E_n(q)-E_0(q)}\cos[\delta\Theta_{n0}]\right\} \,, \label{eq:apt} \end{eqnarray} \end{widetext} where the accumulated phase difference between the $n$-th excited state and the ground state is defined as $ \delta\Theta_{n0}=\Theta_n(q)-\Theta_0(q)=\frac{1}{v}\int_{q_i}^q [E_n(q^\prime)-E_0(q^\prime)]d q^\prime$.
Considering only the dominant excitation into the first excited state, we find $\delta\Theta_{10}= \frac{1}{v}\int_{q_i}^q\Delta(q^\prime) dq^\prime$, see Fig.\,\ref{fig:rescale_eq}\,(a). The integration of the gap ensures that $\delta\Theta_{10}(q)$ is a continuous, monotonically increasing function of $q$ that depends linearly on $1/v$, i.e., on $\tau$. Therefore, the two terms in Eq.\,(\ref{eq:apt}) describe, respectively, the amplitude and the oscillation behavior of $\mathcal{P}(q)\approx|a_1(q)|^2$ shown in Fig.\,\ref{fig:PexQ}\,(a). For a specific large $\tau$, $\mathcal{P}(q)$ shows slow oscillations with a large envelope around the QCP and fast oscillations with a small envelope away from the QCP. This is due to the gap closing near the QCP, which leads to a slower growth of $\delta\Theta_{10}$. The linear dependence of $\delta\Theta_{10}$ on $\tau$ is revealed by the nested oscillation structure between protocols with different $v$, reminiscent of a Russian doll collection, shown in Figs.\,\ref{fig:PexQ}\,(a) and (b). \par In this adiabatic region, diabatic effects induced by the external driving enter only as a perturbation near the QCP. The final excitation probability $\mathcal{P} (\tau)$ and excess heat density $\mathcal{Q}(\tau)$ both show the $\sim v^2\propto\tau^{-2}$ scaling of a generic gapped system \cite{Polkovnikov2008a}, as predicted by Eq.\,(\ref{eq:apt}) and visibly confirmed in the large-$\tau$ region of Figs.\,\ref{fig:PexQ}\,(c)-(d). The finite energy gap $\Delta_\text{min}$ at the QCP is the dominant energy scale during the dynamics; equivalently, the finite size $N$ is the smallest and dominant length scale. One can thus define a size-dependent KZ rate $v_{\rm KZ}(N)\sim N^{-{(1+\nu z)}/{\nu d}}$ or, equivalently, a time scale $\tau_{\rm KZ}(N)\sim N^{{(1+\nu z)}/{\nu d}}$; at such a driving rate or time, the correlation length $N_\xi$ at the frozen moment is of the order of the system size $N$.
When $v$ is smaller than $v_{\rm KZ}(N)$, the system always remains adiabatic \cite{Huang2014}. {\it The non-adiabatic universal region.}---In this intermediate region, $v> v_{\rm KZ}(N)$ but $v$ remains much smaller than the relevant initial gap. The non-adiabatic indicators $\mathcal{P}(\tau)$ and $\mathcal{Q}(\tau)$ exhibit behaviors distinct from those of the adiabatic region. This is due to the existence of another external time (length) scale $t_{\rm KZ}\,(\xi_{\rm KZ})$ which dominates near the QCP. This so-called KZ time $t_{\rm KZ}\sim v^{-\nu z/(1+\nu z)}$, or KZ length scale $\xi_{\rm KZ}\sim v^{-\nu/(1+\nu z)}$, is determined by the external driving and acts as the smallest time or length scale in the universal dynamics near the QCP. The crossover between the two regions occurs when $v\simeq v_{\rm KZ}$, which predicts that the crossover happens at $\tau_c\propto N$ for different system sizes $N$, as shown in the inset of Fig.\,\ref{fig:PexQ}\,(c). Analogously, we can define a maximal defect-free size $N_{\rm KZ}\sim \xi_{\rm KZ}^{d}\sim v^{-d\nu/(1+\nu z)}$, an effective length scale set by the driving, and the defect density from the KZ mechanism is proportional to $1/N_{\rm KZ}$. Therefore we find $\mathcal{P}(\tau)\sim {1}/{N_{\rm KZ}}\sim v^{d\nu/(1+\nu z)}$ and $\mathcal{Q}(\tau)\sim\mathcal{P}(\tau)\sim v^{d\nu/(1+\nu z)}$ \cite{Kolodrubetz2012b,Kolodrubetz2015,dutta2015quantum}. This KZ scaling is expected to hold in the limit $v\rightarrow 0\; (\tau\rightarrow\infty)$ in the TDL [black dash-dotted line in Figs.\,\ref{fig:PexQ}\,(c)-(d)]. The asymptotic behavior for $N\rightarrow\infty$ implies that adiabatic processes are excluded in the TDL. We recall that the limits $v\rightarrow 0$ ({\it i.e.}, $\tau\rightarrow\infty$) and $N\rightarrow\infty$ do not commute \cite{Polkovnikov2008a}.
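For later reference, with the mean-field exponents of Table~\ref{tab:exponents} these scales take simple explicit values:
\begin{eqnarray*}
\frac{1+\nu z}{\nu d}=\frac{3/2}{3/2}=1 &\Rightarrow& v_{\rm KZ}(N)\sim N^{-1},\quad \tau_{\rm KZ}(N)\sim N\,,\\
\frac{d\nu}{1+\nu z}=\frac{3/2}{3/2}=1 &\Rightarrow& \mathcal{P}(\tau)\sim\mathcal{Q}(\tau)\sim v\propto\tau^{-1}\,,
\end{eqnarray*}
consistent with the crossover at $\tau_c\propto N$ in the inset of Fig.~\ref{fig:PexQ}\,(c) and with the $\tau^{-1}$ dash-dotted line in Figs.~\ref{fig:PexQ}\,(c)-(d).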
The above two regions respectively correspond to the adiabatic finite-size scaling (FSS) regime and the impulse finite-time scaling (FTS) regime of a finite-size system considered earlier in Ref.\,\cite{Huang2014}. In the FSS regime, $N<N_{\xi}$ and $N<N_{\rm KZ}$; for example, $\mathcal{P} = N^{-1}f_1(vN^{({1+\nu z})/{\nu d}})$, where we only consider the excitation at the QCP ($\epsilon=0$). The argument $x=vN^{({1+\nu z})/{\nu d}}=vN$ is small, and the scaling function $f_1(x)$ can be described perturbatively \cite{Huang2014,Liu2014} in $x$. Therefore we have $\mathcal{P}\simeq N^{-1}[f_1(0)+ f_1^\prime(0)\cdot x+\frac{1}{2}f_1^{\prime\prime}(0)\cdot x^2]$. The first term $f_1(0)$ is the equilibrium excitation and vanishes for a finite system; the second and third terms arise from the perturbation of the driving. We argue that the linear term in $v$ is absent because the excitation or excess heat is insensitive to the sign of $v$ \cite{Polkovnikov2008a}, and therefore $\mathcal{P}\simeq N^{-1}\cdot\frac{1}{2}f_1^{\prime\prime}(0)\cdot x^2\sim \tau^{-2}$. \par In the typical scenario of a KZ ramp, the tuning parameter is swept from the deeply disordered phase (polar) to the ordered phase (BA). Due to the gap closing from $q=0$ to $q < 0$ and the appearance of a second QCP at $q=-2$, we instead choose to drive from the BA to the polar phase in order to obtain a steady value of $\mathcal{P}$ for a long ramp time.
To make contact with experiments, we note that according to Refs.\,\cite{Gong2010,DeGrandi2011} the order parameters easily measurable in experiments satisfy the dynamical KZ scaling form \begin{eqnarray} \mathcal{O}(\epsilon,v) &=& v^{\frac{\beta_\mathcal{O}}{1+\nu z}}f_ \mathcal{O}(\epsilon v^{-\frac{1}{1+\nu z}}, Nv^{\frac{\nu d}{1+\nu z}})\,,\label{eq:KZscaling} \end{eqnarray} where $\mathcal{O} = \langle\mathcal{\hat O}\rangle$ can be either $\mathcal{N}$ or $\mathcal{M}$, and $\beta_\mathcal{O}$ is the corresponding critical exponent given in Table \ref{tab:exponents}. $f_ \mathcal{O}(x,y)$ is a scaling function of the arguments $(x,y)$, taking the FTS form of Ref.\,\cite{Huang2014} with finite-size effects included. In actual experiments, one can easily prepare the initial state in the polar phase with all atoms in the $|1,m_f=0\rangle$ state (the $F_z = 0$ subspace) and tune the quadratic Zeeman shift $q$ in Eq.\,(\ref{eq:Hamil0}) linearly as in Eq.\,(\ref{eq:protocol}) with different driving times $\tau$. During the tuning process, the dynamical values of the fractional population $\mathcal{N}$ and the transverse magnetization $\mathcal{M}$ can be measured in successive realizations. One can also vary the system size $N$ to take the finite-size scaling into consideration. The scaling hypothesis in Eq.\,(\ref{eq:KZscaling}) can then be checked by performing data collapse along the two scaling directions with the experimental results. \par We numerically check the full dynamical KZ scaling form by fixing $Nv^{\nu d/(1+\nu z)}=\text{const}$. Figures\,\ref{fig:dynamics}\,(a) and (c) show the numerically computed $\mathcal{N}$ and $\mathcal{M}$ for selected experimentally feasible system sizes $N$. These curves are indeed seen to collapse onto each other after rescaling according to Eq.\,(\ref{eq:KZscaling}), see Figs.\,\ref{fig:dynamics}\,(b) and (d).
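The data-collapse check amounts to the simple axis rescaling sketched below in Python; the synthetic curves obey the KZ scaling form by construction, and the $\tanh$ scaling function is an arbitrary toy choice, not the model's actual $f_\mathcal{O}$:

```python
import numpy as np

nu, z, d = 0.5, 1.0, 3.0       # mean-field critical exponents
beta = 0.5                     # scaling dimension of the observable (e.g. beta_M)

def rescale(eps, O, v):
    """KZ rescaling: x = eps * v^{-1/(1+nu z)},  y = O * v^{-beta/(1+nu z)}."""
    x = eps * v ** (-1.0 / (1.0 + nu * z))
    y = O * v ** (-beta / (1.0 + nu * z))
    return x, y

f_toy = np.tanh                # arbitrary toy scaling function, illustration only
eps = np.linspace(-1.0, 1.0, 201)
rescaled = {}
for v in (1e-2, 1e-3):
    # synthetic raw curve obeying the scaling form exactly
    O = v ** (beta / (1 + nu * z)) * f_toy(eps * v ** (-1 / (1 + nu * z)))
    rescaled[v] = rescale(eps, O, v)

# after rescaling, both curves lie on the same master curve f_toy
```

Applying `rescale` to measured curves at several $(N, v)$ with $Nv^{\nu d/(1+\nu z)}$ held fixed is exactly the operation behind the collapse plots.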
We note that for a small system size $N$, the scaling-collapse region shrinks, which indicates that universality disappears in the region of very small $\tau$ (large $v$). \begin{figure}[!hbt] \centering \includegraphics[width=1.0\columnwidth]{Fig5.pdf} \caption{{\bf Finite-size Kibble-Zurek scaling.} For fixed $N\cdot v^{d\nu/(1+\nu z)}=N\cdot v = 180$ in Eq.\,(\ref{eq:KZscaling}), starting from the polar phase ($q_i=4.0$) and sweeping to the BA phase $(q_f = 0)$. (a) The dynamical value of $\mathcal{N}(q)$ -- the fractional population. (b) The rescaled numerical data for $\mathcal{N}$. (c) The transverse magnetization $\mathcal{M}(q)$. (d) The rescaled numerical data for $\mathcal{M}$. The system sizes $N = 1\times 10^3, 5\times 10^3, 1\times 10^4 \text{ and } 2\times10^4$ are all within experimentally feasible atom numbers. It is clear that in (b) and (d) the KZ scaling hypotheses are verified near the QCP, but for the smaller system size (gray line with square marker) the collapsed region shrinks. This indicates the loss of universality when $v$ is too large. }\label{fig:dynamics} \end{figure} {\it The far-from-equilibrium region for fast driving.}---When the driving rate $v$ is so fast that the driving-determined length scale $N_{\rm KZ}$ is dominant not only near the QCP but also during the whole dynamics, i.e., $N_{\rm KZ}< N_{\xi}(q_i)$, the system state becomes frozen during the whole driving process. The excitation probability $\mathcal{P}(q)$ saturates rapidly in the initial ramp and no longer serves as a useful indicator, as shown in Figs.\,\ref{fig:PexQ}\,(a) and (c). The heat density $\mathcal{Q}(\tau)$ shows almost no size dependence, since finite-size effects are unimportant at the initial gap $\Delta(q_i)$ [Fig.\,\ref{fig:rescale_eq}\,(a)], and $\mathcal{Q}(\tau)$ tends to a nearly constant value for $\tau\rightarrow0$, as shown in Fig.\,\ref{fig:PexQ}\,(d). This far-from-equilibrium region of fast driving is non-universal.
\section{Discussions and Conclusions}\label{sec:conclusion} In this paper, we have studied the equilibrium and dynamical properties of a ferromagnetic spinor atomic Bose-Einstein condensate. At equilibrium, we extract the mean-field critical exponents and verify the finite-size scaling hypothesis. Because of the infinitely long-range nature of the interaction (within the SMA), mean-field theory gives exact results for the critical phenomena in equilibrium. The dynamical process is realized by linearly tuning the quadratic Zeeman shift across a continuous QCP. In the vicinity of the QCP, universal behaviors are also observed in the dynamics. Three distinct dynamical regions are identified, corresponding to different total driving times $\tau$\,(or equivalently driving rates $v\propto\tau^{-1}$), characterized by two adiabaticity indicators: the excitation probability $\mathcal{P}$ and the excess heat density $\mathcal{Q}\,$. We show that the adiabatic region with $\,\mathcal{P}\sim\mathcal{Q}\sim\tau^{-2}\,$ exists in any finite system for $v<v_{\rm KZ}(N)$; there the external driving enters the dynamics only as a perturbation, and adiabatic perturbation theory gives a good description of the dynamics. The non-adiabatic universal region with $\,\mathcal{P}\sim\mathcal{Q}\sim\tau^{-\nu d/(1+\nu z)}\,$ corresponds to intermediate driving rates $v>v_{\rm KZ}(N)$ and, in the thermodynamic limit, is well described by the Kibble-Zurek mechanism. The dynamical Kibble-Zurek scaling is found to apply to finite-size systems in this universal region, and the scaling hypotheses for the fractional population $\mathcal{N}$ and the transverse magnetization $\mathcal{M}$ are presented, which can be checked directly in ongoing experiments. Finally, the region of the fastest driving rates is found to be non-universal and far from equilibrium, with $\mathcal{P}$ and $\mathcal{Q}$ essentially constants independent of $\tau$.
The distinct behaviors of the dynamics originate from the competition between different length scales: the scale $N_{\rm KZ}$ set by the external driving, the intrinsic correlation length scale $N_{\xi}$ of the system, and the finite size $N$. The smallest one always dominates the dynamic behavior. We also note that the above three regions (adiabatic, non-adiabatic, and far-from-equilibrium) may respectively correspond to the analytical, non-adiabatic, and non-analytical processes of Ref.\,\cite{Polkovnikov2008a}. As pointed out by the authors of Ref.\,\cite{Polkovnikov2008a}, in the analytical and non-analytical regimes there exist no highly populated low-energy modes, and finite-size or relaxation effects are unimportant. \par Finally, we emphasize that the simplicity and rich magnetic phases of spinor condensates offer a promising platform to study critical phenomena theoretically and experimentally, both in and out of equilibrium. {\it Note added.}---A related work addressing a similar topic, but in the Lipkin-Meshkov-Glick model, appeared on arXiv very recently \cite{Defenu2018}. \section*{Acknowledgement} This work is supported by the National Basic Research Program of China (973 program) (No. 2013CB922004), NSFC (No. 11574100, No. 91636213 and No. 11747605). S.Y. is supported in part by the China Postdoctoral Science Foundation (Grant No. 2017M620035).
\section{Introduction} \label{sect:Einleitung} Many imaging and data analysis problems in the applied sciences lead to the numerical task of parameter identification in exponential sums $\sum_{j=1}^M c_j \textnormal{e}^{-2\pi i\langle t_j,\cdot\rangle}$. For sparse exponential sums, i.e., for small $M$, Prony's method enables the identification of the parameters $\{t_j\}_{j=1}^M\subset \ensuremath{\mathbb{R}}^d$ and contributions $\{c_j\}_{j=1}^M\subset\ensuremath{\mathbb{C}}$ from relatively few sampling values, see e.g.~\cite{Potts:2010ko,Potts:2013vb} and references therein. The most feasible implementations for $d=1$ are based on the eigenvalue analysis of the associated Prony matrix, see e.g.~\cite{Beinert:2017gy,PP13}. The principles of the multivariate setting have been examined in \cite{KPRO16,Kunis:by,AnCa17,Mo18}, for instance, but the associated numerical schemes have not been studied extensively yet. The works \cite{Sa18,DiIs17,PoTa13} describe multivariate Prony methods that are based on finding zeros of several univariate or multivariate polynomials. We shall completely circumvent this algebraic geometry problem by developing a numerical scheme based on a randomized multivariate matrix pencil method. We construct matrices $S_1,\ldots,S_d$ from the sampling values, so that their simultaneous diagonalization yields the parameters $\{t_j\}_{j=1}^M$. Since $S_1,\ldots,S_d$ are not normal, standard numerical algorithms for simultaneous diagonalization are not available, cf.~\cite{Bunse-Gerstner:1993jy,Cardoso:1996ck,Golub:1996fk,Kressner:2005sp}. To circumvent this problem, we derive the joint eigenbasis from the eigendecomposition of a single matrix that is a random linear combination of $S_1,\ldots,S_d$.
While \cite{AnCa17} diagonalizes $S_1$ and hopes for simple eigenvalues, the recent paper \cite[Alg.~3.1]{Mo18} and the algorithm introduced in \cite{SaUsCo17} also use the above random linear combination and argue that the eigenvalues are generically simple. Whereas the authors of \cite{SaUsCo17} focus on analyzing the influence of perturbations on their multivariate ESPRIT method, here, for the new multivariate matrix pencil method, we describe the use of a random linear combination of $S_1, \dots, S_d$ in more detail and quantify the influence of the minimal separation of $\{t_j\}_{j=1}^M$ on the eigendecomposition of the random matrix. To check its feasibility, our methodology is applied to analyze fluorescence microscopy images. We cast the problem of locating protein markers as a parameter identification problem in exponential sums. Due to its analytic roots, Prony's method enables the identification of locations at the subpixel scale, sometimes referred to as superresolution fluorescence microscopy, cf.~\cite{Studer:2012oq}. The results on experimental fluorescence images show that our scheme is numerically feasible. The outline is as follows: In Section \ref{sect:pre} we develop our numerical scheme. The approach of simultaneous diagonalization to identify $\{t_j\}_{j=1}^M$ is presented in Section \ref{sec:sim}. The problem of simultaneous diagonalization is reduced to the diagonalization of a single random matrix in Section \ref{sec:single}, where we examine the influence of the minimal separation of the parameters $\{t_j\}_{j=1}^M$. Our new scheme is applied to synthetic and to experimental fluorescence microscopy data in Section \ref{sec:appl}.
\section{Reconstruction of sparse exponential sums from samples}\label{sect:pre} Let $\{t_j\}_{j=1}^M\subset [0,1)^d$ always denote $M$ pairwise different $d$-dimensional parameters and consider the exponential sum \begin{equation}\label{eq:fund prob samp} f(k) = \sum_{j=1}^M c_j \textnormal{e}^{-2\pi \mathrm i\langle t_j,k\rangle},\quad k\in\ensuremath{\mathbb{Z}}^d, \end{equation} with nonzero coefficients $\{c_j\}_{j=1}^M\subset\ensuremath{\mathbb{C}}\backslash \{0\}$. Our aim is to identify the parameters $\{t_j\}_{j=1}^M$ and coefficients $\{c_j\}_{j=1}^M$ from sampling values $\{f(k)\}_{k\in I}$ with suitable $I\subset \ensuremath{\mathbb{Z}}^d$. \subsection{Reconstruction by simultaneous diagonalization}\label{sec:sim} For $n\in\ensuremath{\mathbb{N}}$, let $I_n:=\{0,\dots,n\}^d$ and select a fixed ordering of the elements in $I_n$. Knowledge of the sampling values of $f$ on the difference set $I:=I_{n+1}-I_n$ enables us to build the matrices \begin{equation*} T:= \left(f(k-l)\right)_{k,l\in I_n},\qquad T_{\ell}:=(f(k-l+e_{\ell}))_{k,l\in I_n}, \quad\ell=1,\ldots,d. \end{equation*} If $T$ has rank $M$, then we compute the reduced singular value decomposition \begin{equation*} T=U \Sigma V^*, \end{equation*} where $\Sigma\in\ensuremath{\mathbb{R}}^{M\times M}$ is positive definite and $U\in\ensuremath{\mathbb{C}}^{N\times M}$, $V\in\ensuremath{\mathbb{C}}^{N\times M}$ satisfy $U^*U=V^*V=\id\in\ensuremath{\mathbb{R}}^{M\times M}$ with $N:=\# I_n =(n+1)^d$. Therefore, we can define the set of $M\times M$ matrices \begin{equation}\label{eq:S def} S_{\ell}:=U^* T_{\ell} V \Sigma^{-1},\quad \ell=1,\ldots,d. \end{equation} These matrices turn out to be simultaneously diagonalizable, cf.~Theorem \ref{th:22}, which shall enable us to identify the vectors $\{t_j\}_{j=1}^M$. In the following theorem, $K_d$ denotes an absolute constant that only depends on $d$ and is further specified in \cite{Kunis:by,KPRO16}.
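For concreteness, the construction above can be carried out numerically in a few lines. The following Python/NumPy sketch uses illustrative choices ($d=2$, $M=3$, $n=3$, exact noise-free samples; all variable names are ours) to assemble $T$, $T_\ell$ and the matrices $S_\ell$ of \eqref{eq:S def}:

```python
import itertools
import numpy as np

rng = np.random.default_rng(1)
d, M, n = 2, 3, 3                                   # illustrative sizes
t = rng.random((M, d))                              # parameters t_j in [0,1)^d
c = rng.normal(size=M) + 1j * rng.normal(size=M)    # nonzero coefficients c_j

def f(k):
    """Exponential sum f(k) = sum_j c_j e^{-2 pi i <t_j, k>}."""
    return np.sum(c * np.exp(-2j * np.pi * (t @ np.asarray(k))))

I_n = list(itertools.product(range(n + 1), repeat=d))   # fixed ordering of I_n
N = len(I_n)                                            # N = (n+1)^d
e = np.eye(d, dtype=int)

T = np.array([[f(np.subtract(k, l)) for l in I_n] for k in I_n])
T_ell = [np.array([[f(np.subtract(k, l) + e[ell]) for l in I_n] for k in I_n])
         for ell in range(d)]

# reduced SVD, truncated to the (numerical) rank M
U, s, Vh = np.linalg.svd(T)
U, Sigma, V = U[:, :M], np.diag(s[:M]), Vh[:M, :].conj().T
S = [U.conj().T @ T_ell[ell] @ V @ np.linalg.inv(Sigma) for ell in range(d)]
```

In this exact-data setting the eigenvalues of each $S_\ell$ coincide with the entries $\langle z_j,e_\ell\rangle=\textnormal{e}^{-2\pi i t_{j,\ell}}$, in line with Theorem \ref{th:22} below.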
We also make use of \begin{equation*} z_j:=\textnormal{e}^{-2\pi i t_j}:=(\textnormal{e}^{-2\pi i t_{j,1}},\ldots,\textnormal{e}^{-2\pi i t_{j,d}}), \quad j=1,\ldots,M, \end{equation*} so that it is sufficient to reconstruct $\{z_j\}_{j=1}^M$ in order to identify $\{t_j\}_{j=1}^M$. \begin{thm}\label{th:22} If $n\geq \frac{K_d}{\min_{i\neq j} \|z_i - z_j\|}$, then $T$ has rank $M$ and $S_1,\ldots,S_d$ are simultaneously diagonalizable. Furthermore, any regular matrix $W$ that simultaneously diagonalizes $S_1,\ldots,S_d$ yields a permutation $\tau$ on $\{1,\ldots,M\}$ such that \begin{equation*} W^{-1} S_\ell W = \diag(\langle z_{\tau(1)},e_\ell\rangle,\ldots,\langle z_{\tau(M)},e_\ell\rangle),\quad \ell=1,\ldots,d. \end{equation*} \end{thm} \begin{proof} According to \cite{KPRO16}, $T$ always admits the factorization \begin{equation}\label{eq:factorT} T= A^* D A, \end{equation} where $A$ is the $M\times N$ multivariate complex Vandermonde matrix \begin{equation*} A=\big(z_j^k\big)_{\substack{j=1,\dots,M\\k\in I_n}}, \end{equation*} and $D=\diag(c_1,\ldots,c_M)$. The condition on $n$ implies that $A$ has full rank $M$, cf.~\cite{Kunis:by,KPRO16}. Hence, $T$ has indeed rank $M$ since all $c_1,\ldots,c_M$ are nonzero. We also deduce the factorization \begin{equation*} T_\ell = A^* D_\ell A,\quad \ell=1,\ldots,d, \end{equation*} where the diagonal matrix $D_\ell$ is given by \begin{equation*} D_\ell:=\diag(c_1\langle z_1,e_\ell\rangle,\ldots,c_M\langle z_M,e_\ell\rangle), \quad \ell=1,\ldots,d. \end{equation*} We shall now check that the specific matrix $W_0:=(AU)^*$ (which is not accessible to us) simultaneously diagonalizes $S_1,\ldots,S_d$. Indeed, by inserting the definitions, we obtain \begin{equation*} W_0^{-1} S_\ell W_0 = (AU)^{-*} U^* A^* D_\ell A V \Sigma^{-1} (AU)^*. \end{equation*} Note that the reduced singular value decomposition implies that both matrices, $AU$ and $AV$, are regular. 
Since $\Sigma = U^*TV = U^*A^*DAV$, we deduce $\Sigma^{-1}=(AV)^{-1} D^{-1}(AU)^{-*}$, which implies \begin{equation*} W_0^{-1} S_\ell W_0 =D_\ell D^{-1} = \diag(\langle z_1,e_\ell\rangle,\ldots,\langle z_{M},e_\ell\rangle),\quad \ell=1,\ldots,d, \end{equation*} so that $W_0$ simultaneously diagonalizes $S_1,\ldots,S_d$. Note that $W_0$ also diagonalizes any complex linear combination \begin{equation}\label{eq:Cmu} C_\mu:=\sum_{\ell=1}^d \overline{\mu}_\ell S_\ell,\quad \mu\in\ensuremath{\mathbb{C}}^d. \end{equation} Because of \begin{equation*} W_0^{-1} C_\mu W_0 = \diag\left(\sum_{\ell = 1}^d \bar \mu_\ell\langle z_1,e_\ell \rangle, \ldots,\sum_{\ell = 1}^d \bar \mu_\ell\langle z_M,e_\ell \rangle \right), \end{equation*} the eigenvalues $\lambda_1(\mu),\ldots,\lambda_M(\mu)$ of $C_\mu$ are \begin{equation*} \lambda_j(\mu)=\langle z_j,\mu\rangle \end{equation*} with the ordering induced by $W_0$. Since $\{t_j\}_{j=1}^M$ are pairwise different, so are $\{z_j\}_{j=1}^M$, and, hence, there is $\tilde{\mu}\in \S_\ensuremath{\mathbb{C}}^{d-1}=\{x\in\mathbb{C}^d:\|x\|=1\}$ such that $\langle z_i - z_j, \tilde \mu \rangle \neq 0$ for all $i\neq j$ and thus $\{\lambda_j(\tilde{\mu})\}_{j=1}^M$ are pairwise different. In other words, all eigenspaces of $C_{\tilde{\mu}}$ are $1$-dimensional. Any matrix $W=(w_1,\ldots,w_M)$ that simultaneously diagonalizes $S_1,\ldots,S_d$ also diagonalizes $C_{\tilde{\mu}}$. Thus, there is a permutation $\tau$ such that $w_{\tau(i)}$ spans the same space as the $i$-th column of $W_0$, which concludes the proof. \end{proof} According to Theorem \ref{th:22}, the diagonalization of $S_\ell$ encodes the $\ell$-th entry of a permutation of the vectors $\{z_j\}_{j=1}^M$. We require simultaneous diagonalization to ensure that these entries are associated with the same permutation across all $\ell=1,\ldots,d$. In general, the matrices $S_1,\ldots,S_d$ are not normal.
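The following self-contained Python sketch illustrates the single-random-combination strategy formalized in Corollary \ref{th:single} below. Here the matrices $S_\ell$ are synthetic stand-ins with a joint eigenbasis $W_0$ planted by hand for the illustration, not matrices computed from samples:

```python
import numpy as np

rng = np.random.default_rng(2)
d, M = 2, 4
t = rng.random((M, d))
z = np.exp(-2j * np.pi * t)                    # z_j = e^{-2 pi i t_j}

# planted joint eigenbasis W0 (generically non-orthogonal), so that the
# stand-ins S_ell = W0 diag(z_{1,ell},...,z_{M,ell}) W0^{-1} commute but
# are not normal, mimicking the matrices of the text
W0 = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
W0_inv = np.linalg.inv(W0)
S = [W0 @ np.diag(z[:, ell]) @ W0_inv for ell in range(d)]

# a single random linear combination C_mu; with probability one its
# eigenvalues are simple, and its eigenvectors diagonalize every S_ell
mu = rng.normal(size=d) + 1j * rng.normal(size=d)
mu /= np.linalg.norm(mu)
C = sum(np.conj(mu[ell]) * S[ell] for ell in range(d))
_, W = np.linalg.eig(C)
W_inv = np.linalg.inv(W)

# read off the ell-th coordinates of the (permuted) z_j from the diagonals
z_rec = np.stack([np.diagonal(W_inv @ S[ell] @ W).copy() for ell in range(d)],
                 axis=1)
t_rec = np.mod(-np.angle(z_rec) / (2 * np.pi), 1.0)   # recovered parameters
```

Since the planted $S_\ell$ are non-normal, orthogonal joint-diagonalization routines would not apply here, whereas a single eigendecomposition of $C_\mu$ suffices.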
Therefore, the numerical task of simultaneous diagonalization is difficult and many simultaneous diagonalization algorithms in the literature are not suitable, cf.~\cite{Bunse-Gerstner:1993jy,Cardoso:1996ck,Golub:1996fk,Kressner:2005sp}. We attempt to circumvent such issues by using $C_\mu$ from \eqref{eq:Cmu}, which shall enable us to restrict our diagonalization efforts to a single matrix: \begin{corollary}\label{th:single} If $\mu\in\ensuremath{\mathbb{C}}^d$ is such that $\lambda_1(\mu),\ldots,\lambda_M(\mu)$ are pairwise different, then any matrix $W$ that diagonalizes $C_\mu$ also simultaneously diagonalizes $S_1,\ldots,S_d$. \end{corollary} \begin{proof} The matrices $C_\mu, S_1,\ldots,S_d$ are simultaneously diagonalizable. The same arguments as in the proof of Theorem \ref{th:22} imply the assertion. \end{proof} According to Corollary \ref{th:single}, we aim to find $\mu\in\ensuremath{\mathbb{C}}^d$ such that $\lambda_1(\mu),\ldots,\lambda_M(\mu)$ are pairwise different. For a nonzero vector $z\in\ensuremath{\mathbb{C}}^d$, let $z^\perp$ denote the $(d-1)$-dimensional linear subspace of $\ensuremath{\mathbb{C}}^d$ orthogonal to $z$. The proof of Theorem \ref{th:22} reveals that \begin{equation}\label{eq:set} \big\{\mu \in\ensuremath{\mathbb{C}}^d : \lambda_1(\mu),\ldots,\lambda_M(\mu) \text{ are pairwise different} \big\} = \ensuremath{\mathbb{C}}^d\setminus \bigcup_{i\neq j}(z_i-z_j)^\perp. \end{equation} Hence, this set is the entire $\ensuremath{\mathbb{C}}^d$ except for at most $\binom{M}{2}$ many $(d-1)$-dimensional subspaces. \begin{example} Let $d=2$, $M=5$, and choose $t_1, \dots, t_5 \in [0,1)^2$ randomly. We construct $S_1, S_2 \in\mathbb C^{5\times 5}$ by \eqref{eq:S def}. Then we choose $\mu = (\mu_1, \mu_2)^\top \in\mathbb{S}_\ensuremath{\mathbb{C}}^1$ and construct $C_\mu = \bar\mu_1S_1 + \bar\mu_2 S_2$.
According to \eqref{eq:set}, we expect $\binom{5}{2} = 10$ great circles on $\mathbb S_\ensuremath{\mathbb{C}}^1$ with the property that choosing $\mu$ from one of those great circles results in a $C_\mu$ that has at least one eigenspace of dimension larger than one. For $\xi \in\mathbb C$ with $|\xi|=1$, the matrix $C_{\xi\mu}$ differs from $C_\mu$ only by a global phase factor. This shows that multiplying $\mu$ by a global phase $\xi$ does not change the moduli of the pairwise differences of the eigenvalues of $C_\mu$; therefore, for visualization, we can use the Hopf fibration to identify each such great circle on $\mathbb S_\ensuremath{\mathbb{C}}^1$ with a single point on $\mathbb S^2$. Indeed, we can observe that the minimal distance of any two eigenvalues of $C_\mu$ is nonzero on $\mathbb{S}^2$ except for $10$ points, see Figure \ref{fig:MuAbhaengigkeit}(a). Note that we only see $8$ of those $10$ points in Figure \ref{fig:MuAbhaengigkeit}(a); the other $2$ are on the back side of the sphere. For a visual illustration of the expected great circles, we now switch to the real case, choose $d=3$, $M=5$, and restrict $\mu$ to the real sphere $\mathbb S^2$. In Figure \ref{fig:MuAbhaengigkeit}(b) we see $10$ great circles on $\mathbb S^2$ for which $C_\mu$ has eigenspaces of dimension larger than one. Observe that away from those great circles, the minimal distance of any two eigenvalues of $C_\mu$ rapidly increases.
\begin{figure} \subfigure[$S_1, S_2 \in\mathbb C^{5 \times 5}$, $\mu \in \mathbb S_\ensuremath{\mathbb{C}}^1$ ]{ \includegraphics[width=0.45\textwidth]{CvsR.pdf} } \subfigure[$d=3$, $M=5$, and $\mu \in \mathbb S^2$ ]{ \includegraphics[width=0.45\textwidth]{Sphaere8EigBar.pdf} } \caption{Visualization of the smallest distance of any two eigenvalues of $C_\mu$.} \label{fig:MuAbhaengigkeit} \end{figure} \end{example} \begin{remark} Our approach to the simultaneous diagonalization of $S_1,\ldots,S_d$ suggested in Corollary \ref{th:single} requires our present setting, in which $\{z_j\}_{j=1}^M$ are pairwise different. It does not apply to the problem of simultaneous diagonalization in general. \end{remark} \subsection{Simultaneous diagonalization by random linear combinations}\label{sec:single} The present section is dedicated to quantifying the difference $\lambda_i(\mu)-\lambda_j(\mu)$ in relation to the difference $z_i-z_j$. If $\mu\in\S_\ensuremath{\mathbb{C}}^{d-1}$ is a random vector, distributed according to the unitarily invariant probability measure on $\S_\ensuremath{\mathbb{C}}^{d-1}$, then \begin{equation*} \left(\mathbb{E}|\lambda_i(\mu)-\lambda_j(\mu)|^2\right)^{1/2} = \frac{1}{\sqrt{d}}\|z_i-z_j\|. \end{equation*} The following result provides a more quantitative analysis: \begin{thm}\label{th:stab oder so} Let $i\neq j$ be fixed and suppose $\epsilon\in[0,1]$. If $\mu\in\S_\ensuremath{\mathbb{C}}^{d-1}$ is a random vector, distributed according to the unitarily invariant probability measure on $\S_\ensuremath{\mathbb{C}}^{d-1}$, then the probability that \begin{equation}\label{eq:mu satifsies} |\lambda_i(\mu)-\lambda_j(\mu)| < \epsilon \|z_i-z_j\| \end{equation} holds is at most $2\sqrt{\frac{d}{\pi}}\epsilon$.
\end{thm} Theorem \ref{th:stab oder so} immediately implies that the probability that any of the inequalities \begin{equation}\label{eq:mu all} |\lambda_i(\mu)-\lambda_j(\mu)|\geq \epsilon \|z_i-z_j\|, \quad \forall i\neq j, \end{equation} is violated is at most $\binom{M}{2} 2\sqrt{\frac{d}{\pi}}\epsilon$. In other words, for a single random $\mu$, all $\binom{M}{2}$ inequalities in \eqref{eq:mu all} hold simultaneously with probability at least $1-\binom{M}{2} 2\sqrt{\frac{d}{\pi}}\epsilon$, which is close to one whenever $\epsilon$ is small compared to $M^{-2}$. \begin{proof}[Proof of Theorem \ref{th:stab oder so}] The complex sphere $\S_\ensuremath{\mathbb{C}}^{d-1}$ admits the standard identification with the real sphere $\S^{2d-1}$ by $x\mapsto \Big(\begin{smallmatrix}\Real(x)\\ \Imag(x) \end{smallmatrix}\Big)$, and $\Big(\begin{smallmatrix}\Real(\mu)\\ \Imag(\mu) \end{smallmatrix}\Big)$ is distributed according to the orthogonally invariant probability measure on $\S^{2d-1}$, the latter being the standard normalized surface measure. Let $y:=\frac{z_i-z_j}{\|z_i-z_j\|}\in \mathbb{S}_\ensuremath{\mathbb{C}}^{d-1}$, so that $|\lambda_i(\mu)-\lambda_j(\mu)|/\|z_i-z_j\|=|\langle y,\mu\rangle|$. Since \begin{equation}\label{eq:Real Im} \Big| \left\langle \big(\begin{smallmatrix} \Real(y)\\ \Imag(y) \end{smallmatrix}\big), \big(\begin{smallmatrix} \Real(\mu)\\ \Imag(\mu) \end{smallmatrix}\big)\right\rangle\Big| = |\Real\big(\langle y,\mu\rangle\big)| \leq |\langle y,\mu\rangle|, \end{equation} we obtain an upper bound on the probability of \eqref{eq:mu satifsies} by estimating the probability of the event \begin{equation}\label{eq:dfrt} \Big| \left\langle \big(\begin{smallmatrix} \Real(y)\\ \Imag(y) \end{smallmatrix}\big), \big(\begin{smallmatrix} \Real(\mu)\\ \Imag(\mu) \end{smallmatrix}\big)\right\rangle\Big| \leq \epsilon.
\end{equation} Due to the orthogonal invariance of the surface measure on $\S^{2d-1}$, the distribution of the left-hand-side in \eqref{eq:dfrt} does not depend on the special choice of $y\in\mathbb{S}_\ensuremath{\mathbb{C}}^{d-1}$, so that we can simply assume that $\Big(\begin{smallmatrix}\Real(y)\\ \Imag(y) \end{smallmatrix}\Big)$ is the north pole. The inequality \eqref{eq:dfrt} reduces to $-\epsilon\leq \Real(\mu_1)\leq \epsilon$, hence, describes the complement of two opposing spherical caps in $\S^{2d-1}$. This ``equatorial band'' has measure \begin{equation*} 1-\mathcal{I}_{[1-\epsilon^2]}(d-\frac{1}{2},\frac{1}{2}) = \mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2}), \end{equation*} see, for instance, \cite{Li:2011id}, where $\mathcal{I}_{[x]}(a,b)$ is the cumulative distribution function of the Beta distribution, i.e., \begin{equation*} \mathcal{I}_{[x]}(a,b) = \frac{\int_0^x t^{a-1}(1-t)^{b-1}dt }{\Beta(a,b)},\qquad \Beta(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}. \end{equation*} For $d=1$, we observe \begin{equation*} \mathcal{I}_{[\epsilon^2]}(1/2,1/2) = \frac{2\arcsin(\epsilon)}{\pi} \leq \frac{2}{\sqrt{\pi}}\epsilon. \end{equation*} Suppose now $d\geq 2$ and define \begin{equation*} f(x):=2\sqrt{x} - \mathcal{I}_{[x]}(1/2,d-1/2) \Beta(1/2,d-1/2). \end{equation*} A short calculation yields that its derivative satisfies \begin{equation*} f'(x)=\frac{1-(1-x)^{d-3/2}}{\sqrt{x}}\geq 0,\quad x\in[0,1]. \end{equation*} Since $f(0)=0$, we obtain \begin{equation} \mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2}) \leq \frac{2\epsilon}{\Beta(1/2,d-1/2)},\quad\epsilon\in[0,1]. \end{equation} The observation $1/\Beta(1/2,d-1/2)\leq \sqrt{d/\pi}$ concludes the proof. \end{proof} \begin{remark} A short calculation leads to \begin{equation*} \mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2}) = \frac{2}{\pi} \Big[ \arcsin(\epsilon) + \epsilon \sum_{k=2}^{d} \frac{4^{k-2}(k-2)!^2 }{(2k-3)(2k-4)!} (1-\epsilon^2)^{k-3/2} \Big]. 
\end{equation*} One then deduces directly that, for fixed $d$ and small $\epsilon$, the term $\mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2}) $ is of the order $\epsilon$. \end{remark} Theorem \ref{th:22}, Corollary \ref{th:single}, and Theorem \ref{th:stab oder so} enable us to determine $z_{\tau(1)},\ldots,z_{\tau(M)}$. The actual parameters $t_{\tau(j)}$ are computed as the principal values of $\log(z_{\tau(j)})$. The coefficients $c_{\tau(1)},\ldots,c_{\tau(M)}$ can be determined by solving the linear system $T=A^*DA$ for $D=\diag(c_{\tau(1)},\ldots,c_{\tau(M)})$ by the least squares method. We have summarized these steps in Algorithm \ref{alg_1}. \begin{algorithm} \caption{Prony's method using the multivariate matrix pencil approach}\label{alg_1} \begin{algorithmic}[1] \State \textbf{input} $f(k)$, $k\in I$. \State Compute the reduced singular value decomposition of $T$. \State Build the matrices $S_1,\ldots,S_d$. \State Choose random $\mu\in\S_\ensuremath{\mathbb{C}}^{d-1}$ and compute a matrix $W$ that diagonalizes $C_\mu$. \State Use $W$ to simultaneously diagonalize $S_1,\ldots,S_d$ and reconstruct $z_{\tau(1)},\ldots,z_{\tau(M)}$. \State Compute $t_{\tau(j)}$ as the principal value of $\log(z_{\tau(j)})$, $j=1,\ldots,M$. \State Solve $\mathrm{argmin}_{c}\,\,\|A^* c- f\|_2$ to recover $c_{\tau(1)},\ldots,c_{\tau(M)}$. \State \textbf{return} $t_{\tau(1)},\ldots,t_{\tau(M)}$ and $c_{\tau(1)},\ldots,c_{\tau(M)}$. \end{algorithmic} \end{algorithm} \section{Application in superresolution microscopy}\label{sec:appl} \subsection{Mathematical model} In fluorescence microscopy one puts a fluorescence marker on proteins and stimulates them with a laser. 
In accordance with the fluorescent microscope's resolution limits, proteins are modeled as point sources, cf.~\cite{Studer:2012oq}, so that the probe is considered a tempered distribution \begin{equation}\label{eq:dira} G = \sum_{j=1}^M c_j \delta_{t_j}, \end{equation} on $\ensuremath{\mathbb{R}}^d$, where $\{t_j\}_{j=1}^M\subset [0,1)^d$ is associated to the protein locations and $\delta_{t_j}$ denotes the Dirac delta function with center $t_j$. Let $\mathcal{F}$ denote the Fourier transform on the space of tempered distributions on $\ensuremath{\mathbb{R}}^d$. Then $\mathcal{F}(G)$ is an exponential sum \begin{equation}\label{eq:ft sum} \mathcal{F}(G)=\sum_{j=1}^M c_j \textnormal{e}^{-2\pi i\langle t_j,\cdot\rangle}. \end{equation} The actual measurements $g$ are the convolution of $G$ with some smooth and sufficiently fast decaying function $\varphi$, \begin{equation*} g=G*\varphi = \sum_{j=1}^M c_j \varphi(\cdot-t_j). \end{equation*} Usually, $\varphi$ is modeled as a Gaussian with known parameters determined by the camera system. In order to determine the locations $\{t_j\}_{j=1}^M$ and the contributions $\{c_j\}_{j=1}^M$, suppose we have access to the Fourier transform of the measurements, \begin{equation*} \mathcal{F}(g) = \mathcal{F}(G) \mathcal{F}(\varphi). \end{equation*} Since $\varphi$ is known, let us also assume that we have access to $\mathcal{F}(\varphi)$. If $\varphi$ is a Gaussian, for instance, we know $\mathcal{F}(\varphi)$ analytically. We now look for some sampling set $I\subset \ensuremath{\mathbb{Z}}^d$, where $\mathcal{F}(\varphi)$ does not vanish, and are able to determine the right-hand-side of \begin{equation}\label{eq:form ert} \mathcal{F}(G)(k)=\mathcal{F}(g)(k) / \mathcal{F}(\varphi)(k), \quad k\in I. 
\end{equation} Combining \eqref{eq:ft sum} with \eqref{eq:form ert} leads to the sampling problem \eqref{eq:fund prob samp} discussed in the previous sections, i.e., \begin{equation}\label{eq:eq finale} \sum_{j=1}^M c_j \textnormal{e}^{-2\pi i \langle t_j,k\rangle } = f(k),\qquad k\in I, \end{equation} with $f(k):=\mathcal{F}(g)(k) / \mathcal{F}(\varphi)(k)$. The parameters $\{t_j\}_{j=1}^M$ and $\{c_j\}_{j=1}^M$ can now be determined by Algorithm \ref{alg_1} in principle. Note that the above derivations in this section have also been used in \cite{PePoTa11} in combination with the univariate Prony's method. In practice though, we are not able to numerically compute the Fourier transform of $g$ directly, so that the right-hand-side of \eqref{eq:eq finale} is not readily available. Aiming at the application of the discrete Fourier transform (DFT), we recognize that sufficient decay of $\varphi$ implies $g\in L^1(\ensuremath{\mathbb{R}}^d)$, so that its periodization \begin{equation*} g_{\per} : = \sum_{l\in\ensuremath{\mathbb{Z}}^d} g(\cdot+l) \end{equation*} converges pointwise almost everywhere towards a function $g_{\per}\in L^1(\mathbb{T}^d)$, where $\mathbb{T}^d\simeq [0,1)^d$ is the $d$-dimensional torus. Let $\hat{g}_{\per}(k)$ denote the $k$-th Fourier coefficient of $g_{\per}$. The Poisson formula yields \begin{equation*} \mathcal{F}(g)(k) = \hat{g}_{\per}(k),\quad k\in I. \end{equation*} Thus, \eqref{eq:eq finale} can be evaluated by first computing the periodization $g_{\per}$, so that its Fourier coefficients yield \begin{equation}\label{eq:rhs final} \sum_{j=1}^M c_j \textnormal{e}^{-2\pi \mathrm i \langle t_j,k\rangle } = \hat{g}_{\per}(k) / \mathcal{F}(\varphi)(k) ,\qquad k\in I. \end{equation} Numerically, the DFT enables the approximation of the Fourier coefficients $\hat{g}_{\per}(k)$, $k\in I$, from samples of $g_{\per}$. It should be mentioned that all numerical experiments were realized in Python on an Intel~i7, 8GByte, 3GHz, macOS 10.12. 
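The passage from spatial samples to the right-hand side of \eqref{eq:rhs final} can be sketched in a few lines. The example below is a one-dimensional illustration with hypothetical locations and coefficients (not the parameters of our experiments): it samples the periodization of a Gaussian mixture on a grid, approximates the Fourier coefficients $\hat{g}_{\per}(k)$ by the DFT, and divides by the analytic Gaussian transform $\mathcal{F}(\varphi)(k)=\sqrt{\pi/b}\,\mathrm e^{-\pi^2k^2/b}$:

```python
import numpy as np

b = 150.0                      # Gaussian width parameter (assumed)
ts = np.array([0.40, 0.60])    # hypothetical source locations in [0,1)
cs = np.array([1.0, 0.7])      # hypothetical coefficients
N = 64                         # number of samples of the periodization

x = np.arange(N) / N
g_per = np.zeros(N)
for t, c in zip(ts, cs):
    for l in range(-2, 3):     # shifts l = -2..2 suffice for this decay
        g_per += c * np.exp(-b * (x - t + l) ** 2)

ghat = np.fft.fft(g_per) / N   # ghat[k % N] approximates the k-th Fourier coefficient

def phi_hat(k):                # analytic Fourier transform of exp(-b x^2)
    return np.sqrt(np.pi / b) * np.exp(-(np.pi * k) ** 2 / b)

ks = np.arange(-4, 5)
f = np.array([ghat[k % N] / phi_hat(k) for k in ks])
exact = np.array([np.sum(cs * np.exp(-2j * np.pi * ts * k)) for k in ks])
print(np.max(np.abs(f - exact)))   # small: the recovered samples match the exponential sum
```

Because the Gaussian decays so quickly, both the truncated periodization and the aliasing error of the DFT are negligible here, and the recovered values $f(k)$ agree with the exponential sum to near machine precision.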
\subsection{Numerical results on synthetic data} In our numerical experiments, we shall apply an implementation of the DFT to compute the discrete Fourier transform of samples of $g_{\per}$. The sampling rate of $g$, and hence of $g_{\per}$, is determined by the pixel resolution. For both synthetic and experimental fluorescence microscopy data, we choose $\varphi(\cdot) = \mathrm e^{-b \|\cdot\|^2}$ with a parameter $b$ adjusted to the camera system. In particular, the values of $\mathcal{F}(\varphi)$ are available in analytic form. We first test our method on synthetic data in Figure \ref{fig:1} with \begin{align*} t_1 &= \left(\tfrac{2}{5}, \tfrac{2}{5}\right), &c_1 &= 1,& b&=150,\\ t_2 &= \left(\tfrac{2}{5}, \tfrac{3}{5}\right), & c_2 &= 1, \\ t_3 &= \left(\tfrac{3}{5}, \tfrac{2}{5}\right), & c_3& = 1. \end{align*} The measurements $g$ are exact in a first experiment and, in a second experiment, corrupted by additive Gaussian noise with a signal-to-noise ratio of $\mathrm{SNR} = 2.554$, cf.~Figure \ref{fig:1}. For our computations we choose, if not stated otherwise, $n=4$, so that $I=\{-4,\ldots,5\}^2$ and $T$ is an $N\times N$ Toeplitz matrix with $N = 25$. These matrix dimensions show that our methodology is numerically feasible. By examining significant drops in the singular values of $T$, we determine $M=3$ for the synthetic data. The reconstructed locations $\tilde{t}_1,\tilde{t}_2,\tilde{t}_3$ satisfy $\|t_j- \tilde{t}_j\|\leq 1.88\cdot 10^{-3}$, for $j=1,2,3$, in the noisy regime, and coincide with the correct locations up to machine precision in the noise-free regime, see Figure \ref{fig:1}. It is important to note that our approach does not require the parameters $\{t_j\}_{j=1}^M$ to lie on the pixel grid. The pixel grid is only used to approximate $\hat{g}_{\per}(k)$, $k\in I$, by the DFT to determine the right-hand-side in \eqref{eq:rhs final}.
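The subpixel issue can be made concrete with a small experiment: for two overlapping Gaussians, the local maxima of their sum do not coincide with the true centers. The sketch below uses the one-dimensional locations $0.44$ and $0.56$ (the same values as in Figure \ref{fig:SubpixelNeed}) together with the width parameter $b=150$, the latter being our assumption:

```python
import numpy as np

b = 150.0            # Gaussian width parameter (assumed, as in the synthetic data)
t1, t2 = 0.44, 0.56  # true locations of the two point sources

x = np.linspace(0.0, 1.0, 20001)
f = np.exp(-b * (x - t1) ** 2) + np.exp(-b * (x - t2) ** 2)

# interior local maxima of the superposition
interior = (f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])
peaks = x[1:-1][interior]

print(peaks)  # two maxima, pulled inward from the true locations 0.44 and 0.56
```

The two maxima are displaced towards each other by roughly $0.03$ in this configuration, which is why spatial-domain peak detection is biased for nearby sources.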
\begin{figure} \subfigure[Blue stars indicate the three identified locations within noiseless synthetic data.]{ \includegraphics[width=.45\textwidth]{ThreeBumps.pdf} \label{fig:EchteDaten}}\hfill \subfigure[Good location identification within synthetic data corrupted by additive Gaussian noise with $\mathrm{SNR}=2.554$.]{ \includegraphics[width=.45\textwidth]{ThreeNoisyBumps.pdf} \label{fig:ModellAusDaten} } \caption{In noiseless synthetic data and in the presence of additive Gaussian noise in the spatial domain, our proposed algorithm manages to find the locations $t_1, t_2, t_3$ with reasonable accuracy.} \label{fig:1} \end{figure} Indeed, the locations that we compute do not lie on the pixel grid, so we are identifying locations on the subpixel level. This is an important advantage we gain by performing our computations in the Fourier domain. Figure \ref{fig:SubpixelNeed} compares the true locations $t_1 = 0.44$, $t_2 = 0.56$ of two one-dimensional Gaussians with the local maxima of their sum; for ease of illustration we use a one-dimensional scenario in Figure \ref{fig:SubpixelNeed}. Even though this effect is negligible when $\|t_1 - t_2\|_2 \gg 0$, it entails miscalculations when the positions $t_1, t_2$ of two proteins are close to each other. Consider a movie in which each frame is a picture as in Figure \ref{fig:ModellAusDaten} and the recovered locations $t_j$ are used to compute the movement speed of each protein. If this effect is not accounted for, one would falsely compute an accelerated attraction and a longer contact phase for two approaching proteins. \begin{figure} \includegraphics[width=.75\textwidth]{NeedForSubpixelLocationCrop.pdf} \caption{The red crosses show the true locations $t_1= 0.44, t_2 = 0.56$ of two one-dimensional Gaussians, each depicted as a dotted line.
The red bars, however, show the local maxima of the sum of these Gaussians; the sum itself is shown as a solid line.} \label{fig:SubpixelNeed} \end{figure} To illustrate potential numerical issues when the measurements are corrupted by noise, i.e., when $\tilde{g}:=g+\varepsilon$ is measured in place of $g$, we show in Figure \ref{fig:NoiseVsNoNoise} the real parts of $\hat{g}_{\per}(k)$, $\hat{\tilde{g}}_{\per}(k)$, approximated by the DFT, and $\mathcal{F}(\varphi)(k)=\hat{\varphi}_{\per}(k)$, as well as the respective ratios on the line $k_1=0$ and $k_2=-15,\ldots,15$. Even though we are dealing with images of size $31\times 31$ pixels, the frequency data of the noisy ratio $\hat{\tilde{g}}_{\per}(k)/\hat{\varphi}_{\per}(k)$ seem reliable only close to the center. While $\hat{\varphi}_{\per}(k)$ decays with growing $k$, the noise keeps $\hat{\tilde{g}}_{\per}(k)$ from decaying, so that the ratio becomes unreasonably large. Therefore, we must restrict $n$ depending on the noise level, and $n=4$ seems to work in our synthetic data with fixed $\mathrm{SNR}$ as well as in our fluorescence microscopy data. Figure \ref{fig:WeitTeilen} shows the ratios $\hat{g}_{\per}(k)/\hat{\varphi}_{\per}(k)$ for $k\in \{-4,\ldots,5\}^2$. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{NoiseVsNoNoise.pdf} \caption{The horizontal axis corresponds to $k_1=0$ and $k_2=-15,\ldots,15$.
The decay of the Fourier coefficients $\hat{\tilde{g}}_{\per}$ stagnates in the presence of noise, so that the ratio $\hat{\tilde{g}}_{\per}(k)/\hat{\varphi}_{\per}(k)$ is unbounded away from the center.} \label{fig:NoiseVsNoNoise} \end{figure} \begin{figure} \subfigure[Real part]{ \includegraphics[width=0.45\textwidth]{GDurchPhiReal.pdf} } \subfigure[Imaginary part]{ \includegraphics[width=0.45\textwidth]{GDurchPhiImaginaer.pdf} } \caption{$\hat{g}_{\per}(k)/\hat{\varphi}_{\per}(k)$ on $k\in \{-4,\ldots,5\}^2$.} \label{fig:WeitTeilen} \end{figure} Theorem \ref{th:22} requires $n$ to be larger if the minimal separation distance \begin{equation*} q:=\min_{j\neq i}\|z_j-z_i\| \end{equation*} becomes smaller. In Figure \ref{fig:EinflussVonQ} we illustrate this relation by two examples with noisy synthetic data, one for $q_1 = 0.283$ and the other for $q_2 = 0.057$. For $n=1$ and $n=4$, the locations can still be recovered reasonably well for $q_1$. In the case $q_2$, the choice $n=1$ fails to recover the locations that are close to each other but $n=4$ is successful. \begin{figure} \subfigure[$\min_{i\neq j}\|z_i-z_j\|=0.283$: locations are recovered with error margins $\leq 7.1\cdot 10^{-3}$ and $\leq 2.8\cdot 10^{-3}$ for $n=1$ and $n=4$, respectively.]{ \includegraphics[width=0.45\textwidth]{QGross.pdf} } \hfill \subfigure[$\min_{i\neq j}\|z_i-z_j\|= 0.057$: $n=1$ fails. Locations are correctly recovered for $n=4$ with error $\leq 1\cdot 10^{-2}$.]{ \includegraphics[width=0.45\textwidth]{QKlein.pdf} } \caption{Noisy synthetic data with $\mathrm{SNR}=2.554$. The light blue circles show the true locations $t_1, t_2, t_3$. The blue stars show the reconstruction with $n=1$, the magenta crosses show the reconstruction with $n=4$. In accordance with the ``spirit'' of the requirements on $n$ in Theorem \ref{th:22}, well-separated true locations allow for small $n$. 
If locations are not well-separated, then $n=1$ fails but the choice $n=4$ enables reconstruction.} \label{fig:EinflussVonQ} \end{figure} \subsection{Numerical results on fluorescence microscopy data} The cell-surface receptor IFNAR2 (type I interferon beta-subunit) of living cells was labelled with biofunctionalized quantum dots (QD605, Cat. No. Q21501MP, Invitrogen \cite{YoWiRiBeLiPi13}). These nanoparticles are small in size (hydrodynamic radius of 15-21 nm) but show an extraordinarily high fluorescence signal. Single-molecule imaging was done on an inverted TIRF (total internal reflection fluorescence) microscope (Olympus IX71) with a scientific-grade digital camera (Hamamatsu ORCA Flash 4.0). After optical magnification (150xTIRF objective UAPO; NA, 1.45; Olympus) and pixel-binning, the final pixel size in the image plane was calculated to be 87 nm. To achieve a high signal-to-noise ratio, the signal integration time was set to 32 ms. The decay of the singular values of $T$ with $n=4$ for the experimental fluorescence microscopy data in Figure \ref{fig:real 1}(a) suggests $M=8$. This yields $C_\mu,S_1,S_2 \in\mathbb C^{8\times 8}$ and our algorithm finds the parameters $t_j, c_j$, $j=1,\ldots,8$, in less than a millisecond. Note in Figure \ref{fig:real 1}(b) that our algorithm, somewhat surprisingly, successfully identifies proteins at the boundary of the image, even though one would expect artifacts due to periodization issues. However, the identified translations close to the boundary are not very reliable and will need a post- or pre-processing step in a more elaborate analysis in practice.
\begin{figure} \subfigure[]{ \includegraphics[width=0.45\textwidth]{Frame162.pdf} } \hfill \subfigure[]{ \includegraphics[width=0.45\textwidth]{Frame92.pdf} } \caption{Experimental data with blue stars marking identified locations.} \label{fig:real 1} \end{figure} \section*{Conclusion} We proposed an algorithm that finds multivariate frequencies from structured samples of a finite sum of multivariate exponentials. Our proposed algorithm is a multivariate generalization of a matrix pencil method and is based on simultaneous diagonalization of a pencil of non-normal matrices. We also studied a method to simultaneously diagonalize the occurring non-normal matrices by analyzing random linear combinations. The influence of randomness was quantified in relation to the minimal separation of the exponential parameters. We successfully tested our algorithm on experimental data from fluorescence microscopy. \section*{Acknowledgements} The authors have been partially funded by WWTF through project VRG12-009, by DAAD through P.R.I.M.E.~57338904, by FWF project P30148, and by DFG-SFB944.
\section{Introduction} The second Gaia data release (Gaia DR2) contains astrometric data for 1.693 billion sources from magnitude 3 to 21, based on the observations of the European Space Agency Gaia satellite during the 22-month period between 25 July 2014 and 23 May 2016 \citep{Lindegren2018Gaia}, hereafter cited as the Gaia DR2 astrometry paper. Among all the sources with a full 5-parameter astrometric solution, DR2 provides more than 550 000 quasars, obtained from a positional cross-match with the ICRF3-prototype and the AllWISE AGN catalogues. These quasars are used to represent a kinematically non-rotating reference frame (the celestial reference frame of Gaia, or Gaia-CRF2) in the optical domain \citep{Mignard2018Gaia}, hereafter denoted as the Gaia-CRF2 paper. Quasars (or QSOs) are extremely distant and small in apparent size. They are essential for absolute astrometry in the sense that they exhibit no significant parallax or proper motion. Thus, they are ideal objects to investigate the properties of an astrometric solution. Besides the AllWISE AGN catalog \citep{Secrest2015Identification}, there are other catalogues that can enlarge the sample of quasars in Gaia DR2, such as the Large Quasar Astrometric Catalogue (LQAC) \citep{Souchay2015The}, the spectroscopically confirmed quasars in the SDSS-DR14 Quasar Catalog \citep{P2017The} and the spectroscopically confirmed quasars in LAMOST DR5 \citep{Cui2012LAmost}; all these have been cross-matched with DR2 sources and collected in a comprehensive catalog named ``Known Quasars Catalog for Gaia'' (KQCG) \citep{Liao2018KQCG}. The aim of this paper is to make an independent assessment of the astrometry of quasars in DR2. After describing the quasar selection process in section 2, we address the global parallax and proper motion bias in section 3.
In section 4, we discuss the analysis of the proper motion field; the scalar spherical harmonics analysis of parallaxes is presented in section 5, and section 6 is devoted to the comparison between ICRF2 sources and their counterparts in Gaia DR2. The last section reports our conclusions. \section{Data Used} Gaia DR2 includes 555934 quasars matched to the AllWISE AGN catalogue, plus 2820 sources matched to the ICRF3-prototype \citep{Lindegren2018Gaia,Mignard2018Gaia}. The union of these two sets makes a total of 556869 sources, denoted as GCRF2. Among these, 485985 sources matched to the AllWISE AGN catalog are used to {\it define} a kinematically non-rotating reference frame, and are identified in the Gaia Archive by the field $frame\_rotator\_object\_type=3$ (Type3); whereas the 2820 sources matched to the ICRF3-prototype and used to align the GCRF2 axes with the radio ICRF are indicated by $frame\_rotator\_object\_type=2$ (Type2). To maximize the size of our quasar sample, we cross-matched Gaia DR2 with the compilation of SDSS-DR14Q, LQAC3 and LAMOST DR5, which are known to contain a large number of reliable QSOs/AGNs. For the final selection, we adopt the joint conditions in Equation (14) of the Gaia DR2 astrometry paper in order to reduce the risk of stellar contamination, as reported below: \begin{itemize} \item[(i)] astrometric$\_$matched$\_$observations $\ge$ 8, \item[(ii)] astrometric$\_$params$\_$solved=31, \item[(iii)] $\left|(\omega+0.029\,\mathrm{mas})/\sigma_{\omega}\right|<5$, \item[(iv)] $(\mu_{\alpha^{\ast}}/\sigma_{\mu\alpha^{\ast}})^2+(\mu_{\delta}/\sigma_{\mu\delta})^2<25$, \item[(v)] $\left|\sin b\right|>0.1$, \item[(vi)] $\rho<(2\,\mathrm{arcsec})\times\left|\sin b\right|$, \end{itemize} where $\rho$ is the radius used for the positional matching and $b$ is the Galactic latitude. With these criteria, we found 208743 new quasars in Gaia DR2, hereafter referred to as the KQCG sample. About 87$\%$ of these quasars are located in the northern hemisphere.
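The joint conditions (i)--(vi) translate directly into a vectorized filter. The sketch below assumes the relevant quantities have already been extracted into NumPy arrays; the array names are ours rather than Gaia archive column names, and the cross-match radius $\rho$ and Galactic latitude $b$ are assumed precomputed:

```python
import numpy as np

def select_quasars(matched_obs, params_solved, plx, plx_err,
                   pmra, pmra_err, pmdec, pmdec_err, b_gal, rho):
    """Apply conditions (i)-(vi); b_gal in radians, rho in arcsec,
    parallaxes in mas, proper motions in mas/yr."""
    keep = (
        (matched_obs >= 8)                                          # (i)
        & (params_solved == 31)                                     # (ii)  full 5-parameter solution
        & (np.abs((plx + 0.029) / plx_err) < 5)                     # (iii) parallax consistent with zero
        & ((pmra / pmra_err) ** 2 + (pmdec / pmdec_err) ** 2 < 25)  # (iv)  proper motion consistent with zero
        & (np.abs(np.sin(b_gal)) > 0.1)                             # (v)   avoid the Galactic plane
        & (rho < 2.0 * np.abs(np.sin(b_gal)))                       # (vi)  tighter match near the plane
    )
    return keep

# tiny synthetic example: the first source passes all cuts, the second fails (i) and (iii)
keep = select_quasars(
    matched_obs=np.array([10, 5]),
    params_solved=np.array([31, 31]),
    plx=np.array([-0.03, 1.0]), plx_err=np.array([0.3, 0.1]),
    pmra=np.array([0.1, 5.0]), pmra_err=np.array([0.5, 0.5]),
    pmdec=np.array([0.1, 5.0]), pmdec_err=np.array([0.5, 0.5]),
    b_gal=np.array([1.0, 0.05]), rho=np.array([0.5, 1.9]),
)
print(keep)  # first source kept, second rejected
```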
The sky density distribution of the KQCG catalog is depicted in Figure \ref{Figkqcg-non-define}; Figure \ref{FigGmag_kqcg} shows the histograms of the G-magnitude distribution for the GCRF2 and KQCG samples, indicating that our additional quasars populate the faint end; most of them are fainter than $G=19$. \begin{figure} \centering \includegraphics[width=6cm]{Fig1.jpg} \caption{The sky distribution of KQCG. The map shows the sky density with cells of approximately $0.84~\mathrm{deg}^{2}$, using the Hammer-Aitoff projection in Galactic coordinates with zero longitude at the centre and increasing longitude from right to left.} \label{Figkqcg-non-define} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig2.jpg} \caption{G magnitude distribution of the Gaia-CRF2 sources and the KQCG sources.} \label{FigGmag_kqcg} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig3.jpg} \caption{Parallax distribution for the KQCG quasars. The outer (red) curve is the whole KQCG sample; the inner (blue) curve is the subsample of 143806 sources with $\sigma_{\omega}<1$ mas.} \label{FigParallax_kqcg} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig4.jpg} \caption{Proper motion distribution for the KQCG. The red curve shows the proper motions in right ascension $\mu_{\alpha\ast}$ and the blue curve those in declination $\mu_{\delta}$.} \label{FigPM_kqcg} \end{figure} \begin{figure} \centering \includegraphics[width=.30\textwidth]{Fig5-1.jpg} \includegraphics[width=.30\textwidth]{Fig5-2.jpg} \caption{Parallaxes of the KQCG quasars plotted against Gaia G magnitude (top) and colour (bottom).
The yellow dots are the parallax data points, the blue lines are the parallax medians $\omega_{med}$ of each running bin.} \label{FigParallax_bisa_kqcg} \end{figure} \section{Global bias} \subsection{Parallax zero point} Figure \ref{FigParallax_kqcg} shows the distribution of parallaxes for the complete KQCG (red curve) and the high-precision subset (blue curve). The mean and median parallax of the whole sample are $-0.0330$ mas and $-0.0278$ mas, respectively; the corresponding values for the high-precision subset are $-0.0270$ mas and $-0.0264$ mas. Table \ref{parallax_bias} gives the different averages calculated for each data sample. The weighted mean parallax is consistent among the different subsets, at about $-0.029$ mas. However, the mean parallax for Type2 is noticeably smaller, offset by about $0.02$ mas from the other samples. Plots of parallax versus magnitude and effective wavenumber, the latter being closely related to the source colour, are shown in Figure \ref{FigParallax_bisa_kqcg}, which reveals the presence of trends in the systematic parallax error, with an excursion of $\sim$0.020 mas over the range covered by the data. \begin{table} \centering \caption{The mean and median parallax (in mas) of different quasar subsets. The formal parallax error is used as weight to calculate the weighted average.} \label{parallax_bias} \begin{tabular}{ccccc} \hline \hline &&&&\\ Subset&N&Mean&Weighted Mean &Median\\ \hline &&&&\\ KQCG & 208743 & -0.0330 & -0.0291 & -0.0278 \\ GCRF2 & 556869 & -0.0308 & -0.0292 & -0.0287 \\ Type2 & 2843 & -0.0511 & -0.0382 & -0.0351 \\ Type3 & 485985 & -0.0284 & -0.0283 & -0.0281 \\ \hline \end{tabular} \end{table} \subsection{Proper motion bias} \label{pmbias} Besides parallaxes, the proper motions of quasars are also nominally zero (the Galactic acceleration effect is neglected here).
Figure \ref{FigPM_kqcg} shows the distribution of the proper motion for the KQCG sample; Table \ref{men_med_pm} gives the mean and median proper motion of the different subsets. For the GCRF2 sample we obtain a mean of $+1.8$ $\mu as/yr$ and a median of $-1.5$ $\mu as/yr$ in $\mu_{\alpha\ast}$, both near zero; however, the mean and median in $\mu_{\delta}$ rise to $+12.3$ $\mu as/yr$ and $+11.7$ $\mu as/yr$. For the KQCG sample, the corresponding values are $-8.7$ $\mu as/yr$ and $-7.5$ $\mu as/yr$ in $\mu_{\alpha\ast}$, and $+8.3$ $\mu as/yr$ and $+11.4$ $\mu as/yr$ in $\mu_{\delta}$, respectively. Looking at the Type2 sample, we get roughly $+10$ $\mu as/yr$ in both components. If we take weighted averages based on the formal errors, only the KQCG sample has a significant bias of about $-9.1$ $\mu as/yr$ in $\mu_{\alpha\ast}$, while there is a common bias of about $+10$ $\mu as/yr$ in declination for all subsets. The distributions of proper motion versus magnitude and effective wavenumber for KQCG and GCRF2 are plotted in Figures \ref{kqcgpm} and \ref{gcrfpm}. In the second panel of Figure \ref{gcrfpm}, the median proper motion $\mu_{\alpha\ast}$ changes sign from positive to negative at the effective wavenumber $\nu_{eff}\sim$ 1.58 $\mu m^{-1}$; this result seems in agreement with the findings of the Gaia DR2 astrometry paper, see their Figure 3. Interestingly, the KQCG sample does not clearly follow the same trend as a function of the effective wavenumber, suggesting either a different correlation between magnitude and colour for these quasars, or a more complex colour dependence of the astrometric calibration for fainter objects. \begin{table*} \centering \caption{The mean and median proper motion of different quasar subsets.
The proper motion error is used as weight.} \label{men_med_pm} \begin{tabular}{cccccccc} \hline \hline \multirow{2}{*}{Subset} & \multirow{2}{*}{N} & \multicolumn{3}{c}{$\mu_{\alpha\ast}$$(\mu as/yr)$} & \multicolumn{3}{c}{$\mu_{\delta}$$(\mu as/yr)$} \\ & & Mean &Weighted Average & Median & Mean &Weighted Average & Median \\ \hline &&&&&\\ KQCG & 208743 & -8.7 &-9.1 & -7.5 & +8.3 &+11.1 & +11.4 \\ GCRF2 & 556869 & +1.8 &-0.7 & -1.5 &+12.2 & +12.3 & +11.7 \\ Type2 & 2843 & +16.1 &+2.9 & +10.5 & +19.3 &+14.7 & +8.1 \\ Type3 & 485985 & +0.3 &-1.3 & -1.4 & +11.9 &+11.8 & +11.7 \\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=4.15cm]{Fig6-1.jpg} \includegraphics[width=4.10cm]{Fig6-2.jpg} \includegraphics[width=4.15cm]{Fig6-3.jpg} \includegraphics[width=4.10cm]{Fig6-4.jpg} \caption{Proper motions of the KQCG plotted against the Gaia G magnitude and colour (the first and second panel from the left are for $\mu_{\alpha\ast}$, and the third and fourth are for $\mu_{\delta}$). The yellow dots are the proper motion data. The green line is the mean proper motion, while the red lines are the proper motion medians of each running-bin.} \label{kqcgpm} \end{figure*} \begin{figure*} \centering \includegraphics[width=4.15cm]{Fig7-1.jpg} \includegraphics[width=4.2cm]{Fig7-2.jpg} \includegraphics[width=4.2cm]{Fig7-3.jpg} \includegraphics[width=4.3cm]{Fig7-4.jpg} \caption{Proper motions of the GCRF2 plotted against the Gaia G magnitude and colour (the first and second panel from the left are for $\mu_{\alpha\ast}$, and the third and fourth are for $\mu_{\delta}$). The yellow dots are the proper motion data. The green line is the mean proper motion, while the red lines are the proper motion medians of each running-bin.} \label{gcrfpm} \end{figure*} \section{Analysis of the proper motion field}\label{pmvsh} In this section, we perform the vector spherical harmonics (VSH) analysis of different quasar samples. 
The results of the VSH analysis are listed in Table \ref{vshresults}. After adding the KQCG sample to GCRF2 (KQCG plus GCRF2, denoted as KG), the rotation and glide do not change very much between the harmonic degrees $l=1$ and $l=10$, and agree well with the results for the GCRF2 sample. Since the quasars in KQCG are mostly fainter than $G=19$ and are not uniformly distributed, we also compare two subsamples ($19\leq G<20$ and $G\geq 20$) of KG and GCRF2. The results agree with each other, which indicates consistency between the astrometric solutions. As pointed out in Section \ref{pmbias}, the median proper motion $\mu_{\alpha\ast}$ changes sign from positive to negative at the effective wavenumber $\nu_{eff}\sim$ 1.58 $\mu m^{-1}$ for the GCRF2 sample. The VSH analysis shows that the two quasar subsets ($\nu_{eff}\geq$ 1.58 and $\nu_{eff}<$ 1.58) have a similar glide but a very different rotation (mainly in the $x$ and $y$ components). The glide estimates agree across the different subsets, with a typical value of $(-9, +5 ,+12)\pm1$ $\mu as/yr$. If we subtract the global proper motion bias in both components before performing the VSH analysis, the typical glide is $(-9, +5 ,-2)\pm1$ $\mu as/yr$, see the rows marked with $\ast$ in Table \ref{vshresults}. \begin{table*} \centering \caption{VSH analysis of the proper motion field of different quasar subsets in Gaia DR2. In the rows marked with $\ast$ the mean proper motion is subtracted before the VSH analysis is performed. All solutions are weighted.
"-" means no estimation.} \label{vshresults} \begin{tabular}{ccccccccc} \hline \hline & & & & && & & \\ \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Source\\ Subset\end{tabular}} & \multirow{2}{*}{$l_{max}$} & \multirow{2}{*}{N} & \multicolumn{3}{c}{Rotation[$\mu$as/yr]} & \multicolumn{3}{c}{Glide[$\mu$as/yr]} \\ & && x & y & z & x & y & z \\ \hline & & & & & & & & \\ GCRF2& 5 &556869& -5.5$\pm$1.1 & -7.4$\pm$0.9 & 5.6$\pm$1.2 & -9.2$\pm$1.2 & 4.7$\pm$1.0 & 11.6$\pm$1.0 \\ $\ast$&5 &556869& -5.5$\pm$1.1 & -7.4$\pm$0.9 & 3.5$\pm$1.2 & -9.1$\pm$1.2 & 4.8$\pm$1.0 & -2.9$\pm$1.0 \\ 19$\leq$G<20&5&257446&10.9$\pm$2.3 & 3.5$\pm$1.8 & 6.7$\pm$2.5 &-7.8$\pm$2.5 &1.9$\pm$2.0 &16.8$\pm$2.0 \\ G$\geq$20&5&148910&3.3$\pm$6.0 & 27.5$\pm$5.4 & 5.4$\pm$7.2 &-15.3$\pm$6.7 &12.6$\pm$5.7 &7.6$\pm$5.7 \\ & & & & & & & & \\ $\nu_{eff}$ $\geq$1.58&5&416380&-7.3$\pm$1.3 & -10.0$\pm$1.0 & 5.9$\pm$1.4 &-8.6$\pm$1.4 &4.0$\pm$1.1 &12.0$\pm$1.1 \\ $\nu_{eff}$<1.58&5&140489&6.9$\pm$2.5 & 10.6$\pm$2.5 & 5.5$\pm$3.0 &-15.1$\pm$2.8 &7.2$\pm$2.8 &13.4$\pm$2.5 \\ \hline & & & & && & & \\ KQCG & 1 &208743&9.6$\pm$2.6 &7.6$\pm$1.9 &-16.3$\pm$2.6 &- &- &- \\ $\nu_{eff}$ $\geq$1.58& 1 & 185360&7.8$\pm$2.7 &6.7$\pm$2.0 &-16.7$\pm$2.7 &- &- &- \\ $\nu_{eff}$<1.58&1&22526 &25.5$\pm$8.1 &18.3$\pm$6.4&-10.7$\pm$8.3 &- & - &-\\ \hline & & & & && & & \\ \multirow{3}{*}{KQCG+GCRF2} & 1 &765612 &-2.2$\pm$0.8 & -1.2$\pm$0.7 & -2.0$\pm$0.8 & -6.3$\pm$0.8 & 4.7$\pm$0.7 & 11.8$\pm$0.7 \\ & 5 &765612& -4.5$\pm$1.1 &-6.8$\pm$0.9 & 5.2$\pm$1.2 & -9.1$\pm$1.2 & 4.5$\pm$1.0 & 11.7$\pm$1.0 \\ & 10 &765612& -4.6$\pm$1.5 &-7.6$\pm$1.2 & 6.7$\pm$1.5 & -11.7$\pm$1.6 & 5.0$\pm$1.2 & 13.2$\pm$1.3 \\ $\ast$&5&765612&-4.5$\pm$1.1&-6.8$\pm$0.9&6.5$\pm$1.2&-9.0$\pm$1.2&4.6$\pm$1.0&-1.5$\pm$0.9\\ & & & && & & & \\ \multirow{1}{*}{19$\leq$G<20} & 5 & 329900 &11.2$\pm$2.2 &1.4$\pm$1.7 &6.6$\pm$2.4 &-8.4$\pm$2.4 &1.8$\pm$1.9& 15.7$\pm$1.9\\ \multirow{1}{*}{G$\geq$20} & 5 &273836 &5.1$\pm$5.6 &23.3$\pm$4.6 &9.0$\pm$6.1 &-15.6$\pm$6.2 
&5.4$\pm$4.8 &6.7$\pm$4.9 \\ \hline & & & & && & & \\ \multirow{2}{*}{Type3} & 1 &485985&-3.8$\pm$0.8 &-3.2$\pm$0.7 &-0.9$\pm$0.9 &-6.9$\pm$0.8 &4.6$\pm$0.8 &11.5$\pm$0.8 \\ & 5 & 485985&-5.0$\pm$1.1 &-8.4$\pm$0.9 &5.6$\pm$1.2 &-10.0$\pm$1.2 &4.9$\pm$1.0 &10.8$\pm$1.0 \\ $\ast$&5&485985 &-5.0$\pm$1.1 &-8.4$\pm$0.9&5.2$\pm$1.2 &-9.9$\pm$1.2 & 5.0$\pm$1.0 & -3.3$\pm$1.0\\ && & && & & & \\ \multirow{2}{*}{Type2} & 1 &2843& -25.0$\pm$6.2 &-1.5$\pm$5.8 & 2.0$\pm$6.6 & -8.8$\pm$6.3 & -1.1$\pm$6.2 & 24.7$\pm$5.5 \\ & 5 &2843& -28.1$\pm$7.8 & -2.8$\pm$7.1 & 5.0$\pm$8.6 & -9.1$\pm$8.7 & 8.0$\pm$7.6 & 20.0$\pm$7.0 \\ $\ast$&5&2843 &-28.0$\pm$7.8 &-2.9$\pm$7.1&-14.1$\pm$8.6 &-9.0$\pm$8.7 & 7.8$\pm$7.6 & -2.8$\pm$7.0\\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Global rotation of different quasar subsets in proper motions. All solutions are weighted.} \label{rotation_result} \begin{tabular}{ccccc} \hline \hline Subset & N& \begin{tabular}[c]{@{}c@{}}$w_{X}$ \\ ($\mu$as/yr)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$w_{Y}$\\ ($\mu$as/yr)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$w_{Z}$\\ ($\mu$as/yr)\end{tabular} \\ \hline &&&\\ KQCG+GCRF2 &765612& -2.1$\pm$0.8&-0.8$\pm$0.7& -2.4$\pm$0.8 \\ North & 465093 &-3.4$\pm$1.0 & -2.2$\pm$0.9 & -7.3$\pm$1.2 \\ South & 300519 & 0.0$\pm$1.1 & 0.9$\pm$1.0 & 3.0$ \pm$1.2 \\ \hline &&&&\\ GCRF2 &556869& -3.1$\pm$0.8&-1.9$\pm$0.7& -1.0$\pm$0.9 \\ North & 285806 &-5.6$\pm$1.1 & -4.5$\pm$1.0 & -5.2$\pm$1.3 \\ South & 271063 & -0.3$\pm$1.2 & 0.8$\pm$1.0 & 3.0$ \pm$1.2 \\ \hline &&&&\\ Type3 &485985 &-3.3$\pm$0.8& -2.8$\pm$0.7 & -0.9$\pm$0.9 \\ North &247999& -5.7$\pm$1.1& -5.5$\pm$1.0 & -5.1$\pm$1.3 \\ South &237986& -0.6$\pm$1.2 & -0.1$\pm$1.0 & 2.9$ \pm$1.3 \\ \hline &&&&\\ Type2 &2843& -23.1$\pm$5.8 & 2.3$\pm$5.4 & 2.7$\pm$5.6 \\ North & 1635&-25.1$\pm$7.1 & -6.9$\pm$6.5 & 2.6$\pm$8.5 \\ South & 1208& -17.8$\pm$10.3& 9.0$\pm$9.7 & 1.5$ \pm$10.7 \\ \hline \end{tabular} \end{table*} We also tried to fit a pure 
rotation to the proper motions, using the following equations \citep{Mignard2012}: \begin{equation} \begin{array}{l} \mu_{\alpha\ast} = -w_{X}\cos\alpha\sin\delta-w_{Y}\sin\alpha\sin\delta+w_{Z}\cos\delta\\ \mu_{\delta} = +w_{X}\sin\alpha-w_{Y}\cos\alpha \end{array} \label{rotation} \end{equation} where $w_{X}$, $w_{Y}$, and $w_{Z}$ are the three spin rates of the proper motion field. We apply this fit to further investigate the spin rates of the different quasar subsets in the northern and southern hemispheres. The results are shown in Table \ref{rotation_result}. For the Type2 quasars, no significant spin difference between the two hemispheres is found. However, for the other quasar subsets, the spin rate is clearly above the statistical noise in the northern hemisphere but negligible in the southern one; this feature could be explained by a north/south dichotomy in the magnitude and color distributions of the fitted quasars, or by a global positional rotation between the northern and southern subsets inducing a rotation in the proper motion field. \section{The scalar spherical harmonics expansion of parallaxes} The parallaxes of quasars can be treated as parallax residuals, and can be seen as the radial part of spatial position differences on the celestial sphere. Therefore, they represent a scalar field on the sphere that can be expanded in terms of spherical harmonics (SSH) as follows \citep{bucciarelli2011}: \begin{equation} \Delta\pi=V_{\pi}(\alpha,\delta)=\sum_{l}\sum_{m=-l}^{l}c_{lm}Y_{lm}(\alpha,\delta) \label{ssh_1} \end{equation} where $Y_{lm}$ are the standard spherical harmonic functions defined here by the following sign convention: \begin{equation} Y_{lm}=(-1)^m\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}P_{lm}(\sin\delta)e^{im\alpha} \label{ylm} \end{equation} for $m\geq0$, with $Y_{l,-m}(\alpha,\delta)=(-1)^mY_{lm}^{\ast}(\alpha,\delta)$ for $m<0$. The ${\ast}$ denotes complex conjugation, and $P_{lm}(x)$ are the associated Legendre functions. 
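As an illustration of the sign convention in Eq. (\ref{ylm}), the following minimal Python sketch evaluates $Y_{lm}$ for $m\geq 0$. It is an illustrative helper, not part of the original analysis pipeline, and assumes the associated Legendre functions are taken without the Condon-Shortley phase (the explicit $(-1)^m$ factor then supplies it):

```python
import math

def assoc_legendre(l, m, x):
    """Associated Legendre P_lm(x) for m >= 0 via standard recurrences,
    WITHOUT the Condon-Shortley phase (assumption: the (-1)^m factor is
    written explicitly in Y_lm, as in the text)."""
    pmm = 1.0
    s = math.sqrt(max(0.0, 1.0 - x * x))
    for i in range(1, m + 1):          # P_mm = (2m-1)!! (1-x^2)^{m/2}
        pmm *= (2 * i - 1) * s
    if l == m:
        return pmm
    pmmp1 = x * (2 * m + 1) * pmm      # P_{m+1,m}
    if l == m + 1:
        return pmmp1
    for ll in range(m + 2, l + 1):     # upward recurrence in degree l
        pll = (x * (2 * ll - 1) * pmmp1 - (ll + m - 1) * pmm) / (ll - m)
        pmm, pmmp1 = pmmp1, pll
    return pmmp1

def Y_lm(l, m, alpha, delta):
    """Spherical harmonic with the text's sign convention, evaluated at
    right ascension alpha and declination delta (both in radians)."""
    norm = math.sqrt((2 * l + 1) / (4 * math.pi)
                     * math.factorial(l - m) / math.factorial(l + m))
    return ((-1) ** m * norm * assoc_legendre(l, m, math.sin(delta))
            * complex(math.cos(m * alpha), math.sin(m * alpha)))
```

For example, $Y_{00}=\sqrt{1/4\pi}$ everywhere and $Y_{10}=\sqrt{3/4\pi}\,\sin\delta$, which the sketch reproduces.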
Equation (\ref{ssh_1}) can be rewritten in real form as: \begin{equation} \Delta \pi(\alpha,\delta)=\sum^{l_{max}}_{l=1}\left[c^R_{l0}Y^R_{l0}+2\sum^{l}_{m=1}\left(c^R_{lm}Y^R_{lm}-c^I_{lm}Y^I_{lm}\right)\right] \label{ssh_real} \end{equation} where $R$ and $I$ denote the real and imaginary parts of the function. Starting from the definition of \textit{power} as the integral of the squared function divided by the domain area, by virtue of Parseval's theorem we can express the total power per degree $l$ of $\Delta\pi(\alpha,\delta)$ in terms of the expansion coefficients as \begin{equation} P_l=(c^R_{l0})^2+2\sum^l_{m=1}\left[(c^R_{lm})^2+(c^I_{lm})^2\right] \label{pow} \end{equation} Normalizing each coefficient of the above sum by its formal error, and assuming white Gaussian noise, we obtain a $\chi^2$-distributed variable with $2l+1$ degrees of freedom, which can be used to test the statistical significance of the corresponding degree. A more robust test variable, still $\chi^2$-distributed, is given by equation (87) of \citet{Mignard2012}; the derived quantity $Z_{\chi^2}$, which follows a standard normal distribution (see Eq. (85) of \citealt{Mignard2012}), is the one we use in the present analysis. The results of the SSH analysis, having subtracted beforehand the bias from each parallax, are summarized in Table \ref{ssh}. Note that a value of $Z_{\chi^2}>2.33$ corresponds to a confidence level of $99\%$, or $2.33\sigma$ of a normal distribution. The parameter $(P_l/4\pi)^{1/2}$ represents the RMS value of the scalar field for the corresponding degree $l$. The expansion of the Type2 subset does not present particular signatures, while the other subsets show significant power at degrees $l=1$ and $l=4$. The total RMS value for $l\leq 10$ (angular scales $\geq 180/l=18$ degrees) of each subset is about 13 $\mu$as (apart from Type2). 
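The power of Eq. (\ref{pow}) is straightforward to accumulate from the expansion coefficients. A minimal sketch, assuming (as a hypothetical data layout) that the complex coefficients $c_{lm}$ for $m\geq 0$ are stored in a dictionary keyed by $(l,m)$:

```python
import numpy as np

def power_per_degree(coeffs):
    """Total power per degree l of a real scalar field expanded in
    spherical harmonics, following the Parseval relation in the text.
    `coeffs` maps (l, m) with m >= 0 to the complex coefficient c_lm;
    returns an array P with P[l] the power of degree l."""
    lmax = max(l for l, _ in coeffs)
    P = np.zeros(lmax + 1)
    for (l, m), c in coeffs.items():
        if m == 0:
            P[l] += c.real ** 2            # (c^R_l0)^2
        else:
            # 2 [ (c^R_lm)^2 + (c^I_lm)^2 ] accounts for m and -m
            P[l] += 2.0 * (c.real ** 2 + c.imag ** 2)
    return P
```

The quantity tabulated in the text is then $(P_l/4\pi)^{1/2}$, the RMS contribution of degree $l$.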
Using a different spatial correlation technique, the Gaia-DR2 Astrometry paper \citep{Lindegren2018Gaia} reports an angular scale of 14 degrees with an RMS amplitude of 17 $\mu$as, which is in good agreement with our results. \begin{table*} \centering \caption{The spherical harmonics expansion of the parallaxes of different quasar subsets. The parallax bias is subtracted before the expansion. All solutions are weighted.} \label{ssh} \begin{tabular}{ccccccccccc} \hline \hline &&&&\\ & \multicolumn{2}{c}{KQCG+GCRF2} & \multicolumn{2}{c}{Type2+Type3} & \multicolumn{2}{c}{Type3} & \multicolumn{2}{c}{Type2} & \multicolumn{2}{c}{GCRF2} \\ \hline &&&&&&&&\\ $l$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ \\ \hline &&&&&&&&\\ 1 & 5.1 & 5.9 & 5.9 & 7.2 &5.8&7.0&12.3&1.2&5.2&5.9 \\ 2 & 3.1 & 2.6 & 2.8 & 2.4 &3.1&2.7&18.7&2.1&2.8&2.0\\ 3 & 4.2 & 4.2 & 4.9 & 5.5 &4.8&5.3&16.7&1.1&4.3&4.2\\ 4 & 5.5 & 5.6 & 6.8 & 7.8 &7.0&7.9&16.8&0.6 &5.9&6.0 \\ 5 & 4.9 & 5.0 & 4.6 & 4.7 &4.6&4.6&21.2&1.5&4.8&4.7\\ 6 & 3.3 & 1.8 & 4.1 & 3.5 &4.4&3.9&24.7&2.1&3.3&1.7\\ 7 & 4.2 & 3.9 & 4.2 & 4.0 &4.3&4.0&21.7&1.1&4.0&3.4\\ 8 & 3.3 & 1.8 & 3.2 & 1.6 &3.3&1.8&18.5&-0.2&3.2&1.5\\ 9 & 3.3 & 2.7 & 3.5 & 3.1 &3.5&2.9&24.5&1.8 &3.5&2.9\\ 10 & 3.9 & 3.8 & 3.4 & 2.6 &3.4&2.5&27.0&2.6&3.8&3.3 \\ \hline \end{tabular} \end{table*} \section{ICRF2 sources in Gaia DR2} In this section, we compare the VLBI positions of ICRF2 sources \citep{Fey2015The} with their optical counterparts in Gaia DR2. After cross-matching, 2146 ICRF2 sources are found in Gaia DR2, with the sky distribution shown in Figure \ref{FigICRFDR2skydensity}. 
Most angular differences $\rho$ between matched sources are smaller than 1 mas, and just a few sources have $\rho>10$ mas; see Figure \ref{PD_H} for a scatter plot of the position differences in right ascension and declination. \begin{figure} \centering \includegraphics[width=6cm]{Fig8.jpg} \caption{Sky distribution of ICRF2 sources found in Gaia DR2, Hammer-Aitoff projection in equatorial coordinates. Blue dots are defining sources (D), green dots are VLBA Calibrator Survey sources (VCS V), and black dots are non-VCS sources (N).} \label{FigICRFDR2skydensity} \end{figure} \begin{figure} \centering \includegraphics[width=5.5cm]{Fig9.jpg} \caption{Scatter plot of position differences in right ascension and declination (Gaia DR2 minus ICRF2).} \label{PD_H} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig10.jpg} \caption{The formal position uncertainties $\sigma_{pos,max}$ of the Gaia DR2 sources (abscissa) with respect to the ICRF2 sources (ordinate). The color bar on the right indicates the position difference $\rho$ (in mas) between Gaia DR2 and the ICRF2 sources. Both axes are on a log scale.} \label{rhoVsig} \end{figure} Figure \ref{rhoVsig} shows a plot of color-coded angular separations between matched sources in the plane of the formal positional uncertainties $\sigma_{DR2}$, $\sigma_{ICRF2}$. Most of the sources in Gaia DR2 have position uncertainties under 1 mas, while the uncertainties of the sources in ICRF2 range from 0.04 mas to 10 mas, with a few even up to tens of mas. Some sources with small position uncertainties show large angular differences, which may be caused by an offset between the centers of emission at optical and radio wavelengths. 
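For reference, the angular separation $\rho$ between a matched pair of positions can be computed in a numerically safe way from the unit position vectors. A small illustrative sketch (our assumptions: input coordinates in radians, output in mas; not the code used for the paper):

```python
import numpy as np

def angular_separation_mas(ra1, dec1, ra2, dec2):
    """Angular separation rho between two sky positions, computed via
    the vector formulation; atan2(|u1 x u2|, u1.u2) is stable for both
    the sub-mas separations discussed here and for large angles."""
    def unit(ra, dec):
        return np.array([np.cos(dec) * np.cos(ra),
                         np.cos(dec) * np.sin(ra),
                         np.sin(dec)])
    u1, u2 = unit(ra1, dec1), unit(ra2, dec2)
    rho_rad = np.arctan2(np.linalg.norm(np.cross(u1, u2)), np.dot(u1, u2))
    return np.degrees(rho_rad) * 3.6e6   # deg -> mas
```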
The alignment of the optical positions in Gaia DR2 with respect to the ICRF2 can be modelled by an infinitesimal solid rotation with the following equations \citep{Mignard2012}: \begin{equation} \begin{array}{l} \Delta\alpha_{\ast}=-\epsilon_{X}\cos\alpha\sin\delta-\epsilon_{Y}\sin\alpha\sin\delta+\epsilon_{Z}\cos\delta\\ \Delta\delta=+\epsilon_{X}\sin\alpha-\epsilon_{Y}\cos\alpha \end{array} \label{glodiff} \end{equation} where $\Delta\alpha_{\ast}=\Delta\alpha\cos\delta$, and $\epsilon_{X}$, $\epsilon_{Y}$ and $\epsilon_{Z}$ are the three rotation angles between the two reference frames. \begin{table*} \centering \caption{Global difference between the Gaia-CRF2 positions of ICRF sources and their positions in ICRF2.} \label{globaldiff} \begin{tabular}{ccccc} \hline \hline & & & & \\ Subset & N & $\epsilon_{X}$ ($\mu$as) & $\epsilon_{Y}$ ($\mu$as) & $\epsilon_{Z}$ ($\mu$as)\\ \hline & & & & \\ All&2146&-3.6$\pm$ 27.5&27.2$\pm$26.9 &3.8$\pm$25.7\\ Defining&257&-19.1 $\pm$ 36.2&30.4$\pm$ 35.0&-32.9$\pm$ 37.0\\ Non-defining&1889&12.1 $\pm$ 37.7&25.2 $\pm$ 37.1&26.6$ \pm$ 32.9\\ \hline \end{tabular} \end{table*} The weighted least-squares estimates of the orientation parameters between Gaia-CRF2 and ICRF2 are listed in Table \ref{globaldiff}. No significant rotation is found at the level of 0.03 mas in position. This indicates that the axes of Gaia-CRF2 and the ICRF2 are aligned with each other to within 30 $\mu$as. \section{Conclusions} We cross-matched the quasars from the compilation of SDSS-DR14, LQAC3 and LAMOST DR5 with Gaia DR2, and found 208743 additional quasars in Gaia DR2, about $37\%$ of the size of the Gaia-CRF2 sample. We used this independent sample and the already known quasars in DR2 to investigate the properties of the QSO solution, also by comparing the astrometric residuals of various quasar subsets in DR2. 
In general, we obtained consistent results between the samples; some signatures varying with different subsets, and clearly above the statistical noise, are still compatible with systematic errors depending on source position, magnitude and color not completely cured in the second release of the Gaia data, as discussed in the Gaia-DR2 astrometry paper. The results of our analysis are summarized below: \begin{enumerate} \item The parallaxes of our KQCG sample have a mean bias of $-0.0330$ mas and a median of $-0.0278$ mas, which agree well with the results of the GCRF2 sample; we note, however, that the mean parallax of the Type2 subset in GCRF2 is $0.02$ mas smaller. \item There is a $-9.1$ $\mu$as/yr bias in $\mu_{\alpha\ast}$ of the KQCG sample, and a bias of about $+10$ $\mu$as/yr in $\mu_{\delta}$ for all quasar subsets. The mean systematic error in $\mu_{\alpha\ast}$ trends from positive to negative at the effective wavenumber $\nu_{eff}\sim 1.58~\mu m^{-1}$ for the GCRF2 sample. \item The VSH method is applied to the proper motion vector field of different quasar subsets. The results for the different subsets agree with each other. For Type2, no significant rotation difference between the northern and southern hemispheres is found. However, GCRF2 and the other subsets show a different rotation between the two hemispheres. \item The spherical harmonics expansion of the parallaxes shows an angular scale of 18 deg with an RMS amplitude of 13 $\mu$as. \item The comparison of the VLBI-based positions of ICRF2 sources and their Gaia DR2 counterparts shows that the axes of Gaia-CRF2 and the ICRF2 are aligned with each other to within 30 $\mu$as. \end{enumerate} \section*{Acknowledgements} This work has made use of data from the ESA space mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). We are grateful to the developers of the TOPCAT software \citep{Taylor2005TOPCAT}. 
This work has been supported by the National Natural Science Foundation of China (NSFC) through grants 11703065, 11573054 and 11503042. \bibliographystyle{mnras}
\section{Introduction} Recommendation is ubiquitous in today's cyber-world --- almost every one of your Web activities can be viewed as a recommendation, such as news or music feeds, car or restaurant booking, and online shopping. Therefore, an accurate recommender system is essential not only for the quality of service, but also for the profit of the service provider. Such a system should exploit the rich side information beyond user-item interactions, such as content-based (\textit{e.g.}, user attributes~\cite{Silkroad} and product image features~\cite{Yu:2018:ACR}), context-based (\textit{e.g.}, where and when a purchase is made~\cite{rendle2011fast,NFM}), and session-based (\textit{e.g.}, the recent browsing history of users~\cite{Li:2017:NAS:3132847.3132926,iCD}) information. However, existing collaborative filtering (CF) based systems rely merely on user and item features (\textit{e.g.}, matrix factorization based~\cite{fastMF} and the recently proposed neural collaborative filtering methods~\cite{NCF,bai2017neural}), which are far from sufficient to capture the complex decision psychology of users, such as the setting and mood of a behavior~\cite{ACF}. Factorization Machine (FM)~\cite{Rendle2011Factorization} is one of the prevalent feature-based recommendation models that leverage rich features of users and items for accurate recommendation. FM can incorporate any side features by concatenating them into a high-dimensional and sparse feature vector. Its key advantage is learning $k$-dimensional latent vectors, \textit{i.e.}, the embedding parameters $\mathbf{V}\in\mathbb{R}^{k\times n}$, for all the $n$ feature dimensions. These vectors are then used to model pairwise interactions between features in the embedding space. However, since $n$ is large (\textit{e.g.}, practical recommender systems typically need to deal with millions of items and other features, where $n$ is at least $10^7$~\cite{Wang:2018:PFD}), on-device storage of $\mathbf{V}$ is impossible. 
Moreover, scoring a user-item pair requires a large number of feature-interaction products $\mathbf{v}^T_i\mathbf{v}_j$; even with linear time complexity, these floating-point operations are prohibitively slow. Therefore, the existing FM framework is not suitable for fast recommendation, especially for mobile users. In this paper, we propose a novel feature-based recommendation framework, named \textit{Discrete Factorization Machine} (DFM), for fast recommendation. In a nutshell, DFM replaces the real-valued FM parameters $\mathbf{V}$ by binary-valued $\mathbf{B}\in\{\pm 1\}^{k\times n}$. In this way, we can easily store a bit matrix and perform XOR bit operations instead of float multiplications, making fast recommendation on-the-fly possible. However, it is well known that the binarization of real-valued parameters leads to a significant performance drop due to the quantization loss~\cite{Zhang2016Discrete}. To this end, we propose to directly optimize the binary parameters in an end-to-end fashion, which is fundamentally different from the widely adopted two-stage approach that first learns real-valued parameters and then applies round-off binarization~\cite{Zhang2014Preference}. Our algorithm jointly optimizes two challenging objectives: 1) tailoring the binary codes $\mathbf{B}$ to fit the original loss function of FM, and 2) imposing balanced and de-correlated constraints on the binary codes so that they encode compact information. In particular, we develop an alternating optimization algorithm to iteratively solve the mixed-integer programming problem. We evaluate DFM on two real-world datasets, Yelp and Amazon; the results demonstrate that 1) DFM consistently outperforms state-of-the-art binarized recommendation models, and 2) DFM shows very competitive performance compared to its real-valued counterpart (FM), demonstrating minimal quantization loss. 
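To make the claimed speedup concrete: if each code $\mathbf{b}_i\in\{\pm 1\}^k$ is packed into a $k$-bit integer (bit set for $+1$), then $\langle\mathbf{b}_i,\mathbf{b}_j\rangle = k-2\,\mathrm{popcount}(x_i\oplus x_j)$, since each differing bit contributes $-1$ and each agreeing bit $+1$. A minimal illustrative sketch (not the paper's implementation):

```python
def binary_inner_product(bits_i, bits_j, k):
    """Inner product of two {+1, -1}^k codes stored as k-bit integers
    (bit set = +1): <b_i, b_j> = k - 2 * popcount(bits_i XOR bits_j).
    One XOR plus a popcount thus replaces k float multiplications."""
    return k - 2 * bin(bits_i ^ bits_j).count("1")
```

For example, with $k=4$, identical codes give $+4$ and fully opposite codes give $-4$.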
Our contributions are summarized as follows: \begin{itemize}[leftmargin=*] \item We propose to binarize FM, a dominant feature-based recommender model, to enable fast recommendation. To our knowledge, this is the first generic solution for fast recommendation that learns a binary embedding for each feature. \item We develop an efficient algorithm to address the challenging optimization problem of DFM that involves discrete, balanced, and de-correlated constraints. \item Through extensive experiments on two real-world datasets, we demonstrate that DFM outperforms state-of-the-art hash-based recommendation algorithms. \end{itemize} \section{Related Work} We first review efficient recommendation algorithms using latent factor models, and then discuss recent advances in discrete hashing techniques. \subsection{Efficient Recommendation} As pioneering work, \cite{Das2007Google} used Locality-Sensitive Hashing (LSH) \cite{Gionis1999Similarity} to generate hash codes for Google News users based on their item-sharing history similarity. Following this work, \cite{Karatzoglou2010Collaborative} applied random projection to map the user-item latent factors learned by traditional CF into the Hamming space, acquiring hash codes for users and items. Similar to the idea of projection, \cite{Zhou2012Learning} generated binary codes from rotated continuous user-item latent factors by running ITQ \cite{Gong2011Iterative}. In order to derive more compact binary codes, \cite{Liu2014Collaborative} imposed a de-correlation constraint on the continuous user-item latent factors and then rounded them to produce binary codes. 
However, \cite{Zhang2014Preference} argued that hashing only preserves the similarity between users and items rather than inner-product-based preference, so subsequent hashing may harm the accuracy of preference prediction; they therefore imposed a Constant Feature Norm (CFN) constraint on the continuous user-item latent factors, and then quantized the similarities by separately thresholding their magnitudes and phases. The aforementioned work can be summarized as two independent stages: relaxed learning of user-item latent factors under some specific constraints, followed by binary quantization. However, such a two-stage relaxation is well known to suffer from a large quantization loss~\cite{Zhang2016Discrete}. \subsection{Binary Codes Learning} Direct binary code learning by discrete optimization has recently become popular as a way to decrease the aforementioned quantization loss. Supervised hashing methods~\cite{Luo:2018} jointly optimize the quantization loss and the intrinsic objective function, achieving significant performance gains over the above two-stage approaches. In the recommendation area, \cite{Zhang2016Discrete} is the first work that proposes to learn binary codes for users and items by directly optimizing the recommendation task. The proposed method, \textit{Discrete Collaborative Filtering} (DCF), demonstrates superior performance over the aforementioned two-stage efficient recommendation methods. To learn informative and compact codes, DCF enforces balanced and de-correlated constraints in the discrete optimization. Despite its effectiveness, DCF models only user-item interactions and cannot be trivially extended to incorporate side features. As such, it suffers from the cold-start problem and cannot be used as a generic recommendation solution, \textit{e.g.}, for context-aware~\cite{Rendle2011Factorization} and session-based recommendation~\cite{iCD}. 
Just as FM generalizes MF, our DFM method can be seen as a generalization of DCF for generic feature-based recommendation. Specifically, feeding only the ID features of users and items to DFM recovers the DCF method. In addition, DFM learns binary codes for each feature, allowing it to be used in resource-limited recommendation scenarios, such as context-aware recommendation on mobile devices. This binary representation learning approach for feature-based recommendation has, to the best of our knowledge, never been developed before. The work most relevant to this paper is \cite{Lian2017Discrete}, which develops a discrete optimization algorithm named \textit{Discrete Content-aware Matrix Factorization} (DCMF) to learn binary codes for users and items in the presence of their respective content information. It is worth noting that DCMF can only learn binary codes for each user ID and item ID, rather than for their content features. Since its prediction model is still MF (\textit{i.e.}, the dot product of user codes and item codes only), it is rather limited in leveraging side features for accurate recommendation. As such, DCMF demonstrates only minor improvements over DCF for feature-based collaborative recommendation~(\textit{cf.} Figure 2(a) of their original paper). Going beyond learning user codes and item codes, our DFM can learn codes for any side feature and model the pairwise interactions between feature codes. As such, our method has much stronger representation ability than DCMF, demonstrating significant improvements over DCMF in feature-based collaborative recommendation. \section{Preliminaries} We use bold uppercase and lowercase letters to denote matrices and vectors, respectively. In particular, we use $\mathbf{a}_i$ to denote the $i$-th column vector of matrix $\mathbf{A}$. We denote by ${\|\cdot\|}_F$ the Frobenius norm of a matrix and by $\text{tr}(\cdot)$ the matrix trace. 
We denote by $\text{sgn}(\cdot):\mathbb{R}\rightarrow \{\pm 1\}$ the sign (round-off) function, \textit{i.e.}, $\text{sgn}(x) = +1$ if $x\geq 0$ and $\text{sgn}(x) = -1$ otherwise. Factorization Machine (FM) is essentially a score prediction function for the feature vector $\mathbf{x}$ of a (user, item) pair: \begin{equation} \label{eq:fm} \small \text{FM}(\mathbf{x}):= w_{0}+\sum\limits_{i=1}^{n} w_i x_i+ \sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{v}_i,\mathbf{v}_j\rangle x_i x_j, \end{equation} where $\mathbf{x}\in\mathbb{R}^n$ is a high-dimensional feature representation of the rich side information, concatenated from the one-hot user ID and item ID, user and item content features, location features, \textit{etc}. The bias parameters are $w_0$, the global bias, and $\mathbf{w}\in\mathbb{R}^n$, where $w_i$ is the bias of the $i$-th feature. $\mathbf{V}\in\mathbb{R}^{k\times n}$ is the matrix of latent feature vectors, and each $\langle \mathbf{v}_i,\mathbf{v}_j\rangle$ models the interaction between the $i$-th and $j$-th feature dimensions. Therefore, $\mathbf{V}$ is the key reason why FM is an effective feature-based recommendation model, as it captures the rich side-information interactions. However, storing $\mathbf{V}$ and computing $\langle \mathbf{v}_i,\mathbf{v}_j\rangle$ on the fly are prohibitively expensive when $n$ is large. For example, a practical recommender system for Yelp\footnote{\href{https://www.yelp.ca/dataset}{https://www.yelp.ca/dataset}} needs to provide recommendations for over $1,300,000$ users and about $174,000$ businesses, with more than $1,200,000$ attributes (here, $n=1,300,000+174,000+1,200,000=2,674,000$). 
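Note that the pairwise term of Eq. (\ref{eq:fm}) does not require $\mathcal{O}(kn^2)$ work: the well-known reformulation of \cite{Rendle2011Factorization}, $\sum_{i<j}\langle\mathbf{v}_i,\mathbf{v}_j\rangle x_ix_j=\frac{1}{2}\sum_{f=1}^k\big[(\sum_i V_{fi}x_i)^2-\sum_i V_{fi}^2x_i^2\big]$, evaluates it in $\mathcal{O}(kn)$. A minimal NumPy sketch of the resulting scorer (illustrative, dense $\mathbf{x}$ for clarity):

```python
import numpy as np

def fm_score(w0, w, V, x):
    """FM prediction for one feature vector x, using the O(kn)
    reformulation of the pairwise term:
      sum_{i<j} <v_i, v_j> x_i x_j
        = 0.5 * sum_f [ (sum_i V[f,i] x_i)^2 - sum_i V[f,i]^2 x_i^2 ].
    V has shape (k, n), with column V[:, i] being v_i."""
    linear = w0 + w @ x
    s = V @ x                                   # sum_i v_i x_i, per factor f
    pairwise = 0.5 * np.sum(s ** 2 - (V ** 2) @ (x ** 2))
    return linear + pairwise
```

For sparse $\mathbf{x}$ the sums run only over the non-zero entries, which is what makes FM practical at the scale of the Yelp example above.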
To this end, we use binary codes $\mathbf{B}\in\{\pm 1\}^{k\times n}$ instead of $\mathbf{V}$ to formulate our proposed framework, Discrete Factorization Machines (DFM): \begin{equation}\label{eq:dfm}\small \text{DFM}(\mathbf{x}):= w_{0}+\sum\limits_{i=1}^{n} w_i x_i+ \sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j. \end{equation} However, directly setting $\mathbf{B} = \textrm{sgn}(\mathbf{V})$ leads to a large quantization loss and thus degrades the recommendation accuracy significantly~\cite{Zhang2016Discrete}. In the next section, we introduce our proposed DFM learning model and the discrete optimization that tackles the quantization loss. \section{Discrete Factorization Machines} We first present the learning objective of DFM and then elaborate on the optimization process, which is the key technical difficulty of the paper. Finally, we shed some light on model initialization, which is known to have a large impact on a discrete model. \subsection{Model Formulation} Given a training pair $(\mathbf{x},y)\in\mathcal{V}$, where $y$ is the ground-truth score of feature vector $\textbf{x}$ and $\mathcal{V}$ denotes the set of all training instances, the problem of DFM is formulated as: \begin{align}\small &\mathop{\arg\min}\limits_{w_0,\mathbf{w},\mathbf{B}} \sum\limits_{(\mathbf{x},y)\in \mathcal{V}} (y-w_{0}-\sum\limits_{i=1}^{n} w_i x_i -\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j)^2 \notag \\ &+ \alpha\sum\limits_{i=1}^{n} w_i^2, \text{s.t.}\ \mathbf{B} \in\{\pm1\}^{k\times n},\ \underbrace{\mathbf{B}\mathbf{1} = \mathbf{0}}_{\text{Balance}},\ \underbrace{ \mathbf{B}\mathbf{B}^T = n\mathbf{I} }_{\text{De-correlation}} \label{eq:obj} \end{align} Due to the discrete constraint in DFM, the regularizer ${\|\mathbf{B}\|}_F^2$ becomes a constant and hence is removed. 
Additionally, DFM imposes balanced and de-correlated constraints on the binary codes in order to maximize the information each bit carries and to make the binary codes compact \cite{Zhou2012Learning}. However, optimizing the objective function in Eq.(\ref{eq:obj}) is a highly challenging task, since it is generally NP-hard. To be specific, finding the global optimum involves an $\mathcal{O}(2^{kn})$ combinatorial search over the binary codes~\cite{Stad2001Some}. Next, we introduce a new learning objective that allows DFM to be solved in a computationally tractable way. The basic idea is to soften the balanced and de-correlated constraints. To achieve this, let us first introduce a delegate continuous variable $\mathbf{D}\in\mathcal{D}$, where $\mathcal{D}=\{\mathbf{D}\in\mathbb{R}^{k\times n}|\mathbf{D}\mathbf{1} = \mathbf{0},\mathbf{D}\mathbf{D}^T = n\mathbf{I}\}$. Then the balanced and de-correlated constraints can be softened to $\min_{\mathbf{D}\in\mathcal{D}}\|\mathbf{B}-\mathbf{D}\|_F$. As such, we obtain the softened learning objective for DFM: \begin{align}\small\label{eq:softobj} \mathop{\arg\min}\limits_{w_0,\mathbf{w},\mathbf{B}} \sum\limits_{(\mathbf{x},y)\in \mathcal{V}} &(y-w_{0}-\sum\limits_{i=1}^{n} w_i x_i-\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j)^2 \notag \\ &+ \alpha\sum\limits_{i=1}^{n} w_i^2 - 2\beta tr(\mathbf{B}^T\mathbf{D}), \\ \notag \text{s.t.}\ &\mathbf{D}\mathbf{1} = \mathbf{0},\mathbf{D}\mathbf{D}^T = n\mathbf{I},\mathbf{B}\in\{\pm 1\}^{k\times n}, \end{align} where we maximize $2tr(\mathbf{B}^T\mathbf{D})$ instead of minimizing $\|\mathbf{B}-\mathbf{D}\|_F^2$ for ease of optimization (the two objectives are equivalent since $tr(\mathbf{B}^T\mathbf{B})$ and $tr(\mathbf{D}^T\mathbf{D})$ are constant). $\beta$ is a tunable hyperparameter controlling the strength of the softened balance and de-correlation constraints. 
As the above Eq.(\ref{eq:softobj}) allows a certain discrepancy between $\mathbf{B}$ and $\mathbf{D}$, it makes the binarized optimization problem computationally tractable. Note that if there are feasible solutions to Eq.(\ref{eq:obj}), we can impose a very large $\beta$ to enforce $\mathbf{B}$ to be close to $\mathbf{D}$. Eq.(\ref{eq:softobj}) above presents the objective function to be optimized for DFM. It is worth noting that we do not discard the discrete constraint: we still directly optimize the discrete $\mathbf{B}$. Furthermore, through the joint optimization of the binary codes and the delegate real variables, we can obtain nearly balanced and uncorrelated binary codes. Next, we introduce an efficient solution to the mixed-integer optimization problem in Eq.(\ref{eq:softobj}). \subsection{Optimization} We employ an alternating optimization strategy~\cite{liu2017pami} to solve the problem. Specifically, we alternately solve three subproblems for the DFM model in Eq.(\ref{eq:softobj}), updating each of $\mathbf{B}$, $\mathbf{D}$, and $\mathbf{w}$ in turn with the others fixed. Next we elaborate on how to solve each of the subproblems. \noindent $\mathbf{B}$\textbf{-subproblem}.\quad In this subproblem, we aim to optimize $\mathbf{B}$ with $\mathbf{D}$ and $\mathbf{w}$ fixed. 
To achieve this, we update $\mathbf{B}$ by updating each vector $\mathbf{b}_r$ according to \begin{equation*} \begin{aligned}\small &\mathop{\arg\min}\limits_{\mathbf{b}_r\in\{\pm 1\}^k} \mathbf{b}_r^T\mathbf{U} (\sum\limits_{\mathcal{V}_r}x_r^2 \hat{\mathbf{x}} \hat{\mathbf{x}} ^T) \mathbf{U}^T\mathbf{b}_r -2(\sum\limits_{\mathcal{V}_r}x_r\psi \hat{\mathbf{x}} ^T )\mathbf{U}^T \mathbf{b}_r \\ &-2\beta \mathbf{d}_r^T\mathbf{b}_r,\ \text{where}\ \psi = y-w_0 - \textbf{w}^T \textbf{x} - \sum\limits_{i=1}^{n-1}\sum\limits_{j=i+1}^{n-1}\langle \mathbf{u}_i,\mathbf{u}_j\rangle \hat{x}_i \hat{x}_j \end{aligned} \end{equation*} where $\mathcal{V}_r=\{(\mathbf{x},y)\in \mathcal{V}|x_r\neq 0\}$ is the set of training instances in which feature $r$ is active, the vector $\hat{\mathbf{x}}$ equals $\mathbf{x}$ with element $x_r$ excluded, $\mathbf{U}$ is the matrix $\mathbf{B}$ with column $\mathbf{b}_r$ excluded, and $\mathbf{u}_i$ is a column of $\mathbf{U}$. Due to the discrete constraints, this optimization is generally NP-hard. To this end, we use Discrete Coordinate Descent (DCD)~\cite{Zhang2016Discrete} to update each bit of the binary code $\mathbf{b}_r$ in turn. Denoting by $b_{rt}$ the $t$-th bit of $\mathbf{b}_r$ and by $\mathbf{b}_{r\bar{t}}$ the remaining bits excluding $b_{rt}$, DCD updates $b_{rt}$ with $\mathbf{b}_{r\bar{t}}$ fixed, based on the following rule: \begin{equation}\small \begin{split} &b_{rt}\leftarrow\text{sgn}\big( K(\hat{b}_{rt},b_{rt})\big),\\ \hat{b}_{rt}=\sum_{\mathcal{V}_r} &(x_r\psi-x_r^2\hat{\mathbf{x}} ^T\mathbf{Z}_{\bar{t}}\mathbf{b}_{r\bar{t}}) \hat{\mathbf{x}}^T\mathbf{z}_t +\beta d_{rt} \end{split} \end{equation} where $\mathbf{Z}=\mathbf{U}^T$, $\mathbf{z}_t$ is the $t$-th column of the matrix $\mathbf{Z}$ while $\mathbf{Z}_{\bar{t}}$ excludes the $t$-th column from $\mathbf{Z}$, and $K(x,y)=x$ if $x\neq 0$ and $K(x,y)=y$ otherwise. 
In this way, $b_{rt}$ is left unchanged when $\hat{b}_{rt}=0$. \vspace{+5pt} \noindent $\mathbf{D}$\textbf{-subproblem}.\quad When $\mathbf{B}$ and $\mathbf{w}$ are fixed in Eq.(\ref{eq:softobj}), the optimization subproblem for $\mathbf{D}$ is: \begin{equation}\label{eq:dsubp}\small \mathop{\arg\max}\limits_{\mathbf{D}}tr(\mathbf{B}^T\mathbf{D}), s.t.\ \mathbf{D}\mathbf{1}=\mathbf{0}, \mathbf{D}\mathbf{D}^T=n\mathbf{I}. \end{equation} It can be solved with the aid of the centering matrix $\mathbf{J}=\mathbf{I}-\frac{1}{n}\mathbf{1}\mathbf{1}^T$. Specifically, by Singular Value Decomposition (SVD), we have $\mathbf{B}\mathbf{J}=\overline{\mathbf{B}}=\mathbf{P}\mathbf{\Sigma}\mathbf{Q}^T$, where $\mathbf{P}\in\mathbb{R}^{k\times k'}$ and $\mathbf{Q}\in\mathbb{R}^{n\times k'}$ are the left and right singular vectors corresponding to the $k'\ (\leq k)$ positive singular values in the diagonal matrix $\mathbf{\Sigma}$. We first apply an eigendecomposition to the small $k\times k$ matrix $\overline{\mathbf{B}}\ \overline{\mathbf{B}}^T= \begin{bmatrix} \mathbf{P}&\widehat{\mathbf{P}} \end{bmatrix} \begin{bmatrix} \mathbf{\Sigma}^2&\mathbf{0}\\ \mathbf{0}&\mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{P}&\widehat{\mathbf{P}} \end{bmatrix}^T$, where $\widehat{\mathbf{P}}$ contains the eigenvectors associated with the zero eigenvalues. Then, by the definition of the SVD, we have $\mathbf{Q}=\overline{\mathbf{B}}^T\mathbf{P}\mathbf{\Sigma}^{-1}$. In order to satisfy the constraint $\mathbf{D}\mathbf{1}=\mathbf{0}$, we further obtain an additional $\widehat{\mathbf{Q}}\in\mathbb{R}^{n\times(k-k')}$ by Gram-Schmidt orthogonalization based on $\begin{bmatrix} \mathbf{Q}&\mathbf{1} \end{bmatrix}$. As such, we have $\widehat{\mathbf{Q}}^T\mathbf{1}=\mathbf{0}$. 
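The construction above (centering, eigendecomposition of the small $k\times k$ matrix, and orthogonal completion) can be sketched in a few lines of NumPy. This is an illustrative implementation assuming $k\le n-1$, not the authors' code; QR factorization plays the role of Gram-Schmidt, and the result satisfies $\mathbf{D}\mathbf{1}=\mathbf{0}$ and $\mathbf{D}\mathbf{D}^T=n\mathbf{I}$ by construction:

```python
import numpy as np

def update_D(B):
    """Solve argmax_D tr(B^T D) s.t. D 1 = 0, D D^T = n I, following
    the SVD-based construction in the text (illustrative sketch)."""
    k, n = B.shape
    J = np.eye(n) - np.ones((n, n)) / n              # centering matrix
    Bbar = B @ J
    # Eigendecomposition of the small k x k matrix Bbar Bbar^T
    evals, Pfull = np.linalg.eigh(Bbar @ Bbar.T)
    order = np.argsort(evals)[::-1]
    evals, Pfull = evals[order], Pfull[:, order]
    kp = int(np.sum(evals > 1e-10))                  # k' positive singular values
    P = Pfull[:, :kp]
    Q = Bbar.T @ P / np.sqrt(evals[:kp])             # right singular vectors
    # Complete [Q, 1] to an orthonormal basis; the trailing columns give
    # Q_hat with Q_hat^T Q = 0 and Q_hat^T 1 = 0 (QR in place of Gram-Schmidt)
    M = np.column_stack([Q, np.ones(n) / np.sqrt(n)])
    basis, _ = np.linalg.qr(np.column_stack([M, np.random.randn(n, k - kp)]))
    Qhat = basis[:, kp + 1:k + 1]
    return np.sqrt(n) * np.column_stack([P, Pfull[:, kp:]]) \
        @ np.column_stack([Q, Qhat]).T
```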
Then we can get the closed-form update rule for the $\mathbf{D}$-subproblem in Eq.(\ref{eq:dsubp}) as: \begin{equation}\small \mathbf{D}\leftarrow\sqrt{n} \begin{bmatrix} \mathbf{P}&\widehat{\mathbf{P}} \end{bmatrix} \begin{bmatrix} \mathbf{Q}&\widehat{\mathbf{Q}} \end{bmatrix}^T \end{equation} \noindent $\mathbf{w}$\textbf{-subproblem}.\quad When $\mathbf{B}$ and $\mathbf{D}$ are fixed in Eq.(\ref{eq:softobj}), the subproblem for optimizing $\mathbf{w}$ is: \begin{equation}\small \begin{split} \mathop{\arg\min}\limits_{w_0,\mathbf{w}} &\sum\limits_{(\mathbf{x},y)\in\mathcal{V}} (\phi -w_{0}-\sum\limits_{i=1}^{n} w_i x_i)^2 +\alpha\sum\limits_{i=1}^{n} w_i^2 ,\\ &\phi = y-\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j. \end{split} \end{equation} Since $\textbf{w}$ is a real-valued vector, this is a standard $l_2$-regularized multivariate linear regression problem. Thus we can use the coordinate descent algorithm provided in the original FM~\cite{Rendle2011Factorization} to find the optimal value of $\mathbf{w}$ and the global bias $w_0$. \subsection{Initialization} Since DFM involves mixed-integer non-convex optimization, the initialization of the model parameters plays an important role in achieving faster convergence and finding a better local optimum. Here we suggest an efficient initialization strategy inspired by DCF~\cite{Zhang2016Discrete}.
It first solves a relaxed version of the optimization problem in Eq.(\ref{eq:softobj}), obtained by discarding the discrete constraints: \begin{equation} \begin{small} \begin{aligned} \label{eq:init} \small &\mathop{\arg\min}\limits_{w_0,\mathbf{w},\mathbf{V}} \sum\limits_{(\mathbf{x},y)\in \mathcal{V}} (y-w_{0}-\sum\limits_{i=1}^{n} w_i x_i-\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{v}_i,\mathbf{v}_j\rangle x_i x_j)^2\notag \\ + &\alpha \sum\limits_{i=1}^{n} w_i^2 + \beta\|\mathbf{V}\|_F^2 - 2\beta tr(\mathbf{V}^T\mathbf{D}), \text{s.t.}\ \mathbf{D}\mathbf{1} = \mathbf{0},\mathbf{D}\mathbf{D}^T = n\mathbf{I}\notag \end{aligned} \end{small} \end{equation} To solve this problem, we can initialize the real-valued $\mathbf{V}$ and $\mathbf{w}$ randomly and find a feasible initialization for $\mathbf{D}$ by solving the $\mathbf{D}$-subproblem. The optimization then proceeds alternately: solving $\mathbf{V}$ by traditional FM, solving $\mathbf{D}$ by the $\mathbf{D}$-subproblem, and solving $\mathbf{w}$ by gradient descent. Denoting the solution by ($\mathbf{V}^\ast,\mathbf{D}^\ast,\mathbf{w}^\ast,w_0^\ast$), we can then initialize the parameters in Eq.(\ref{eq:softobj}) as: \begin{equation} \mathbf{B}\leftarrow\text{sgn}(\mathbf{V}^\ast), \mathbf{D}\leftarrow\mathbf{D}^\ast, \mathbf{w}\leftarrow\mathbf{w}^\ast, w_0\leftarrow w_0^\ast \end{equation} \section{Experiments} As the key contribution of this work is the design of DFM for fast feature-based recommendation, we conduct experiments to answer the following research questions: ~\\ \noindent \textbf{RQ1}.\quad How does DFM perform as compared to existing hash-based recommendation methods? ~\\ \noindent \textbf{RQ2}.\quad How does the key hyper-parameter of DFM impact its recommendation performance? ~\\ \noindent \textbf{RQ3}.\quad How efficient is DFM as compared to the real-valued version of FM?
\begin{figure*}[!tbh] \centering \includegraphics[scale = 1.5]{leg.pdf}\\ \large\textbf{Yelp}\\ \vspace{-0.2cm} \includegraphics[width=0.265\textwidth]{y8.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{y16.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{y32.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{y64.pdf} \\ \vspace{-0.2cm} \large\textbf{Amazon}\\ \includegraphics[width=0.265\textwidth]{a8.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{a16.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{a32.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{a64.pdf} \vspace{-0.2cm} \caption{\textbf{Performance of NDCG@K (K ranging from 1 to 10) \textit{w.r.t.} code length ranging from 8 to 64 on the two datasets.}} \label{fig:performance} \vspace{-0.3cm} \end{figure*} \subsection{Experimental Settings} \textbf{Datasets}. We experiment on two publicly available datasets with explicit feedback from different real-world websites: \textit{Yelp} and \textit{Amazon}. Note that we assume each user has only one rating for an item and average the scores if an item has multiple ratings from the same user. \textbf{a) Yelp.} This dataset \cite{Lian2017Discrete} originally contains 409,117 users, 85,539 items (points of interest on Yelp such as restaurants and hotels), and 2,685,066 ratings with integer scores ranging from 1 to 5. In addition, each item has a set of textual reviews posted by the users. \textbf{b) Amazon.} This book rating dataset \cite{mcauley2015inferring} originally includes 12,886,488 ratings of 929,264 items (books on Amazon) from 2,588,991 users. In this dataset, an item also has a set of integer rating scores in $[1, 5]$ and a set of textual reviews. Considering the extreme sparsity of the original Yelp and Amazon datasets, we remove users with fewer than $20$ ratings and items rated by fewer than $20$ users.
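The sparsity filtering just described can be sketched as follows. Whether the filtering was applied once or repeated until stable is not stated in the paper, so the iterative (fixed-point) variant below is an assumption:

```python
from collections import Counter

def filter_sparse(ratings, min_count=20):
    """Drop users with fewer than min_count ratings and items rated by
    fewer than min_count users. A single pass can re-expose newly sparse
    users/items, so we repeat until a fixed point is reached.

    ratings: set of (user, item) pairs (one rating per pair, matching the
    paper's assumption of a single averaged rating per user-item pair).
    """
    while True:
        u_cnt = Counter(u for u, _ in ratings)
        i_cnt = Counter(i for _, i in ratings)
        kept = {(u, i) for u, i in ratings
                if u_cnt[u] >= min_count and i_cnt[i] >= min_count}
        if len(kept) == len(ratings):
            return kept
        ratings = kept
```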
After the filtering, there are 13,679 users, 12,922 items, and 640,143 ratings left in the Yelp dataset. For the Amazon dataset, we retain 35,151 users, 33,195 items, and 1,732,060 ratings. For a fair comparison with DCMF, we leave out the side information from the user field and represent an item with the bag-of-words encoding of its textual contents after aggregating all review contents of the item. Note that we remove \textit{stop words} and truncate the vocabulary by selecting the top 8,000 words by their \textit{Term Frequency–Inverse Document Frequency}. By concatenating the bag-of-words encoding (side information of the item) and the one-hot encodings of user and item ID, we obtain a feature vector of dimensionality 34,601 and 76,346 per rating (user-item pair) on Yelp and Amazon, respectively. \vspace{+5pt} \noindent\textbf{Baselines}. We implement our proposed DFM method using Matlab\footnote{Codes are available: \href{https://github.com/hanliu95/DFM}{https://github.com/hanliu95/DFM}} and compare it with its real-valued version and state-of-the-art binarized methods for Collaborative Filtering: \begin{itemize}[leftmargin=*] \item \textbf{libFM}. This is the original implementation\footnote{\href{http://www.libfm.org/}{http://www.libfm.org/}} of FM, which has achieved strong performance for feature-based recommendation with explicit feedback. Note that we adopt $l_2$ regularization on the parameters to prevent overfitting and use the SGD learner to optimize it. \item \textbf{DCF}. This is the first binarized CF method that can directly optimize the binary codes~\cite{Zhang2016Discrete}. \item \textbf{DCMF}. This is the state-of-the-art binarized method for CF with side information~\cite{Lian2017Discrete}. It extends \textbf{DCF} by encoding the side features as constraints for user codes and item codes. \item \textbf{BCCF}. This is a two-stage binarized CF method~\cite{Zhou2012Learning} with a relaxation stage and a quantization stage.
At these two stages, it successively solves MF with balanced code regularization and applies orthogonal rotation to obtain user codes and item codes. \end{itemize} Note that for \textbf{DCF} and \textbf{DCMF}, we use the original implementations released by the authors. For \textbf{BCCF}, we re-implement it since its original implementation is not publicly available. \vspace{+5pt} \noindent\textbf{Evaluation Protocols}. We first randomly split the ratings from each user into training ($50\%$) and testing ($50\%$) sets. As practical recommender systems typically recommend a list of items to a user, we rank the testing items of each user and evaluate the ranked list with \textit{Normalized Discounted Cumulative Gain} (NDCG), which has been widely used for evaluating ranking tasks like recommendation~\cite{NCF}. To evaluate the efficiency of \textbf{DFM} and real-valued FM, we use the \textit{Testing Time Cost} (TTC) \cite{Zhang2016Discrete}, where a lower cost indicates better efficiency. \vspace{+5pt} \noindent\textbf{Parameter Settings}. As we exactly follow the experimental settings of \cite{Lian2017Discrete}, we adopt their optimal settings for the hyper-parameters of \textbf{DCMF}, \textbf{DCF}, and \textbf{BCCF}. For \textbf{libFM}, we test the $l_2$ regularization on the feature embeddings $\mathbf{V}$ over $\{10^{-i} | i = -4, -3, -2, -1, 0, 1, 2\}$. Over the same range, we test the de-correlation constraint (\textit{i.e.,} $\beta$ in Eq. (\ref{eq:obj})) of \textbf{DFM}. Besides, we test the code length in the range $\{8, 16, 32, 64\}$. It is worth mentioning that we conduct all the experiments on a computer equipped with an Intel(R) Core(TM) i7-7700k 4-core CPU at 4.20GHz, 32GB RAM, and the 64-bit Windows 7 operating system. \subsection{Performance Comparison (RQ1)} In Figure \ref{fig:performance}, we show the recommendation performance (NDCG@1 to NDCG@10) of \textbf{DFM} and the baseline methods on the two datasets. The code length varies from 8 to 64.
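For reference, the NDCG metric used in our evaluation protocol can be computed per user as in the following sketch. This is one common gain formulation; the cited works do not pin down the exact variant, so treat it as illustrative:

```python
import math

def ndcg_at_k(ranked_rels, k):
    """NDCG@K for a single user.

    ranked_rels: relevance scores (e.g. held-out ratings) of the user's
    test items, listed in the order the model ranked them. Uses
    DCG = sum_i rel_i / log2(i + 1) with 1-based positions, normalized
    by the DCG of the ideal (descending) ordering.
    """
    dcg = sum(r / math.log2(i + 2) for i, r in enumerate(ranked_rels[:k]))
    ideal = sorted(ranked_rels, reverse=True)
    idcg = sum(r / math.log2(i + 2) for i, r in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0
```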
From the figure, we have the following observations: \begin{itemize}[leftmargin=*] \item \textbf{DFM} demonstrates consistent improvements over state-of-the-art binarized recommendation methods across code lengths (the average improvement is 7.95\% and 2.38\% on Yelp and Amazon, respectively). The performance improvements are attributed to the benefits of learning binary codes for features and modeling their interactions. \item Besides, \textbf{DFM} shows very competitive performance compared to \textbf{libFM}, its real-valued version, with an average performance drop of only 3.24\% and 2.40\% on the two datasets. As the code length increases, the performance gap continuously shrinks from 5.68\% and 4.76\% to 1.46\% and 1.19\% on Yelp and Amazon, respectively. One possible reason is that \textbf{libFM} suffers from overfitting as its representational capacity increases (\textit{i.e.,} with larger code length)~\cite{NFM}, whereas binarizing the parameters can alleviate the overfitting problem. This finding again verifies the effectiveness of the proposed \textbf{DFM}. \item Among the baseline methods, \textbf{DCF} consistently outperforms \textbf{BCCF}, while slightly underperforming \textbf{DCMF} with an average performance decrease of 1.58\% and 0.76\% on the two datasets, respectively. This is consistent with the findings in \cite{Liu2014Collaborative} that direct discrete optimization is stronger than two-stage approaches and that side information makes the user codes and item codes more representative, which can boost recommendation performance. However, the rather small performance gap between \textbf{DCF} and \textbf{DCMF} indicates that \textbf{DCMF} fails to make full use of the side information. The main reason is that \textbf{DCMF} performs prediction based only on user codes and item codes (the same as \textbf{DCF}). This inevitably limits the representation ability of DCMF.
\end{itemize} \begin{figure} \centering \includegraphics[width=0.252\textwidth]{p1.pdf} \hspace{-0.24in} \includegraphics[width=0.252\textwidth]{p2.pdf} \vspace{-15pt} \caption{\textbf{Recommendation performance of libFM and DFM (code length=64) on NDCG@10 \textit{w.r.t.} $l_2$ regularization (libFM) and de-correlation constraint (DFM).}} \label{fig:hyperparameter} \vspace{-0.3cm} \end{figure} \subsection{Impact of Hyper-parameter (RQ2)} Figure \ref{fig:hyperparameter} shows the recommendation performance of \textbf{libFM} and \textbf{DFM} on NDCG@10 with respect to the $l_2$ regularization of \textbf{libFM} and the de-correlation constraint of \textbf{DFM}, respectively. We omit the results for values of $K$ and code lengths other than $K = 10$ and code length = 64 since they show the same trend. First, we can see that the performance of \textbf{libFM} continuously drops as we decrease the $l_2$ regularization. One reason is that \textbf{libFM} can easily suffer from overfitting \cite{xiao2017attentional}. Second, we observe that \textbf{DFM} performs only slightly worse as the de-correlation constraint decreases. When the de-correlation constraint and the $l_2$ regularization are set to zero, both \textbf{DFM} and \textbf{libFM} exhibit a significant performance decrease in NDCG@10. Specifically, the performance of \textbf{DFM} drops by 1.91\% and 2.05\% on Yelp and Amazon, respectively, while \textbf{libFM} drops by 10.44\% and 6.56\%. These findings again demonstrate the overfitting problem of \textbf{libFM}, which makes \textbf{libFM} very sensitive to its $l_2$ regularization hyper-parameter, while the proposed \textbf{DFM} is relatively insensitive to its de-correlation constraint hyper-parameter. \subsection{Efficiency Study (RQ3)} As \textbf{libFM} is implemented in C++, we re-implement the testing algorithm of \textbf{DFM} in C++ and compile it with the same C++ compiler (gcc-4.9.3) for a fair comparison.
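The reason binary codes test quickly is that the inner products $\langle \mathbf{b}_i,\mathbf{b}_j\rangle$ between $\pm 1$ codes reduce to XOR and popcount on packed bit masks. A minimal Python sketch of this trick (an illustration of the idea, not the released C++ code):

```python
def pack(code):
    """Pack a ±1 code into an integer bitmask, one bit per dimension."""
    m = 0
    for b in code:
        m = (m << 1) | (1 if b > 0 else 0)
    return m

def binary_inner(mi, mj, k):
    """Inner product of two k-bit ±1 codes from their packed masks.

    Equal bits contribute +1 and differing bits -1, hence
    <b_i, b_j> = k - 2 * popcount(mi XOR mj). A C++ scorer would use a
    hardware popcount instruction for the same computation.
    """
    return k - 2 * bin(mi ^ mj).count("1")
```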
Table \ref{tab:efficiency} shows the efficiency comparison between \textbf{DFM} and \textbf{libFM} in terms of TTC on the two datasets. We have the following observations: \begin{itemize}[leftmargin=*] \item \textbf{DFM} achieves significant speedups over \textbf{libFM} on both datasets in terms of TTC (on average, the acceleration ratio is 15.99 on Yelp and 16.04 on Amazon). This demonstrates the great advantage of binarizing the real-valued parameters of FM. \item The acceleration ratio of \textbf{DFM} over \textbf{libFM} remains stable at around 16 on both datasets as the code length increases from 8 to 64. \end{itemize} Together with the comparable recommendation performance of \textbf{DFM} and \textbf{libFM}, these findings indicate that \textbf{DFM} is a practical solution for large-scale Web services, such as Facebook, Instagram, and YouTube, to substantially reduce the computation cost of their recommendation systems.
\begin{table}[t] \centering \caption{\textbf{Efficiency comparison between DFM (C++ implementation) and libFM \textit{w.r.t.} TTC (minutes), where the code length ranges from 8 to 64 on the two datasets.}} \vspace{-0.3cm} \textbf{Yelp}\\ \vspace{+1pt} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c||c|c|c|c|} \hline \textbf{Code Length} & \textbf{8} & \textbf{16} & \textbf{32} & \textbf{64}\\ \hline \textbf{libFM} (TTC) &$27.18$ & $56.77$ & $114.10$ & $217.64$ \\ \hline \textbf{DFM} (TTC) &$2.06$ & $3.56$ & $6.60$ & $12.43$\\ \hline Acceleration Ratio &$13.19$ & $15.95$ & $17.29$ & $17.51$\\ \hline \end{tabular} } ~\\ \vspace{+1pt} \textbf{Amazon}\\ \vspace{+2pt} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c||c|c|c|c|} \hline \textbf{Code Length} & \textbf{8} & \textbf{16} & \textbf{32} & \textbf{64}\\ \hline \textbf{libFM} (TTC) & $177.03$ & $357.46$ & $716.83$ & $1,414.67$ \\ \hline \textbf{DFM} (TTC) &$12.67$ & $22.50$ & $42.56$ & $81.04$\\ \hline Acceleration Ratio &$13.97$ & $15.89$ & $16.84$ & $17.46$\\ \hline \end{tabular} } \label{tab:efficiency} \vspace{-0.3cm} \end{table} \section{Conclusions} In this paper, we presented DFM, the first binary representation learning method for generic feature-based recommendation. In contrast to existing hash-based recommendation methods that can only learn binary codes for users and items, DFM is capable of learning a vector of binary codes for each feature. As a benefit of such a compact binarized model, the predictions of DFM can be computed efficiently in the binary space. Through extensive experiments on two real-world datasets, we demonstrated that DFM outperforms state-of-the-art hash-based recommender systems by a large margin and achieves a recommendation accuracy rather close to that of the original real-valued FM. This work takes a first step towards developing efficient and compact recommender models, which are particularly useful for large-scale and resource-limited scenarios.
In future work, we will explore the potential of DFM for context-aware recommendation on mobile devices, a typical application scenario that requires fast and compact models. Moreover, we will develop a pairwise learning method for DFM, which might be more suitable for personalized ranking tasks. Given the rapid recent development of neural recommendation methods~\cite{NFM}, we will next develop binarized neural recommender models to further boost the performance of hash-based recommendation. Besides, we are interested in deploying DFM in online recommendation scenarios and exploring how to integrate bandit-based and reinforcement learning strategies into DFM. Lastly, we will explore the potential of DFM in other tasks such as popularity prediction of online content~\cite{feng2018learning}. \vspace{+5pt} \noindent\textbf{Acknowledgment} This work is supported by the National Basic Research Program of China (973 Program), No.: 2015CB352502; the National Natural Science Foundation of China, No.: 61772310, No.: 61702300, and No.: 61702302; and the Project of Thousand Youth Talents 2016. This work is also part of NExT research, supported by the National Research Foundation, Prime Minister's Office, Singapore under its IRC@SG Funding Initiative.
\section{Introduction}\label{sect.introduction} Expressive and efficient temporal reasoning is essential to a number of areas in Artificial Intelligence~(AI)~\cite{KOUBARAKIS2006,PANI200155,Schwalb1998}. Over the past few years, many constraint-based formalisms have been developed to represent and reason about time in automated planning and temporal scheduling~\cite{Dechter2003,Nau2004}. We begin by recalling the Disjunctive Temporal Problem (DTP)~\cite{Oddi2000,STERGIOU2008,TSAMARDINOS2003}. In its general form, a DTP asks, given a finite set $\mathcal{T}=\{X_0,X_1,\ldots, X_N\}$ of temporal variables (i.e.,\xspace time-points), to schedule them on the real line in such a way as to satisfy a prescribed finite set $\mathcal{C}$ of temporal constraints over $\mathcal{T}$. Every constraint $c_i\in\mathcal{C}$ is a disjunction of the form $s_{i,1}\vee s_{i,2}\vee \cdots \vee s_{i,T_i}$, where every $s_{i,j}$ is a simple temporal constraint of the form $(l_{i,j}\leq X_{\beta_{i,j}} - X_{\alpha_{i,j}} \leq u_{i,j})$ for some integers $0\leq \alpha_{i,j}, \beta_{i,j} \leq N$ and reals $l_{i,j}, u_{i,j}$. Although DTPs are expressive enough to capture many tasks in automated planning and temporal scheduling, they are \textsc{NP}-complete~\cite{STERGIOU2008}. The principal direct approach taken to solve DTPs has been to convert the original problem into one of selecting a disjunct from each constraint~\cite{STERGIOU2008,TSAMARDINOS2003}, and then to check whether the set of selected disjuncts forms a consistent Simple Temporal Problem (STP)~\cite{DechterMP91}. This check can be done in strongly polynomial time by computing single-source shortest paths (e.g.,\xspace with the Bellman-Ford algorithm~\cite{Bellman58}). From this perspective, the prohibitive complexity of solving DTPs clearly stems from the fact that exponentially many disjunct combinations are possible. In~\cite{Kumar2006,Kumar05}, T.K.S.
Kumar studied the Restricted Disjunctive Temporal Problem (RDTP), a tractable subclass of DTPs strictly including the classical and well-established STPs~\cite{DechterMP91}. In RDTPs, each constraint can be of one of the following three types: ($\texttt{t}_1\xspace$) $(Y-X\leq w)$, for $w$ real (a simple temporal difference-constraint); ($\texttt{t}_2\xspace$) $(l_1\leq X\leq u_1)\vee \cdots \vee (l_k\leq X\leq u_k)$, for $l_i,u_i$ reals (a single-variable disjunction of many interval-constraints); ($\texttt{t}_3\xspace$) $(l_1\leq X\leq u_1) \vee (l_2\leq Y\leq u_2)$, for $l_i,u_i$ reals (a two-variable disjunction of two interval-constraints). It was shown in~\cite{Kumar05} that RDTPs are solvable in deterministic strongly polynomial time by reducing them to the Connected Row-Convex (CRC)~\cite{DEVILLE1999} constraint satisfaction problem; faster randomized algorithms were also proposed. CRC constraints generalize many other known tractable classes of constraints such as 2-SAT, implicational, and binary integer-weighted linear constraints~\cite{DEVILLE1999}. In particular, Kumar's deterministic algorithm for solving RDTPs works by reducing them to binary Constraint Satisfiability Problems (CSPs) over meta-variables representing $\texttt{t}_2\xspace$ or $\texttt{t}_3\xspace$ constraints, then showing that such binary constraints are indeed CRC constraints, and finally exploiting the algorithmic tractability of CRC constraints.
An instantiation of a consistency checking algorithm (e.g.,\xspace~\cite{DEVILLE1999}) that further exploits the structure of CRC constraints leads to a time complexity of $O\big((|\mathcal{C}_{\texttt{t}_2\xspace}| + |\mathcal{C}_{\texttt{t}_3\xspace}|)^3\cdot d^2_{\max} + |\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_2\xspace}| + |\mathcal{C}_{\texttt{t}_3\xspace}|)^2\big)$, where $\mathcal{C}_{\texttt{t}_1\xspace, \texttt{t}_2\xspace, \texttt{t}_3\xspace}$ is the set of ${\texttt{t}_1\xspace},{\texttt{t}_2\xspace},{\texttt{t}_3\xspace}$ constraints (respectively), and $d_{\max}$ is the maximum number of disjuncts possible per single constraint~\cite{Kumar05}. Randomization reduces the running time to $O\big((|\mathcal{C}_{\texttt{t}_2\xspace}|+|\mathcal{C}_{\texttt{t}_3\xspace}|)^2 \cdot d_{\max}^2 \cdot \delta + |\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_2\xspace}| + |\mathcal{C}_{\texttt{t}_3\xspace}|)^2\big)$, where $\delta$ is the degree of the CRC network (i.e.,\xspace the maximum number of constraints in which any variable participates)~\cite{Kumar05}. Notable applications of RDTPs include solving STP{s} with Taboo Regions, cfr.~\cite{Kumar2013}. \textit{Contributions.} This work offers a deeper understanding of the tractability of RDTPs, leading to elementary deterministic strongly polynomial time algorithms that significantly improve the asymptotic running times of both Kumar's deterministic and randomized solutions.
Our time complexity is $O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|) + |\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\cdot|\mathcal{C}_{\texttt{t}_3\xspace}| + |\mathcal{C}_{\texttt{t}_3\xspace}|^2\big)$, where $d_{\mathcal{C}_{\texttt{t}_2\xspace}}$ is the total number of disjuncts counted over all $\texttt{t}_2\xspace$-constraints. Since $d_{\mathcal{C}_{\texttt{t}_2\xspace}}\leq d_{\max}\cdot |\mathcal{C}_{\texttt{t}_2\xspace}|$, this improves over all of the previous solutions. The result is obtained by reducing RDTPs to the Single-Source Shortest Paths (SSSP) and the 2-SAT problem (jointly), instead of reducing to CRCs: the full expressive power of CRCs is not needed, as binary linear and 2-SAT constraints are enough. In passing, we obtain a faster (quadratic time) deterministic algorithm for solving temporal problems having only $\{\texttt{t}_1\xspace, \texttt{t}_2\xspace\}$-constraints and no ${\texttt{t}_3\xspace}$-constraint. As a second main contribution, we study the tractability frontier of RDTPs widened with another kind of restricted disjunctive constraint, namely that of Hyper Temporal Networks (HyTN\xspace{s})~\cite{CominPR17}, a strict generalization of STN\xspace{s} grounded on directed hypergraphs and introduced to overcome the limitation of considering only conjunctions of constraints while maintaining practical efficiency in consistency checking. In a HyTN\xspace, a single temporal multi-tail (or multi-head) hyperarc-constraint is defined as a set of two or more maximum-delay (respectively, minimum-anticipation) constraints, and it is satisfied when at least one of these delay constraints is satisfied.
We prove that solving temporal problems having only ${\texttt{t}_2\xspace}$-constraints and either only multi-tail or only multi-head hyperarc-constraints lies in $\textsc{NP}\cap\textsc{\text{co-}NP}$ and admits deterministic pseudo-polynomial time algorithms; on the other hand, solving temporal problems having only ${\texttt{t}_3\xspace}$-constraints and either only multi-tail or only multi-head hyperarc-constraints turns out to be strongly \textsc{NP}-complete. See Table~\ref{tab:results} below for a summary. \begin{center} \begin{tabular}{ | c | c | c | c |} \textit{Problem} & \textit{Complexity} & \textit{Improved Time Bound} & \textit{Cfr.} \\ \hline $\texttt{t}_2\xspace$DTPs & \textsc{P} & $O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|) + |\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\big)$ & Sect.~3 \\ \hline RDTPs & \textsc{P} & \begin{tabular}{@{}c@{}}$O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|)\, +$ \\ $+\,|\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\cdot|\mathcal{C}_{\texttt{t}_3\xspace}| + |\mathcal{C}_{\texttt{t}_3\xspace}|^2\big)$ \end{tabular} & Sect.~4 \\ \hline $\texttt{t}_2\xspace$HyTPs & $\textsc{NP}\cap \text{co\,-\textsc{NP}}$ & $O\big((|\mathcal{T}|+|\mathcal{A}|)\cdot m_{\mathcal{A}}\cdot W_{\mathcal{A},\mathcal{C}_{\texttt{t}_2\xspace}}\big)$ & Sect.~6 \\ \hline $\texttt{t}_3\xspace$HyTPs & $\textsc{NP}\text{-complete}$ & \emph{n.a. (exponential time)} & Sect.~5 \\ \hline \end{tabular}\label{tab:results} \end{center} \begin{center} \colorbox{colorbox}{\rule{0pt}{3pt}\rule{3pt}{0pt}}\; \footnotesize\textbf{Table~\ref{tab:results}} Summary of main results.
\normalsize \end{center} \section{Background}\label{sect.Background} This section presents the basic background notions assumed in the rest of the paper, starting with Simple Temporal Networks (STN\xspace{s}) and related problems (STP{s}), cfr.~\cite{Dechter2003,DechterMP91}. \begin{mydefinition}[STN\xspace{s}, STP\xspace{s}~\cite{Dechter2003,DechterMP91}] A {\em Simple Temporal Network} (STN\xspace) is a pair $(\mathcal{T},\mathcal{C})$, where $\mathcal{T}$ is a set of real-valued variables called {\em time-points,} and $\mathcal{C}$ is a set of linear real-weighted binary constraints over $\mathcal{T}$ called {\em simple (or $\texttt{t}_1\xspace$)} temporal constraints, each having the form: \[(Y-X\leq w_{X,Y}),\text{ where } X,Y\in \mathcal{T} \text{ and } w_{X,Y}\in\ensuremath{\mathbb{R}}.\] An STN\xspace is \textit{consistent} if it admits a \emph{feasible schedule}, i.e.,\xspace some $s: \mathcal{T}\mapsto \ensuremath{\mathbb{R}}$ such that $s(Y) \leq s(X) + w_{X,Y}$ for all $(Y-X\leq w_{X,Y})\in \mathcal{C}$. The {\em Simple Temporal Problem} (STP\xspace) is that of determining whether a given STN\xspace is consistent or not. \end{mydefinition} Any STN\xspace $\mathcal{N}=(\mathcal{T},\mathcal{C})$ can be seen as a directed weighted graph with vertex set $\mathcal{T}$ and arc set $A_\mathcal{C}\triangleq \{(X,Y, w_{X,Y})\mid (Y-X\leq w_{X,Y})\in \mathcal{C}\}$. So, a \emph{path} $p$ in $\mathcal{N}$ is any finite sequence of vertices $p=(v_0,v_1,\ldots, v_k)$ (for some $k\geq 1$) such that $(v_i, v_{i+1}) \in A_\mathcal{C}$ for every $i\in [0,k)\cap\ensuremath{\mathbb{Z}}$; the total weight of $p$ is then $w_p\triangleq \sum_{i=0}^{k-1} w_{v_i, v_{i+1}}$.
A \emph{cycle} $C$ in $\mathcal{N}$ is any set of arcs $C\subseteq A_\mathcal{C}$ cyclically sequenced as $a_0, a_1, \ldots, a_{\ell-1}$, where the head of each arc coincides with the tail of the next one, i.e.,\xspace $h(a_i) = t(a_j)$ iff $j=i+1 \mod \ell$; it is called a \emph{negative cycle} if $w(C) \leq 0$, where $w(C)$ stands for $\sum_{a\in C} w_a$. A graph is called \textit{conservative} when it contains no negative cycle. A \textit{schedule} is any function $f: \mathcal{T} \mapsto \ensuremath{\mathbb{R}}$. The \textit{reduced weight} of an arc $a = (t,h,w_a)$ with respect to a schedule $f$ is defined as $w^{f}_a \triangleq w_a - f(h) + f(t)$. A schedule $f$ is \textit{feasible} iff $w^{f}_a\geq 0$ for every $a\in A_\mathcal{C}$. It is also worth noticing that, given two feasible schedules $s_1,s_2$ of any STN\xspace, the pointwise-minimum schedule, $s(u)\triangleq \min(s_1(u), s_2(u))$ $\forall u\in\mathcal{T},$ is also feasible. Indeed, among all of the possible feasible schedules of a given consistent STN\xspace, it is natural to consider the \emph{least} feasible one; i.e.,\xspace $\hat{s}:\mathcal{T}\rightarrow\ensuremath{\mathbb{R}}_{\geq 0}$ is the \emph{least feasible schedule} of an STN\xspace $\mathcal{N}$ if $\hat{s}$ is feasible for $\mathcal{N}$ and, for any other non-negative feasible schedule $s'\geq 0$ of $\mathcal{N}$, it holds that $\hat{s}(u)\leq s'(u)$ $\forall u\in \mathcal{T}$. Remarkably, finding the least feasible schedule of an STN\xspace takes polynomial time~\cite{Dechter2003}. \begin{mytheorem}[\cite{Dechter2003}]\label{thm:main_stn} Let $\mathcal{N}=(\mathcal{T},\mathcal{C})$ be an STN\xspace. The \emph{Bellman-Ford} (BF) algorithm (cfr.~\cite{Bellman58}) produces in $O(|\mathcal{T}|\cdot |\mathcal{C}|)$ time: either the least feasible schedule $\hat{s} :\mathcal{T}\rightarrow\ensuremath{\mathbb{R}}_{\geq 0}$, in case $\mathcal{N}$ is consistent; or a certificate that $\mathcal{N}$ is inconsistent in the form of a negative cycle.
Moreover, if the weights of the arcs are all integers, then the scheduling values of $\hat{s}$ are all integers too. \end{mytheorem} Concerning the BF algorithm itself, it is worth considering an improved variant of it that we call the \emph{Bellman-Ford Value-Iteration}~(BF-VI). The basic idea of BF-VI is the same as in the original BF algorithm, in that each vertex is used as a candidate to relax its adjacent vertices. The improvement is that, instead of trying all vertices blindly, BF-VI maintains a queue $Q$ of candidate vertices and adds a vertex to $Q$ only when that vertex has just been relaxed. A candidate vertex $v$ is extracted from $Q$ according to a fixed policy (e.g.,\xspace LIFO), and then the adjacent vertices of $v$ are possibly relaxed as usual and added to $Q$ (if not already present; no repetitions are allowed). This process repeats until no more vertices can be relaxed. BF-VI serves as a basic scheme that we leverage to design faster algorithms for RDTPs. \subsection{Restricted Disjunctive Temporal Problems} \begin{figure} \begin{minipage}[t]{0.45\textwidth} \begin{tikzpicture}[scale=0.75] \begin{axis}[axis lines=middle, xtick={0,...,9}, y=16cm, ymajorticks=false, axis equal,grid=both, xlabel=$X_i$, every axis x label/.style={ at={(ticklabel* cs:1)}, anchor=west}, every tick/.style={ black, thick }] \addplot[color=black, mark=] coordinates{(0,1) (0,-1)}; \addplot[color=black, mark=] coordinates{(1,1) (1,-1)}; \addplot [color=black,mark=,fill=gray, fill opacity=0.5] coordinates { (0, 1) (1, 1) (1, -1) (0,-1) }; \addplot[color=black, mark=] coordinates{(2,1) (2,-1)}; \addplot[color=black, mark=] coordinates{(3,1) (3,-1)}; \addplot [color=black,mark=,fill=gray, fill opacity=0.5] coordinates { (2, 1) (3, 1) (3, -1) (2,-1) }; \addplot[color=black, mark=] coordinates{(5,1) (5,-1)}; \addplot[color=black, mark=] coordinates{(7,1) (7,-1)}; \addplot [color=black,mark=,fill=gray, fill opacity=0.5] coordinates { (5, 1) (7, 1) (7, -1) (5,-1) }; \addplot[color=black,
mark=] coordinates{(8,1) (8,-1)}; \addplot[color=black, mark=] coordinates{(9,1) (9,-1)}; \addplot [color=black,mark=,fill=gray, fill opacity=0.5] coordinates { (8, 1) (9, 1) (9, -1) (8,-1) }; \end{axis} \end{tikzpicture}\caption{An example of a $\texttt{t}_2\xspace$-constraint: $(0\leq X_i\leq 1)\vee (2\leq X_i\leq 3)\vee (5\leq X_i\leq 7)\vee (8\leq X_i\leq 9)$.}\label{fig:type2} \end{minipage} \qquad \begin{minipage}[t]{0.45\textwidth} \begin{tikzpicture}[scale=0.75] \begin{axis}[axis lines=middle, y=15cm, x=15cm, xtick={0,...,3}, axis equal,grid=both, xlabel=$X_i$, ylabel=$X_j$, every axis x label/.style={ at={(ticklabel* cs:1)}, anchor=west}, every tick/.style={ black, thick }, every axis y label/.style={ at={(ticklabel* cs:1.05)}, anchor=north west}, every tick/.style={ black, thick } ] \addplot[color=black, mark=] coordinates{(2,3) (2,0)}; \addplot[color=black, mark=] coordinates{(3,3) (3,0)}; \addplot [color=black,mark=,fill=gray, fill opacity=0.5] coordinates { (2, 3) (3, 3) (3, 0) (2,0) }; \addplot[color=black, mark=] coordinates{(0,1) (4,1)}; \addplot[color=black, mark=] coordinates{(0,2) (4,2)}; \addplot [color=black,mark=,fill=gray, fill opacity=0.5] coordinates { (0, 1) (0, 2) (2, 2) (2,1) (3, 1) (3, 2) (4, 2) (4,1) }; \end{axis} \end{tikzpicture}\caption{An example of a $\texttt{t}_3\xspace$-constraint: $(2\leq X_i\leq 3)\vee (1\leq X_j\leq 2)$.}\label{fig:type3} \end{minipage} \end{figure} Let us proceed by formally defining RDTN\xspace{s} and RDTP\xspace{s}. \figref{fig:type2} and \figref{fig:type3} (above) illustrate an example of a $\texttt{t}_2$-constraint and $\texttt{t}_3$-constraint (respectively). 
\begin{mydefinition}[RDTN\xspace{s}, RDTP\xspace{s}~\cite{Kumar2006,Kumar05}] A {\em Restricted Disjunctive Temporal Network}~(RDTN\xspace)~$\mathcal{N}$ is a pair $(\mathcal{T},\mathcal{C})$, where $\mathcal{T}$ is a set of time-points and $\mathcal{C} = \mathcal{C}_{\texttt{t}_1\xspace}\cup\mathcal{C}_{\texttt{t}_2}\cup\mathcal{C}_{\texttt{t}_3}$ is a set of {\em restricted disjunctive temporal constraints} over $\mathcal{T}$, each being of one of the following three types: \begin{enumerate} \item[($\texttt{t}_1\xspace$)]: $(Y-X\leq w_{X,Y})$, where $X,Y\in \mathcal{T}$ and $w_{X,Y}\in\ensuremath{\mathbb{R}}$; \item[($\texttt{t}_2$)]: $\bigvee_{i=1}^{k}(l_i\leq X\leq u_i)$, where $X\in \mathcal{T}$ and $l_i,u_i\in \ensuremath{\mathbb{R}}$ for every $i=1, \ldots, k$; \item[($\texttt{t}_3$)]: $(l_1\leq X\leq u_1) \vee (l_2\leq Y\leq u_2)$, where $X,Y\in \mathcal{T}$ and $l_i,u_i\in \ensuremath{\mathbb{R}}$ for $i=1,2$. \end{enumerate} An RDTN\xspace is \textit{consistent} if it admits a \emph{feasible schedule}, i.e.,\xspace some $s: \mathcal{T}\rightarrow \ensuremath{\mathbb{R}}$ satisfying all of the disjunctive temporal constraints in $\mathcal{C}$. The {\em Restricted Disjunctive Temporal Problem} (RDTP\xspace) is that of determining whether a given RDTN\xspace is consistent or not. \end{mydefinition} Notice that $\texttt{t}_1\xspace$-constraints coincide with the simple temporal constraints of STN\xspace{s}. We assume w.l.o.g.\ that the disjuncts of any $\texttt{t}_2\xspace$-constraint are arranged in ascending order of the end points of their corresponding intervals, i.e.,\xspace $l_i<l_{i+1}\wedge u_i<u_{i+1}$ $\forall i$, whenever $\bigvee_{i=1}^{k}(l_i\leq X\leq u_i)\in\mathcal{C}_{\texttt{t}_2}$; these natural orderings on the interval domains of the time-points will be referred to as their \emph{nominal ordering}. 
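To make the nominal ordering concrete, the following minimal sketch (the helper names and list-of-pairs representation are ours, not part of the formal development) shows how a $\texttt{t}_2\xspace$-constraint with disjuncts in nominal ordering can be tested, and how the next lower bound to enforce can be located when the current value falls below an interval:

```python
def satisfies_t2(x, disjuncts):
    """Check a t2-constraint. `disjuncts` is a list of (l_i, u_i) pairs,
    assumed sorted in nominal ordering (l_i < l_{i+1} and u_i < u_{i+1})."""
    return any(l <= x <= u for (l, u) in disjuncts)


def next_lower_bound(x, disjuncts):
    """If x violates the constraint but can still be repaired by increasing x,
    return the smallest l_i with x < l_i; return None if x already satisfies
    the constraint, or if x > u_k (no increase of x can help then)."""
    if x > disjuncts[-1][1]:  # x > u_k = max_i u_i
        return None
    for (l, u) in disjuncts:
        if x <= u:  # first interval whose upper end is not yet passed
            return None if l <= x else l
    return None
```

For instance, with the constraint of \figref{fig:type2}, i.e. disjuncts $[(0,1),(2,3),(5,7),(8,9)]$, a value of $1.5$ is unsatisfying and the next lower bound to enforce is $2$.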
For any $\tau\in\{1,2,3\}$, $|\mathcal{C}_{\texttt{t}_\tau}|$ denotes the number of $\texttt{t}_\tau$-constraints (i.e.,\xspace the cardinality of $\mathcal{C}_{\texttt{t}_\tau}$, not the encoding length). Also, for any $c_X\in \mathcal{C}_{\texttt{t}_2}$, $|c_X|$ denotes the number of disjuncts of $c_X$, and $d_{\mathcal{C}_{\texttt{t}_2}}\triangleq \sum_{c_X\in\mathcal{C}_{\texttt{t}_2}} |c_X|$. Finally, let us fix a total ordering on the time-points, i.e.,\xspace $\mathcal{T}=\{T_1, \ldots, T_k\}$; this induces an ordering on the pair of disjuncts in any $\texttt{t}_3\xspace$-constraint: provided $c=(l_1\leq X_i\leq u_1)\vee (l_2\leq X_j\leq u_2)\in\mathcal{C}_{\texttt{t}_3}$ for some $i<j$, then $d'\triangleq (l_1\leq X_i\leq u_1)$ and $d''\triangleq (l_2\leq X_j\leq u_2)$ are called the \emph{first} and the \emph{second} disjunct of $c$, respectively. As mentioned in the introduction, Kumar showed in~\cite{Kumar05, Kumar2006} that RDTPs are solvable in deterministic strongly polynomial time by reducing them to CRCs~\cite{DEVILLE1999}. \subsection{Hyper Temporal Networks} In order to study the tractability frontier of RDTP\xspace{s}, we shall consider the HyTN\xspace model, which is grounded in directed hypergraphs as defined next. \begin{mydefinition}[\cite{CominPR17}] A \emph{directed hypergraph} ${\cal H}$ is a pair $(\mathcal{T},\mathcal{A})$, where $\mathcal{T}$ is the set of nodes, and $\mathcal{A}$ is the set of \emph{hyperarcs}. Each hyperarc $A\in\mathcal{A}$ is either \emph{multi-head} or \emph{multi-tail}: \begin{itemize} \item A \emph{multi-head} hyperarc $A=(t_A, H_A, w_A)$ has a distinguished node $t_A$, called the \emph{tail} of $A$, and a non-empty set $H_A\subseteq \mathcal{T}\setminus\{t_A\}$ containing the \emph{heads} of $A$; each head $v\in H_A$ is associated with a \emph{weight} $w_A(v)\in\ensuremath{\mathbb{R}}$, which is a real number (unless otherwise specified). 
\figref{fig:head_hyperarc} depicts a possible representation of a multi-head hyperarc: the tail is connected to each head by a dashed arc labeled with the name of the hyperarc and the weight associated with the considered head. \item A \textit{multi-tail} hyperarc $A=(T_A, h_A, w_A)$ has a distinguished node $h_A$, called the \emph{head} of $A$, and a non-empty set $T_A\subseteq \mathcal{T}\setminus\{h_A\}$ containing the \emph{tails} of $A$; each tail $v\in T_A$ is associated with a \emph{weight} $w_A(v)\in\ensuremath{\mathbb{R}}$, which is a real number (unless otherwise specified). \figref{fig:tail_hyperarc} depicts a possible representation of a multi-tail hyperarc: the head is connected to each tail by a dotted arc labeled with the name of the hyperarc and the weight associated with the considered tail. \end{itemize} \end{mydefinition} \begin{figure}[!h] \centering \begin{minipage}[]{.35\linewidth} \begin{tikzpicture}[arrows=->,scale=.7,node distance=.5 and 2] \node[node,xshift=1ex,label={above, yshift=.5ex:$H_A$}] (v1) {$v_1$}; \node[node,below=of v1] (v2) {$v_2$}; \node[node,below=of v2] (v3) {$v_3$}; \node[node,left=of v2] (u) {$t_A$}; \draw[>=stealth, multiHead] (u) to node[timeLabel,above,sloped] {$A, w_A(v_1)$} (v1);% \draw[>=stealth, multiHead] (u) to node[timeLabel,above,sloped] {$A, w_A(v_2)$} (v2);% \draw[>=stealth, multiHead] (u) to node[timeLabel,above,sloped] {$A, w_A(v_3)$} (v3);% \draw[dashed, ultra thin, rounded corners=15pt] (-.55,1.2) rectangle (.8,-2.75); \end{tikzpicture}\subcaption{Multi-Head Hyperarc}\label{fig:head_hyperarc} \end{minipage} \begin{minipage}[]{.35\linewidth} \begin{tikzpicture}[arrows=->, scale=.7,node distance=.5 and 2] \node[node,label={above, yshift=.5ex:$T_A$}] (v1) {$v_1$}; \node[node,below=of v1] (v2) {$v_2$}; \node[node,below=of v2] (v3) {$v_3$}; \node[node,right=of v2] (u) {$h_A$}; \draw[>=stealth,multiTail] (v1) to node[timeLabel,above,sloped] {$A, w_A(v_1)$} (u.north west);% \draw[>=stealth, multiTail] (v2) to node[timeLabel,above,sloped] {$A, w_A(v_2)$} (u);% 
\draw[>=stealth,multiTail] (v3) to node[timeLabel,above,sloped] {$A, w_A(v_3)$} (u.south west);% \draw[dashed, ultra thin, rounded corners=15pt] (-.65,1.2) rectangle (.725,-2.75); \end{tikzpicture}\subcaption{Multi-Tail Hyperarc}\label{fig:tail_hyperarc} \end{minipage} \caption{Hyperarcs in Hyper Temporal Networks.}\label{fig:hyperarcs} \end{figure} The \emph{cardinality} of a hyperarc $A\in \mathcal{A}$ is $|A|\triangleq |H_A\cup \{t_A\}|$ if $A$ is multi-head, and $|A| \triangleq |T_A\cup\{h_A\}|$ if $A$ is multi-tail; when $|A|=2$, then $A=(u, v, w)$ is a standard arc. The \textit{order} and \textit{size} of a directed hypergraph $(\mathcal{T},\mathcal{A})$ are $|\mathcal{T}|$ and $m_\mathcal{A} \triangleq \sum_{A\in \mathcal{A}} |A|$~(respectively). \begin{mydefinition}[\GHTN~\cite{CominPR17}] A \emph{general-HyTN\xspace} is a directed hypergraph ${\cal H} = (\mathcal{T},\mathcal{A})$ where each node $X\in \mathcal{T}$ represents a time-point, and each multi-head\slash multi-tail hyperarc stands for a set of temporal distance constraints between the tail\slash head and the heads\slash tails. \end{mydefinition} In general-HyTN\xspace{s}, a hyperarc is \textit{satisfied} when at least one of its distance constraints is satisfied. Then, a HyTN\xspace{} is \textit{consistent} when it is possible to assign a value to each time-point so that all of its hyperarcs are satisfied. More formally, in the HyTN\xspace model the consistency-checking problem is the following decision problem. 
\newcommand{\savefootnote}[2]{\footnote{\label{#1}#2}} \newcommand{\repeatfootnote}[1]{\textsuperscript{\ref{#1}}} \begin{mydefinition}[\GHTP~\cite{CominPR17}] Given a general-HyTN\xspace \mbox{${\cal H}=(\mathcal{T},\mathcal{A})$}, the \emph{General Hyper Temporal Problem} (\GHTP) is that of deciding whether or not there exists a schedule \mbox{$s:\mathcal{T} \rightarrow \ensuremath{\mathbb{R}}$} such that, for every hyperarc $A\in\mathcal{A}$, the following hold: \begin{itemize} \item if $A=(t,h,w)$ is a standard arc, then: $s(h)-s(t)\leq w$; \item if $A=(t_A, H_A, w_A)$ is a multi-head hyperarc, then: $s(t_A) \geq \min_{v\in H_A} \{s(v) - w_A(v) \}$; \item if $A=(T_A, h_A, w_A)$ is a multi-tail hyperarc, then: $s(h_A) \leq \max_{v\in T_A} \{s(v) + w_A(v) \}$. \end{itemize} \end{mydefinition} Any such schedule $s$ is called \textit{feasible}. A HyTN\xspace that admits at least one feasible schedule is called \textit{consistent}. Comparing the consistency of HyTN\xspace{s} with the consistency of STN\xspace{s}, the most important novelty is that, while in the distance graph of an STN\xspace each arc represents a distance constraint and all such constraints have to be satisfied by any feasible schedule, in a HyTN\xspace each hyperarc represents a disjunction of one or more distance constraints, and a feasible schedule has to satisfy at least one of these distance constraints for each hyperarc. Let us survey some interesting properties of the consistency-checking problem above. The first one is that any integer-weighted HyTN\xspace admits an integer-valued feasible schedule when it is consistent, as stated in the following proposition. \begin{proposition}[\cite{CominPR17}]\label{prop:int_sched} Let ${\cal H}=(\mathcal{T},\mathcal{A})$ be an integer-weighted\footnote{Integer-weighted HyTN\xspace means that $w_A(v)\in\mathbb{Z}$ for every $A\in\mathcal{A}$ and $v\in\mathcal{T}$ for which $w_A(v)$ is defined.} and consistent general-HyTN\xspace. 
Then ${\cal H}$ admits an integer feasible schedule $s:\mathcal{T} \rightarrow \{-T,-T+1, \ldots, T-1, T\}$, where $T = \sum_{A\in\mathcal{A}, v\in \mathcal{T}} |w_A(v)|$. \end{proposition} \noindent The following theorem states that \GHTP is \text{NP}-complete, in a strong sense. \begin{mytheorem}[\cite{CominPR17}]\label{Teo:npcompleteness} \GHTP is an \text{NP}-complete problem even if the input instances ${\cal H}=(\mathcal{T}, \mathcal{A})$ are restricted to satisfy $w_A(\cdot) \in\{-1, 0, 1\}$ and $|H_A|, |T_A|\leq 2$ for every $A\in\mathcal{A}$. \end{mytheorem} As observed in \cite{CominPR17}, Theorem~\ref{Teo:npcompleteness} motivates the study of consistency problems on HyTN\xspace{s} having either only multi-head or only multi-tail hyperarcs. In the former case, the consistency-checking problem is called \textsc{head-HyTP}\xspace, while in the latter it is \THTP; as stated in Theorem~\ref{Teo:MainAlgorithms}, the complexity of checking these two problems turns out to be lower than that for DTP\xspace{s}, i.e.,\xspace both $\textsc{head-HyTP}\xspace, \THTP \in \text{NP}\cap\text{co-NP}$, instead of being $\text{NP}$-complete. So it is worth considering the following specialized notion of consistency for HyTN\xspace{s}. \begin{mydefinition}[\textsc{head-HyTP}\xspace] Given a multi-head HyTN\xspace \mbox{${\cal H}=(\mathcal{T},\mathcal{A})$}, the \textsc{head-HyTP}\xspace problem is that of deciding whether or not there exists a schedule \mbox{$s:\mathcal{T} \rightarrow \ensuremath{\mathbb{R}}$} such that: \[s(t_A) \geq \min_{v\in H_A} \{s(v) - w_A(v) \},\quad \forall A\in\mathcal{A}.\] \end{mydefinition} The tightest currently known worst-case time-complexity upper bound for solving (integer-weighted) \textsc{head-HyTP}\xspace{s} was established in~\cite{CominPR17} and is expressed in the following theorem. \begin{mytheorem}[\cite{CominPR17}]\label{Teo:MainAlgorithms} The following proposition holds on (integer-weighted, multi-head) HyTN\xspace{s}. 
There exists an $O\big((|\mathcal{T}|+|\mathcal{A}|)\cdot m_{\mathcal{A}}\cdot W\big)$ pseudo-polynomial time algorithm for checking \textsc{head-HyTP}\xspace; given any HyTN\xspace ${\cal H}=(\mathcal{T}, \mathcal{A})$, if ${\cal H}$ is consistent the same algorithm also returns an integer-valued feasible schedule $s:\mathcal{T}\rightarrow \ensuremath{\mathbb{Z}}$ of ${\cal H}$; otherwise, it returns a negative certificate in the form of a negative \emph{hypercycle} (cf.~Appendix~A for more details on that). Above, $W\triangleq \max_{A\in\mathcal{A}, v\in H_A} |w_A(v)|$ is the maximum absolute value among the weights. \end{mytheorem} Concluding this section, we recall that the two problems \textsc{head-HyTP}\xspace and \THTP are actually inter-reducible, i.e.,\xspace one can check either one of the two models in $f(m,n,W)$ time whenever there is an $f(m,n,W)$-time procedure for checking the consistency of the other one. \begin{mytheorem}[\cite{CominPR17}] \textsc{head-HyTP}\xspace and \THTP are inter-reducible by means of $\log$-space, linear-time, local-replacement reductions. \end{mytheorem} Thus, Theorem~\ref{Teo:MainAlgorithms} extends to multi-tail HyTN\xspace{s} (i.e.,\xspace they are checkable in pseudo-polynomial time). \section{Faster Deterministic Algorithm for $\texttt{t}_2\xspace$DTPs}\label{sect.Type2-TP_Algo} This section offers a deterministic quadratic-time algorithm for solving temporal problems having only $\{\texttt{t}_1\xspace, \texttt{t}_2\xspace\}$-constraints, as defined below. The same algorithm will be leveraged to solve RDTPs fast later on, in Section~\ref{sect.RDTP_ALGO}. \begin{mydefinition} Any RDTN\xspace $\mathcal{N}=(\mathcal{T}, \mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace} \cup \mathcal{C}_{\texttt{t}_3\xspace})$ having $\mathcal{C}_{\texttt{t}_3\xspace}=\emptyset$ is called a \emph{$\texttt{t}_2\xspace$DTN}. 
So, $\texttt{t}_2\xspace$DTNs are denoted simply as $(\mathcal{T}, \mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace})$. The corresponding temporal problem, i.e.,\xspace the \emph{$\texttt{t}_2\xspace$DTP}, is that of determining whether a given $\texttt{t}_2\xspace$DTN is consistent or not. \end{mydefinition} One possible solution to $\texttt{t}_2\xspace$DTPs is Kumar's reduction from RDTPs to CRCs~\cite{Kumar05}. Our solution, named \texttt{$\text{t}_2\text{DTP()}$}\xspace, employs a value-iteration approach in which all scheduling values are initially set to zero and then updated monotonically upwards by necessary arc relaxations; this is reminiscent of the BF-VI algorithm for STPs mentioned in Section~\ref{sect.Background}. Indeed, given a $\texttt{t}_2\xspace$DTN $\mathcal{N}_{\texttt{t}_2\xspace} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace})$, we first solve the STP\xspace $\mathcal{N}_{\texttt{t}_1\xspace} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace})$ (e.g.,\xspace with BF-VI). If $\mathcal{N}_{\texttt{t}_1\xspace}$ is consistent, the returned least feasible schedule $\hat{\varphi}_\mathcal{N}$ provides an initial candidate; the next step is to satisfy all the $\texttt{t}_2\xspace$-constraints. For this, recall that the disjuncts of any $c_{\texttt{t}_2\xspace}\in\mathcal{C}_{\texttt{t}_2\xspace}$ are arranged according to their nominal ordering, so that we can try to satisfy any given $c_{\texttt{t}_2\xspace}$ by iteratively picking the next (i.e.,\xspace in ascending order) unsatisfied disjunct of $c_{\texttt{t}_2\xspace}$ and by enforcing its lower-bound constraint in an auxiliary STN\xspace as if it were a $\texttt{t}_1\xspace$-constraint. 
While there is an unsatisfied $\texttt{t}_2\xspace$-constraint $c_{\texttt{t}_2\xspace}$, the current candidate schedule is thus increased by the least amount needed to satisfy both $c_{\texttt{t}_2\xspace}$ and the whole $\mathcal{C}_{\texttt{t}_1\xspace}$. It turns out that this can be done efficiently by performing $|\mathcal{C}_{\texttt{t}_2\xspace}|$ calls to the Dijkstra shortest-paths algorithm~\cite{Dijkstra1959}. In order to show this, let us point out two key facts (i.e.,\xspace Lemmas~\ref{lem:slack}~and~\ref{lem:update}). \begin{mylemma}\label{lem:slack} Let $\mathcal{N} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace})$ be any STN\xspace, and let $\varphi, \varphi'$ be any pair of schedules of $\mathcal{N}$. Let $\mathcal{N}^{\varphi}$ be the STN\xspace \emph{reweighted} according to the reduced-costs weight transformation~$w^{\varphi}$ (i.e.,\xspace each weight $w_{X,Y}$ in $\mathcal{N}$ is simply replaced by $w^{\varphi}_{X,Y}$), and let $\mathcal{N}^{\varphi'}$ be the same w.r.t.\xspace $w^{\varphi'}$. Let $\delta^{\varphi}_X:\mathcal{T}\rightarrow \mathbb{R}\cup\{+\infty\}$ be the length of the shortest path in $\mathcal{N}^{\varphi}$ from any $T\in\mathcal{T}$ to $X\in \mathcal{T}$, and let $\delta^{\varphi'}_X$ be the same w.r.t.\xspace $\mathcal{N}^{\varphi'}$. Then for every $T,X\in\mathcal{T}$, either $\delta^{\varphi'}_X(T)$ and $\delta^{\varphi}_X(T)$ are both $+\infty$, or the following holds: \[ \delta^{\varphi'}_X(T) - \delta^{\varphi}_X(T) = \big(\varphi'(T)-\varphi(T)\big) - \big(\varphi'(X)-\varphi(X)\big).\] \end{mylemma} \begin{proof} Let $T,X\in\mathcal{T}$ be arbitrary; w.l.o.g.\ $X$ is reachable from $T$ in $\mathcal{N}$ (otherwise, $\delta^{\varphi'}_X(T)$ and $\delta^{\varphi}_X(T)$ are both $+\infty$). 
Consider any path $p_{T,X}$ from $T$ to $X$ in $\mathcal{N}^{\varphi}$, i.e.,\xspace for some $k\geq 0$: \[ p_{T,X}\triangleq (T=T_0, T_1, T_2, \ldots, T_k = X), \text{ having total weight } w^\varphi_{p_{T,X}}\triangleq \sum_{i=0}^{k-1} w^\varphi_{T_i, T_{i+1}} \text{ in } \mathcal{N}^{\varphi}.\] Then, the following holds by telescoping: \begin{align*} w^\varphi_{p_{T,X}} &= \big(w_{T_0,T_1} - \varphi(T_1)+\varphi(T_0)\big) + \big(w_{T_1,T_2}-\varphi(T_2)+\varphi(T_1)\big) + \ldots \\ & \hspace{35ex} \ldots + \big(w_{T_{k-1},T_k}-\varphi(T_k)+\varphi(T_{k-1})\big) \\ &= \varphi(T_0) - \varphi(T_k) + \sum_{i=0}^{k-1} w_{T_i,T_{i+1}} = \varphi(T) - \varphi(X) + w_{p_{T,X}}. \tag{1} \end{align*} Thus, provided $\delta_{X}(T)$ is the shortest path distance from $T$ to $X$ in the original network $\mathcal{N}$, we have: \begin{align*}\hspace{-4.5ex} \delta^\varphi_{X}(T) &= \min\left\{w^\varphi_{p_{T,X}} \mid p_{T,X}\text{ is a path from $T$ to $X$ in $\mathcal{N}$} \right\} & \text{(by def. of $\delta^\varphi_{X}$)} \\ &= \min\left\{\varphi(T) - \varphi(X) + w_{p_{T,X}} \mid p_{T,X}\text{ is any path from $T$ to $X$ in $\mathcal{N}$} \right\} & \text{(by (1))} \\ &= \varphi(T) - \varphi(X) + \min\left\{w_{p_{T,X}} \mid p_{T,X}\text{ is any path from $T$ to $X$ in $\mathcal{N}$} \right\} & \text{($\varphi$ is constant here)}\\ &= \varphi(T) - \varphi(X) + \delta_{X}(T). & \text{(by def. of $\delta_{X}$)} \end{align*} For the same reason, $\delta^{\varphi'}_{X}(T) = \varphi'(T) - \varphi'(X) + \delta_{X}(T)$. Therefore, \begin{align*} \delta^{\varphi'}_X(T) - \delta^{\varphi}_X(T) &= \big(\varphi'(T) - \varphi'(X)+ \delta_{X}(T)\big) - \big(\varphi(T) - \varphi(X)+ \delta_{X}(T)\big) \\ &= \big(\varphi'(T)-\varphi(T)\big) - \big(\varphi'(X)-\varphi(X)\big). \end{align*} This concludes the proof. 
\end{proof} \begin{mylemma}\label{lem:update} Let $\mathcal{N} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace})$ be any STN\xspace, and let $\hat{\varphi}$ be the least feasible schedule of $\mathcal{N}$. Fix some $X\in\mathcal{T}$ and some real value $l_X\geq \hat{\varphi}(X)$. Let $\mathcal{N}' = (\mathcal{T}', \mathcal{C}'_{\texttt{t}_1\xspace})$ be the auxiliary STN\xspace obtained by introducing a corresponding lower-bound $\texttt{t}_1\xspace$-constraint over $X$, i.e.,\xspace \[ \mathcal{T}'\triangleq \mathcal{T}\cup\{z\}, \qquad \mathcal{C}'_{\texttt{t}_1\xspace}\triangleq \mathcal{C}_{\texttt{t}_1\xspace}\cup\big\{(z-T\leq 0)\mid T\in\mathcal{T}\big\}\cup \big\{(z-X\leq -l_X)\big\}. \] Let $\mathcal{N}^{\hat{\varphi}}$ be the STN\xspace reweighted according to the reduced-costs weight transformation $w^{\hat{\varphi}}$, and let $\delta^{\hat{\varphi}}_X(T)$ be the length of the shortest path in $\mathcal{N}^{\hat{\varphi}}$ from (any) $T\in\mathcal{T}$ to $X$. Then, for every $T\in\mathcal{T}$, the least feasible schedule $\hat{\varphi}'$ of $\mathcal{N}'$ is given by: \[ \hat{\varphi}'(T) = \hat{\varphi}(T) + \max\big(0, l_X - \hat{\varphi}(X) - \delta^{\hat{\varphi}}_X(T) \big).\] \end{mylemma} \begin{proof} Let, w.l.o.g., $\hat{\varphi}'(z)=0$. We first claim that, in order to become feasible for $\mathcal{N}'$, the schedule value $\hat{\varphi}(T)$ must be increased, for every $T\in \mathcal{T}$, by at least $\max\big(0, l_X - \hat{\varphi}(X) - \delta^{\hat{\varphi}}_X(T) \big)$ time units (because of the lower-bound constraint $(z-X\leq -l_X)\in \mathcal{C}'_{\texttt{t}_1\xspace}$). Indeed, for any $T\in \mathcal{T}$ that reaches $X$ in $\mathcal{N}$, the $\texttt{t}_1\xspace$-constraint $(X-T\leq \delta_X(T))$ (which is induced by telescoping all of the $\texttt{t}_1\xspace$-constraints along any shortest path from $T$ to $X$) must be satisfied. 
On the other hand, by Lemma~\ref{lem:slack} (applied to $\hat\varphi$ and to the anywhere-zero\footnote{The anywhere-zero schedule $\zeta$ is defined by $\zeta(T)=0$ for every $T\in\mathcal{T}$.} schedule), it holds that $\hat{\varphi}(X)-\hat{\varphi}(T)=\delta_X(T)-\delta^{\hat{\varphi}}_X(T)$. This can be seen as follows: if $\hat{\varphi}(T)$ is kept fixed, then $\hat{\varphi}(X)$ can be increased by at most $\delta^{\hat{\varphi}}_X(T)$ time units without breaking the induced constraint $(X-T\leq \delta_X(T))$. Here, $\hat{\varphi}(X)$ must be increased by at least $l_X-\hat{\varphi}(X)$ time units in order to satisfy $(z-X\leq -l_X)\in \mathcal{C}'_{\texttt{t}_1\xspace}$, so $\hat{\varphi}(T)$ must be increased by at least the amount claimed above. Next, we claim that this increase also preserves feasibility, i.e.,\xspace it is the \emph{least feasible increase}. For ease of notation, let $f(z)\triangleq 0$ and $f(T) \triangleq \hat{\varphi}(T) + \max\big(0, l_X - \hat{\varphi}(X) - \delta^{\hat{\varphi}}_X(T) \big)$ $\forall\, T\in\mathcal{T}$. In order to prove that $f$ satisfies all the constraints in $\mathcal{C}_{\texttt{t}_1\xspace}$, pick any $(B-A\leq w_{A,B})\in \mathcal{C}_{\texttt{t}_1\xspace}$. By hypothesis, it holds: \begin{equation} \hat\varphi(B)-\hat\varphi(A)\leq w_{A,B}. \tag{1} \end{equation} For the sake of the argument, let us define: $\Delta_{A,B} \triangleq \big(f(B) - \hat\varphi(B)\big) - \big(f(A) - \hat\varphi(A)\big)$. So, the following holds: \begin{equation}f(B)-f(A) = \hat\varphi(B) - \hat\varphi(A) + \Delta_{A,B}.\tag{2}\end{equation} Then, either one of the following two cases holds: \begin{itemize} \item If $l_X - \hat\varphi(X) \leq \delta^{\hat{\varphi}}_{X}(B)$, then $f(B) = \hat\varphi(B)$, so $\Delta_{A,B}\leq 0$. Therefore, \begin{align*} f(B)-f(A) & \leq \hat\varphi(B) - \hat\varphi(A) & \text{(by (2))} \\ & \leq w_{A,B}. 
& \text{(by (1))} \end{align*} \item If $l_X - \hat\varphi(X) > \delta^{\hat{\varphi}}_{X}(B)$, then it is easy to check that $\Delta_{A,B}\leq \delta^{\hat{\varphi}}_{X}(A) - \delta^{\hat{\varphi}}_{X}(B)$. By definition of $\delta^{\hat{\varphi}}_{X}$ and since $(B-A\leq w_{A,B})\in \mathcal{C}_{\texttt{t}_1\xspace}$, then $\delta^{\hat{\varphi}}_{X}(A)\leq \delta^{\hat{\varphi}}_{X}(B) + w^{\hat\varphi}_{A,B}$. Therefore, \begin{align*} f(B)-f(A) & \leq \hat\varphi(B) - \hat\varphi(A) + \delta^{\hat{\varphi}}_{X}(A) - \delta^{\hat{\varphi}}_{X}(B) \\ & \leq \hat\varphi(B) - \hat\varphi(A) + w^{\hat\varphi}_{A,B} = w_{A,B}. \end{align*} \end{itemize} So, in either case, $f(B)-f(A)\leq w_{A,B}$. Finally, clearly $f(X)=l_X$, so $(z-X\leq -l_X)\in \mathcal{C}'_{\texttt{t}_1\xspace}$ is also satisfied. This proves that $f$ is a feasible schedule of $\mathcal{N}'$. All in all, $f$ is the least feasible schedule, i.e.,\xspace $f=\hat\varphi'$. \end{proof} With these two facts in mind, the description of \texttt{$\text{t}_2\text{DTP()}$}\xspace can now proceed more smoothly. Recall that, firstly, the STN\xspace $\mathcal{N}_{\texttt{t}_1\xspace}$ is checked. If $\mathcal{N}_{\texttt{t}_1\xspace}$ is already inconsistent, then so is $\mathcal{N}_{\texttt{t}_2\xspace}$; otherwise, $\hat{\varphi}_\mathcal{N}$ is the least feasible schedule of $\mathcal{N}_{\texttt{t}_1\xspace}$. So, $w^{\hat{\varphi}_\mathcal{N}}\geq 0$ for every constraint in~$\mathcal{C}_{\texttt{t}_1\xspace}$. Now, for each target node $X\in \mathcal{T}$, the Dijkstra algorithm on input $(\mathcal{N}^{\hat{\varphi}_\mathcal{N}}, X)$ computes $\delta^{\hat{\varphi}_\mathcal{N}}_X(T)$. The whole distance matrix $\{\delta^{\hat{\varphi}_\mathcal{N}}_X(T)\}_{T\in\mathcal{T}, X\in \mathcal{T}}$ is computed here, and kept stored in memory. 
Actually, multiple-source single-target shortest paths are needed, but these can easily be computed with the traditional Dijkstra algorithm: just reverse the direction of all arcs in the input network and treat the single target node as if it were a single source. What follows aims at increasing the candidate schedule $f$, whenever there is still an unsatisfied $\texttt{t}_2\xspace$-constraint $c_{\texttt{t}_2\xspace}\in\mathcal{C}_{\texttt{t}_2\xspace}$, by the least amount needed to satisfy both $c_{\texttt{t}_2\xspace}$ and the whole $\mathcal{C}_{\texttt{t}_1\xspace}$. So, let us initialize~$f\leftarrow \hat{\varphi}_\mathcal{N}$. Then the following loop iterates. While there exist some $X\in \mathcal{T}$ and $c_X=\bigvee_{i=1}^{k}(l_i\leq X\leq u_i)\in \mathcal{C}_{\texttt{t}_2\xspace}$ such that $f(X)$ does not satisfy $c_X$: if $f(X)>u_k(=\max_i u_i)$, then $\mathcal{N}_{\texttt{t}_2\xspace}$ is inconsistent (see~Theorem~\ref{thm:TTP_correctness}); otherwise, let $i^*$ be the smallest $i\in [1,k]$ such that $f(X)<l_i$. By Lemma~\ref{lem:slack}, given $f$, the distance $\delta^{f}_X$ is obtained as: \[ \delta^{f}_X(T)\leftarrow \delta^{\hat{\varphi}_\mathcal{N}}_X(T) + \big(f(T)-\hat{\varphi}_\mathcal{N}(T)\big) - \big(f(X)-\hat{\varphi}_\mathcal{N}(X)\big),\; \forall\; T\in\mathcal{T}. \tag{\texttt{rule}-$\delta$}\] So, by Lemma~\ref{lem:update}, the following updating rule: \[ f(T) \leftarrow f(T) + \max\big(0, l_{i^*} - f(X) - \delta^{f}_X(T)\big),\; \forall\; T\in\mathcal{T}.\tag{\texttt{rule}-$f$}\] yields the least feasible schedule for the next auxiliary STN\xspace $\mathcal{N}'_{\texttt{t}_1\xspace}$, obtained by adding the new lower-bound $\texttt{t}_1\xspace$-constraint $(z-X\leq -l_{i^*})$. At each iteration of the while-loop, $\mathcal{N}'_{\texttt{t}_1\xspace}$ is enriched with an additional lower-bound $\texttt{t}_1\xspace$-constraint as above. 
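The two updating rules can be sketched in code as follows, mirroring Lemmas~\ref{lem:slack}~and~\ref{lem:update}. This is a minimal sketch under our own assumptions: schedules and the stored distance column are plain dictionaries, and $f(X)$ is snapshotted because both rules read the \emph{old} candidate schedule.

```python
def apply_rules(f, phi_N, delta_N_X, X, l_star, timepoints):
    """One while-loop iteration: update the candidate schedule f in place so
    that it becomes the least feasible schedule after enforcing the new
    lower-bound constraint (z - X <= -l_star).

    f         -- current candidate schedule (dict: time-point -> value)
    phi_N     -- initial least feasible schedule varphi_N of the STN
    delta_N_X -- stored column {delta^{phi_N}_X(T)}_T of the distance matrix
    """
    f_X = f[X]  # snapshot: both rules refer to the old value of f(X)
    for T in timepoints:
        # rule-delta: recover delta^f_X(T) from the stored reduced-cost distances
        d = delta_N_X[T] + (f[T] - phi_N[T]) - (f_X - phi_N[X])
        # rule-f: raise f(T) by the least amount required by the new lower bound
        f[T] = f[T] + max(0.0, l_star - f_X - d)
```

For instance (a toy network of our own): with a single constraint $X-A\leq 3$, the all-zero initial schedule, and the stored distances $\delta_X(A)=3$, $\delta_X(X)=0$, enforcing the lower bound $l_{i^*}=5$ on $X$ yields $f(A)=2$ and $f(X)=5$, which satisfies both the new bound and $X-A\leq 3$.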
So, $\mathcal{N}'_{\texttt{t}_1\xspace}$ has $|\mathcal{T}'|=|\mathcal{T}|+1$ time-points ($z$ included) and $|\mathcal{C}'_{\texttt{t}_1\xspace}|\leq |\mathcal{C}_{\texttt{t}_1\xspace}|+|\mathcal{C}_{\texttt{t}_2\xspace}|$ $\texttt{t}_1\xspace$-constraints (one $\texttt{t}_1\xspace$-constraint per $c_{\texttt{t}_2\xspace}\in\mathcal{C}_{\texttt{t}_2\xspace}$ is enough, as for each $c_{\texttt{t}_2\xspace}$ only its greatest lower-bound counts). If the while-loop completes without ever finding $\mathcal{N}_{\texttt{t}_2\xspace}$ to be inconsistent (i.e.,\xspace without ever observing $f(X)>u_k(=\max_i u_i)$ for some $X\in\mathcal{T}$ at some point), then the last update of $f$ yields the least feasible schedule of $\mathcal{N}_{\texttt{t}_2\xspace}$ (as shown below in~Theorem~\ref{thm:TTP_correctness}). This concludes the description of \texttt{$\text{t}_2\text{DTP()}$}\xspace. Notice that, during the whole computation, the scheduling values can only increase monotonically, as in a value-iteration process. \begin{mytheorem}\label{thm:TTP_correctness} \texttt{$\text{t}_2\text{DTP()}$}\xspace is correct, i.e.,\xspace on any input $\texttt{t}_2\xspace$DTN $\mathcal{N}_{\texttt{t}_2\xspace} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace})$, it returns a feasible schedule $\varphi:\mathcal{T}\rightarrow\ensuremath{\mathbb{R}}$ if $\mathcal{N}_{\texttt{t}_2\xspace}$ is consistent; otherwise, it recognizes $\mathcal{N}_{\texttt{t}_2\xspace}$ as inconsistent. \end{mytheorem} \begin{proof} Let $\iota=0, 1, 2, \ldots, \iota_h$ be all the iterations of the while-loop of \texttt{$\text{t}_2\text{DTP()}$}\xspace, where $\iota_h$ is the last iteration at which the updating $\texttt{rule-}f$ is applied. 
For every iteration $\iota\in [1, \iota_h]$, the auxiliary STN\xspace $\mathcal{N}'^{(\iota)}_{\texttt{t}_1\xspace}$ is formally defined as: \[ \mathcal{N}'^{(\iota)}_{\texttt{t}_1\xspace}\triangleq (\mathcal{T}\cup\{z\}, \mathcal{C}'^{(\iota)}_{\texttt{t}_1\xspace}), \text{ where } z \text{ is the \emph{zero time-point}, and }\] \[ \mathcal{C}'^{(\iota)}_{\texttt{t}_1\xspace} \triangleq \mathcal{C}_{\texttt{t}_1\xspace}\cup \big\{(z-T\leq 0)\mid T\in\mathcal{T}\big\} \cup \big\{ (z-X^{(\gamma)}\leq -l^{(\gamma)}_{i^*})\mid 1 \leq \gamma \leq \iota\big\},\] where, for all $\gamma\leq\iota$, $X^{(\gamma)}$ is the (unique) $X\in \mathcal{T}$ appearing in the $\texttt{t}_2\xspace$-constraint that is considered at the while-loop's $\gamma$-th iteration, and $l^{(\gamma)}_{i^*}$ is its corresponding lower bound. Also, let $f^{(\iota)}$ be the candidate schedule as updated by $\texttt{rule-}f$ during the $\iota$-th iteration. By applying Lemmas~\ref{lem:slack}~and~\ref{lem:update} repeatedly, for each iteration $\iota$, it holds that $f^{(\iota)}$ is the least feasible schedule of $\mathcal{N}'^{(\iota)}_{\texttt{t}_1\xspace}$. This is the key invariant at the heart of \texttt{$\text{t}_2\text{DTP()}$}\xspace. Concerning correctness, first assume that \texttt{$\text{t}_2\text{DTP()}$}\xspace recognizes $\mathcal{N}_{\texttt{t}_2\xspace}$ as inconsistent. If $\mathcal{N}_{\texttt{t}_1\xspace}$ was already inconsistent (cf.~Theorem~\ref{thm:main_stn}), then so is $\mathcal{N}_{\texttt{t}_2\xspace}$. Otherwise, the inconsistency of $\mathcal{N}_{\texttt{t}_2\xspace}$ holds because of the following two facts jointly: (i) the key invariant mentioned above; and (ii) at the end of the while-loop, it must be that $f(X)>u_k(=\max_i u_i)$ for some $\texttt{t}_2\xspace$-constraint $c_X=\bigvee_{i=1}^{k}(l_i\leq X\leq u_i)\in \mathcal{C}_{\texttt{t}_2\xspace}$. 
Indeed, notice that, by (i), no possible feasible schedule $g<f$ can be neglected (discarded) during the upward-monotone (value-iteration-like) updates of the schedules; and, by (ii), no possible schedule $g\geq f$ can ever satisfy $c_X\in \mathcal{C}_{\texttt{t}_2\xspace}$. So, $\mathcal{N}_{\texttt{t}_2\xspace}$ is indeed inconsistent. Secondly, assume that $\mathcal{N}_{\texttt{t}_2\xspace}$ is recognized as consistent, by returning a schedule $f^{(\iota_h)}$. Since \texttt{$\text{t}_2\text{DTP()}$}\xspace can do that only after the above while-loop completes, the exit condition of the latter ensures that $f^{(\iota_h)}$ satisfies every constraint in $\mathcal{C}_{\texttt{t}_2\xspace}$. Moreover, the key invariant implies that $f^{(\iota_h)}$ is the least feasible schedule of $\mathcal{N}'^{(\iota_h)}_{\texttt{t}_1\xspace}$, so that $f^{(\iota_h)}$ satisfies all of the $\texttt{t}_1\xspace$-constraints in $\mathcal{C}_{\texttt{t}_1\xspace}$. Combining these two facts, $f^{(\iota_h)}$ is the least feasible schedule~of~$\mathcal{N}_{\texttt{t}_2\xspace}$. So, $\mathcal{N}_{\texttt{t}_2\xspace}$ is indeed consistent. \end{proof} The next result asserts that \texttt{$\text{t}_2\text{DTP()}$}\xspace always halts in time polynomial in the input size. \begin{mytheorem}\label{thm:ttp_complexity} Suppose that \texttt{$\text{t}_2\text{DTP()}$}\xspace runs on an input $\texttt{t}_2\xspace$DTN $\mathcal{N}_{\texttt{t}_2\xspace} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace})$. Then, \texttt{$\text{t}_2\text{DTP()}$}\xspace halts in time $O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|) + |\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\big)$. 
\end{mytheorem} \begin{proof} Solving the STP\xspace $\mathcal{N}_{\texttt{t}_1\xspace} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace})$ with BF-VI takes $O(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}|)$ time (cf.~Theorem~\ref{thm:main_stn}). Computing the shortest-paths distance matrix $\{\delta^{\hat{\varphi}_\mathcal{N}}_X(T)\}_{T\in\mathcal{T}, X\in \mathcal{T}}$ takes $|\mathcal{C}_{\texttt{t}_2\xspace}|$ calls to the Dijkstra algorithm (one per $X\in\mathcal{T}$ participating in some $\texttt{t}_2\xspace$-constraint), so $O(|\mathcal{C}_{\texttt{t}_2\xspace}|\cdot(|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log|\mathcal{T}|))$ total time. Checking the while-loop exit condition (i.e.,\xspace whether there exists some unsatisfied $c_X\in \mathcal{C}_{\texttt{t}_2\xspace}$) can be done in $O(|\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}})$ total time (because there are at most $d_{\mathcal{C}_{\texttt{t}_2\xspace}}$ iterations and each check can be done in $O(|\mathcal{T}|)$ time). At each iteration of the while-loop, applying $\texttt{rule-}\delta$ and $\texttt{rule-}f$ to all $T\in\mathcal{T}$ takes $O(|\mathcal{T}|)$ time per iteration, and there are at most $d_{\mathcal{C}_{\texttt{t}_2\xspace}}$ iterations; notice that each single application of the rules takes only $O(1)$ time. 
Therefore, the overall time complexity of \texttt{$\text{t}_2\text{DTP()}$}\xspace on any input $\mathcal{N}_{\texttt{t}_2\xspace}=(\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup\mathcal{C}_{\texttt{t}_2\xspace})$~is: \[ \texttt{Time}_{\texttt{$\text{t}_2\text{DTP()}$}\xspace}(\mathcal{N}_{\texttt{t}_2\xspace}) = O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|) + |\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\big).\] This is strongly polynomial time, i.e.,\xspace it does not depend on the magnitude of the arc weights. \end{proof} \section{Faster Deterministic Algorithm for RDTP\xspace{s}}\label{sect.RDTP_ALGO} With the $\texttt{t}_2\xspace$DTP algorithm in hand, let us now focus on solving RDTP\xspace{s} efficiently. Given an input RDTP\xspace $\mathcal{N} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace} \cup \mathcal{C}_{\texttt{t}_3\xspace})$, we first solve the $\texttt{t}_2\xspace$DTP $\mathcal{N}_{\texttt{t}_2\xspace} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace})$ with \texttt{$\text{t}_2\text{DTP()}$}\xspace (cf. Section~\ref{sect.Type2-TP_Algo}). If $\mathcal{N}_{\texttt{t}_2\xspace}$ is inconsistent, we are done, since $\mathcal{N}$ is inconsistent as well. Otherwise, the key idea is to check the consistency of all the $\texttt{t}_3\xspace$-constraints by making one single reduction call to the 2-SAT problem (which can be solved in linear time~\cite{Aspvall1979}). To this end, the universe of boolean variables is $\{x_c\}_{c\in\mathcal{C}_{\texttt{t}_3\xspace}}$, i.e.,\xspace we have one variable per $c\in\mathcal{C}_{\texttt{t}_3\xspace}$.
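To make the 2-SAT step concrete, the following minimal, self-contained sketch decides a 2-CNF instance in linear time via strongly connected components, in the spirit of~\cite{Aspvall1979}. The function name \texttt{solve\_2sat} and the literal encoding are our own illustration, not part of the paper.

```python
# 2-SAT decision via the implication graph and Kosaraju's SCC algorithm:
# a clause (a or b) contributes the implications (not a -> b) and (not b -> a);
# the formula is unsatisfiable iff some variable shares an SCC with its negation.
def solve_2sat(n, clauses):
    """n boolean variables 1..n; clauses are pairs of literals,
    +k meaning x_k and -k meaning its negation.
    Returns a list of booleans (index i for x_{i+1}), or None if unsatisfiable."""
    def enc(lit):  # literal encoding: x_i -> 2*(i-1), not x_i -> 2*(i-1)+1
        i = abs(lit) - 1
        return 2 * i if lit > 0 else 2 * i + 1

    N = 2 * n
    graph = [[] for _ in range(N)]
    rgraph = [[] for _ in range(N)]
    for a, b in clauses:
        for u, v in ((enc(-a), enc(b)), (enc(-b), enc(a))):
            graph[u].append(v)   # implication edge
            rgraph[v].append(u)  # reversed copy for the second pass
    # First pass: iterative DFS recording vertices in order of finish time.
    order, seen = [], [False] * N
    for s in range(N):
        if seen[s]:
            continue
        seen[s] = True
        stack = [(s, 0)]
        while stack:
            v, i = stack.pop()
            if i < len(graph[v]):
                stack.append((v, i + 1))
                w = graph[v][i]
                if not seen[w]:
                    seen[w] = True
                    stack.append((w, 0))
            else:
                order.append(v)
    # Second pass: label SCCs on the reversed graph; component ids are
    # assigned in topological order of the condensation (sources first).
    comp, c = [-1] * N, 0
    for s in reversed(order):
        if comp[s] != -1:
            continue
        comp[s] = c
        stack = [s]
        while stack:
            v = stack.pop()
            for w in rgraph[v]:
                if comp[w] == -1:
                    comp[w] = c
                    stack.append(w)
        c += 1
    if any(comp[2 * i] == comp[2 * i + 1] for i in range(n)):
        return None  # x and not x in the same SCC: unsatisfiable
    # x_i is true iff its positive literal comes later in topological order.
    return [comp[2 * i] > comp[2 * i + 1] for i in range(n)]
```

Feeding the clause set $\textsc{Cl}_{\mathcal{N}}$ (built as in Definition~\ref{def:clause}) to such a solver yields either unsatisfiability, whence $\mathcal{N}$ is inconsistent, or a satisfying assignment $\phi$ as used below.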
Let $d',d''$ be the first and second disjunct of any given $c\in\mathcal{C}_{\texttt{t}_3\xspace}$ (respectively), the intended interpretation being that $x_c$ is $\texttt{true}$ iff $d'$ is satisfied (and $d''$ can be anything), whereas $x_c$ is $\texttt{false}$ iff $d'$ is unsatisfied and $d''$ is satisfied. The 2-CNF formula $\textsc{Cl}_{\mathcal{N}}$ is built as follows. Basically, for each $c\in\mathcal{C}_{\texttt{t}_3\xspace}$ and each disjunct $d$ of $c$, we enforce the binding requirement of satisfying all the temporal constraints in $\{d\}\cup\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace}$, and we check whether this implies that some other disjunct $\tilde{d}$ of any other $\texttt{t}_3\xspace$-constraint $\tilde{c}\neq c$ becomes unsatisfiable as a consequence. More precisely, we check whether satisfying $\{d\}\cup\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace}$ implies that some weight $\tilde{u}$ must become a \emph{strict} lower-bound for the scheduling value of some $\tilde{X}\in\mathcal{T}$ that appears in some other $\texttt{t}_3\xspace$-disjunct $\tilde{d}=(\tilde{l}\leq \tilde{X}\leq \tilde{u})$. This is formalized in Definition~\ref{def:clause} (below). If that is the case, a binary clause asserting the above implication\footnote{Here, recall the rule of material implication $p\rightarrow q\leftrightarrow \neg p \vee q$.} is added to $\textsc{Cl}_{\mathcal{N}}$. Let us formally describe the details of this construction. \begin{mydefinition}\label{def:clause} Given any RDTP\xspace $\mathcal{N} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace} \cup \mathcal{C}_{\texttt{t}_3\xspace})$, initially $\textsc{Cl}_{\mathcal{N}}$ is an empty set of binary clauses. 
For each $\texttt{t}_3\xspace$-constraint of $\mathcal{N}$, e.g.,\xspace for each $c=d'_c \vee d''_c\in \mathcal{C}_{\texttt{t}_3\xspace}$ where $d'_c=(l_1\leq X_i\leq u_1)$ and $d''_c=(l_2\leq X_j\leq u_2)$, some $i<j$, $\textsc{Cl}_{\mathcal{N}}$ is populated as~follows: \begin{enumerate} \item Consider the $\texttt{t}_2\xspace$DTP $\mathcal{N}[d'_c]_{\texttt{t}_2\xspace}$ in which $d'_c$ is added to $\mathcal{C}_{\texttt{t}_1\xspace}$ as a pair of $\texttt{t}_1\xspace$-constraints, i.e.,\xspace \begin{align*} \mathcal{N}[d'_c]_{\texttt{t}_2\xspace}\triangleq \Big( \mathcal{T}\cup\{z\}, \big(\mathcal{C}_{\texttt{t}_1\xspace} & \cup \{ (z-X_i\leq -l_1), (X_i-z\leq u_1) \} \\ &\cup \{z-T\leq 0\mid T\in\mathcal{T}\}\big) \; \cup \; \mathcal{C}_{\texttt{t}_2\xspace} \Big). \end{align*} If $\mathcal{N}[d'_c]_{\texttt{t}_2\xspace}$ is consistent, let $ \hat\varphi[d'_c]$ be its least feasible schedule; otherwise, add the unary clause $\neg x_c$ to $\textsc{Cl}_{\mathcal{N}}$. For each $\tilde{c}\neq c$ in $\mathcal{C}_{\texttt{t}_3\xspace}$, e.g.,\xspace $\tilde{c}=(\tilde{l}_1\leq X_{\tilde{i}}\leq \tilde{u}_1) \vee (\tilde{l}_2\leq X_{\tilde{j}}\leq \tilde{u}_2)\in\mathcal{C}_{\texttt{t}_3\xspace}$, \begin{itemize} \item if $ \hat\varphi[d'_c](X_{\tilde{i}})>\tilde{u}_1$ then add the implication $x_c\Rightarrow \neg x_{\tilde{c}}$ (i.e.,\xspace clause $\neg x_c \vee \neg x_{\tilde{c}}$) to $\textsc{Cl}_{\mathcal{N}}$; \item if $ \hat\varphi[d'_c](X_{\tilde{j}})>\tilde{u}_2$ then add the implication $x_c\Rightarrow x_{\tilde{c}}$ (i.e.,\xspace clause $\neg x_c \vee x_{\tilde{c}}$) to $\textsc{Cl}_{\mathcal{N}}$. \end{itemize} \item Consider the $\texttt{t}_2\xspace$DTP $\mathcal{N}[d''_c]_{\texttt{t}_2\xspace}$ in which $d''_c$ is added to $\mathcal{C}_{\texttt{t}_1\xspace}$ (similarly as above). 
If $\mathcal{N}[d''_c]_{\texttt{t}_2\xspace}$ is consistent, let $ \hat\varphi[d''_c]$ be its least feasible schedule; otherwise, add the unary clause $x_c$ to $\textsc{Cl}_{\mathcal{N}}$. Again, for each $\texttt{t}_3\xspace$-constraint $\tilde{c}\neq c$ of $\mathcal{N}$, say $\tilde{c}=(\tilde{l}_1\leq X_{\tilde{i}}\leq \tilde{u}_1) \vee (\tilde{l}_2\leq X_{\tilde{j}}\leq \tilde{u}_2)$: if $ \hat\varphi[d''_c](X_{\tilde{i}})>\tilde{u}_1$ then add the implication $\neg x_c\Rightarrow \neg x_{\tilde{c}}$ (i.e.,\xspace clause $x_c \vee \neg x_{\tilde{c}}$) to $\textsc{Cl}_{\mathcal{N}}$; and if $ \hat\varphi[d''_c](X_{\tilde{j}})>\tilde{u}_2$ then add the implication $\neg x_c\Rightarrow x_{\tilde{c}}$ (i.e.,\xspace the clause $x_c \vee x_{\tilde{c}}$) instead. \end{enumerate} \end{mydefinition} So, if the 2-SAT problem instance $\textsc{Cl}_{\mathcal{N}}$ is unsatisfiable, the input RDTP\xspace $\mathcal{N}$ is inconsistent. Otherwise, for every $c=d'\vee d''\in\mathcal{C}_{\texttt{t}_3\xspace}$ we get at least one consistent $\texttt{t}_2\xspace$DTP: either $\mathcal{N}[d'_c]_{\texttt{t}_2\xspace}$, which is related to the first disjunct $\{d'\}\cup\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace}$; or $\mathcal{N}[d''_c]_{\texttt{t}_2\xspace}$, which is related to the second $\{d''\}\cup\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace}$ (according to whether $x_c$ is $\texttt{true}$ or not in the satisfying assignment of $\textsc{Cl}_{\mathcal{N}}$). Then we compute the pointwise-maximum schedule among all of those. Formally, \begin{mydefinition}\label{def:max_sched} Let $\phi:\{x_c\}_{c\in\mathcal{C}_{\texttt{t}_3\xspace}}\rightarrow\{\texttt{true}, \texttt{false}\}$ be any satisfying assignment of $\textsc{Cl}_{\mathcal{N}}$.
For every $c=d'_c \vee d''_c\in \mathcal{C}_{\texttt{t}_3\xspace}$, let us define: \[ d^\phi_c \triangleq \left\{ \begin{array}{ll} d'_c, & \text{ if } \phi(x_c)=\texttt{true}; \\ d''_c, & \text{ otherwise.} \end{array} \right.\;\;\;\;\;\;\;\; \text{ then, } \;\;\;\;\;\;\;\; \check{\varphi}_\mathcal{N}(T) \triangleq \max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](T),\;\; \forall T\in\mathcal{T},\] where $\hat\varphi[d^\phi_c]$ denotes the least feasible schedule of the consistent $\texttt{t}_2\xspace$DTP $\mathcal{N}[d^\phi_c]_{\texttt{t}_2\xspace}$. \end{mydefinition} The above pointwise-maximum schedule $\check{\varphi}_\mathcal{N}$ turns out to be feasible for the input RDTP~$\mathcal{N}$, as we show next. It is assumed we are given an RDTP\xspace $\mathcal{N}$ for which $\textsc{Cl}_{\mathcal{N}}$ is satisfiable. \begin{proposition}\label{prop:varphi_c1} Given $\mathcal{N}$ as above, the schedule $\check{\varphi}_\mathcal{N}$ satisfies every $c\in \mathcal{C}_{\texttt{t}_1\xspace}$. \end{proposition} \begin{proof} Let $c_{\texttt{t}_1\xspace}=(Y-X\leq w)\in\mathcal{C}_{\texttt{t}_1\xspace}$ be any $\texttt{t}_1\xspace$-constraint, for some $X,Y\in\mathcal{T}$ and $w\in\ensuremath{\mathbb{R}}$. Pick any $c_Y^* \in \arg\max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](Y)$. Clearly, $\max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](X) \geq \hat\varphi[d^\phi_{c_Y^*}](X)$. Therefore: \begin{align*} \check{\varphi}_\mathcal{N}(Y) - \check{\varphi}_\mathcal{N}(X) &= \max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](Y) - \max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](X) \\ &\leq \hat\varphi[d^\phi_{c_Y^*}](Y) - \hat\varphi[d^\phi_{c_Y^*}](X) \leq w, \end{align*} where the very last inequality holds because $\hat\varphi[d^\phi_{c_Y^*}]$ is feasible for $(\mathcal{T}, \mathcal{C}_{\texttt{t}_1\xspace})$. So, $\check{\varphi}_\mathcal{N}$ satisfies~$c_{\texttt{t}_1\xspace}$.
\end{proof} \begin{proposition}\label{prop:varphi_c2} Given $\mathcal{N}$ as above, the schedule $\check{\varphi}_\mathcal{N}$ satisfies every $c\in \mathcal{C}_{\texttt{t}_2\xspace}$. \end{proposition} \begin{proof} Let $c_{\texttt{t}_2\xspace}=\bigvee_{i=1}^{k}(l_i\leq X\leq u_i)\in \mathcal{C}_{\texttt{t}_2\xspace}$ be any $\texttt{t}_2\xspace$-constraint, for some $X\in \mathcal{T}$ and $l_i,u_i\in \ensuremath{\mathbb{R}}$. Pick any $c_X^* \in \arg\max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](X)$. By definition, $\hat\varphi[d^\phi_{c_X^*}]$ is a feasible schedule of $\mathcal{N}[d^\phi_{c_X^*}]_{\texttt{t}_2\xspace}$, thus it is feasible for $(\mathcal{T}, \mathcal{C}_{\texttt{t}_2\xspace})$ too. Therefore, \[ l_q\leq \hat\varphi[d^\phi_{c_X^*}](X)\leq u_q, \text{ for some } q\in \{1, \ldots, k\}.\] Since $\check{\varphi}_\mathcal{N}(X)=\hat\varphi[d^\phi_{c_X^*}](X)$, it follows that $\check{\varphi}_\mathcal{N}(X)\in [l_q,u_q]$ for the same $q$. So, $\check{\varphi}_\mathcal{N}$ satisfies $c_{\texttt{t}_2\xspace}$. \end{proof} \begin{proposition}\label{prop:varphi_c3} Given $\mathcal{N}$ as above, the schedule $\check{\varphi}_\mathcal{N}$ satisfies every $c\in \mathcal{C}_{\texttt{t}_3\xspace}$. \end{proposition} \begin{proof} Let $c_{\texttt{t}_3\xspace}=(l_1\leq X\leq u_1) \vee (l_2\leq Y\leq u_2)\in \mathcal{C}_{\texttt{t}_3\xspace}$ be any $\texttt{t}_3\xspace$-constraint, for some $X,Y\in \mathcal{T}$, $X<Y$, and $l_1,l_2,u_1,u_2\in \ensuremath{\mathbb{R}}$. Assume w.l.o.g. $\phi(x_{c_{\texttt{t}_3\xspace}})=\texttt{true}$. Then, $l_1\leq \hat\varphi[d^\phi_{c_{\texttt{t}_3\xspace}}](X)\leq u_1$. If ${c_{\texttt{t}_3\xspace}} \in \arg\max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](X)$, then $\check{\varphi}_\mathcal{N}(X)= \hat\varphi[d^\phi_{c_{\texttt{t}_3\xspace}}](X)\in [l_1,u_1]$; so, $\check{\varphi}_\mathcal{N}$ would satisfy~$c_{\texttt{t}_3\xspace}$.
Otherwise, ${c_{\texttt{t}_3\xspace}} \not\in \arg\max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](X)$; towards a contradiction, assume $\check{\varphi}_\mathcal{N}(X)\not\in [l_1,u_1]$. Pick any $c_X^* \in \arg\max_{c\in\mathcal{C}_{\texttt{t}_3\xspace}} \hat\varphi[d^\phi_c](X)$. Combining these hypotheses: \[\check{\varphi}_\mathcal{N}(X)= \hat\varphi[d^\phi_{c_X^*}](X) > u_1.\] Therefore, $\phi$ must satisfy either $p\Rightarrow \neg x_{c_{\texttt{t}_3\xspace}}$ or $\neg p\Rightarrow \neg x_{c_{\texttt{t}_3\xspace}}$, for some boolean variable $p$ (where the actual case depends on the actual value of $d^\phi_{c_X^*}$). Since $\phi$ satisfies either $p$ or $\neg p$, then $\phi$ must satisfy $\neg x_{c_{\texttt{t}_3\xspace}}$; i.e.,\xspace $\phi(x_{c_{\texttt{t}_3\xspace}})=\texttt{false}$. This contradicts the assumption that $\phi(x_{c_{\texttt{t}_3\xspace}})=\texttt{true}$. The proof of the other case, in which $\phi(x_{c_{\texttt{t}_3\xspace}})=\texttt{false}$ is initially assumed, is symmetric. So, $\check{\varphi}_\mathcal{N}$ satisfies $c_{\texttt{t}_3\xspace}$. \end{proof} Our algorithm, named \texttt{RDTP()}\xspace, aims at computing $\check{\varphi}_\mathcal{N}$ as above; if it fails in that (either because $\mathcal{N}_{\texttt{t}_2\xspace}$ is already inconsistent or because $\textsc{Cl}_{\mathcal{N}}$ is unsatisfiable), it recognizes the input RDTP\xspace $\mathcal{N}$ as inconsistent. We now prove that it is correct and fast. \begin{mytheorem}\label{thm:varphi_feasible} \texttt{RDTP()}\xspace is correct, i.e.,\xspace on any RDTN $\mathcal{N} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace} \cup \mathcal{C}_{\texttt{t}_3\xspace})$, it returns a feasible schedule $\check{\varphi}_\mathcal{N}:\mathcal{T}\rightarrow\ensuremath{\mathbb{R}}$ if $\mathcal{N}$ is consistent; otherwise, $\mathcal{N}$ is recognized as inconsistent.
\end{mytheorem} \begin{proof} Recall that $\mathcal{N}$ is recognized as inconsistent only if $(\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup\mathcal{C}_{\texttt{t}_2\xspace})$ is already inconsistent or if the 2-SAT problem instance $\textsc{Cl}_{\mathcal{N}}$ is unsatisfiable. In the former case, since $(\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup\mathcal{C}_{\texttt{t}_2\xspace})$ is inconsistent, so is $\mathcal{N}$. In the latter, by construction of $\textsc{Cl}_{\mathcal{N}}$, it is not possible to satisfy all the constraints in $\mathcal{C}_{\texttt{t}_1\xspace}\cup\mathcal{C}_{\texttt{t}_2\xspace}\cup \mathcal{C}_{\texttt{t}_3\xspace}$ (otherwise, as the reader can check, a satisfying assignment for $\textsc{Cl}_{\mathcal{N}}$ could be constructed straightforwardly); so, $\mathcal{N}$ is indeed inconsistent. On the other hand, by Propositions~\ref{prop:varphi_c1},~\ref{prop:varphi_c2}~and~\ref{prop:varphi_c3}, the schedule $\check{\varphi}_\mathcal{N}$ is indeed feasible for $\mathcal{N}$. \end{proof} The next result asserts that the halting time is strongly polynomial in the input size. \begin{mytheorem} Let \texttt{RDTP()}\xspace run on any input RDTP $\mathcal{N} = (\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace}\cup \mathcal{C}_{\texttt{t}_3\xspace})$. It always halts within time $O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|) + |\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\cdot|\mathcal{C}_{\texttt{t}_3\xspace}| + |\mathcal{C}_{\texttt{t}_3\xspace}|^2\big)$.
\end{mytheorem} \begin{proof} By Theorem~\ref{thm:ttp_complexity}, checking $(\mathcal{T},\mathcal{C}_{\texttt{t}_1\xspace}\cup \mathcal{C}_{\texttt{t}_2\xspace})$ takes $O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|) + |\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\big)$ time. Using that solution as an initial candidate, solving the two $\texttt{t}_2\xspace$DTPs $\mathcal{N}[d'_c]_{\texttt{t}_2\xspace}$ and $\mathcal{N}[d''_c]_{\texttt{t}_2\xspace}$, for each $c\in \mathcal{C}_{\texttt{t}_3\xspace}$ where $c=d'_c \vee d''_c$, takes $O\big(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{C}_{\texttt{t}_2\xspace}|\cdot (|\mathcal{C}_{\texttt{t}_1\xspace}| + |\mathcal{T}|\cdot \log |\mathcal{T}|) + |\mathcal{T}|\cdot d_{\mathcal{C}_{\texttt{t}_2\xspace}}\cdot|\mathcal{C}_{\texttt{t}_3\xspace}|\big)$ total time. Next, for each $c,\tilde{c}\in \mathcal{C}_{\texttt{t}_3\xspace}$ such that $\tilde{c}\neq c$, possibly adding the corresponding clauses to $\textsc{Cl}_{\mathcal{N}}$ takes $O(1)$ time per clause; so, $\textsc{Cl}_{\mathcal{N}}$ is built in total time $O(|\mathcal{C}_{\texttt{t}_3\xspace}|^2)$. Since $|\textsc{Cl}_{\mathcal{N}}|=O(|\mathcal{C}_{\texttt{t}_3\xspace}|^2)$, solving the 2-SAT problem on input $\textsc{Cl}_{\mathcal{N}}$ takes time $O(|\mathcal{C}_{\texttt{t}_3\xspace}|^2)$ (e.g.,\xspace with the algorithm of~\cite{Aspvall1979}). Finally, computing $d^\phi_c$ and $\check{\varphi}_\mathcal{N}$ takes $O(|\mathcal{T}|\cdot |\mathcal{C}_{\texttt{t}_3\xspace}|)$ time. Altogether, the stated time complexity of \texttt{RDTP()}\xspace follows.
\end{proof} \section{\text{NP}-completeness of Multi-Tail \& Multi-Head $\texttt{t}_3\xspace$HyTPs} This section investigates the tractability frontier of RDTPs by considering HyTN\xspace{s}~\cite{CominPR17}; the basic idea is to blend the two models together and study the complexity of the corresponding temporal problems. Two restricted kinds of disjunctive temporal problems, \textsc{tail-$\TTHREE$HyTP}\xspace and \textsc{head-$\TTHREE$HyTP}\xspace, are both proven to be \text{NP}-complete. The former problem is that of deciding whether a multi-tail {$\texttt{t}_3\xspace$}HyTN\xspace (i.e.,\xspace a temporal network in which the constraints can be modeled only by multi-tail hyperarcs and by $\texttt{t}_3\xspace$-constraints) is consistent or not. The latter, \textsc{head-$\TTHREE$HyTP}\xspace, is the same as the former but considers multi-head hyperarcs instead. Let us now focus on \textsc{tail-$\TTHREE$HyTP}\xspace. \begin{mytheorem}\label{Teo:npcompleteness_tail} \textsc{tail-$\TTHREE$HyTP}\xspace is \text{NP}-complete in a strong sense, i.e.,\xspace even if the input $(\mathcal{T}, \mathcal{A}\cup\mathcal{C}_{\texttt{t}_3})$ is restricted to satisfy $w_A(\cdot) \in\{-1, 0, 1\}$, $|T_A|\leq 2$ for every $A\in\mathcal{A}$, and every $\texttt{t}_3\xspace$-constraint $(l_i\leq X\leq u_i) \vee (l_j\leq Y\leq u_j)\in \mathcal{C}_{\texttt{t}_3}$ has all zero-valued lower/upper-bounds. \end{mytheorem} \begin{proof} We claim that if ${\cal H}=(\mathcal{T}, \mathcal{A}\cup\mathcal{C}_{\texttt{t}_3})$ is an integer-weighted and consistent multi-tail {$\texttt{t}_3\xspace$}HyTN\xspace, then it admits an integer-valued feasible schedule $s:\mathcal{T}\rightarrow\{-T, \ldots, T\}$, where $T = \sum_{A\in\mathcal{A}, v\in V} |w_A(v)| + \sum_{c\in\mathcal{C}_{\texttt{t}_3}, c=(l_1\leq X\leq u_1)\vee (l_2\leq Y\leq u_2)} (|l_1|+|u_1|+|l_2|+|u_2|)$.
Indeed, let $s$ be a feasible schedule (integer-valued or not) of ${\cal H}$, and consider the projection HyTN\xspace ${\cal H}^s\triangleq (\mathcal{T}, \mathcal{A}')$, for $ \mathcal{A}'\triangleq \mathcal{A}\cup\bigcup_{c\in\mathcal{C}_{\texttt{t}_3}} A^s_c$, where for every $c=(l_1\leq X\leq u_1)\vee (l_2\leq Y\leq u_2)\in\mathcal{C}_{\texttt{t}_3}$ we pick the following pair of $\texttt{t}_1\xspace$-constraints: $ A^s_c\triangleq \left\{ \begin{array}{ll} \big\{(Z-X\leq -l_1), (X-Z\leq u_1)\big\}, & \text{ if } l_1\leq s(X)\leq u_1;\\ \big\{(Z-Y\leq -l_2), (Y-Z\leq u_2)\big\}, & \text{ otherwise.} \end{array}\right.$ By construction of ${\cal H}^s$, $s$ is feasible for the HyTN\xspace ${\cal H}^s$. So, by Proposition~\ref{prop:int_sched}, ${\cal H}^s$ admits an integer-valued feasible schedule $s'$ bounded by $-T$ and $+T$ as above. By construction of ${\cal H}^s$, $s'$ is feasible for ${\cal H}$ too. Moreover, any such integer-valued feasible schedule can be verified in strongly polynomial time w.r.t.\xspace the size of the input; hence, \textsc{tail-$\TTHREE$HyTP}\xspace is in \text{NP}. To show that the problem is \text{NP}-hard, we describe a reduction from 3-SAT. Let us consider a boolean 3-CNF formula with $n\geq 1$ variables and $m\geq 1$ clauses: $\varphi(x_1, \ldots, x_n) = \bigwedge_{i=1}^m (\alpha_i \vee \beta_i \vee \gamma_i)$, where $C_i = (\alpha_i \vee \beta_i \vee \gamma_i)$ is the $i$-th clause of $\varphi$ and each $\alpha_i,\beta_i,\gamma_i\in \{x_j, \overline{x}_j\mid 1\leq j\leq n\}$ is either a positive or a negative literal. We associate to $\varphi$ a multi-tail {$\texttt{t}_3\xspace$}HyTN\xspace ${\cal H}_{\varphi}=(\mathcal{T}, \mathcal{A}\cup \mathcal{C}_{\texttt{t}_3})$, where each boolean variable $x_i$ occurring in $\varphi$ gets represented by two time-points, $x_i$ and $\overline{x}_i$.
$\mathcal{T}$ also contains a time-point $z$ that represents the reference initial time-point for ${\cal H}_{\varphi}$, i.e.,\xspace the first time-point that has to be executed at time zero. Moreover, for each pair $x_i$ and $\overline{x}_i$, ${\cal H}_{\varphi}$ contains: (i) a multi-tail hyperarc with tails $\{x_i,\overline{x}_i\}$, both weighted $-1$, and head in $z$; and (ii) a $\texttt{t}_3\xspace$-constraint $\big((0\leq x_i\leq 0) \vee (0\leq \overline{x}_i\leq 0)\big)\in\mathcal{C}_{\texttt{t}_3}$. If ${\cal H}_{\varphi}$ is consistent, the multi-tail hyperarc and the $\texttt{t}_3\xspace$-constraint associated to $x_i,\overline{x}_i$ ensure that ${\cal H}_{\varphi}$ admits an integer feasible schedule $s$ (as we mentioned above) such that $s(x_i)$ and $s(\overline{x}_i)$ are coherently set with values in $\{0,1\}$. In this way, $s$ is forced to encode a truth assignment on the $x_i$'s. The HyTN\xspace ${\cal H}_{\varphi}$ also contains a time-point $C_j$ for each clause $C_j$ of $\varphi$; each $C_j$ is connected by a multi-tail hyperarc with head in $C_j$ and tails over the literals occurring in $C_j$, and by two standard and opposite arcs with time-point $z$, as displayed in \figref{fig:gadgets} (right). This ensures that if ${\cal H}_{\varphi}$ admits a feasible schedule $s$, then $s$ assigns scheduling time $1$ to at least one of the time-points representing the literals connected with the multi-tail hyperarc. \figref{fig:gadgets} depicts the gadgets.
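For concreteness, the gadget construction above can be sketched programmatically as follows. This is a minimal illustration in Python: the tuple encoding of hyperarcs, the helper names, and the concrete arc weights are our own simplification of the gadgets described above; the formal definition of ${\cal H}_{\varphi}$ is given in Appendix~A.

```python
# Sketch of the reduction phi -> H_phi: each variable yields two time-points
# plus a variable gadget; each clause yields a time-point plus a clause gadget.
# (Encoding is illustrative only; weights follow the gadget description above.)
def build_h_phi(n_vars, clauses):
    """clauses: list of 3-tuples of literals, +k for x_k and -k for its negation.
    Returns (time_points, hyperarcs, t3_constraints)."""
    def lit(l):
        return 'x%d' % l if l > 0 else 'nx%d' % -l

    time_points = ['z']
    hyperarcs = []   # (head, [(tail, weight), ...]); singleton tail = standard arc
    t3 = []          # ((l1, X, u1), (l2, Y, u2)), read as a disjunction
    for i in range(1, n_vars + 1):
        xi, nxi = 'x%d' % i, 'nx%d' % i
        time_points += [xi, nxi]
        # Variable gadget: tails {x_i, nx_i}, both weighted -1, head z ...
        hyperarcs.append(('z', [(xi, -1), (nxi, -1)]))
        # ... plus the t3-constraint (0 <= x_i <= 0) or (0 <= nx_i <= 0).
        t3.append(((0, xi, 0), (0, nxi, 0)))
    for j, clause in enumerate(clauses, start=1):
        cj = 'C%d' % j
        time_points.append(cj)
        # Clause gadget: hyperarc with head C_j and tails on the three
        # literals (weight 0), plus two opposite standard arcs with z.
        hyperarcs.append((cj, [(lit(l), 0) for l in clause]))
        hyperarcs.append((cj, [('z', 1)]))    # standard arc between z and C_j
        hyperarcs.append(('z', [(cj, -1)]))   # its opposite companion arc
    return time_points, hyperarcs, t3
```

On a formula with $n$ variables and $m$ clauses this yields $1+2n+m$ time-points and $O(m+n)$ constraints, consistent with the size bound claimed below.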
\begin{figure}[tb] \begin{tikzpicture}[arrows=->,scale=1,node distance=2 and 2] \node[node, label={below:$[0]$}] (zero) {$z$}; \node[node, above right=of zero] (nX) {$\overline{x}_i$}; \node[node, above left=of zero] (X) {$x_i$}; \draw[] (zero) to [bend left=40] node[below] {$1$} (X); \draw[] (X) to [bend left=45] node[above] {$0$} (zero); \draw[>=,dashed, thick,sloped] (zero) to [bend right=10] node[above,xshift=.55ex,yshift=-.55ex] {\footnotesize $(0,0)$, \tiny $\texttt{t}_3\xspace$} (X); \draw[dotted, thick] (X) to [bend right=15] node[below] {$-1$} (zero); \draw[] (zero) to [bend left=45] node[above] {$1$} (nX); \draw[] (nX) to [bend left=40] node[below] {$0$} (zero); \draw[>=, dashed, thick,sloped] (zero) to [bend left=10] node[above,yshift=-.55ex] {\footnotesize $(0,0)$, \tiny $\texttt{t}_3\xspace$} (nX); \draw[dotted, thick] (nX) to [bend left=15] node[below] {$-1$} (zero); \end{tikzpicture}\label{FIG:Var_i} \begin{tikzpicture}[arrows=->,scale=.65,node distance=1.5 and 2] \node[node,label={above:$[1]$}] (one) {$C_j$}; \node[node,below =of one] (beta) {$\beta_j$}; \node[node,left=of beta] (alpha) {$\alpha_j$}; \node[node,right=of beta] (gamma) {$\gamma_j$}; \node[node,label={below:$[0]$}, below=of beta] (zero) {$z$}; \coordinate (fakeL) at ($(alpha.west)+(-.3,0)$); \coordinate (fakeR) at ($(gamma.east)+(.3,0)$); \draw[>=] (zero) to [bend left=45] node[left] {$+1$} (fakeL.north); \draw[] (fakeL) to [bend left=45] (one); \draw[>=] (one) to [bend left=45] node[above] {$-1$} (fakeR.south); \draw[] (fakeR) to [bend left=45] (zero); \draw[] (alpha) to [bend right=20] node[timeLabel,below] {} (zero); \draw[] (zero) to [bend right=20] (alpha); \draw[dashed, dotted] (alpha) to [bend right=5] (zero); \draw[dashed] (zero) to [bend right=5] (alpha); \draw[dotted, thick] (alpha) to [bend right=25] node[timeLabel,below] {$0$} (one.south west); \draw[] (zero) to [bend left=25] (beta); \draw[] (beta) to [bend left=25] (zero); \draw[dotted] (beta) to [bend right=10] (zero); 
\draw[dashed] (zero) to [bend right=10] (beta); \draw[dotted, thick] (beta) to [] node[timeLabel,right] {$0$} (one.south); \draw[] (gamma) to [bend left=20] (zero); \draw[] (zero) to [bend left=20] (gamma); \draw[dotted] (gamma) to [bend left=5] (zero); \draw[dashed] (zero) to [bend left=5] (gamma); \draw[dotted, thick] (gamma) to [bend left=25] node[timeLabel,below] {$0$} (one.south east); \end{tikzpicture} \caption{Variable and clause gadgets (at left and right, respectively) used in Theorem~\ref{Teo:npcompleteness_tail}.}\label{fig:gadgets} \end{figure} A more formal definition of ${\cal H}_{\varphi}$ is given in Appendix~A. The reader can check that $|\mathcal{T}|=1+2n+m=O(m+n)$, $m_{\mathcal{A}}=O(m+n)$, $|\mathcal{C}_{\texttt{t}_3}|=O(n)$; therefore, the transformation is linearly bounded. We next show that $\varphi$ is satisfiable if and only if ${\cal H}_{\varphi}$ is consistent. Any truth assignment $\nu:\{x_1, \ldots, x_n\}\rightarrow \{\texttt{true}, \texttt{false}\}$ satisfying $\varphi$ can be translated into a feasible schedule $s:\mathcal{T}\rightarrow \ensuremath{\mathbb{Z}}$ of ${\cal H}_{\varphi}$ as follows. For time-point $z$, let $s(z)=0$, and let $s(C_j)=1$ for each $j=1, \ldots, m$; then, for each $i=1, \ldots, n$, let $s(x_i) = 1$ and $s(\overline{x}_i) = 0$ if the truth value of $x_i$, $\nu(x_i)$, is \texttt{true}, otherwise let $s(x_i) = 0$ and $s(\overline{x}_i) = 1$. It is simple to verify that, using this schedule $s$, all the constraints comprising each single gadget are satisfied and, therefore, the network is consistent. So, ${\cal H}_{\varphi}$ is consistent. Vice versa, assume that ${\cal H}_{\varphi}$ is consistent. Then, it admits an integer-valued feasible schedule $s$ (as we mentioned above). After the translation $s(v)\triangleq s(v) - s(z)$, we can assume that $s(z)=0$. 
Hence, $s(C_j) = 1$ for each $j=1,\ldots, m$, as enforced by the two standard arcs incident at $C_j$ in the clause gadget, and $\{s(x_i),s(\overline{x}_i)\} = \{0,1\}$ for each $i=1,\ldots, n$, as enforced by the constraints comprising the variable gadgets. Therefore, the feasible schedule $s$ can be translated into a truth assignment $\nu:\{x_1, \ldots, x_n\}\rightarrow \{\texttt{true}, \texttt{false}\}$ defined by $\nu(x_i)=\texttt{true}$ if $s(x_i)=1$ (and $s(\overline{x}_i)=0$); $\nu(x_i)=\texttt{false}$ if $s(x_i)=0$ (and $s(\overline{x}_i)=1$) for every $i = 1, \ldots , n$. So, $\varphi$ is satisfiable. To conclude, we observe that any hyperarc $A\in\mathcal{A}$ of ${\cal H}_{\varphi}$ has weights $w_A(\cdot)\in\{-1, 0, 1\}$, size $|A|\leq 3$, and any $\texttt{t}_3\xspace$-constraint $c=(l_i\leq X\leq u_i) \vee (l_j\leq Y\leq u_j)\in \mathcal{C}_{\texttt{t}_3}$ has zero lower and upper-bounds (i.e.,\xspace $l_i=u_i=l_j=u_j=0$). Since any hyperarc with three tails can be replaced by two hyperarcs each having at most two tails, the consistency problem remains \text{NP}-complete even if $|A|\leq 2$ for every $A\in \mathcal{A}$. \end{proof} In order to prove that \textsc{head-$\TTHREE$HyTP}\xspace is also \text{NP}-complete, we could proceed with an argument similar to that of Theorem~\ref{Teo:npcompleteness_tail}. However, we also observe that the same result follows as an immediate corollary of the following inter-reducibility between the two models. \begin{mydefinition} A multi-tail (multi-head) RHyTN is any temporal network in which the constraints can be modeled only by multi-tail (multi-head) hyperarcs and by $\{\texttt{t}_2\xspace, \texttt{t}_3\xspace\}$ disjunctive temporal constraints. The problem of checking whether a given RHyTN is consistent is named RHyTP. \end{mydefinition} \begin{proposition}\label{prop:inter-reducitble-HTNs} Multi-head and multi-tail RHyTPs are inter-reducible by means of $\log$-space, linear-time, local-replacement reductions.
Particularly, multi-head and multi-tail $\texttt{t}_3\xspace$HyTPs are inter-reducible by such reductions. (The proof is in Appendix~A) \end{proposition} Therefore, by Proposition~\ref{prop:inter-reducitble-HTNs}, it follows that \textsc{head-$\TTHREE$HyTP}\xspace is also strongly \text{NP}-complete. \section{Pseudo-Polynomial Time Algorithm for $\texttt{t}_2$HyTPs}\label{sect:algo_type2hytp} We end by studying multi-tail and multi-head $\texttt{t}_2$HyTNs (i.e.,\xspace temporal networks in which the temporal constraints can only be $\texttt{t}_2\xspace$ disjunctive temporal constraints and either only multi-tail or only multi-head hyperarcs). It turns out that checking the corresponding temporal problems, \textsc{tail-$\TTWO$HyTP}\xspace and \textsc{head-$\TTWO$HyTP}\xspace, lies in $\textsc{NP}\cap \text{co-}\textsc{NP}$ and admits pseudo-polynomial time algorithms. By Proposition~\ref{prop:inter-reducitble-HTNs}, it is sufficient to focus on multi-head $\texttt{t}_2$HyTPs. The corresponding pseudo-polynomial time algorithm is named $\texttt{t}_2\texttt{HyTP()}$\xspace and described below -- notice that it generalizes \texttt{$\text{t}_2\text{DTP()}$}\xspace. Given as input any integer-weighted multi-head $\texttt{t}_2$HyTN ${\cal H}_{\texttt{t}_2} = (\mathcal{T},\mathcal{A}\cup \mathcal{C}_{\texttt{t}_2})$, we first solve the HyTP ${\cal H} = (\mathcal{T},\mathcal{A})$ with the VI algorithm of Theorem~\ref{Teo:MainAlgorithms}. If ${\cal H}$ is recognized as inconsistent, the algorithm halts. Otherwise, let $\varphi$ be the least feasible schedule of ${\cal H}$. Then proceed as follows: While $\exists$ some $X\in \mathcal{T}$ and $c_X=\bigvee_{i=1}^{k}(l_i\leq X\leq u_i)\in \mathcal{C}_{\texttt{t}_2}$ s.t. $\varphi(X)$ does not satisfy $c_X$: If $\varphi(X)>u_k(=\max_i u_i)$, then ${\cal H}_{\texttt{t}_2}$ is recognized as inconsistent; otherwise, let $i^*$ be the smallest $i\in [1,k]$ such that $\varphi(X)<l_i$.
First, we increase the value of $\varphi(X)$ up to $l_{i^*}$, i.e.,\xspace we update $\varphi(X)\leftarrow l_{i^*}$. Second, the VI algorithm of Theorem~\ref{Teo:MainAlgorithms} is invoked on input $({\cal H}, \varphi)$, and $\varphi$ becomes the schedule returned by that run of VI. The process iterates in this manner; if the while-loop completes without recognizing ${\cal H}_{\texttt{t}_2}$ as inconsistent, $\varphi$ is returned. The correctness and the time complexity are asserted below. (The proof is in Appendix~A) \begin{mytheorem}\label{thm:htn_algo} $\texttt{t}_2\texttt{HyTP()}$\xspace is correct, i.e.,\xspace when run on any integer-weighted multi-head $\texttt{t}_2$HyTP ${\cal H}_{\texttt{t}_2} = (\mathcal{T},\mathcal{A}\cup \mathcal{C}_{\texttt{t}_2})$, it returns an integer-valued feasible schedule $\varphi:\mathcal{T}\rightarrow\ensuremath{\mathbb{Z}}$ in case ${\cal H}_{\texttt{t}_2}$ is consistent; otherwise, ${\cal H}_{\texttt{t}_2}$ is correctly recognized as inconsistent. Moreover, the corresponding time complexity is pseudo-polynomial, i.e.,\xspace \begin{align*}\hspace{2.5ex} \texttt{Time}_{\texttt{t}_2\texttt{HyTP()}}({\cal H}_{\texttt{t}_2}) = O\big(( & |\mathcal{T}|+|\mathcal{A}|) \cdot m_{\mathcal{A}}\cdot W_{\mathcal{A},\mathcal{C}_{\texttt{t}_2}} \big), \\ \text{ where } & W_{\mathcal{A},\mathcal{C}_{\texttt{t}_2}}\triangleq \max\Big(\max_{A\in \mathcal{A}}\max_{h\in A} |w_A(h)|, \max_{\substack{l_j \text{ appears in any } \\ \vee_{i=1}^{k}(l_i\leq X\leq u_i)\in \mathcal{C}_{\texttt{t}_2}}} l_j\Big). \end{align*} \end{mytheorem} Finally, since $\texttt{t}_2\texttt{HyTP()}$\xspace is correct, it is possible to establish the following complexity result. \begin{mytheorem}\label{thm:np_conp} $\textsc{tail-$\TTWO$HyTP}\xspace, \textsc{head-$\TTWO$HyTP}\xspace \in \textsc{NP}\cap \text{co-}\textsc{NP}$.
(The proof is in Appendix~A) \end{mytheorem} \section{Conclusions and Future Works}\label{sect.CFW} A deeper combinatorial understanding of the algorithmics of RDTPs led to a new elementary deterministic strongly polynomial time procedure for solving them, significantly improving the asymptotic running times previously suggested by Kumar. In future work, we would like to investigate possible generalizations and extensions of the proposed algorithms, aiming to cover compatible (or even wider) subclasses of the disjunctive temporal constraint~problem.
\section{Introduction} \vspace{-0.5em} Mobile edge computing (MEC) has emerged as a promising technique to enhance the computation capacity and energy efficiency of wireless devices, for enabling various computation-intensive and latency-critical Internet-of-things (IoT) applications \cite{Mao2017}. By deploying MEC servers at the network edge such as access points (APs), IoT devices can wirelessly offload the computation-heavy tasks to APs for efficient remote execution (see, e.g., \cite{BarbarossaSardellittiLorenzo2014,Chen2016,WangXuTWC,WangXuGCWorkshop,YouHuangChaeKim2017}). Despite the benefits, the wireless task offloading introduces new data security problems for wireless IoT devices. Due to the broadcast nature of wireless communications, the computation tasks offloaded from these devices are likely to be overheard by malicious attackers nearby, which may decode such information for launching security attacks. For the success of MEC, it is crucial to keep the confidentiality of the task offloading against eavesdropping attacks. Physical-layer security has emerged as a viable solution to ensure perfectly secured wireless communications against eavesdropping attacks, {provided that (partial) channel state information (CSI) of the eavesdroppers is available at the legitimate users} (see, e.g., \cite{Wyner1975,LiangPoorShamai2008,WangTao2011}). In physical-layer security, the key design objective is to maximize the so-called secrecy rate, i.e., the secure communication rate under the condition that the eavesdroppers cannot overhear any information. In this letter, we propose to employ the physical-layer security to secure the wireless computation offloading in MEC. We particularly focus on a multiuser multicarrier (e.g., orthogonal frequency-division multiple access (OFDMA)) system as shown in Fig. \ref{fig0}, in which a single AP (with an MEC server integrated) serves multiple users for their computation offloading, in the presence of a malicious eavesdropper. 
Each user can partition the computation tasks into two parts, which are computed locally and securely offloaded to the AP, respectively. {Due to the employment of physical-layer security, new secure offloading constraints are imposed, i.e., the offloading rate at each user cannot exceed its secrecy rate to the AP, such that no information will be leaked to the eavesdropper.} {By taking into account such constraints and considering practical issues such as the imperfect CSI of the eavesdropper, how to jointly optimize the communication and computation resource allocations at multiple users for efficient MEC is a new problem that has not yet been investigated in the literature. Under this setup, we minimize the weighted sum-energy consumption for these users while ensuring their computation latency requirements, by jointly optimizing the users' local computing, as well as their transmit power and subcarrier allocations for secure offloading.} Although this problem is non-convex, we obtain its solution in a semi-closed form via the Lagrange duality method. Via numerical results, we validate the effectiveness of our proposed design over other benchmark schemes. We also show that as compared to the conventional setup without eavesdropper, the users consume more energy to secure the computation offloading against the eavesdropper's interception. \begin{figure} \centering \epsfxsize=1\linewidth \includegraphics[width=6.4cm]{system.eps} \caption{The MEC system model with secure multiuser computation offloading over a multicarrier channel, in the presence of a malicious eavesdropper.} \label{fig0}\vspace{-2em} \end{figure} \vspace{-1em} \section{System Model} \vspace{-0.5em} As shown in Fig. 
\ref{fig0}, we consider an MEC system with a single AP (with an MEC server integrated) and $K > 1$ users, in the presence of a malicious eavesdropper.{\footnote{Our results are extendible to the case with more than one eavesdropper, in which each user's achievable secrecy rate for offloading should be modified based on that in the so-called compound wire-tap channels with multiple eavesdroppers (see, e.g., \cite{Liang2009}).} Let $\mathcal K \triangleq \{1,\ldots,K\}$ denote the set of users. All nodes are equipped with a single antenna. We focus on a particular time block with duration $T$, during which each user $k\in\mathcal K$ needs to execute the computation tasks with $L_k > 0$ input bits. We consider the data partition task model for partial offloading, in which each task-input bit can be viewed as an independent sub-task. Therefore, user $k$ can partition the respective tasks into two portions with $l_k$ and $(L_k-l_k)$ input bits, which are locally computed at the user itself and securely offloaded to the AP over a multicarrier channel for remote execution, respectively. We consider a quasi-static subcarrier channel model, in which the wireless channels remain constant over each subcarrier within this block. Let $N$ denote the number of subcarriers in this system. For each subcarrier $n\in\mathcal N \triangleq \{1,\ldots,N\}$, let $h_{k,n}$ and $\tilde{g}_{k,n}$ denote the channel power gains from user $k$ to the AP and the eavesdropper, respectively. We assume that the AP perfectly knows the CSI of $h_{k,n}$'s and the computation information of all users, {but only partially knows that of $\tilde{g}_{k,n}$'s.{\footnote{When the eavesdropper is active, each user $k$ can monitor the eavesdropper's potential active transmission to estimate the corresponding $\tilde{g}_{k,n}$'s. When the eavesdropper is passive without transmitting any signal, each user $k$ can still detect the passive eavesdropping, by e.g. 
detecting the eavesdropper's local oscillator power leaked from its RF front end \cite{Mukherjee2012}. For both cases, the users can eventually obtain some information of $\tilde{g}_{k,n}$'s and then send such information back to the AP. In this case, the AP can partially know $\tilde{g}_{k,n}$'s with some errors.}} As commonly adopted in the physical-layer security literature \cite{WangSecure2013,Muhammad2017,Khandaker2018}, we consider the deterministic CSI uncertainty model for $\tilde{g}_{k,n}$'s, where $\tilde{g}_{k,n}=\overline{g}_{k,n}+\Delta g_{k,n}, k\in\mathcal K, n\in\mathcal N$. Here, $\overline{g}_{k,n}$ denotes the estimated CSI of $\tilde{g}_{k,n}$ at the AP and $\Delta g_{k,n}$ denotes the estimation error that is bounded by a possible value $\epsilon \ge 0$ (also known by the AP) as $|\Delta g_{k,n} | \leq \epsilon$.} As for the local computing of the $l_k$ input bits at each user $k\in\mathcal K$, let $C_k$ denote the number of CPU cycles required for computing {one task-input bit (or each independent sub-task). Accordingly, the total number of CPU cycles required for computing the $l_k$ bits is $C_k l_k$. Employing the dynamic voltage and frequency scaling technique \cite{Mao2017,BarbarossaSardellittiLorenzo2014}, user $k$ can control the CPU frequency $f_{k,m}$ for each cycle $m\in\{1,\ldots,C_k l_k\}$. In particular, in order to minimize the energy consumption for local computing at each user $k$, the CPU frequencies $f_{k,m}$'s should be identical over different cycles $m$'s \cite{WangXuTWC}. By using this fact and noting that the local execution time should be $T$ to meet the computation latency, we have the CPU frequencies at each user $k$ as $f_{k,m} = C_k l_k/T, \forall m\in\{1,\ldots,C_k l_k\}$. 
Therefore, the user $k$'s energy consumption for local computing is given by $E_k^{\text{loc}} = \sum_{m=1}^{C_kl_k} \zeta_k f_{k,m}^2 = \zeta_k C_k^3 l_k^3/T^2$, where $\zeta_k$ denotes the effective capacitance coefficient that depends on the chip architecture at user $k$ \cite{Mao2017}. Furthermore, let $f_k^{\max}$ denote the maximum CPU frequency at each user $k$; we have $f_{k,m} \le f_k^{\max}, \forall k,m$. Accordingly, it must hold that $l_k \le l_k^{\max} \triangleq f_k^{\max}T/C_k, \forall k\in\mathcal K$. Next, we consider the secure offloading of the $(L_k - l_k)$ task input bits for each user $k\in\mathcal K$.\footnote{Note that the MEC server and the AP generally have large computation capability and transmission power, respectively. Therefore, we ignore the time required for remote computation at the MEC server and computation results downloading from the AP to the users (see, e.g., \cite{WangXuTWC,WangXuGCWorkshop,YouHuangChaeKim2017}).} Let $\{\theta_{k,n}\} $ denote the indicators for subcarrier allocation with $\theta_{k,n}\in \{0,1\}$, where $\theta_{k,n} = 1$ or $\theta_{k,n} = 0$ mean that the sub-carrier $n\in\mathcal N$ is or is not allocated to user $k$, respectively. Let $p_{k,n} \ge 0$ denote the transmit power at user $k$ for secure task offloading, and $B$ the bandwidth of each subcarrier. 
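As a quick numerical illustration of the local-computing model above, the sketch below evaluates $E_k^{\text{loc}}=\zeta_k C_k^3 l_k^3/T^2$ and the cap $l_k^{\max}=f_k^{\max}T/C_k$ (the function names and default parameter values are introduced here for illustration only; the defaults follow the simulation setup of Section IV):

```python
# Sketch of the local-computing model: C*l cycles, each run at the
# identical per-cycle frequency f = C*l/T, cost zeta*f^2 per cycle.
def local_energy(l_bits, C=1e3, zeta=1e-28, T=1.0):
    """Energy (J) for computing l_bits input bits locally within time T."""
    cycles = C * l_bits          # total CPU cycles C_k * l_k
    f = cycles / T               # identical per-cycle CPU frequency
    return cycles * zeta * f**2  # equals zeta * C^3 * l^3 / T^2

def l_max(f_max, C=1e3, T=1.0):
    """Largest number of bits computable locally: f_max * T / C."""
    return f_max * T / C
```

For instance, with $\zeta_k=10^{-28}$ and $C_k=10^3$ cycles/bit, computing $10^5$ bits within $T=1$ s costs $10^{-4}$ J; note the cubic growth in $l_k$, which is what makes offloading attractive for large task sizes.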
{Under the CSI uncertainty model, the worst-case achievable secrecy rate (in bits/sec) at user $k$ for offloading is given as \begin{align*} &R_k(\mv \theta_k,\mv p_k) = \min_{|\Delta g_{k,n}| \le \epsilon} B \sum_{n=1}^N \theta_{k,n}\big(\log_2\big(1 + h_{k,n} p_{k,n}\big) \\ &- \log_2\big(1 + {\tilde{g}_{k,n} p_{k,n}}\big)\big)^+\end{align*} \begin{align*} &= B \sum_{n=1}^N \theta_{k,n}\big(\log_2\big(1 + h_{k,n} p_{k,n}\big) - \log_2\big(1 + {g_{k,n} p_{k,n}}\big)\big)^+, \end{align*}where $g_{k,n} = \overline{g}_{k,n} + \epsilon$ denotes the best possible channel power gain of the eavesdropper known by the AP.} Here, the receiver noise powers at the AP and the eavesdropper are normalized to be unity, $(x)^+ \triangleq \max(x,0)$, $\mv \theta_k \triangleq [\theta_{k,1},\ldots,\theta_{k,N}]^\dagger$, and $\mv p_k \triangleq [p_{k,1},\ldots,p_{k,N}]^\dagger$, with the superscript $\dagger$ denoting the transpose. The user $k$'s transmission energy consumption for secure offloading is given as $E_k^{\text{off}} = \sum_{n=1}^N \theta_{k,n} p_{k,n}T$. Under this setup, our objective is to minimize the weighted sum-energy consumption at the $K$ users (i.e., $\sum_{k=1}^K \alpha_k(E_k^{\text{loc}} + E_k^{\text{off}})$) while ensuring the successful computation task execution within this block. Here, $\alpha_k > 0$ denotes the energy weight for each user $k\in\mathcal K$, where a larger value of $\alpha_k$ indicates a higher priority for user $k$ in energy minimization. The decision variables include the task partition $\mv l \triangleq [l_{1},\ldots,l_{K}]^\dagger$, as well as the subcarrier allocation $\mv{\Theta} \triangleq [\mv \theta_1,\ldots,\mv \theta_{K}]$ and the power allocation $\mv{P} \triangleq [\mv p_1,\ldots,\mv p_{K}]$ for secure task offloading. 
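The worst-case secrecy rate defined above can be sketched as follows, assuming unit noise powers as in the letter (the function and argument names are illustrative, not from the letter):

```python
import math

def secrecy_rate(theta, p, h, g_bar, eps, B=1.0):
    """Worst-case secrecy rate R_k in bits/s over N subcarriers.

    The worst case plugs in the largest eavesdropper gain g = g_bar + eps;
    each per-subcarrier rate difference is clipped at zero by (.)^+.
    """
    total = 0.0
    for n in range(len(p)):
        g = g_bar[n] + eps
        diff = math.log2(1 + h[n] * p[n]) - math.log2(1 + g * p[n])
        total += theta[n] * max(diff, 0.0)
    return B * total
```

Whenever $h_{k,n} \le g_{k,n}$ the clipped term vanishes, so a subcarrier whose legitimate channel is no stronger than the eavesdropper's contributes nothing to the secrecy rate regardless of the power spent on it.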
Mathematically, this problem is formulated as \begin{align} \mathrm{(P1)}:&\min_{\mv l, \mv{\Theta},\mv P} \sum_{k=1}^K \alpha_k \bigg(\zeta_k C_k^3 l_k^3/T^2 + \sum_{n=1}^N \theta_{k,n} p_{k,n}T\bigg) \nonumber\\ \mathrm{s.t.}~& TR_k(\mv \theta_k,\mv p_k) \ge L_k - l_k, \forall k\in\mathcal K \label{eqn:rate:con}\\ &0\le l_k \le l_k^{\max}, p_{k,n} \ge 0, \forall k\in\mathcal K, n\in\mathcal N\label{eqn:power:con}\\ &\sum_{k=1}^K \theta_{k,n} = 1, \theta_{k,n} \in \{0,1\},\forall n\in\mathcal N,\label{eqn:subcarrier:con} \end{align} {Notice that in \eqref{eqn:rate:con}, the worst-case secrecy rate for each user $k$ must be no smaller than the offloading rate, such that the offloading is secured under any possible eavesdropper channels.} Furthermore, the constraints in \eqref{eqn:subcarrier:con} ensure that each subcarrier is only allocated to one user. However, due to the binary variables in $\mv{\Theta}$, problem (P1) is a non-convex optimization problem that is generally difficult to solve. {Before proceeding, it is worth noting that in the special case without the eavesdropper (or equivalently $g_{k,n} = 0, \forall k\in\mathcal K,n\in\mathcal N$), problem (P1) corresponds to the energy-efficient multiuser computation offloading problem over multicarrier systems in \cite{YouHuangChaeKim2017}. In the other special case with only offloading (or equivalently $l_{k} = 0, \forall k\in\mathcal K$), problem (P1) corresponds to a secrecy communication problem over a multicarrier channel (see, e.g., \cite{WangTao2011}). Therefore, problem (P1) unifies the conventional computation offloading design in MEC and the energy efficient communication with physical-layer security.} \vspace{-1em} \section{Proposed Solution to Problem (P1)} \vspace{-0.5em} Though non-convex, it can be shown that problem (P1) satisfies the time-sharing condition in \cite{YuLui2006}, as the number of subcarriers $N$ becomes infinite. 
In this case, zero duality gap or strong duality holds between (P1) and its Lagrange dual problem. In this section, we solve problem (P1) by using the Lagrange dual method, by considering the zero duality gap.\footnote{In our simulations in Section \ref{sec:IV} with $N=64$ subcarriers, the duality gap of (P1) is actually negligibly small and thus can be ignored. Moreover, the duality gap reduces as $N$ increases, and approaches zero for $N \to \infty$\cite{YuLui2006}.} Let $\lambda_k \ge 0, k\in\mathcal K$, denote the dual variable associated with the $k$-th constraint in \eqref{eqn:rate:con}. The Lagrangian of (P1) is given as\vspace{-0.9em} \begin{footnotesize} \begin{align} &\mathcal{L}(\mv l, \mv{\Theta},\mv P,\mv \lambda)= \sum_{k=1}^K \alpha_k \big(\zeta_k C_k^3 l_k^3/T^2 + \sum_{n=1}^N \theta_{k,n}p_{k,n}T\big)\\ \nonumber&- \sum_{k=1}^K \lambda_k\bigg(T B\sum_{n=1}^N \theta_{k,n}\left(\log_2 \frac{1 + h_{k,n} p_{k,n}}{1 + g_{k,n} p_{k,n}}\right)^+ - (L_k - l_k) \bigg). \end{align} \end{footnotesize} The dual function is given by \begin{align} f(\mv \lambda) = \min_{\mv l, \mv{\Theta},\mv P}& \mathcal{L}(\mv l, \mv{\Theta},\mv P,\mv \lambda),~~ \mathrm{s.t.}~\eqref{eqn:power:con}~{\text{and}}~\eqref{eqn:subcarrier:con}.\label{eqn:dual:function} \end{align} The dual problem is \begin{align} \mathrm{(D1)}:\max_{\mv \lambda}~&f(\mv \lambda),~\mathrm{s.t.}~\lambda_k \ge 0, \forall k\in\mathcal K.\label{eqn:dual:problem} \end{align} In the following, we solve problem (P1) by first solving problem \eqref{eqn:dual:function} to obtain $f(\mv \lambda)$ under any given $\mv \lambda$ satisfying \eqref{eqn:dual:problem}, and then solving (D1) via updating $\mv \lambda$ to maximize $f(\mv \lambda)$. First, consider problem \eqref{eqn:dual:function} under any given $\mv \lambda$ satisfying \eqref{eqn:dual:problem}. 
In this case, problem \eqref{eqn:dual:function} can be decomposed into the following subproblems by dropping the irrelevant constant $\sum_{k=1}^K\lambda_k L_k$. \begin{align} &~~~~\min_{0\le l_k \le l_k^{\max}}~\alpha_k \zeta_k C_k^3l_k^3/T^2 - \lambda_k l_k, \label{eqn:sub:problem:1}\\ &\min_{\{\theta_{k,n}, p_{k,n}\}_{k=1}^K} ~\sum_{k=1}^K\alpha_k \theta_{k,n}p_{k,n}T - \sum_{k=1}^K \lambda_k \theta_{k,n}TB\nonumber\\ &~~~~\times\big(\log_2\big(1 + {h_{k,n} p_{k,n}}\big)- \log_2\big(1 + {g_{k,n} p_{k,n}}\big)\big)^+\nonumber\\ &~~\mathrm{s.t.}~p_{k,n} \ge 0, \theta_{k,n} \in \{0,1\}, \forall k\in\mathcal K,~ \sum_{k=1}^K \theta_{k,n} = 1,\label{eqn:sub:problem:2} \end{align} where each subproblem \eqref{eqn:sub:problem:1} corresponds to one user $k$, and each subproblem \eqref{eqn:sub:problem:2} corresponds to one subcarrier $n$. For the $k$th subproblem in \eqref{eqn:sub:problem:1}, by checking the first-order derivative, the optimal solution is given by \begin{align}\label{eqn:l:k} {l_k^{(\mv \lambda)} = \min\left(\sqrt{\frac{\lambda_k T^2}{3\alpha_k \zeta_kC_k^3}},l_k^{\max}\right).} \end{align} For the $n$th subproblem in \eqref{eqn:sub:problem:2}, it is evident that only one user can be active due to the constraint $\sum_{k=1}^K \theta_{k,n} = 1$, i.e., there exists exactly one user $k$ such that $\theta_{k,n} = 1$ and $\theta_{\hat k,n} = 0, \forall \hat k \neq k$. As a result, we can optimally solve this problem by solving for $\{p_{k,n}\}_{k=1}^K$ under each possible $\{\theta_{k,n}\}_{k=1}^K$, and then comparing the resultant objective values to find the optimal $\{\theta_{k,n}\}_{k=1}^K$. When user $k$ is active (i.e., $\theta_{k,n} = 1$ and $\theta_{\hat k,n} = 0, \forall \hat k \neq k$), we define the objective function of problem \eqref{eqn:sub:problem:2} as \begin{align} \nonumber\psi_{k,n}(p_{k,n})\triangleq \alpha_k p_{k,n}T - \lambda_k TB \left(\log_2\frac{1 + h_{k,n} p_{k,n}}{1 + g_{k,n} p_{k,n}}\right)^+. 
\end{align} Then we have the following lemma to solve problem \eqref{eqn:sub:problem:2}. \begin{lemma}\label{lemma:1} For the $n$th subproblem in \eqref{eqn:sub:problem:2} under given $\mv\lambda$, the optimal power allocation solution is given as \begin{align}\label{eqn:p_kn:lambda} p_{k,n}^{(\mv\lambda)}= &\left\{ \begin{array}{ll} 0, & {\text{if}}~h_{k,n} \le g_{k,n} \\ \left(\frac{\lambda_kB}{\ln 2 \alpha_k} - \frac{1}{h_{k,n}}\right)^+, & {\text{if}}~g_{k,n}=0\\ \bigg(\frac{\sqrt{\Delta^{(\mv\lambda)}_{k,n}}-(h_{k,n} + g_{k,n})}{2h_{k,n}g_{k,n}} \bigg)^+, & {\text{otherwise}}, \end{array} \right. \end{align} $\forall k\in\mathcal K$, where \begin{align} \nonumber\Delta_{k,n}^{(\mv\lambda)} = (h_{k,n} - g_{k,n})^2 + \frac{4\lambda_kBh_{k,n}g_{k,n}}{\ln 2\alpha_k} (h_{k,n} - g_{k,n}). \end{align} In this case, the index of the active user is \begin{align}\label{eqn:k:lambda:n} k^{(\mv\lambda)}_n = \arg \min_{k\in\mathcal K} \psi_{k,n}(p_{k,n}^{(\mv\lambda)}), \end{align} and accordingly, the subcarrier allocation is given as \begin{align}\label{eqn:theta:lambda:n} \theta_{k,n}^{(\mv\lambda)} = & \left\{ \begin{array}{ll} 1, & {\text{if}}~k= k^{(\mv\lambda)}_n \\ 0,& {\text{otherwise}} \end{array} \right.,\forall k\in\mathcal K. \end{align} \end{lemma} \begin{IEEEproof} Suppose that user $k$ is active with $\theta_{k,n} = 1$ and $\theta_{\hat k,n} = 0, \forall \hat k \neq k$. Then problem \eqref{eqn:sub:problem:2} is re-expressed as $\min_{p_{k,n} \ge 0} \psi_{k,n}(p_{k,n})$. When $h_{k,n} \le g_{k,n}$, we have $\log_2(1 + h_{k,n} p_{k,n}) - \log_2(1 + g_{k,n} p_{k,n})\le 0$ under any $p_{k,n}\ge 0$, and therefore, it follows that $p_{k,n}^{(\mv\lambda)} = 0$ in this case. When $h_{k,n} > g_{k,n}$, this problem is indeed convex. By checking the first-order derivative of $\psi_{k,n}(p_{k,n})$ in this case, we have $p_{k,n}^{(\mv\lambda)}$ in \eqref{eqn:p_kn:lambda}. 
As a result, the optimal objective value of problem \eqref{eqn:sub:problem:2} in the case with the user $k$ being active is given as $\psi_{k,n}(p_{k,n}^{(\mv\lambda)})$. By comparing $\psi_{k,n}(p_{k,n}^{(\mv\lambda)})$'s under different $k$'s, the optimal $k^{(\mv\lambda)}_n$ and $\theta_{k,n}^{(\mv\lambda)}$ can be obtained in \eqref{eqn:k:lambda:n} and \eqref{eqn:theta:lambda:n}, respectively. \end{IEEEproof} By combining $l_k^{(\mv \lambda)}$'s in \eqref{eqn:l:k} as well as $\theta_{k,n}^{(\mv\lambda)}$'s and $p_{k,n}^{(\mv\lambda)}$'s in Lemma \ref{lemma:1}, the dual function $f(\mv \lambda)$ in \eqref{eqn:dual:function} is obtained. Next, it remains to solve problem (D1). As the dual problem (D1) is always convex but generally non-differentiable, we can use subgradient-based methods such as the ellipsoid method to solve (D1) optimally, by using the fact that the subgradient of $f(\mv \lambda)$ with respect to $\lambda_k$ is $( L_k - l_k^{(\mv\lambda)}) - TR_k(\mv \theta_k^{(\mv\lambda)},\mv p_k^{(\mv\lambda)})$. We denote $\mv\lambda^{*} = [\lambda_1^{*},\ldots,\lambda_K^{*}]^\dagger$ as the optimal dual solution to (D1). Finally, based on the optimal $\mv\lambda^{*}$ to (D1), we have the following proposition to solve (P1). \begin{proposition}\label{proposition:1} The solution to problem (P1) is given as $l_k^* = l_k^{(\mv\lambda^*)}, p_{k,n}^* = p_{k,n}^{(\mv\lambda^*)}$, and $\theta_{k,n}^* = \theta_{k,n}^{(\mv\lambda^*)}, \forall k\in\mathcal K, n\in\mathcal N$, where $l_k^{(\mv \lambda)}$'s, $p_{k,n}^{(\mv\lambda)}$'s, and $\theta_{k,n}^{(\mv\lambda)}$'s are given in \eqref{eqn:l:k}, \eqref{eqn:p_kn:lambda}, and \eqref{eqn:theta:lambda:n}, respectively. \end{proposition} \begin{remark} Proposition \ref{proposition:1} generalizes the resource allocation for the computation offloading in MEC and that for the physical-layer security (see, e.g., \cite{WangTao2011}) over multicarrier systems. 
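As a concrete illustration of the semi-closed forms in \eqref{eqn:l:k} and Lemma \ref{lemma:1}, the sketch below evaluates, for a given $\mv\lambda$, the optimal task split, the per-subcarrier power, and the active-user selection (function names are invented here; the dual update of $\mv\lambda$ itself, e.g. via the ellipsoid method, is omitted):

```python
import math

def optimal_l(lam, alpha, zeta, C, T, l_cap):
    """Minimizer of alpha*zeta*C^3*l^3/T^2 - lam*l over [0, l_cap]."""
    return min(math.sqrt(lam * T**2 / (3 * alpha * zeta * C**3)), l_cap)

def optimal_p(lam, alpha, h, g, B):
    """Per-subcarrier power, following the three cases of Lemma 1."""
    if h <= g:
        return 0.0  # no positive secrecy rate achievable on this subcarrier
    if g == 0:
        # water-filling-like allocation (no eavesdropper channel)
        return max(lam * B / (math.log(2) * alpha) - 1.0 / h, 0.0)
    delta = (h - g) ** 2 + 4 * lam * B * h * g * (h - g) / (math.log(2) * alpha)
    return max((math.sqrt(delta) - (h + g)) / (2 * h * g), 0.0)

def assign_subcarrier(lams, alphas, h_n, g_n, B, T):
    """Active user on subcarrier n: the minimizer of psi_{k,n}."""
    best_k, best_val = 0, float("inf")
    for k in range(len(lams)):
        p = optimal_p(lams[k], alphas[k], h_n[k], g_n[k], B)
        rate = max(math.log2(1 + h_n[k] * p) - math.log2(1 + g_n[k] * p), 0.0)
        psi = alphas[k] * p * T - lams[k] * T * B * rate
        if psi < best_val:
            best_k, best_val = k, psi
    return best_k
```

In the interior case ($h > g > 0$ with strictly positive power), the returned power satisfies the first-order condition $h/(1+hp) - g/(1+gp) = \alpha\ln 2/(\lambda B)$, which is how the discriminant $\Delta_{k,n}^{(\mv\lambda)}$ arises.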
First, when $g_{k,n} = 0,\forall k\in\mathcal K,n\in\mathcal N$, a water-filling-like power allocation is observed in \eqref{eqn:p_kn:lambda} for each user. This corresponds to the energy-efficient multiuser computation offloading design without eavesdropper (see, e.g., \cite{YouHuangChaeKim2017}). Next, it is observed that the optimal power and subcarrier allocations in \eqref{eqn:p_kn:lambda} and \eqref{eqn:theta:lambda:n} have similar structures as those for the secrecy communication over a multicarrier channel in \cite{WangTao2011}, while the difference lies in the determination of the dual variable $\lambda^*$, which controls the energy consumption tradeoff between the local computing and secure offloading in our consideration. \end{remark} \vspace{-1em} \section{Numerical Results}\label{sec:IV} \vspace{-0.5em} In this section, we present numerical results to validate the performance of our proposed design as compared to two benchmark schemes, as well as the conventional design without eavesdropper that serves as a performance upper bound (or energy lower bound). \subsubsection{Secure full offloading} All the $K$ users choose to offload all the task input bits to the AP. In this case, the weighted sum-energy minimization corresponds to solving problem (P1) by setting $l_k = 0, \forall k\in\mathcal K$. \subsubsection{Local computing} All the $K$ users locally compute all the computation tasks, i.e., $l_k = L_k, \forall k\in\mathcal K$. The weighted sum-energy consumption by the $K$ users is $\sum_{k=1}^K \alpha_k \zeta_k C_k^3L_k^3/T^2$. \subsubsection{Conventional design without eavesdropper} The weighted sum-energy minimization corresponds to solving problem (P1) by setting $g_{k,n} = 0, \forall k\in\mathcal K, n\in\mathcal N$. In the simulation, we consider a multicarrier system with $N=64$ subcarriers and $K=4$ users. 
We consider the Rayleigh fading channel model for $h_{k,n}$'s and $g_{k,n}$'s, and assume that the average channel power gains follow the pathloss model $\beta_0 (d/d_0)^{-\xi}$, where $d$ denotes the distance between the respective nodes, $\beta_0 = -30$ dB corresponds to the pathloss at a reference distance of $d_0 = 1$ meter (m), and $\xi = 3.7$ corresponds to the pathloss exponent. We set $\zeta_k = 10^{-28}$ Joule (J)/cycle, $C_k = 10^3$ cycles/bit, $B = 0.3125$ MHz, the noise power spectrum density to be $-105$ dBm/Hz, and {$\epsilon$ to be 10\% of the corresponding pathloss}. We also set $\alpha_k = 1/K, \forall k\in\mathcal K$, and thus we consider the average energy consumption at the $K$ users as the performance metric. We also consider $L_k = L, \forall k\in\mathcal K$, and set the distances from the $K$ users to the AP to be identical as $20$ meters. \begin{figure}[!t] \centering \epsfxsize=1\linewidth \includegraphics[width=5.5cm]{Fig1.eps} \caption{{Average energy consumption at the users versus the number of computation input bits $L$ at each user.}} \label{fig1}\vspace{-2em} \end{figure} Fig. \ref{fig1} shows the average energy consumption of the $K$ users versus the number of computation input bits $L$ at each user, in which the distances from the $K$ users to the eavesdropper are all $20$ meters. It is observed that when $L$ is small (e.g., $L \le 3\times 10^5$ bits), the proposed design, the local computing, and the conventional design without eavesdropper achieve similar energy consumption performance, and outperform the secure full offloading. This is because in this case, the local computing is sufficient to handle the computation tasks. By contrast, when $L$ becomes large (e.g., $L \ge 4\times 10^5$ bits), the proposed design is observed to outperform the secure full offloading and local computing. This shows the importance of joint optimization of local computing and secure offloading. 
In this case, the proposed design is also observed to consume more energy than the conventional design without eavesdropper, owing to the extra power needed to counter eavesdropping. Fig. \ref{fig2} shows the average energy consumption of the $K$ users versus the identical distance from the users to the eavesdropper, in which we set $L = 7\times10^5$ bits. It is observed that as the distance increases, the energy consumption for secure offloading decreases, as the wireless channels to the eavesdropper become weaker. More specifically, the proposed design is observed to have a similar performance as the conventional design without eavesdropper, when the distance is larger than $30$ m. \begin{figure}[!t] \centering \epsfxsize=1\linewidth \includegraphics[width=5.5cm]{Fig2.eps} \caption{{Average energy consumption at the users versus the distance from the users to the eavesdropper.}} \label{fig2}\vspace{-2em} \end{figure} \vspace{-1em} \section{Conclusion} \vspace{-0.5em} This letter proposed to use physical-layer security to secure the computation task offloading in MEC systems. By focusing on a multiuser multicarrier system, we studied a latency-constrained weighted sum-energy minimization problem via jointly optimizing the local computing and secure offloading. How to extend the secure computation offloading to other MEC setups with, e.g., multiple APs and multiple antennas is an interesting future direction worth investigating. \vspace{-0.5em} \footnotesize \bibliographystyle{IEEEtran} \vspace{-0.5em}
\section{Introduction} \label{section:intro} This article deals with the convergence of sequences of a mixed-type Hermite-Pad\'e approximation. Hermite-Pad\'e approximation was introduced by Ch. Hermite \cite{Her} to prove the transcendence of the number $e$, and it has subsequently been used in other problems related to number theory (for a survey of such applications see \cite{walter}). In recent years, these approximations have received increasing attention because of their applicability in other areas such as the theory of non-intersecting Brownian motions \cite{Daems}, the study of multiple orthogonal polynomial ensembles \cite{Kuij}, random matrix theory \cite{Ble,Kuij2}, and the solution of the Degasperis-Procesi (DP) differential equation (see, for example, \cite{2,3,jacek}). This paper is motivated by an approximation problem relevant to the solution of the DP equation. \medskip \subsection{The Degasperis-Procesi equation and an approximation problem.} \noindent In \cite{jacek}, the authors study the following partial differential equation \begin{equation}\label{PDE} u_t-u_{xxt}+(b+1)uu_{x}=bu_xu_{xx}+uu_{xxx}, \quad \quad (x,t)\in \mathbb{R}^{2}. \end{equation} It is known that this equation is completely integrable if and only if $b = 2$ or $b = 3$. The case $b = 2$ is the well-known Camassa-Holm (CH) shallow water equation \cite{4}. The case $b = 3$ is the Degasperis-Procesi (DP) equation, found by Degasperis and Procesi \cite{7}, and subsequently shown by Degasperis, Holm, and Hone \cite{5, 6} to be integrable. All equations in the family (\ref{PDE}) admit (in a weak sense) a type of non-smooth solutions called multipeakons (peakon = peaked soliton). These take the form of a train of peak-shaped interacting waves, \begin{equation}\label{peakons} u(x, t) =\sum_{i=1}^{n}m_i(t)e^{-|x-x_i(t)|}. \end{equation} This \textit{Ansatz} is then substituted into \eqref{PDE}, resulting in a system of ordinary differential equations for the unknown smooth functions $x_i(t)$ and $m_i(t)$. 
\medskip To solve this system, the authors of \cite{jacek} consider a certain boundary value problem, called the discrete cubic string. This problem is the main tool to obtain the explicit formulas, but it is also an interesting problem in its own right from the point of view of operator theory. By the forward cubic string problem we mean the following third-order spectral problem: for a given positive measure $g(y)$, determine the eigenvalues $z$ for which nontrivial continuous eigenfunctions $\phi(y)$ satisfy \begin{equation}\label{CSP} \phi_{yyy}(y) = zg(y) \phi(y), \quad \quad \text{ for } y\in (-1, 1), \quad \quad \phi(-1) = \phi_{y}(-1) = 0, \quad \phi(1) = 0, \end{equation} in a distributional sense. If the singular support of $g(y)$ contains the endpoints, then the values of $\phi$ and its derivatives at $-1,1$ are replaced with the left-hand and right-hand limits, respectively. \medskip This spectral problem is proved in \cite{jacek} to be equivalent under a change of variables to the one appearing in the DP Lax pair. It can be viewed as a non-self-adjoint generalization of the well-known (self-adjoint) string equation \begin{equation}\label{SP} \phi_{yy}= zg(y)\phi(y) \quad \quad \text{ for } y\in (-1, 1), \quad \quad \phi(-1) = 0,\quad \phi(1) = 0, \end{equation} studied by M.G. Krein in the 1950s \cite{krein-string}. \medskip The discrete case arises when $g(y)$ is a discrete measure; in other words, $g =\sum_{i=1}^{N}g_i\delta_{y_i}$. Since the point masses $g_i$ are placed at positions $y_i$ and there are no masses between the points, the eigenfunctions are piecewise linear in $y$ for the ordinary string, and piecewise quadratic for the cubic string. \medskip The discrete (ordinary) string plays a crucial role in finding the general $n$-peakon solution for the CH equation \cite{2,3}. 
The inverse spectral problem consists of determining the positions $y_i$ and masses $g_i$ given the eigenfrequencies and suitable additional information about the eigenfunctions (encoded in the spectral measure of the string or, equivalently, in its Cauchy transform). The solution presented in \cite{2} relies on the work of T. Stieltjes \cite{Sti}, as well as its interpretation by M.G. Krein \cite{krein-string} as a special case of the inverse string problem; it involves Stieltjes continued fractions, the classical moment problem, Pad\'e approximation, and orthogonal polynomials. \medskip The remarkable fact is that in both cases (CH and DP) the associated spectral problems have a finite positive spectrum; this is not so surprising in the case of the ordinary string, which is a self-adjoint problem, but it is quite unexpected for the cubic string, since the problem is non-self-adjoint and there is no \textit{a priori} reason for the spectrum to even be real, much less positive. \medskip Though the inverse cubic string problem is not the main concern of this paper, in \autoref{inverse} we will show how its solution is connected with an approximation problem which we will present shortly using terminology and notation more convenient for our purpose. \medskip Given two measures $\sigma_1, \sigma_2$ whose supports are contained in the real line and have at most one point in common, suppose that the following functions are well defined in the complement of the support of $\sigma_1$ \[\widehat{s}_{1,1}(z) = \int \frac{d\sigma_1(x)}{z-x}, \qquad \widehat{s}_{1,2}(z) = \int \frac{d\sigma_1(x)}{z-x} \int \frac{d\sigma_2(y)}{x-y}.\] The pair $(\widehat{s}_{1,1},\widehat{s}_{1,2})$ constitutes what is known as a Nikishin system of functions (of order 2). Interchanging the roles of $\sigma_1,\sigma_2$, we can define in the same manner the Nikishin system $(\widehat{s}_{2,2},\widehat{s}_{2,1})$. 
\medskip \begin{HP}\label{Nik2} Consider the systems $(\widehat{s}_{1,1},\widehat{s}_{1,2})$ and $(\widehat{s}_{2,2},\widehat{s}_{2,1})$. Then for each $n \in \mathbb{N},$ we seek polynomials $(a_{n,0},a_{n,1}, a_{n,2}),$ not all identically equal to zero, with $\deg a_{n,0}\leq n-1$, $\deg a_{n,1}\leq n-1,$ and $\deg a_{n,2}\leq n$, that satisfy: \begin{align} \left(a_{n,0}-a_{n,1}\widehat{s}_{1,1}+a_{n,2}\widehat{s}_{1,2}\right)(z)=\mathcal{O}(1/z^{n+1}) \label{JLS1},\\ \left(a_{n,1}-a_{n,2}\widehat{s}_{2,2}\right)(z)=\mathcal{O}(1/z). \label{JLS2} \end{align} \end{HP} \medskip In the inverse cubic string problem, the measures $\sigma_1,\sigma_2$ are connected with the Weyl functions of the spectral problem \eqref{CSP}. In the situation considered in \cite{2,3}, the measures are discrete. In the degree of generality presented here, this problem was proposed in \cite{Bertola:CBOPs}. \medskip In the present paper, in Theorem \ref{equiv} we study the existence and uniqueness of the solution of an analogous approximation problem, as well as the location of the zeros of the Nikishin polynomials for systems of order $m\geq 2$; biorthogonality properties satisfied by the polynomials $a_{n,m}$ are given in Theorem \ref{TBIO}; and the limit behavior of the Nikishin polynomials is described in Theorem \ref{CTPm}. \subsection{Nikishin systems.} \label{subsec:NS} In Hermite-Pad\'e approximation, the object of approximation is a system of analytic functions. We restrict our attention to so-called Nikishin systems, which contain, in particular, the functions appearing in equations (\ref{JLS1}) and (\ref{JLS2}). Nikishin systems were first introduced in \cite{Nik}. We will use a more general definition given in \cite{FL4} which is more appropriate for our purpose. \medskip In the sequel, $\Delta$ denotes an interval contained in the real axis. 
By $\mathcal{M}(\Delta)$ we denote the class of all Borel measures $s$ with constant sign whose support consists of infinitely many points and is contained in $\Delta$, and such that $x^\nu \in L_1(s)$ for all $\nu \in \mathbb{Z}_+$. We denote the Cauchy transform of $s$ by \[ \displaystyle \widehat{s}(z) = \int\frac{d s(x)}{z-x}. \] We have \begin{equation}\label{expansion} \displaystyle \widehat{s}(z) \sim \sum_{j=0}^{\infty} \frac{c_j}{z^{j+1}}, \qquad c_j = \int x^j ds(x). \end{equation} If the support of $s$, $\mbox{supp}(s)$, is bounded the series converges in a neighborhood of $\infty$; otherwise, the expansion is asymptotic at $\infty$. That is, for each $k \geq 0$ \[ \lim_{z \to \infty} z^{k+1}\left(\widehat{s}(z) - \sum_{j=0}^{k-1} \frac{c_j}{z^{j+1}}\right) = c_k, \] where the limit is taken along any curve which is non-tangential to $\mbox{supp}(s)$ at $\infty$. \medskip Now, let $\Delta_{\alpha}, \Delta_{\beta}$ be two intervals contained in the real line with at most one common point. Take $\sigma_{\alpha} \in {\mathcal{M}}(\Delta_{\alpha})$ and $\sigma_{\beta} \in {\mathcal{M}}(\Delta_{\beta})$ such that $\widehat{\sigma}_{\beta} \in L_1(\sigma_{\alpha})$. Then, using the differential notation, we define a third measure $\langle \sigma_{\alpha},\sigma_{\beta} \rangle$ as follows \[d \langle \sigma_{\alpha},\sigma_{\beta} \rangle (x) := \widehat{\sigma}_{\beta}(x) d\sigma_{\alpha}(x).\] In consecutive products of measures such as $\langle \sigma_{\gamma}, \sigma_{\alpha},\sigma_{\beta} \rangle :=\langle \sigma_{\gamma}, \langle \sigma_{\alpha},\sigma_{\beta} \rangle \rangle,$ we assume not only that $\widehat{\sigma}_{\beta} \in L_1(\sigma_{\alpha})$ but also $\langle \sigma_{\alpha},\sigma_{\beta} \widehat {\rangle} \in L_1(\sigma_{\gamma})$, where $\langle \sigma_{\alpha},\sigma_{\beta} \widehat{\rangle}$ denotes the Cauchy transform of $\langle \sigma_{\alpha},\sigma_{\beta} {\rangle}$.
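\medskip To illustrate \eqref{expansion} with a concrete measure, take for $s$ the Lebesgue measure on $[-1,1]$. Then \[ \widehat{s}(z) = \int_{-1}^{1}\frac{dx}{z-x} = \log\frac{z+1}{z-1} = \frac{2}{z}+\frac{2}{3z^3}+\frac{2}{5z^5}+\cdots, \] in agreement with the moments $c_j = \int_{-1}^{1} x^j\,dx$, which equal $2/(j+1)$ for even $j$ and vanish for odd $j$; since $\mbox{supp}(s)$ is bounded, the series converges for $|z|>1$.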
\medskip Consider a collection $\Delta_j, j=1,\ldots,m,$ of intervals such that \[ \Delta_j \cap \Delta_{j+1} = \emptyset, \qquad \mbox{or} \quad \Delta_j \cap \Delta_{j+1} = \{x_{j,j+1}\}, \quad j=1,\ldots,m-1, \] where $x_{j,j+1}$ is a single point. Let $(\sigma_1,\ldots,\sigma_m)$ be a system of measures such that $\mbox{\rm Co}(\mbox{\rm supp} (\sigma_j)) = \Delta_j, \sigma_j \in {\mathcal{M}}(\Delta_j), j=1,\ldots,m,$ where $\mbox{\rm Co}(E)$ denotes the convex hull of the set $E$. Denote \begin{equation*} \langle \sigma_{j},\ldots,\sigma_k {\rangle} := \langle \sigma_j,\langle \sigma_{j+1},\ldots,\sigma_k\rangle\rangle\in {\mathcal{M}}(\Delta_j), \qquad 1 \leq j < k\leq m. \end{equation*} If $\Delta_j \cap \Delta_{j+1} = \{x_{j,j+1}\}$ we also assume that $x_{j,j+1}$ is not a mass point of either $\sigma_j$ or $\sigma_{j+1}$. \begin{definition} With the notation above, we say that ${\bf s}=(s_{1,1},\ldots,s_{1,m}) = {\mathcal{N}}(\sigma_1,\ldots,\sigma_m)$, where \begin{equation} \label{eq:ss} s_{1,1} = \sigma_1, \quad s_{1,2} = \langle \sigma_1,\sigma_2 \rangle, \ldots \quad , \quad s_{1,m} = \langle \sigma_1, \sigma_2,\ldots,\sigma_m \rangle, \end{equation} is the \textit{Nikishin system} of measures generated by $(\sigma_1,\ldots,\sigma_m)$. The corresponding Nikishin system of functions will be denoted by ${\bf \widehat{s}}=\left(\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m}\right)$, where $\widehat{s}_{1,j}$ is the Cauchy transform of $s_{1,j}$. \end{definition} This definition extends the one given in \cite{Nik} by allowing the generating measures to have unbounded support and/or consecutive $\Delta_j$ with a common endpoint. That the generating measures have infinite support is not required for the definition of Nikishin systems; however, this condition is frequently used in the proof of the main results.
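\medskip For example, if $\sigma_1$ and $\sigma_2$ are the Lebesgue measures on $\Delta_1=[0,1]$ and $\Delta_2=[2,3]$, respectively, then \[ ds_{1,2}(x) = \widehat{\sigma}_2(x)\,d\sigma_1(x) = \log\frac{2-x}{3-x}\,dx, \qquad x\in[0,1], \] and $(\sigma_1,\langle \sigma_1,\sigma_2\rangle) = \mathcal{N}(\sigma_1,\sigma_2)$ is a Nikishin system of order $2$; note that $\widehat{\sigma}_2$, and hence $s_{1,2}$, has constant (negative) sign on $\Delta_1$.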
If a measure has discrete support its Cauchy transform reduces to a rational function and the arguments used in the proof of some results must be modified. Sometimes the statements of the results themselves become obvious. For example, if that is the case for equations (\ref{JLS1}) and (\ref{JLS2}), the left-hand sides of those relations become identically equal to zero for all $n$ larger than the number of mass points, and the question of convergence of the approximants is trivial. \medskip In what follows, for $1\leq j\leq k\leq m$, we denote \begin{equation} \label{eq:sjk} s_{j,k} := \langle \sigma_j,\sigma_{j+1},\ldots,\sigma_k \rangle, \qquad s_{k,j} := \langle \sigma_k,\sigma_{k-1},\ldots,\sigma_j \rangle. \end{equation} In particular, with the collection of measures $(\sigma_1,\ldots,\sigma_m)$, we can also define the reversed Nikishin system $(s_{m,m},\ldots,s_{m,1}) = \mathcal{N}(\sigma_m,\ldots,\sigma_1)$. \subsection{Statement of main results.} Equations (\ref{JLS1}) and (\ref{JLS2}) suggest the following extensions to the case of Nikishin systems with $m\geq 2$ measures. \begin{definition}[Direct/reversed Hermite-Pad\'e approximation] \label{MTPm1} \noindent Consider the Nikishin systems $\mathcal{N}(\sigma_1, \sigma_2,\ldots, \sigma_{m})$ and $\mathcal{N}(\sigma_m, \sigma_{m-1},\ldots, \sigma_1)$.
Then, for each $n \in \mathbb{N},$ there exist polynomials $a_{n,0},a_{n,1},\ldots, a_{n,m}$, with $\deg a_{n,j}\leq n-1, j=0,1,\ldots,m-1$, and $\deg a_{n,m}\leq n$, not all identically equal to zero, called \textit{direct/reversed (DR) Hermite-Pad\'e polynomials}, that satisfy: \begin{align} \left(a_{n,0}-a_{n,1}\widehat{s}_{1,1}+a_{n,2}\widehat{s}_{1,2}-\cdots+ (-1)^{m}a_{n,m}\widehat{s}_{1,m}\right)(z)=\mathcal{O}(1/z^{n+1})\label{tipoIm}\\ (-1)\left(a_{n,1}-a_{n,m}\widehat{s}_{m,2}\right)(z)=\mathcal{O}(1/z)\label{tipoIIcm}\\ ..........................................................\nonumber\\ (-1)^{m-2}\left(a_{n,m-2}-a_{n,m}\widehat{s}_{m,m-1}\right)(z)=\mathcal{O}(1/z)\label{tipoIIbm}\\ (-1)^{m-1}\left(a_{n,m-1}-a_{n,m}\widehat{s}_{m,m}\right)(z)=\mathcal{O}(1/z)\label{tipoIIam}. \end{align} \end{definition} Alternatively, we could extend the approximation problem as follows. \begin{definition}[Multi-level Hermite-Pad\'e approximation]\label{MTPm2} \noindent Consider the Nikishin system $\mathcal{N}(\sigma_1, \sigma_2,\ldots, \sigma_{m})$. Then, for each $n \in \mathbb{N},$ there exist polynomials $a_{n,0},a_{n,1},\ldots, a_{n,m}$ with $\deg a_{n,j}\leq n-1, j=0,1,\ldots,m-1,$ and $\deg a_{n,m}\leq n$, not all identically equal to zero, called \textit{multi-level (ML) Hermite-Pad\'e polynomials}, that satisfy: \begin{align} \mathcal{A}_{n,0}(z) := \left(a_{n,0}-a_{n,1}\widehat{s}_{1,1}+a_{n,2}\widehat{s}_{1,2}-\cdots+ (-1)^{m}a_{n,m}\widehat{s}_{1,m}\right)(z)=\mathcal{O}(1/z^{n+1})\label{tipoIam}\\ \mathcal{A}_{n,1}(z) :=\left(-a_{n,1}+a_{n,2}\widehat{s}_{2,2}-a_{n,3}\widehat{s}_{2,3}+\cdots+ (-1)^{m}a_{n,m}\widehat{s}_{2,m}\right)(z)=\mathcal{O}(1/z)\label{tipoIbm}\\ ........................................................................................\nonumber\\ \mathcal{A}_{n,m-1}(z) :=\left((-1)^{m-1}a_{n,m-1}+(-1)^{m}a_{n,m}\widehat{s}_{m,m}\right)(z)=\mathcal{O}(1/z)\label{tipoIdm}.
\end{align} \end{definition} Notice that in this formulation the reversed Nikishin system $\mathcal{N}(\sigma_m, \sigma_{m-1},\ldots, \sigma_1)$ does not appear explicitly. On the other hand, the interpolation conditions involve all Nikishin systems at the ``inner'' levels; that is, $\mathcal{N}(\sigma_1, \sigma_2,\ldots, \sigma_{m})$, $\mathcal{N}(\sigma_2, \sigma_3,\ldots, \sigma_{m})$, $\ldots$, $(s_{m,m})=\mathcal{N}(\sigma_m)$. However, in Section 3 we prove \begin{theorem}\label{equiv} For each fixed $n$, the DR Hermite-Pad\'e polynomials and the ML Hermite-Pad\'e polynomials coincide and the vector polynomial $(a_{n,0},a_{n,1},\ldots, a_{n,m})$ is uniquely determined except for constant multiples. Additionally, $\deg a_{n,j}=n-1, j=0,\ldots,m-1,$ and $\deg a_{n,m}=n$. Moreover, the zeros of $a_{n,m-1}$ and $a_{n,m}$ are all simple and lie in $\stackrel{\circ}\Delta_m$ (the interior of $\Delta_m$ with the Euclidean topology of $\mathbb{R}$). \end{theorem} In both cases, finding the polynomials $(a_{n,0},a_{n,1},\dots,a_{n,m})$ reduces to solving a homogeneous linear system of $n(m+1)$ equations (the interpolation conditions) in $n(m+1)+1$ unknowns (the coefficients of the polynomials); therefore, the corresponding system of equations has a non-trivial solution. \medskip For a fixed $n \in \mathbb{N},$ consider the vector $(b_{n,0},\ldots,b_{n,m})$ of ML Hermite-Pad\'e polynomials associated with the reversed Nikishin system $\mathcal{N}(\sigma_m,\ldots,\sigma_1)$.
That is, the vector is nonzero and \begin{align} \left(b_{n,0}-b_{n,1}\widehat{s}_{m,m}+b_{n,2}\widehat{s}_{m,m-1}-\cdots+ (-1)^{m}b_{n,m}\widehat{s}_{m,1}\right)(z)=\mathcal{O}(1/z^{n+1})\label{dtipoIam}\\ \left(-b_{n,1}+b_{n,2}\widehat{s}_{m-1,m-1}-\cdots+ (-1)^{m}b_{n,m}\widehat{s}_{m-1,1}\right)(z)=\mathcal{O}(1/z)\label{dtipoIbm}\\ ........................................................................................\nonumber\\ \left((-1)^{m-1}b_{n,m-1}+(-1)^{m}b_{n,m}\widehat{s}_{1,1}\right)(z)=\mathcal{O}(1/z)\label{dtipoIdm}. \end{align} Let \[ K(x_1,x_2)=\frac{1}{x_1-x_2} \] denote the usual Cauchy kernel. For $m> 2$ we define the Cauchy convolution kernel in the following manner \[ K(x_1,x_m)=\int_{\Delta_2}\int_{\Delta_3}\cdots \int_{\Delta_{m-1} } \frac{\, d \sigma_{m-1}(x_{m-1})\cdots \, d\sigma_3(x_3)d\sigma_2(x_2) } {(x_{m-1}-x_m)(x_{m-2}-x_{m-1})\cdots\,(x_2-x_3)(x_1-x_2) }. \] \begin{theorem}\label{TBIO} The sequences of polynomials $\left(a_{n,m}(z)\right)_{n\in \mathbb{N}}$ and $\left(b_{k,m}(z)\right)_{k\in \mathbb{N}}$ are biorthogonal with respect to the Cauchy convolution kernel and the measures $(\sigma_1,\sigma_2,\ldots, \sigma_m)$; that is, \begin{equation}\label{Bi} \int_{\Delta_1}\int_{\Delta_m} b_{k,m}(x_1)K(x_1,x_m) a_{n,m}(x_m) d\sigma_m(x_m)d\sigma_1(x_1) =h_n \delta_{n,k}, \qquad h_n\neq 0, \end{equation} where $\delta_{k,n} =0, k\neq n,$ and $\delta_{n,n} = 1$. \end{theorem} This type of biorthogonality has been discussed previously in, among others, \cite{Bertola-Bothner,Bertola:CBOPs}.
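\medskip In particular, for $m=2$ the kernel is the Cauchy kernel itself and \eqref{Bi} reads \[ \int_{\Delta_1}\int_{\Delta_2} \frac{b_{k,2}(x_1)\,a_{n,2}(x_2)}{x_1-x_2}\, d\sigma_2(x_2)\,d\sigma_1(x_1) = h_n\,\delta_{n,k}, \qquad h_n \neq 0. \]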
\medskip We are mainly concerned with the convergence properties of the sequence of vector rational functions \[ \left(\frac{a_{n,0}}{a_{n,m}},\ldots,\frac{a_{n,m-1}}{a_{n,m}}\right), \qquad n\in \mathbb{N}.\] Taking into consideration the interpolation conditions and the relation \begin{equation} \label{rel-fund} 0\equiv \widehat{s}_{m,1} + \sum_{j=1}^{m-1}(-1)^{j} \widehat{s}_{m,j+1} \widehat{s}_{1,j} + (-1)^m\widehat{s}_{1, m}, \qquad z \in \mathbb{C} \setminus (\Delta_{1} \cup \Delta_m), \end{equation} whose proof may be found in \cite[Lemma 2.9]{FL4} (and is not difficult to verify), one can expect that under appropriate assumptions the limit should be the system of functions $(\widehat{s}_{m,1},\ldots,\widehat{s}_{m,m})$. This prediction is consistent with the convergence properties of type II and type I Hermite-Pad\'e approximants studied in \cite[Theorem 1]{Bus} (see also \cite{LF3}, \cite{GRS}, \cite{Stahl}) and \cite[Theorem 1.4]{LS}, respectively. For the definition of type I and type II Hermite-Pad\'e approximation see Subsection \ref{tipoI-II}. \medskip Let $\Delta \subset \mathbb{R}$ and $\sigma \in \mathcal{M}(\Delta)$. We say that $\sigma$ satisfies Carleman's condition \cite{Car} if \begin{equation}\label{Carle} \sum_{\nu \geq 1} |c_\nu|^{-1/(2\nu)} = \infty, \end{equation} where $c_\nu = \int x^\nu d\sigma(x)$ denotes the $\nu$-th moment of $\sigma$. \medskip For a measure $\sigma$ supported on an interval of the form $[a,+\infty)$ or $(-\infty,a], a\in \mathbb{R},$ Carleman's condition guarantees that the corresponding moment problem is determinate. Stieltjes' theorem \cite{Sti} states that if the moment problem for $\sigma$ is determinate then the diagonal sequence of Pad\'e approximants of $\widehat{\sigma}$ converges. If $\mathrm{supp}(\sigma)$ is bounded the moment problem is determinate and Markov's theorem follows (see \cite{Mar}).
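\medskip For instance, the measure $d\sigma(x) = e^{-x}dx$ on $[0,+\infty)$ has moments $c_\nu = \int_0^{+\infty} x^\nu e^{-x}dx = \nu!$; since $(\nu!)^{1/(2\nu)} \sim \sqrt{\nu/e}$ as $\nu \to \infty$, the series in \eqref{Carle} diverges and $\sigma$ satisfies Carleman's condition, even though its support is unbounded.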
\begin{theorem}\label{CTPm} For each $n \in \mathbb{N}$, let $a_{n,0},a_{n,1},\ldots, a_{n,m}$ be the collection of ML (or DR) Hermite-Pad\'e polynomials associated with the Nikishin system $\mathcal{N}(\sigma_1, \sigma_2,\ldots, \sigma_{m})$. Suppose that either the sequence of moments of $\sigma_m$ satisfies Carleman's condition or $\Delta_{m-1}$ is a bounded interval contained in $\mathbb{C} \setminus \Delta_{m}$. Then, for $j=0,\ldots,m-1$ \begin{equation}\label{Con01m} \lim_{n\rightarrow \infty} \frac{a_{ n,j}}{a_{ n,m}} = \widehat{s}_{m,j+1}, \end{equation} uniformly on each compact subset $\mathcal{K} \subset \mathbb{C} \setminus \Delta_m$. Moreover \begin{equation}\label{Con00m} \lim_{n\rightarrow \infty}\left[ (-1)^{j}\frac{a_{n,j}(z)}{a_{n,m}(z)}+\sum_{k=j+1}^{m-1}(-1)^{k}\frac{a_{n,k}(z)}{a_{n,m}(z)}\widehat{s}_{j+1,k}(z)+(-1)^m\widehat{s}_{j+1,m}(z)\right]=0 \end{equation} uniformly on each compact subset $\mathcal{K} \subset \mathbb{C} \setminus (\Delta_{j+1} \cup \Delta_m)$. \end{theorem} The limit of the sequence $(a_{n,0}/a_{n,m},\ldots, a_{n,m-1}/a_{n,m}), n\in \mathbb{N},$ is the same as for type I Hermite-Pad\'e approximation of $\mathcal{N}(\sigma_1, \sigma_2,\ldots, \sigma_{m})$ and type II Hermite-Pad\'e approximation of $\mathcal{N}(\sigma_m, \sigma_{m-1},\ldots, \sigma_{1})$. For details, see \cite[Theorem 1.4]{LS} and \cite[Theorem 1]{Bus}. \medskip In \cite{jacek}, the authors study the case where the solutions (\ref{peakons}) are formed by a finite linear combination of single peakon terms $m_i e^{-|x-x_i|}$. In that case, the Cauchy transforms of the spectral measures $(s_{1,1},s_{1,2},s_{2,2},s_{2,1})$ are rational functions and for $n$ sufficiently large we have exact equalities in (\ref{JLS1}) and (\ref{JLS2}).
If we are interested in the case of peakon solutions (\ref{peakons}) formed by an infinite number of peakons, or if we study the cubic string problem for which the weight $g(y)$ is not a discrete measure, we need to deal with the convergence of the corresponding mixed type Hermite-Pad\'e approximants. Thus Theorem \ref{CTPm} opens a new direction of research aimed at the construction of general solutions to the DP equation using peakon approximations. \section{Proof of the main results.} \subsection{Type I and type II Hermite-Pad\'e polynomials.} \label{tipoI-II} We are considering a combination of type I and type II Hermite-Pad\'e polynomials, which have received considerable attention for their many applications. Our construction falls in the category of mixed type Hermite-Pad\'e approximation. \medskip Consider a Nikishin system $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m)$. Fix ${\bf n}=(n_1,\ldots,n_m)\in {\mathbb{Z}}_+^m \setminus \{{\bf 0}\}$, ${\mathbb{Z}}_+=\{0,1,2,\ldots\}.$ There exist polynomials $Q_{\bf n},P_{{\bf n},1},\ldots, P_{{\bf n},m}$, called type II Hermite-Pad\'e polynomials of ${\bf \widehat{s}} = (\widehat{s}_{1,1},\ldots,\widehat{s}_{1,m})$ with respect to ${\bf n}$, that satisfy: \begin{itemize} \item[i)] $\deg Q_{\bf n} \leq n_1+\cdots+n_m$, $Q_{\bf n}\not \equiv 0$, \item[ii)] $ (Q_{\bf n} \widehat{s}_{1,j} - P_{{\bf n},j})(z)=\mathcal{O}(1/z^{n_j+1}), \qquad j=1,\ldots,m.$ \end{itemize} On the other hand, the collection of polynomials $a_{{\bf n},0},\ldots,a_{{\bf n},m}$ is called a type I Hermite-Pad\'e polynomial of ${\bf \widehat{s}}$ with respect to $\bf n$ if: \begin{itemize} \item[iii)] $\deg a_{{\bf n},j} \leq n_j -1, j=1,\ldots,m,$ not all identically equal to zero, \item[iv)] $(a_{{\bf n},0} + \sum_{j=1}^m (-1)^j a_{{\bf n},j}\widehat{s}_{1,j})(z) = \mathcal{O}(1/z^{n_1+\cdots+n_m}).$ \end{itemize} When $m=1$ both definitions coincide with classical diagonal Pad\'e approximation.
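\medskip Indeed, for $m=1$ and ${\bf n}=(n_1)$, condition ii) reduces to $(Q_{\bf n}\widehat{s}_{1,1}-P_{{\bf n},1})(z)=\mathcal{O}(1/z^{n_1+1})$ with $\deg Q_{\bf n}\leq n_1$, so $P_{{\bf n},1}/Q_{\bf n}$ is the $n_1$-th diagonal Pad\'e approximant of $\widehat{s}_{1,1}$; likewise, conditions iii)-iv) express the same type of interpolation at $\infty$ through the linear form $a_{{\bf n},0}-a_{{\bf n},1}\widehat{s}_{1,1}$.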
In contrast with Pad\'e approximation, when $m \geq 2$ the uniqueness (up to constant multiples) of these polynomials is not a trivial matter and was solved positively in \cite{FL3,FL4} for Nikishin systems. However, for arbitrary systems of functions uniqueness does not hold in general. \medskip Notice that the ML or DR Hermite-Pad\'e polynomials combine interpolation conditions of the form (ii) and (iv) and are therefore said to be of mixed type. \subsection{Some auxiliary results and concepts.} The following lemma will be used in the proof of Theorems \ref{equiv} and \ref{CTPm}. Let us define the linear forms with polynomial coefficients \begin{equation} \label{eq:Lj} \mathcal{L}_j := \ell_j + \sum_{k=j+1}^{m} \ell_{k} \widehat{s}_{j+1,k}, \qquad j=0,\ldots,m-1, \qquad \mathcal{L}_m = \ell_m, \end{equation} where the $\ell_j$ are arbitrary polynomials. \begin{lemma} \label{lem:2} Let $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m)$ be given. Then, for each $j=0,\ldots,m-2$ and $r=j+1,\ldots,m-1$ \begin{equation} \label{eq:aux} \mathcal{L}_j + \sum_{k=j+1}^{r} (-1)^{k-j}\widehat{s}_{k,j+1} \mathcal{L}_{k} = \ell_j + (-1)^{r-j}\sum_{k= r+1}^{m} \ell_{k} \langle s_{r+1,k}, s_{r,j+1}\widehat{\rangle}. \end{equation} \end{lemma} \begin{proof} Fix $j, \, 0\leq j \leq m-2,$ and let $r=j+1$.
Then the left-hand side of \eqref{eq:aux} equals \[\mathcal{L}_j - \widehat{s}_{j+1,j+1} \mathcal{L}_{j+1} = \ell_j + \sum_{k=j+2}^m \ell_{k} \left(\widehat{s}_{j+1,k} - \widehat{s}_{j+1,j+1}\widehat{s}_{j+2,k}\right).\] Formula \eqref{rel-fund} applied to the Nikishin system of two measures $\mathcal{N}(s_{j+1,j+1},s_{j+2,k})$ gives \[\langle {s}_{j+1,j+1},s_{j+2,k} \widehat{\rangle} - \widehat{s}_{j+1,j+1}\widehat{s}_{j+2,k} + \langle s_{j+2,k},{s}_{j+1,j+1} \widehat{\rangle} \equiv 0.\] However, $s_{j+1,k} = \langle s_{j+1,j+1},s_{j+2,k}\rangle$, hence \[\mathcal{L}_j - \widehat{s}_{j+1,j+1} \mathcal{L}_{j+1} = \ell_j - \sum_{k=j+2}^m \ell_{k} \langle s_{j+2,k},{s}_{j+1,j+1} \widehat{\rangle}\] as needed. For $j=m-2$ the proof is complete. \medskip Now, fix $j < m-2$ and suppose that \eqref{eq:aux} is true for some $r,\, j+1 \leq r \leq m-2,$ and let us prove that it also holds for $r+1$. Using the induction hypothesis, we obtain \[\mathcal{L}_j + \sum_{k=j+1}^{r+1} (-1)^{k-j}\widehat{s}_{k,j+1} \mathcal{L}_{k} = \mathcal{L}_j + \sum_{k=j+1}^{r} (-1)^{k-j}\widehat{s}_{k,j+1} \mathcal{L}_{k} + (-1)^{r+1-j} \widehat{s}_{r+1,j+1} \mathcal{L}_{r+1}= \] \[ \ell_j + (-1)^{r-j}\sum_{k= r+1}^{m} \ell_{k} \langle s_{r+1,k}, s_{r,j+1}\widehat{\rangle} + (-1)^{r+1-j} \widehat{s}_{r+1,j+1} \mathcal{L}_{r+1} = \] \[\ell_j + (-1)^{r-j}\ell_{r+1} \widehat{s}_{r+1,j+1} + (-1)^{r-j}\sum_{k= r+2}^{m} \ell_{k} \langle s_{r+1,k}, s_{r,j+1}\widehat{\rangle} + (-1)^{r+1-j} \widehat{s}_{r+1,j+1} \mathcal{L}_{r+1} = \] \[ \ell_j + (-1)^{r-j}\sum_{k= r+2}^{m} \ell_{k} \left(\langle s_{r+1,j+1}, s_{r+2,k}\widehat{\rangle} - \widehat{s}_{r+1,j+1} \widehat{s}_{r+2,k}\right) =\] \[\ell_j + (-1)^{r+1-j}\sum_{k= r+2}^{m} \ell_{k} \langle s_{r+2,k}, s_{r+1,j+1} \widehat{\rangle},\] as claimed.
In the second-to-last step, we use the identity \[\langle s_{r+1,k}, s_{r,j+1} {\rangle} = \langle \langle s_{r+1,r+1}, s_{r+2,k} \rangle, s_{r,j+1} {\rangle} = \langle \langle s_{r+1,r+1}, s_{r,j+1} \rangle, s_{r+2,k} {\rangle} = \langle s_{r+1,j+1}, s_{r+2,k} {\rangle}, \] while in the last one we use \[\langle s_{r+1,j+1}, s_{r+2,k}\widehat{\rangle} - \widehat{s}_{r+1,j+1} \widehat{s}_{r+2,k} + \langle s_{r+2,k}, s_{r+1,j+1} \widehat{\rangle} \equiv 0, \] which is formula \eqref{rel-fund} applied to the Nikishin system of two measures $\mathcal{N}({s}_{r+1,j+1},s_{r+2,k})$. \end{proof} \medskip We will make frequent use of \cite[Theorem 1.3]{LS}. For convenience of the reader, we state it here as a lemma. \begin{lemma} \label{reduc} Let $(s_{1,1},\ldots,s_{1,m}) = \mathcal{N}(\sigma_1,\ldots,\sigma_m)$ be given. Assume that there exist polynomials with real coefficients $\ell_0,\ldots,\ell_m$ and a polynomial $w$ with real coefficients whose zeros lie in $\mathbb{C} \setminus \Delta_1$ such that \[\frac{\mathcal{L}_0(z)}{w(z)} \in \mathcal{H}(\mathbb{C} \setminus \Delta_1)\qquad \mbox{and} \qquad \frac{\mathcal{L}_0(z)}{w(z)} = \mathcal{O}\left(\frac{1}{z^N}\right), \quad z \to \infty, \] where $\mathcal{L}_0 := \ell_0 + \sum_{k=1}^m \ell_k \widehat{s}_{1,k} $ and $N \geq 1$. Let $\mathcal{L}_1 := \ell_1 + \sum_{k=2}^m \ell_k \widehat{s}_{2,k} $. Then \begin{equation} \label{eq:3} \frac{\mathcal{L}_0(z)}{w(z)} = \int \frac{\mathcal{L}_1(x)}{(z-x)} \frac{d\sigma_1(x)}{w(x)}. \end{equation} If $N \geq 2$, we also have \begin{equation} \label{eq:4} \int x^{\nu} \mathcal{L}_1(x) \frac{d\sigma_1(x)}{w(x)} = 0, \qquad \nu = 0,\ldots, N -2. \end{equation} In particular, $\mathcal{L}_1$ has at least $N -1$ sign changes in $\stackrel{\circ}{\Delta}_1 $. \end{lemma} Let us advance the following partial result. \begin{lemma} \label{degree} For each fixed $n\in \mathbb{N}$, the ML Hermite-Pad\'e polynomial $a_{n,m}$ has degree $n$ or is identically equal to zero.
\end{lemma} \begin{proof} Fix $n\in \mathbb{N}$ and let $(a_{n,0},\ldots,a_{n,m})$ be the corresponding ML Hermite-Pad\'e polynomials. Everywhere below $\mathcal{A}_{n,j}, j=0,\ldots,m-1,$ are the forms in Definition \ref{MTPm2} and $\mathcal{A}_{n,m} = a_{n,m}$. Let us show that for each $j=0,\ldots,m-1,$ there exists a polynomial $w_{n,j}$ with real coefficients whose zeros lie in $\mathbb{C} \setminus \Delta_{j+1}$ such that \begin{align}\label{partida} \frac{\mathcal{A}_{n,j}(z)}{w_{n,j}(z)}=\mathcal{O}(1/z^{n+1}), \qquad \frac{\mathcal{A}_{n,j}(z)}{w_{n,j}(z)}\in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+1}). \end{align} Due to \eqref{tipoIam}, if $j=0$ one can take $w_{n,0}\equiv 1$. Let us assume that the statement is true for some $j \in \{0,\ldots,m-2\}$ and let us show that it also holds for $j+1$. \medskip Using \eqref{eq:4} in Lemma \ref{reduc} (on $\mathcal{N}(\sigma_{j+1},\ldots,\sigma_m)$) and \eqref{partida}, we obtain that \[0 = \int x^{\nu} \mathcal{A}_{n,j+1}(x) \frac{d \sigma_{j+1}(x)}{w_{n,j}(x)}, \qquad \nu = 0,\ldots,n-1.\] These orthogonality relations imply that $\mathcal{A}_{n,j+1}$ has at least $n$ sign changes on the interval $\Delta_{j+1}$. Let $w_{n,j+1}$ be a polynomial of degree $n$ with simple zeros at points of $\Delta_{j+1}$ where $\mathcal{A}_{n,j+1}$ changes sign; therefore, its zeros belong to $\mathbb{C} \setminus \Delta_{j+2}$. By the way in which $w_{n,j+1}$ was defined, we have \[\frac{\mathcal{A}_{n,j+1}(z)}{w_{n,j+1}(z)}\in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+2}).\] Taking into account that $\deg w_{n,j+1} = n$ and since $\mathcal{A}_{n,j+1}(z) = \mathcal{O}(1/z)$ according to Definition \ref{MTPm2}, we also have ${\mathcal{A}_{n,j+1}(z)}/{w_{n,j+1}(z)} = \mathcal{O}(1/z^{n+1})$ as claimed.
\medskip For $j=m-1$ equation (\ref{partida}) takes the form \begin{equation}\label{MP} \frac{\left(a_{n,m-1}-a_{n,m}\widehat{s}_{m,m}\right)(z)}{w_{n,m-1}(z)} =\mathcal{O}(1/z^{n+1}) \in \mathcal{H}(\mathbb{C} \setminus \Delta_{m}). \end{equation} Using \eqref{eq:4}, we obtain \begin{equation}\label{orto} \int x^\nu a_{n,m}(x) \frac{d s_{m,m}(x)}{w_{n,m-1}(x)} = 0, \qquad \nu =0,\ldots,n-1. \end{equation} If $\deg a_{n,m} < n$, \eqref{orto} implies that $a_{n,m} \equiv 0$ since it would be orthogonal to itself (and the measure has constant sign); otherwise, \eqref{orto} implies that $a_{n,m}$ has at least $n$ sign changes on $\Delta_m$ and since $\deg a_{n,m} \leq n$ it must have exactly $n$ simple zeros inside $\Delta_m$. With this we conclude the proof of the lemma. \end{proof} This lemma shows that $a_{n,m}$ is uniquely determined except for a constant factor (or it is identically equal to zero) because otherwise by linearity we could construct an $a_{n,m}$, not identically equal to zero, of degree smaller than $n$. \medskip We need some formulas verified by the Cauchy transforms of products of measures in a Nikishin system. It is known that for each $\sigma \in {\mathcal{M}}(\Delta),$ where $\Delta$ is an infinite subinterval of the real line different from $\mathbb{R}$, there exists a measure $\tau \in {\mathcal{M}}(\Delta)$ and ${\ell}(z)=a z+b, a = 1/|\sigma|, b \in {\mathbb{R}},$ such that \begin{equation} \label{s22} {1}/{\widehat{\sigma}(z)}={\ell}(z)+ \widehat{\tau}(z), \end{equation} where $|\sigma|$ is the total variation of the measure $\sigma.$ See \cite[Appendix]{KN} for bounded $\Delta$, and \cite[Lemma 2.3]{FL4} when $\Delta$ is unbounded. In particular, \[ {1}/{\widehat{s}_{1,1}(z)} ={\ell}_{1,1}(z)+ \widehat{\tau}_{1,1}(z). \] Sometimes we write $\langle \sigma_{\alpha},\sigma_{\beta} \widehat{\rangle}$ in place of $\widehat{s}_{\alpha,\beta}$. In \cite[Lemma 2.10]{FL4}, several formulas involving ratios of Cauchy transforms were proved. 
The most useful one in this paper establishes that \begin{equation} \label{4.4} \frac{\widehat{s}_{1,k}}{\widehat{s}_{1,1}} = \frac{|s_{1,k}|}{|s_{1,1}|} - \langle \tau_{1,1},\langle s_{2,k},s_{1,1} \rangle \widehat{\rangle} , \qquad 2 \leq k \leq m. \end{equation} \medskip Another important ingredient in the proof of Theorem \ref{CTPm} is the notion of convergence in Hausdorff content. Let $B$ be a subset of the complex plane $\mathbb{C}$. By $\mathcal{U}(B)$ we denote the class of all coverings of $B$ by an at most countable set of disks. Set $$ h(B)=\inf\left\{\sum_{i=1}^\infty |U_i|\,:\,\{U_i\}\in\mathcal{U}(B)\right\}, $$ where $|U_i|$ stands for the radius of the disk $U_i$. The quantity $h(B)$ is called the $1$-dimensional Hausdorff content of the set $B$. \medskip Let $(\varphi_n)_{n\in\mathbb{N}}$ be a sequence of complex functions defined on a domain $D\subset\mathbb{C}$ and $\varphi$ another function defined on $D$ (the value $\infty$ is permitted). We say that $(\varphi_n)_{n\in\mathbb{N}}$ converges in Hausdorff content to the function $\varphi$ inside $D$ if for each compact subset $\mathcal{K}$ of $D$ and for each $\varepsilon >0$, we have \begin{equation} \label{convH} \lim_{n\to\infty} h\{z\in \mathcal{K} : |\varphi_n(z)-\varphi(z)|>\varepsilon\}=0 \end{equation} (by convention $\infty \pm \infty = \infty$). We denote this by writing $h$-$\lim_{n\to \infty} \varphi_n = \varphi$ inside $D$. \medskip If the functions $\varphi_n$ are holomorphic in $D$ and \eqref{convH} takes place, then the convergence is uniform on each compact subset of $D$. This result is proved in \cite[Lemma 1]{Gon}. Therefore, in order to prove \eqref{Con01m}-\eqref{Con00m} it is sufficient to show that the convergence takes place in Hausdorff content in the corresponding region. This is what we will do. \subsection{Proof of Theorem \ref{equiv}.} \begin{proof} \noindent Fix $n \in \mathbb{N}$.
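\medskip For instance, covering an interval $[a,b]\subset\mathbb{R}$ with a single disk centered at its midpoint gives $h([a,b])\leq (b-a)/2$, while every countable set has $1$-dimensional Hausdorff content equal to zero; thus convergence in Hausdorff content disregards what happens on small exceptional sets, such as neighborhoods of the (possibly spurious) poles of the approximants.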
Notice that relations \eqref{tipoIm} and \eqref{tipoIIam} are the same as relations \eqref{tipoIam} and \eqref{tipoIdm}, respectively. First, let us show that Definition \ref{MTPm2} implies Definition \ref{MTPm1}. Using \eqref{eq:aux} with $\mathcal{L}_j = \mathcal{A}_{n,j}, j \in \{0,1,\ldots,m-2\},$ and $r = m-1$, we obtain \[\mathcal{A}_{n,j} + \sum_{k=j+1}^{m-1} (-1)^{k-j}\widehat{s}_{k,j+1} \mathcal{A}_{n,k} = (-1)^j \left(a_{n,j} - a_{n,m} \widehat{s}_{m,j+1}\right). \] However, \eqref{tipoIam}-\eqref{tipoIdm} imply that the left hand side of this equation is $\mathcal{O}(1/z)$ and, therefore, so is the right hand side which is exactly the expression that appears in Definition \ref{MTPm1}. We also obtain the additional relation \begin{equation} \label{additional} (a_{n,0} - a_{n,m}\widehat{s}_{m,1})(z) = \mathcal{O}(1/z), \end{equation} which is redundant with respect to the equations in Definition \ref{MTPm1}, but \eqref{additional} will be needed below. \medskip To prove the converse, we observe that according to formula \eqref{rel-fund} applied to the Nikishin system $\mathcal{N}(\sigma_{j+1}, \sigma_{j+2},\ldots, \sigma_{m})$, for $j=0,\ldots,m-2,$ we have \begin{equation}\label{RM} 0\equiv (-1)^{j }\widehat{s}_{m,j+1} + \sum_{k=j+1}^{m-1}(-1)^{k} \widehat{s}_{m,k+1} \widehat{s}_{j+1,k} + (-1)^{m} \widehat{s}_{j+1, m}, \quad z \in \mathbb{C} \setminus (\Delta_{j+1} \cup \Delta_m). \end{equation} From Definition \ref{MTPm1} let us consider the following equations \begin{align} (-1)^{j}\left(a_{n,j}-a_{n,m}\widehat{s}_{m,j+1}\right)(z)=\mathcal{O}(1/z)\label{E4},\\ ..........................................................\nonumber\\ (-1)^{m-2}\left(a_{n,m-2}-a_{n,m}\widehat{s}_{m,m-1}\right)(z)=\mathcal{O}(1/z)\label{E2},\\ (-1)^{m-1}\left(a_{n,m-1}-a_{n,m}\widehat{s}_{m,m}\right)(z)=\mathcal{O}(1/z)\label{E1}.
\end{align} Let us multiply equation (\ref{E1}) by $\widehat{s}_{j+1,m-1}$, (\ref{E2}) by $\widehat{s}_{j+1,m-2}$, and so on until we arrive at (\ref{E4}) multiplied by $1$. Adding up all the relations so obtained, we arrive at \begin{align} (-1)^{j} a_{n,j}+(-1)^{j+1}a_{n,j+1}\widehat{s}_{j+1,j+1}+\ldots + (-1)^{m-1}a_{n,m-1} \widehat{s}_{j+1,m-1} \nonumber\\ -a_{n,m}\left((-1)^{j}\widehat{s}_{m,j+1}+\sum_{k=j+1}^{m-1}(-1)^{k} \widehat{s}_{m,k+1} \widehat{s}_{j+1,k}\right)=\mathcal{O}(1/z). \end{align} Using relation (\ref{RM}) to replace what is inside the big parentheses, the $j$-th equation in Definition \ref{MTPm2} follows immediately; that is, $\mathcal{A}_{n,j}=\mathcal{O}(1/z).$ Therefore, DR and ML Hermite-Pad\'e polynomials coincide. \medskip Observe that \eqref{additional} and \eqref{tipoIIcm}-\eqref{tipoIIam} imply that once $a_{n,m}$ is found then $a_{n,j}, j=0,\ldots,m-1,$ is uniquely determined as the polynomial part of the asymptotic expansion at $\infty$ of $a_{n,m}\widehat{s}_{m,j+1}$. In particular, this means that if $a_{n,j}^{n-1}$ is the coefficient corresponding to the power $z^{n-1}$ of $a_{n,j}$ and $a_{n,m}^{n}$ the coefficient corresponding to the power $z^{n}$ of $a_{n,m}$ then $$a_{n,j}^{n-1} = a_{n,m}^{n}|s_{m,j+1}|, \qquad j=0,\ldots,m-1. $$ From Lemma \ref{degree} we know that $\deg a_{n,m} = n$; therefore, $\deg a_{n,j} = n-1, j=0,\ldots,m-1,$ and the polynomials $a_{n,j},j=0,\ldots,m-1,$ are uniquely defined up to a constant factor (the same constant as for $a_{n,m}$). Moreover, \eqref{MP} implies that $\frac{a_{n,m-1}}{a_{n,m}}$ is an $n$-th diagonal multipoint Pad\'e approximant of $\widehat{s}_{m,m}$, with $n+1$ interpolation conditions at $\infty$ and another $n$ located at the zeros of $w_{n,m-1}$; in addition, $a_{n,m-1}$ is the $n$-th polynomial of the second kind with respect to the measure $d\sigma_m/w_{n,m-1}$, whose $n-1$ zeros are known to interlace the $n$ simple zeros of $a_{n,m}$. With this we conclude the proof.
\end{proof} The property of the degrees of the polynomials $a_{n,j}, j=0,\ldots,m,$ indicates that the multi-indices $(n,\ldots,n,n+1) \in \mathbb{Z}_+^m \setminus \{{\bf 0}\}$ are normal (for the definition of normality see, for example, the introduction in \cite{FL3}). This is emphasized in the next result. \begin{corollary} \label{misc} For each fixed $n\in \mathbb{N}$ we have \begin{enumerate} \item[(a)] $\mathcal{A}_{n,0}$ has no zero in $\mathbb{C} \setminus \Delta_1$. The coefficient accompanying $1/z^{n+1}$ in the asymptotic expansion \eqref{tipoIam} is different from zero. \item[(b)] $\mathcal{A}_{n,j}, j=1,\ldots,m,$ has exactly $n$ zeros in $\mathbb{C} \setminus \Delta_{j+1}\,\,(\Delta_{m+1} = \emptyset)$; they are all simple and lie in $\stackrel{\circ}\Delta_j$. The coefficient accompanying $1/z$ in the asymptotic expansions \eqref{tipoIbm}-\eqref{tipoIdm} is different from zero. \end{enumerate} \end{corollary} \begin{proof} The forms $\mathcal{A}_{n,j}$ are symmetric with respect to the real line (that is, $\mathcal{A}_{n,j}(\overline{z}) = \overline{\mathcal{A}_{n,j}( {z})}$); therefore, their non-real zeros come in conjugate pairs. For $j=m$, $\mathcal{A}_{n,m}= a_{n,m}$ and the property stated in $(b)$ about the zeros was proved in Theorem \ref{equiv}. Suppose that for some $\overline{\j} \in \{0,\ldots,m-1\}$ any one of the properties stated in $(a)$ or $(b)$ fails. From the proof of Lemma \ref{degree}, we know that $\mathcal{A}_{n,\overline{\j}}, \overline{\j} = 1,\ldots,m-1,$ has at least $n$ sign changes on $\Delta_{\overline{\j}}$, so it has no fewer than $n$ zeros in $\mathbb{C} \setminus \Delta_{\overline{\j}+1}$. Thus, there exists a polynomial $w_{n,\overline{\j}}$ with real coefficients, of degree $\geq n$ (of degree $0$ if $\overline{\j} = 0$), such that \begin{equation}\label{extra}\mathcal{A}_{n,\overline{\j}}/w_{n,\overline{\j}} = \mathcal{O}(1/z^{n+2}) \in \mathcal{H}(\mathbb{C}\setminus \Delta_{\overline{\j}+1}).
\end{equation} Arguing as in the proof of Lemma \ref{degree} it readily follows that for $j = \overline{\j},\ldots,m-1$ one also has \eqref{extra}. This entails that for some polynomial $w_{n,m-1}$, with real coefficients, \[ \int x^\nu a_{n,m}(x) \frac{d s_{m,m}(x)}{w_{n,m-1}(x)} = 0, \qquad \nu =0,\ldots,n. \] This implies that $a_{n,m} \equiv 0$. This is impossible because according to Theorem \ref{equiv} we would also have $a_{n,j} \equiv 0, j=0,\ldots,m-1$. Thus, all properties stated in this corollary must hold. \end{proof} \subsection{Proof of Theorem \ref{TBIO}.} \begin{proof} For $k=0,\ldots,m-1$, set \[\mathcal{B}_{k,j}(z) = (-1)^j b_{k,j}+ (-1)^{j+1}b_{k,j+1}\widehat{s}_{m-j,m-j}+\cdots+ (-1)^{m}b_{k,m}\widehat{s}_{m-j,1}. \] First, let us analyze the case $n < k$. From equations \eqref{dtipoIam} and \eqref{eq:4} it follows that for $ \nu=0,1,\ldots, k-1$ \begin{equation} \label{bio1} 0=\int_{\Delta_m} x_m^{\nu}\,\mathcal{B}_{k,1}(x_m)d\sigma_{m}(x_m). \end{equation} Using consecutively \eqref{dtipoIbm}-\eqref{dtipoIdm} and \eqref{eq:3}, we have \begin{equation} \label{bio2} \mathcal{B}_{k,1}(x_m) = \int_{\Delta_{m-1}} \mathcal{B}_{k,2}(x_{m-1}) \frac{d\sigma_{m-1}(x_{m-1})}{(x_m-x_{m-1})} = \cdots = \end{equation} \[ \int_{\Delta_{m-1}}\ldots\int_{\Delta_2} \mathcal{B}_{k,m-1}(x_{2}) \frac{d\sigma_{2}(x_{2}) }{(x_{3}-x_{2})}\cdots\frac{d\sigma_{m-1}(x_{m-1}) }{(x_{m}-x_{m-1})} = \] \[ \int_{\Delta_{m-1}}\cdots\int_{\Delta_1} (-1)^m b_{k,m}(x_{1}) \frac{d\sigma_{1}(x_{1}) }{(x_{2}-x_{1})}\cdots\frac{d\sigma_{m-1}(x_{m-1}) }{(x_{m}-x_{m-1})} = - \int_{\Delta_1} b_{k,m}(x_{1}) K(x_1,x_m) d\sigma_1(x_1). \] In the last equality, we use Fubini's theorem and the definition of the kernel $K(x_1,x_m)$. Combining \eqref{bio1}, \eqref{bio2} and using the fact that $n < k$, we get $$ \int_{\Delta_m}\int_{\Delta_1} a_{n,m}(x_m)K(x_1,x_m)b_{k,m}(x_1) d\sigma_1(x_1) d\sigma_m(x_m) =0, \qquad n < k.
$$ For $k < n$, the proof is the same as above applied to the forms $\mathcal{A}_{n,j}$ instead of the forms $\mathcal{B}_{k,j}$. Now, suppose that \begin{equation}\label{bio3} \int_{\Delta_m}\int_{\Delta_1} a_{n,m}(x_m)K(x_1,x_m)b_{n,m}(x_1) d\sigma_1(x_1) d\sigma_m(x_m) = 0. \end{equation} Obviously, $\deg b_{k,m} = k, k \geq 0$. This, together with the orthogonality relations and \eqref{bio3}, gives \begin{equation}\label{bio4} \int_{\Delta_1} x_1^n \int_{\Delta_m} a_{n,m}(x_m)K(x_1,x_m) d\sigma_m(x_m) d\sigma_1(x_1) = 0. \end{equation} On the other hand, just as in the proof of \eqref{bio2} we have \begin{equation}\label{bio5} \mathcal{A}_{n,1}(x_1) = \int_{\Delta_m} a_{n,m}(x_m)K(x_1,x_m) d\sigma_m(x_m). \end{equation} Now, from \eqref{bio4} and \eqref{bio5} we obtain \[\int_{\Delta_1} x_1^n \mathcal{A}_{n,1}(x_1) d\sigma_1(x_1) = \int_{\Delta_1} x_1^n \int_{\Delta_m} a_{n,m}(x_m)K(x_1,x_m) d\sigma_m(x_m)\, d\sigma_1(x_1) = 0. \] We also know that \[\int_{\Delta_1} x_1^\nu \mathcal{A}_{n,1}(x_1) d\sigma_1(x_1) = 0,\qquad \nu=0,\ldots,n-1. \] However, these orthogonality relations imply that the form $\mathcal{A}_{n,1}$ has at least $n+1$ sign changes on $\Delta_1$, but this is impossible since in Corollary \ref{misc} we showed that it has exactly $n$ zeros in $\mathbb{C} \setminus \Delta_2$. Therefore, \[\int_{\Delta_m}\int_{\Delta_1} a_{n,m}(x_m)K(x_1,x_m)b_{n,m}(x_1) d\sigma_1(x_1) d\sigma_m(x_m) \neq 0 \] and we conclude the proof. \end{proof} \subsection{Proof of Theorem \ref{CTPm}.} \begin{proof} In the proof of Theorem \ref{equiv} it was indicated that ${a_{n,m-1}}/{a_{n,m}}$ is an $n$-th diagonal multipoint Pad\'e approximation of $\widehat{s}_{m,m}$.
Since either the sequence of moments of $\sigma_m$ satisfies Carleman's condition or $\Delta_{m-1}$ is a finite interval contained in $\mathbb{C} \setminus \Delta_m$, using \cite[Theorem 1]{lago} we have \begin{equation} \label{multipoint} \lim_{n\rightarrow \infty} \frac{a_{ n,m-1}}{a_{ n,m}} = \widehat{s}_{m,m}, \end{equation} uniformly on each compact subset $\mathcal{K} \subset \mathbb{C} \setminus \Delta_m$. \medskip Now, consider the case $j \in \{0,\ldots,m-2\}$. Bearing in mind \eqref{Con01m}, we need to reduce $\mathcal{A}_{ n,j}$ so as to eliminate all $a_{n,k}, k=j+1,\ldots,m-1$. We start by eliminating $a_{n,j+1}$. Consider the ratio $\mathcal{A}_{ n,j}/\widehat{s}_{j+1,j+1}$. Using \eqref{s22} and \eqref{4.4} we obtain that \[ \frac{\mathcal{A}_{ n,j}}{\widehat{s}_{j+1,j+1}} = \left((-1)^j p_{j+1,j+1} a_{ n,j}+ \sum_{k=j+1}^m \frac{(-1)^k|s_{j+1,k}|}{|s_{j+1,j+1}|} a_{n,k} \right) + (-1)^ja_{n,j}\widehat{\tau}_{j+1,j+1} - \] \[\sum_{k=j+2}^m (-1)^ka_{n,k} \langle {\tau}_{j+1,j+1}, \langle s_{j+2,k}, s_{j+1,j+1} \rangle \widehat{\rangle}, \] which has the form of $\mathcal{L}_0$ in Lemma \ref{reduc} (with respect to $\mathcal{N}(\tau_{j+1,j+1},s_{j+2,j+1},\sigma_{j+3},\ldots,\sigma_m)$, when $j+3\leq m$, or $\mathcal{N}(\tau_{j+1,j+1},s_{j+2,j+1})$ if $j=m-2$). Notice that $ {\mathcal{A}_{ n,j}}/({\widehat{\sigma}_{j+1}w_{n,j}}) \in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+1})$, and \[\frac{\mathcal{A}_{ n,j}}{\widehat{s}_{j+1,j+1}w_{n,j}} = \mathcal{O}\left(\frac{1}{z^{n}}\right), \qquad z \to \infty. \] From \eqref{eq:4} of Lemma \ref{reduc}, we obtain that for $\nu = 0,\ldots,n-2$ \[ 0 = \int x^{\nu} \left( (-1)^ja_{ n,j}(x) - \sum_{k=j+2}^m (-1)^k a_{ n,k} \langle s_{j+2,k}, s_{j+1,j+1} \widehat{\rangle}(x) \right)\frac{d\tau_{j+1,j+1}(x)}{w_{n,j}(x)} \] which implies that the function in parentheses under the integral sign has at least $n-1$ sign changes in $\stackrel{\circ}{\Delta}_{j+1} $.
In turn, it follows that there exists a polynomial $\widetilde{w}_{ n,j }, \deg \widetilde{w}_{ n,j } = n-1$, whose zeros are simple and lie in $\stackrel{\circ}{\Delta}_{j+1} $ such that \begin{equation} \label{eq:5} \frac{(-1)^ja_{ n,j} - \sum_{k=j+2}^m (-1)^ka_{ n,k} \langle s_{j+2,k}, s_{j+1,j+1} \widehat{\rangle} }{\widetilde{w}_{n,j }} \in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+2}). \end{equation} \medskip On the other hand, using \eqref{eq:aux} with $r=j+1$ and Definition \ref{MTPm2}, we obtain \begin{align*} (\mathcal{A}_{ n,j}-\widehat{s}_{j+1,j+1}\mathcal{A}_{ n,j+1})(z)=\mathcal{O}\left(\frac{1}{z}\right)\\ =\left((-1)^ja_{ n,j} - \sum_{k=j+2}^m (-1)^k a_{ n,k} \langle s_{j+2,k}, s_{j+1,j+1} \widehat{\rangle}\right)(z). \end{align*} Consequently, \begin{align} \label{eq:6} \frac{(-1)^ja_{n,j} - \sum_{k=j+2}^m (-1)^k a_{ n,k} \langle s_{j+2,k}, s_{j+1,j+1} \widehat{\rangle} }{\widetilde{w}_{n,j }} = \mathcal{O}\left(\frac{1}{z^{n}}\right), \qquad z \to \infty. \end{align} Notice that $\langle s_{j+2,k}, s_{j+1,j+1}{\rangle} = s_{j+2,j+1} $ when $k=j+2$ and $\langle s_{j+2,k}, s_{j+1,j+1}{\rangle} = \langle s_{j+2,j+1},s_{j+3,k}\rangle$ when $j+3\leq k \leq m$ (if any). \medskip Suppose that $j=m-2$. In this case, \eqref{eq:5}-\eqref{eq:6} reduce to \[ \frac{a_{ n,m-2} - a_{ n,m} \widehat{s}_{m,m-1} }{\widetilde{w}_{n,m-2 }} = \mathcal{O}\left(\frac{1}{z^{n}}\right) \in \mathcal{H}(\mathbb{C} \setminus \Delta_{m}). \] In comparison with the case $j=m-1$, we lose one interpolation condition at infinity and we say that ${a_{n,m-2}}/{a_{n,m}}$ is an incomplete diagonal Pad\'e approximation of $\widehat{s}_{m,m-1}$. However, using \cite[Lemma 2]{Bus} we can assert that \[ h- \lim_{n\to \infty} \frac{a_{n,m-2}}{a_{n,m}} = \widehat{s}_{m,m-1}\] inside $\mathbb{C} \setminus \Delta_m$. Since the poles of ${a_{n,m-2}}/{a_{n,m}}$ lie in $\Delta_m$, from \cite[Lemma 1]{Gon} uniform convergence on compact subsets of $\mathbb{C} \setminus \Delta_{m}$ readily follows.
\medskip Incidentally, if $m=2$ and $j=0$, on the right hand side of \eqref{eq:6} we have $\mathcal{O}(1/z^{n+1})$ because $\mathcal{A}_{n,0} - \widehat{s}_{1,1} \mathcal{A}_{n,1} = \mathcal{O}(1/z^2)$. So, in this case $a_{n,0}/a_{n,2}$ is a complete multipoint Pad\'e approximation of $\widehat{s}_{2,1}$. Then $a_{n,2}$ is (also) an $n$-th orthogonal polynomial with respect to the varying measure $d s_{2,1}/\widetilde{w}_{n,0}$ and $a_{n,0}$ is the associated polynomial of the second kind, which implies that $a_{n,0}$ has $n-1$ simple zeros which lie in $ \stackrel{\circ}\Delta_m$ (and interlace the zeros of $a_{n,2}$). For other values of $m$, we discuss the degree and location of the zeros of $a_{n,j}, j=0,\ldots,m-2,$ later. \medskip Let us assume that $m \geq 3$ and $0 \leq j \leq m-3$. Then, $\langle s_{j+2,k}, s_{j+1,j+1} \rangle = \langle s_{j+2,j+1} , s_{j+3,k} \rangle, k=j+3,\ldots,m,$ and we use this equality to modify the corresponding terms in the numerators of the left hand sides of \eqref{eq:5} and \eqref{eq:6}, which become \[(-1)^j a_{n,j} - (-1)^{j+2} a_{n,j+2}\widehat{s}_{j+2,j+1} - \sum_{k=j+3}^m (-1)^k a_{n,k} \langle s_{j+2,j+1} , s_{j+3,k} \widehat{\rangle}.\] Now we must eliminate $a_{n,j+2}$. Using \eqref{s22} and \eqref{4.4}, we obtain \[ \frac{(-1)^j a_{n,j} - (-1)^{j+2} a_{n,j+2}\widehat{s}_{j+2,j+1} - \sum_{k=j+3}^m (-1)^k a_{n,k} \langle s_{j+2,j+1} , s_{j+3,k} \widehat{\rangle}}{\widehat{s}_{j+2,j+1}} = \] \begin{align*} \left((-1)^j p_{j+2,j+1} a_{ n,j} - (-1)^{j+2} a_{n,j+2} - \sum_{k=j+3}^m \frac{(-1)^k|\langle s_{j+2,j+1} , s_{j+3,k} \rangle|}{|s_{j+2,j+1}|} a_{n,k} \right) \\ + (-1)^j a_{n,j}\widehat{\tau}_{j+2,j+1} + (-1)^2 \sum_{k=j+3}^m (-1)^k a_{n,k} \langle {\tau}_{j+2,j+1}, \langle s_{j+3,k}, s_{j+2,j+1} \rangle \widehat{\rangle}. \end{align*} This expression has the form of $\mathcal{L}_0$ in Lemma \ref{reduc}.
Additionally, $$ \frac{(-1)^j a_{n,j} - (-1)^{j+2} a_{n,j+2}\widehat{s}_{j+2,j+1} - \sum_{k=j+3}^m (-1)^k a_{n,k} \langle s_{j+2,j+1} , s_{j+3,k} \widehat{\rangle}}{\widehat{s}_{j+2,j+1}\widetilde{w}_{n,j }}\in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+2})$$ and because of \eqref{eq:6}, as $z\to \infty,$ $$ \frac{(-1)^j a_{n,j} - (-1)^{j+2} a_{n,j+2}\widehat{s}_{j+2,j+1} - \sum_{k=j+3}^m (-1)^k a_{n,k} \langle s_{j+2,j+1} , s_{j+3,k} \widehat{\rangle}}{\widehat{s}_{j+2,j+1}\widetilde{w}_{n,j }}=\mathcal{O}\left(\frac{1}{z^{n-1}}\right).$$ From \eqref{eq:4} in Lemma \ref{reduc}, we obtain that for $\nu = 0,\ldots,n-3$ \[ 0 = \int x^{\nu} \left((-1)^j a_{ n,j} + (-1)^2 \sum_{k=j+3}^m (-1)^k a_{ n,k}(x) \langle s_{j+3,k}, s_{j+2,j+1} \widehat{\rangle}(x) \right)\frac{d\tau_{j+2,j+1}(x)}{\widetilde{w}_{n,j}(x)} \] which implies that the function in parentheses under the integral sign has at least $n-2$ sign changes in $\stackrel{\circ}{\Delta}_{j+2}$. In turn, it follows that there exists a polynomial $\widetilde{w}_{ n,j+1 }, \deg \widetilde{w}_{ n,j+1 } = n-2$, whose zeros are simple and lie in $\stackrel{\circ}{\Delta}_{j+2} $ such that \[ \frac{(-1)^ja_{ n,j} + (-1)^2 \sum_{k=j+3}^m (-1)^k a_{ n,k} \langle s_{j+3,k}, s_{j+2,j+1} \widehat{\rangle}}{\widetilde{w}_{n,j+1 }} \in \mathcal{H}(\mathbb{C} \setminus \Delta_{j+3}). \] On the other hand, according to Definition \ref{MTPm2} and Lemma \ref{lem:2} with $r=j+2$, \begin{align*} (\mathcal{A}_{ n,j}-\widehat{s}_{j+1,j+1}\mathcal{A}_{ n,j+1}+\widehat{s}_{j+2,j+1}\mathcal{A}_{ n,j+2})(z)=\mathcal{O}\left(\frac{1}{z}\right)\\ =\left((-1)^ja_{ n,j} + (-1)^2 \sum_{k=j+3}^m (-1)^k a_{ n,k} \langle s_{j+3,k}, s_{j+2,j+1} \widehat{\rangle}\right)(z). \end{align*} Consequently, \begin{align*} \frac{(-1)^j a_{ n,j} + (-1)^2\sum_{k=j+3}^m (-1)^k a_{ n,k} \langle s_{j+3,k}, s_{j+2,j+1} \widehat{\rangle}}{\widetilde{w}_{n,j+1 }}=\mathcal{O}\left(\frac{1}{z^{n-1}}\right). \end{align*} Notice that $a_{ n,j+2}$ has
been eliminated. If $j=m-3$, combining \cite[Lemma 2]{Bus} and \cite[Lemma 1]{Gon} we obtain that \[\lim_{n\to \infty} \frac{a_{n,m-3}}{a_{n,m}} = \widehat{s}_{m,m-2}\] uniformly on each compact subset of $\mathbb{C}\setminus \Delta_m.$ Otherwise, we have \[\frac{(-1)^ja_{ n,j} + (-1)^2 a_{n,j+3}\widehat{s}_{j+3,j+1} + (-1)^2 \sum_{k=j+4}^m (-1)^k a_{ n,k} \langle s_{j+3,j+1}, s_{j+1,k} \widehat{\rangle}}{\widetilde{w}_{n,j+1 }}=\mathcal{O}\left(\frac{1}{z^{n-1}}\right),\] and we are ready to eliminate $a_{n,j+3}$ by dividing by $\widehat{s}_{j+3,j+1}$. \medskip In general, for fixed $j$, after $m-j-1$ reductions obtained by applying Lemmas \ref{reduc} and \ref{lem:2}, we find that there exists a polynomial denoted $w_{ n,j}^*, \deg w_{ n,j}^* = n-m+j$, whose zeros are simple and lie in $\stackrel{\circ}{\Delta}_{m-1} $ such that \begin{equation} \label{last} \frac{a_{ n,j} - a_{ n,m} \widehat{s}_{m,j+1}}{w_{n,j}^*} = \mathcal{O}\left(\frac{1}{z^{n-m+j+2}}\right) \in \mathcal{H}(\mathbb{C} \setminus \Delta_m), \qquad z \to \infty. \end{equation} Then \cite[Lemma 2]{Bus} and \cite[Lemma 1]{Gon} imply \eqref{Con01m}. Now, \eqref{Con00m} is an immediate consequence of \eqref{RM} and \eqref{Con01m}. \medskip Assuming that $j \in \{0,\ldots,m-1\}$, \eqref{last} implies that \[\frac{a_{ n,j} - a_{ n,m} \widehat{s}_{m,j+1}}{\widehat{s}_{m,j+1} w_{n,j}^*} = \mathcal{O}\left(\frac{1}{z^{n-m+j+1}}\right) \in \mathcal{H}(\mathbb{C} \setminus \Delta_m), \qquad z \to \infty.\] Since \[\frac{a_{ n,j} - a_{ n,m} \widehat{s}_{m,j+1}}{\widehat{s}_{m,j+1}} = a_{ n,j} \widehat{\tau}_{m,j+1} - \left(a_{n,m} - \ell_{m,j+1}a_{n,j}\right),\] using \eqref{eq:4} we obtain \[\int x^{\nu} a_{n,j}(x) \frac{d\tau_{m,j+1}(x)}{w_{n,j}^*(x)} = 0, \qquad \nu = 0,\ldots, n-m+j-1.\] Consequently, $a_{n,j}$ has at least $n-m+j$ sign changes in $\stackrel{\circ}{\Delta}_m$; therefore, $\deg a_{n,j} \geq n-m+j$.
For $j \in \{0,\ldots,m-2\}$ it could occur that $m-j-1$ zeros of $a_{n,j}$ lie in $\mathbb{C} \setminus \Delta_m$. \end{proof} Although there may be a certain number (independent of $n$) of zeros of the polynomials $a_{n,0},\ldots,a_{n,m-2}$ that leave $\Delta_m$, the next corollary shows that in that case they approach $\Delta_m$ as $n\to \infty$. \begin{corollary}\label{cor:CTPm} Under the assumptions of Theorem \ref{CTPm} we have that the accumulation points of the zeros of $a_{ n,j}, j=0,1,\ldots, m-2,$ are in $\Delta_m $. \end{corollary} \begin{proof} Let $\Gamma$ be an arbitrary simple closed Jordan curve contained in $\mathbb{C} \setminus \Delta_m$. Since $\widehat{s}_{m,j+1}$ never equals zero on this domain, the argument principle implies that \[\lim_{n\to \infty}\frac{1}{2\pi i}\int_\Gamma \frac{\left(a_{n,j}(z)/a_{n,m}(z)\right)^\prime}{\left(a_{n,j}(z)/a_{n,m}(z)\right)} dz = \frac{1}{2\pi i}\int_\Gamma \frac{ \widehat{s}^{\prime}_{m,j+1}(z)}{\widehat{s}_{m,j+1}(z)}\, dz = 0.\] But the poles of $a_{n,j}/a_{n,m}$ all lie in $\Delta_m$; consequently, for all sufficiently large $n$ the zeros of these rational functions must lie in the unbounded connected component of the complement of $\Gamma$. This means that as $n \to \infty$ the zeros of $a_{n,j}$ that may lie in $\mathbb{C} \setminus \Delta_m$ must accumulate on $\Delta_m$. \end{proof}
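The zero count in the proof above rests on the argument principle: $\frac{1}{2\pi i}\oint_\Gamma f'/f\,dz$ equals the number of zeros minus the number of poles of $f$ enclosed by $\Gamma$. A minimal numerical sketch (in Python, using a toy rational function, not the actual ratios $a_{n,j}/a_{n,m}$):

```python
import cmath

def winding_number(f, df, center=0.0, radius=2.0, n=20000):
    # Discretize (1/(2*pi*i)) * \oint f'(z)/f(z) dz over the circle
    # |z - center| = radius; by the argument principle this counts
    # zeros minus poles of f inside the circle.
    total = 0.0 + 0.0j
    for k in range(n):
        t = 2.0 * cmath.pi * k / n
        z = center + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2.0 * cmath.pi / n)
        total += df(z) / f(z) * dz
    return total / (2j * cmath.pi)

# Toy rational function r(z) = (z^2 - 1)/(z - 3): two zeros inside
# |z| < 2, its only pole (z = 3) outside, so the count should be 2.
f  = lambda z: (z**2 - 1.0) / (z - 3.0)
df = lambda z: (2.0 * z * (z - 3.0) - (z**2 - 1.0)) / (z - 3.0)**2

count = winding_number(f, df)
```

In the corollary the same computation is applied to $a_{n,j}/a_{n,m}$, whose poles all lie in $\Delta_m$, so a vanishing winding number around $\Gamma$ forces the zeros outside $\Gamma$ for large $n$.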
{\small\bf Abstract}\\ {\small The centrality dependence of rapidity distributions of pions in Pb+Pb reactions can be understood by imposing local energy-momentum conservation in the longitudinal ``fire-streaks'' of excited matter. With no tuning or adjustment to the experimental data, the rapidity distribution of pions produced by a single fire-streak, which we obtained from Pb+Pb collisions, reproduces the shape of the experimental pion rapidity distribution in p+p interactions, measured by the NA49 Collaboration at the same energy. The observed difference in the absolute normalization of this distribution can be explained by the difference in the overall energy balance, induced by baryon stopping and strangeness enhancement phenomena occurring in heavy ion collisions. We estimate the latter effects using a collection of SPS experimental data on $\pi^\pm$, $K^\pm$, net $p$, and $n$ production in p+p and Pb+Pb reactions. Implications of the above findings are discussed.} \section{Introduction} \label{intro} In our recent paper on the implications of energy and momentum ($E$$-$$\vec{p}$) conservation for heavy ion collisions at CERN SPS energies~\cite{1} we formulated a simple model for the longitudinal evolution of the participant system. This model, with some degree of similarity to the fire-streak approach of Refs~\cite{2}, assumed local $E$$-$$\vec{p}$ conservation in the plane perpendicular to the collision axis and, consequently, the formation and independent fragmentation of finite volumes of excited primordial matter (``fire-streaks'') into final state particles. The kinematical characteristics (rapidity, invariant mass) of the fire-streaks were directly given by $E$$-$$\vec{p}$ conservation. We did not address the exact physical nature of the fire-streaks, although to think of color string conglomerates or initial volume elements of quark-gluon plasma would not seem unnatural.
With a simple, three-parameter fire-streak fragmentation function ensuring energy conservation, our model provided a surprisingly good description of the whole centrality dependence of negative pion $dn/dy$ distributions in Pb+Pb reactions at $\sqrt{s_{NN}}=17.3$~GeV, measured by the NA49 experiment~\cite{2.5na49}. A reminder of the model is presented in Fig.~\ref{fig1}, while a compilation of results is shown in Fig.~\ref{fig2}. It is noticeable that the model explains {\em both} the evolution of the absolutely normalized $\pi^-$ yields and of the distribution's shape as a function of centrality. {In Fig.~\ref{fig2-40} we present the result of a first test of our model for Pb+Pb collisions at a lower SPS energy, $\sqrt{s_{NN}}=8.8$~GeV. The overall similarity of this result to that shown in Fig.~\ref{fig2}(b) suggests the applicability of our model to pion production in some extended range of collision energy, 8.8--17.3~GeV at least.} \begin{figure}[p] \begin{center} \hspace*{-0.1cm}\includegraphics[width=10cm]{ideowy_po_armods_pp.eps} \caption{\small A schematic picture of our model of Pb+Pb collisions~\cite{1}. \label{fig1}} \end{center} \end{figure} \begin{figure}[h] \begin{center} \vspace*{-0.1cm} \vspace*{-0.5cm} \hspace*{-4.7cm} \includegraphics[width=15cm]{ab3-rapidity-distributions-fig-6.1.eps}\vspace*{-5.5cm} \hspace*{-0.7cm} \hspace*{-0.5cm} \begin{picture}(10,10) \end{picture} \includegraphics[width=6.5cm]{LargeBins_armods2dash_re.eps} \vspace*{.3cm} \begin{picture}(10,10) \put(-188,72){{\bf\Large{(a)}}} \put(187,72){{\bf\Large{(b)}}} \end{picture} \caption{\small {(a) Rapidity distributions of $\pi^-$ mesons in centrality selected Pb+Pb collisions at top SPS energy, $\sqrt{s_{NN}}=17.3$~GeV, together with our model calculations~\cite{1}; (b) change of width of the $\pi^-$ distribution from peripheral to central Pb+Pb collisions at $\sqrt{s_{NN}}=17.3$~GeV, and its description by our model~\cite{1}.
In panel (b), for peripheral collisions, the experimental data and model curves have been scaled up to fit the same maximum as for central collisions.} \label{fig2}} \vspace*{0.6cm} \end{center} \end{figure} \begin{figure}[h] \begin{center} \vspace*{-0.8cm} \hspace*{-0.7cm} \vspace*{-0.5cm} \includegraphics[width=6.4cm,height=5.3cm]{40GeV_PhysicalRev_CorrectB0.eps} \caption{\small {Change of width of the $\pi^-$ rapidity distribution from peripheral to central Pb+Pb collisions at the energy $\sqrt{s_{NN}}=8.8$~GeV, and its description by our model~\cite{1}. For peripheral collisions, the experimental data and model curves have been scaled up to fit the same maximum as for central collisions. The experimental data points come from the NA49 experiment~\cite{2.5na49}.} \label{fig2-40}} \end{center} \end{figure} We interpreted the success of our simple model as a hint that energy-momentum conservation indeed plays a dominant role in the longitudinal evolution of the system created in A+A collisions at SPS energy. Now we wish to compare the results of our work on Pb+Pb collisions to more elementary p+p reactions. The question of whether the non-perturbative dynamical mechanisms governing the latter are qualitatively similar to or different from those in heavy ion reactions is a long-standing one. Evident differences on the quantitative level, including in particular the enhancement of strangeness production and its energy dependence~\cite{3-40gev}, were interpreted as the onset of deconfinement and the transition to quark-gluon plasma~\cite{rafelski,smes}. On the other hand, qualitative similarities between p+p and Pb+Pb reactions at SPS~\cite{aduszkiewicz} and LHC energies~\cite{6-nature2017} still constitute a challenge for phenomenological models (see, e.g.,~\cite{7-ozvenchuk}). We therefore find it a key question to verify how our simple energy-momentum conservation picture in A+A reactions compares to proton-proton collisions. This paper is organized as follows.
In section~\ref{II} we recall the basic formulae defining our fire-streak fragmentation function. A comparison between the latter and p+p data from the NA49 Collaboration is made in section~\ref{III}. The problem of isospin differences between p+p and Pb+Pb collisions is addressed in section~\ref{III.5}. Section~\ref{IV} includes the analysis of normalization. The implications of our study are discussed in section~\ref{V} and a summary is given in section~\ref{VI}. We note that in all the subsequent parts of this paper, we use the formulation ``fire-streaks'' or ``fire-streak approach'' to refer to our earlier work on A+A collisions~\cite{1}. This is done to underline the basic similarity of our approach to the original fire-streak concept of~\cite{2}. We note that differences exist on the detailed level, which become clearly apparent from the comparison of our model formulation in section~\ref{II} to the cited original works. {Finally, we also note the correspondence of our results to the recent works aimed at explaining the $\Lambda$ and $\overline{\Lambda}$ polarizations observed by the STAR Collaboration in Au+Au collisions~\cite{Starnature} by means of the initial angular momentum generated in a fire-streak-like approach~\cite{Xie2017}. How the initial angular momentum is transferred to $\Lambda$/$\bar{\Lambda}$ baryons is still unclear. It is our hope that the work presented here will make its modest contribution to a better understanding of the applicability of fire-streak-like approaches to the field of high energy reactions.} \section{The fire-streak fragmentation function} \label{II} The model we formulated for ultrarelativistic Pb+Pb collisions, Fig.~\ref{fig1}, assumed the division of the 3D nuclear mass distribution into longitudinal ``bricks'' in the plane perpendicular to the collision axis, and the subsequent formation of fire-streaks moving along the collision axis~\cite{1}.
In the cited reference, fire-streaks of finite transverse size, $1\times 1$~fm$^2$, were considered. Our fire-streak fragmentation function into negative pions was parametrized in the form: \begin{equation} \frac{\mathrm{d}n}{\mathrm{d}y}(y,y_s,E^*_\text{s},m_\text{s}) = A \cdot (E^*_\text{s} - m_\text{s}) \cdot \exp\left(- \frac{[(y - y_s)^2 + \epsilon^2]^\frac{r}{2}}{r \sigma_y^r}\right) \; \; \; . \label{fragmentation} \end{equation} The formula~(\ref{fragmentation}) defines the distribution $\frac{\mathrm{d}n}{\mathrm{d}y}$ of negative pions created by the fragmentation of a single fire-streak. We named it the ``fire-streak fragmentation function'' in order to differentiate it from the ``standard'' parton-to-hadron fragmentation function (FF)~\cite{xxxxx}. In the above, $y$ is the rapidity of the pion, $y_s$ is the fire-streak rapidity given by energy-momentum conservation, $E^*_\text{s}$ is its total energy in its own rest frame (or equivalently, its invariant mass, also given by $E$$-$$\vec{p}$ conservation), and $m_\text{s}$ is the sum of ``cold'' rest masses of the two ``bricks'' forming the fire-streak (given by the collision geometry). $\epsilon$ is a small number ensuring the continuity of derivatives ($\epsilon=0.01$ was used in~\cite{1}). Finally, $A$, $\sigma_y$ and $r$ are the only free parameters of the function~(\ref{fragmentation}). They turned out to be common to all the fire-streaks in all the collisions, and independent of Pb+Pb collision centrality\footnote{Deviations from the mean value of $A$ quoted above were smaller than or comparable to the systematic errors of the experimental data~\cite{2.5na49}. {We note that the numerical values of the parameters discussed in the text apply only to the collision energy $\sqrt{s_{NN}}=17.3$~GeV. The energy dependence of the fire-streak fragmentation function and its parameters, which emerges from the comparison of Figs~\ref{fig2} and~\ref{fig2-40}, is discussed in detail in~Appendix~B.}}.
The fit made in our analysis of the NA49 centrality selected Pb+Pb data {at $\sqrt{s_{NN}}=17.3$~GeV}~\cite{2.5na49} gave $A=0.05598$, $\sigma_y=1.475$, and $r= 2.55$. In this analysis, our modelled pion rapidity distribution in a given centrality selected sample of Pb+Pb collisions of impact parameter $b$ was constructed as the sum of independent fragmentation functions, corresponding to all the constituent fire-streaks: \begin{equation} \frac{\mathrm{d}n}{\mathrm{d}y}(y,b) = \sum_{(i,j)} \frac{\mathrm{d}n}{\mathrm{d}y}\left(~y,~y_{s_{(i,j)}}(b),~E^*_{\text{s}_{(i,j)}}(b),~m_{\text{s}_{(i,j)}}(b)~\right) \; \; \; \; , \label{integrated_dn_dy_approx} \end{equation} where ($i$,$j$) denote the position of a given fire-streak in the transverse ($x,y$) plane of the Pb+Pb collision. Using formula~(\ref{integrated_dn_dy_approx}), our simple model was able to describe the whole centrality dependence of negative pion $dn/dy$ yields as a function of rapidity, including in particular the narrowing of the rapidity distribution from peripheral to central Pb+Pb collisions, as illustrated in Fig.~\ref{fig2}. Now we proceed to proton-proton collisions, where the total available energy is $\sqrt{s}$. We will naively try to apply the function~(\ref{fragmentation}) to pion production in the entire p+p system, with $E^*_{\text{s}}\rightarrow\sqrt{s}$, $m_\text{s}\rightarrow 2m_\text{p}$. The pion rapidity distribution would then be: \begin{equation} \frac{\mathrm{d}n}{\mathrm{d}y} = A \cdot ({\sqrt{s}} - 2m_\text{p}) \cdot \exp\left(- \frac{[y^2 + \epsilon^2]^\frac{r}{2}}{r \sigma_y^r}\right) \; \; \; \; , \label{eq0} \end{equation} where $\sqrt{s}=17.27$~GeV as for Pb+Pb collisions, and $m_\text{p}$ is the proton mass. We note that $y_s=0$ by definition in the p+p c.m. system.
Applying $\epsilon=0.01$ and the same parameters $A=0.05598$, $\sigma_y=1.475$, and $r= 2.55$ which we obtained from the fit to Pb+Pb collisions~\cite{1}, we get explicitly: \begin{equation} \frac{\mathrm{d}n}{\mathrm{d}y} \equiv f(y) = 0.8618 \cdot \exp\left(- \frac{[y^2 + 0.01^2]^\frac{2.55}{2}}{2.55 \cdot 1.475^{~2.55}}\right) \; \; \; \; \; . \label{eq2.3} \end{equation} In the following section we will directly compare the function (\ref{eq2.3}) to the experimental rapidity distribution in p+p collisions. We will consistently refer to $f(y)$ as the ``fire-streak fragmentation function'' in the text below, to underline that it was deduced from Pb+Pb reactions as described above. \section{The negative pion rapidity spectrum} \label{III} \begin{figure}[h] \begin{center} \hspace*{-1.5cm}\includegraphics[width=10cm,height=8cm]{plotpp8.11-2.eps} \vspace*{-0.3cm} \caption{\small Rapidity distribution of negative pions produced in inclusive inelastic p+p collisions at $\sqrt{s}=17.27$~GeV (experimental data points), compared to our function $f(y)$ from Eq.~(\ref{eq2.3}) multiplied by 0.748 (blue curve). The data points come from~\cite{ppaper} (their numerical values and errors are taken from~\cite{spshadrons}; only statistical errors are shown). {At negative rapidity, reflected data points are drawn.}\label{fig3}} \end{center} \end{figure} The NA49 experiment published rapidity distributions of positively and negatively charged pions in inclusive inelastic p+p collisions at $\sqrt{s}=17.27$~GeV~\cite{ppaper}. A comparison of shapes between the experimental $p+p\rightarrow\pi^-X$ distribution and our function $f(y)$ defined by Eq.~(\ref{eq2.3}) above is presented in Fig.~\ref{fig3}. We note that the function $f(y)$ multiplied by a factor of 0.748 matches the experimental data reasonably well.
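As a quick numerical cross-check of Eq.~(\ref{eq2.3}), the sketch below (in Python, using only the parameter values quoted in the text) evaluates the prefactor $A\cdot(\sqrt{s}-2m_\text{p})$ and the function $f(y)$; it reproduces the constant $0.8618$ of Eq.~(\ref{eq2.3}) up to rounding:

```python
import math

# Parameter values quoted in the text (fit to Pb+Pb data in [1]):
A, sigma_y, r, eps = 0.05598, 1.475, 2.55, 0.01
sqrt_s, m_p = 17.27, 0.938272  # GeV; collision energy and proton mass

def f(y):
    # Single fire-streak fragmentation function, Eq. (eq0), specialized
    # to p+p: E*_s -> sqrt(s), m_s -> 2 m_p, y_s = 0.
    prefactor = A * (sqrt_s - 2.0 * m_p)
    return prefactor * math.exp(-((y**2 + eps**2) ** (r / 2.0)) / (r * sigma_y**r))

print(f(0.0))  # ~0.862, the maximum at mid-rapidity (cf. 0.8618 in Eq. (eq2.3))
print(f(1.0))  # the distribution is symmetric, f(1.0) == f(-1.0)
```

The same function, shifted to $y_s$ and with the appropriate $E^*_\text{s}$, $m_\text{s}$, is what Eq.~(\ref{integrated_dn_dy_approx}) sums over all fire-streaks in the Pb+Pb case.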
Several facts are noteworthy: \begin{itemize} \item[(1.)] It is important to underline that the $p+p\rightarrow\pi^-X$ data in Fig.~\ref{fig3} are compared to the {\em single} fire-streak fragmentation function $f(y)$. This is very different from our study of Pb+Pb collisions made in~\cite{1} and shown in {Figs~\ref{fig1}-\ref{fig2-40}.} In this latter case our model calculation was always the {\em sum} of fragmentation functions corresponding to the different fire-streaks, see Eq.~(\ref{integrated_dn_dy_approx}). Summing over many fire-streaks with different values of rapidity $y_s$ affected the width of the overall pion rapidity distribution, which was largest in peripheral and smallest in central Pb+Pb collisions, see {Figs~\ref{fig2}-\ref{fig2-40}}. \newpage \item[(2.)] Taking into account that all the parameters characterizing the function $f(y)$ were directly inherited from the fit to Pb+Pb collisions,\footnote{We note that the numerical values of $\epsilon$, $\sigma_y$ and $r$ as well as the functional shape given by Eq.~(\ref{fragmentation}) were published in~\cite{1} before we started the present analysis.} and taking into account the difference between the two analyses stated in (1.), the overall agreement of the fire-streak functional shape with the experimental p+p data is in our opinion {surprisingly good}. \item[(3.)] {Notwithstanding the above, a deviation of the data points from $f(y)$ can be seen in the central region (most evidently at $y=0$). This goes beyond the statistical errors of the data points. It is always tempting to discuss such differences in the context of systematic errors of the experimental p+p and Pb+Pb data~\cite{ppaper,2.5na49}, but it is more natural to explain them by addressing the limitations of the procedure for the extraction of the fire-streak fragmentation function which we proposed in~\cite{1}.
Indeed, this function is the result of a non-perturbative process and is only approximated, in an effective way, by our simple formula~(\ref{fragmentation}). Therefore, its extraction from experimental distributions in Pb+Pb collisions, each being a sum of independent fire-streaks according to Eq.~(\ref{integrated_dn_dy_approx}), will smear out all the ``subtleties'' present in the shape of $f(y)$, leaving only its basic smoothed form which can be described by Eq.~(\ref{fragmentation}), thus giving the result which we see in Fig.~\ref{fig3}.} \item[(4.)] Finally, a clear discrepancy in the absolute normalization of our function $f(y)$ with respect to the experimental p+p data is evident from~Fig.~\ref{fig3}. This discrepancy, which we attribute to baryon stopping and strangeness enhancement phenomena, will be addressed in section~\ref{IV}. \end{itemize} The situation described above, and most of all the somewhat intriguing fact that the experimental $p+p\rightarrow\pi^-X$ distribution can be described, or approximated, by the same shape as that obtained in $Pb+Pb\rightarrow\pi^-X$ reactions but for the {single fire-streak} (item~(2.)), raises interesting questions. Some of these will be addressed in the subsequent parts of this paper. In the following two sections we will focus on the difference in absolute normalization discussed in item (4.). \section{Correction for isospin in p+p reactions} \label{III.5} \begin{figure}[t] \begin{center} \hspace*{-1.5cm}\includegraphics[width=10cm,height=8cm]{plotpp8.11-1.eps} \vspace*{-0.5cm} \caption{\small Experimental rapidity distributions of positive and negative pions produced in inclusive inelastic p+p collisions at $\sqrt{s}=17.27$~GeV (black), together with our isospin-averaged negative pion distribution, $N$$+$$N$$\rightarrow$$\pi^-$$X$, given by Eq.~(\ref{eq2}) (red).
The experimental data points come from~\cite{ppaper} (their numerical values and errors are taken from~\cite{spshadrons}, and the same relative errors are assumed for the isospin-averaged distribution). {At negative rapidity, reflected data points are drawn.}\label{fig4}} \end{center} \end{figure} \begin{figure}[t] \begin{center} \hspace*{-1.5cm}\includegraphics[width=10cm,height=8cm]{plotpp8.11-3.eps} \vspace*{-0.5cm} \caption{\small Comparison of the negative pion rapidity distribution in inclusive inelastic p+p collisions after correction for isospin effects (red points) to our single fire-streak fragmentation function $f(y)$ from Eq.~(\ref{eq2.3}) (blue curve). The isospin-averaged negative pion distribution $N$$+$$N$$\rightarrow$$\pi^-$$X$ is the same as in Fig.~\ref{fig4}. Our function $f(y)$ is multiplied by 0.812.\label{fig5}} \end{center} \end{figure} As we specified in the preceding section, the single fire-streak fragmentation function agrees with the experimental $p+p\rightarrow\pi^-X$ distribution up to a normalization factor of 0.748. Before addressing what we consider to be the truly dynamical reasons for this difference in normalization, a more ``trivial'' issue must first be addressed. This is the difference in the isospin content of the p+p and Pb+Pb systems. As the Pb~($A$=208,~$Z$=82) nucleus consists of $\frac{Z}{A}$=39.4\% protons and $(1-\frac{Z}{A})$=60.6\% neutrons, the proper reference for the $Pb$$+$$Pb$$\rightarrow$${\pi^-}$$X$ spectrum is not the $p$$+$$p$$\rightarrow$$\pi^-$$X$ distribution, but rather that of negative pions obtained from a properly averaged mixture of p+p, n+p, p+n, and n+n collisions. This problem is non-negligible at SPS energies, where $\pi^+$ and $\pi^-$ yields in p+p collisions differ quite significantly, as shown in Fig.~\ref{fig4}.
We address this issue by estimating the proper isospin-averaged distribution following the approach proposed in~\cite{x}, invoking isospin symmetry in pion production for participating protons and neutrons $\left(~\frac{\mathrm{d}n}{dy}(n\rightarrow\pi^-) = \frac{\mathrm{d}n}{dy}(p\rightarrow\pi^+)~\right)$. On that basis the proper ``nucleon+nucleon'' reference for Pb+Pb collisions reads: \begin{equation} \frac{\mathrm{d}n}{dy}(N+N\rightarrow\pi^-X)= \left(\frac{Z}{A}\right)\cdot \frac{\mathrm{d}n}{dy}(p+p\rightarrow\pi^-X)+ \left(1-\frac{Z}{A}\right)\cdot \frac{\mathrm{d}n}{dy}(p+p\rightarrow\pi^+X) \; \; \; . \label{eq2} \end{equation} The distribution~(\ref{eq2}) is presented in Fig.~\ref{fig4}. In Fig.~\ref{fig5}, its shape is compared to our function $f(y)$ given by Eq.~(\ref{eq2.3}). We consider that after the correction for isospin differences, the agreement of the $N+N\rightarrow\pi^-X$ distribution with $f(y)$ - the latter being inherited from our description of the Pb+Pb reactions as explained in section~\ref{II} - is very good. The normalization factor increases from 0.748 to 0.812. In the next section we will attempt to understand this factor. \vspace*{-1cm} \section{The absolutely normalized pion yield in p+p collisions} \label{IV} In the following we will use energy conservation to estimate whether the agreement apparent in the comparison of the distribution shapes, in Figs~\ref{fig3} and~\ref{fig5}, can be reconciled with the fact that our function $f(y)$, derived from Pb+Pb reactions, gives a total pion yield which is evidently higher than what is measured in p+p collisions. This difference in total pion yield is quantified (after correction for isospin effects) by the normalization factor 0.812 addressed above.
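For illustration, the isospin averaging of Eq.~(\ref{eq2}) amounts to a simple weighting by the proton and neutron fractions of the lead nucleus. The following is a minimal sketch; \texttt{dn\_dy\_pim} and \texttt{dn\_dy\_pip} are hypothetical stand-ins for the interpolated experimental $p+p\rightarrow\pi^\pm X$ rapidity densities:

```python
# Sketch of the isospin averaging of Eq. (eq2).  The functions passed in
# are hypothetical stand-ins for the interpolated experimental
# p+p -> pi- X and p+p -> pi+ X rapidity densities.
Z, A = 82, 208  # lead nucleus: Z/A = 39.4% protons, 60.6% neutrons

def dn_dy_NN_piminus(dn_dy_pim, dn_dy_pip, y):
    """Isospin-averaged 'nucleon+nucleon' pi- reference for Pb+Pb."""
    w_p = Z / A           # proton fraction
    w_n = 1.0 - w_p       # neutron fraction
    return w_p * dn_dy_pim(y) + w_n * dn_dy_pip(y)
```

Because $\pi^+$ yields exceed $\pi^-$ yields in p+p collisions at this energy, the neutron-dominated weighting raises the reference distribution above the raw $p+p\rightarrow\pi^-X$ one.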
We consider it conceivable that specific dynamical mechanisms, similar in p+p and Pb+Pb collisions (the dressing of quarks into hadrons, to quote one example), would lead to a similar {\em shape} of longitudinal distributions of final state particles, while the absolutely normalized final production {\em yields} would be significantly different. Therefore we will consider the differences in the overall energy balance between nucleon-nucleon and nucleus-nucleus reactions, that is, the different repartition of collision energy into the various types of final state particles. We see two main, experimentally well established, phenomena which modify this energy balance. These are: \begin{itemize} \item[(1.)] Baryon stopping~\cite{busza84}, i.e.\ the change in baryon inelasticity between p+p and Pb+Pb collisions; \item[(2.)] Strangeness enhancement, that is, the enhanced production of strange relative to non-strange particles, long interpreted as connected to quark-gluon plasma formation in heavy ion reactions~\cite{rafelski}. \end{itemize} The influence of these two phenomena on the overall energy repartition in p+p and Pb+Pb reactions will be estimated below. We underline that the aim of this section is to provide both estimates in a maximally model-independent way. For this reason, we decide not to use any particular model for baryon stopping or strangeness enhancement, which would need to be validated against experimental data. Instead, we choose to study these issues using experimental data {directly}, whenever available. As will become apparent below, the fact that such a study can be made with reasonable precision speaks very well for the completeness of experimental information at SPS energy.
Consequently, the work described in sections~\ref{IV.1} and~\ref{IV.2} is to be understood as an attempt at a fair comparison of the results of our work on Pb+Pb collisions with experimental p+p data, as we said in section~\ref{intro}, and not as an extension of the fire-streak model of Pb+Pb collisions to p+p reactions. \subsection{Baryon stopping} \label{IV.1} Purely for clarity and conciseness, the discussion below will {\em implicitly} include the correction for isospin differences between p+p and Pb+Pb reactions addressed in section III above. Thus we will assume that formula~(\ref{eq2}) correctly describes the mixture of nucleon+nucleon (p+p, n+p, p+n and n+n) collisions representative of Pb+Pb reactions, and concisely write \begin{equation} \frac{\mathrm{d}n}{dy}(p+p) ~~~~~~\text{instead~of}~~~~~~ \frac{\mathrm{d}n}{dy}(N+N\rightarrow\pi^-X) \label{eqconcise} \end{equation} for the representative, isospin corrected distribution from Eq.~(\ref{eq2}). Consequently, whenever we refer to ``p+p'' (or ``$pp$'') reactions, the representative set of nucleon+nucleon collisions will be meant. Also, we will neglect the small difference between proton and neutron masses. Finally, for simplicity, we will apply the convention $\sqrt{s_{NN}}\equiv\sqrt{s}$ independently of the considered reaction type. Let us now consider the agreement of rapidity distribution shapes shown in Fig.~\ref{fig5}, together with our formulae~(\ref{eq0}) and~(\ref{eq2.3}). Approximately, we can quantify this agreement as follows: \begin{equation} \frac{\mathrm{d}n}{\mathrm{d}y} (p+p) = A_{pp} \cdot (\sqrt{s} - 2m_\text{p}) \cdot \exp\left(- \frac{[y^2 + \epsilon_{AA}^2]^\frac{r_{AA}}{2}}{r_{AA} \cdot \sigma_{y_{AA}}^{r_{AA}}}\right) \; \; \; \; , \label{eq4} \end{equation} where we put explicitly $\epsilon_{AA}=0.01$, $\sigma_{y_{AA}}=1.475$, and $r_{AA}= 2.55$ to underline that these parameters are obtained from $AA$ (Pb+Pb) reactions with no further tuning to p+p collisions.
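For concreteness, Eq.~(\ref{eq4}) can be coded directly with the quoted Pb+Pb shape parameters; this is a minimal numerical sketch with the normalization $A_{pp}$ left free (energies in GeV, $m_\text{p}$ the proton mass):

```python
import math

# Sketch of Eq. (eq4) with the shape parameters fixed on Pb+Pb data
# (epsilon_AA, sigma_AA, r_AA) and the normalization A_pp left free.
EPS_AA, SIGMA_AA, R_AA = 0.01, 1.475, 2.55

def F_AA(y):
    """Shape factor exp(-[y^2 + eps^2]^(r/2) / (r * sigma^r))."""
    return math.exp(-((y * y + EPS_AA**2) ** (R_AA / 2.0))
                    / (R_AA * SIGMA_AA**R_AA))

def dn_dy_pp(y, A_pp, sqrt_s=17.27, m_p=0.938):
    """Negative pion rapidity density in p+p, Eq. (eq4)."""
    return A_pp * (sqrt_s - 2.0 * m_p) * F_AA(y)
```

The factor $F_{AA}(y)$ is, by construction, symmetric around midrapidity and equal to unity at $y=0$ to a very good approximation.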
On the other hand, the normalization parameter $A_{pp}$ is specific to the p+p reactions. We know from Fig.~\ref{fig5} that \begin{equation} A_{pp}=0.812 \cdot A_{_{AA}} \approx 0.8 \cdot A_{_{AA}} \; \; \; \; \; , \label{eq4.5} \end{equation} where $A_{_{AA}}=0.05598$ was obtained from experimental data on Pb+Pb collisions as specified in section~\ref{II}. Let us now consider a central Pb+Pb collision at impact parameter $b\approx 0$. As can be immediately seen from the energy-momentum conservation considerations made in our earlier work~\cite{1}, our model predicts, for such a collision, the formation of fire streaks - all of them built of symmetric ``bricks'' of equal mass and at rest in the collision c.m.\ system ($y_s\approx 0$). For any given fire-streak made of two bricks of equal mass $M$ the resulting $\pi^-$ distribution will be, from Eq.~(\ref{fragmentation}): \begin{equation} \begin{aligned} \frac{\mathrm{d}n}{dy}(A+A\rightarrow\pi^-X) \equiv \frac{\mathrm{d}n}{\mathrm{d}y} (A+A) & = A_{_{AA}} \cdot (E^*_\text{s} - m_\text{s}) \cdot \exp\left(- \frac{[(y - y_s)^2 + \epsilon_{AA}^2]^\frac{r_{AA}}{2}}{r_{AA} \cdot \sigma_{y_{AA}}^{r_{AA}}}\right) \\ & = A_{_{AA}} \cdot (E^*_\text{s} - m_\text{s}) \cdot \exp\left(- \frac{[y^2 + \epsilon_{AA}^2]^\frac{r_{AA}}{2}}{r_{AA} \cdot \sigma_{y_{AA}}^{r_{AA}}}\right) \\ & = A_{_{AA}} \cdot (M/m_\text{p}\cdot\sqrt{s} - 2M) \cdot F_{AA}(y) \\ & = A_{_{AA}} \cdot B_M \cdot (\sqrt{s} -2m_\text{p}) \cdot F_{AA}(y) \; \; \; \; \; , \label{eq5} \end{aligned} \end{equation} where we introduced the shape factor $F_{AA}(y)=\exp\left( - [ y^2 + \epsilon_{AA}^2]^{r_{AA}/2} / (r_{AA} \cdot \sigma_{y_{AA}}^{r_{AA}}) ~ \right)$. We note that $B_M=M/m_\text{p}$ is the baryon number of each ``brick'' (equivalent to the number of participating nucleons per fm$^2$ in the plane perpendicular to the collision axis).
For p+p collisions we rewrite Eq.~(\ref{eq4}) in the same form as~(\ref{eq5}): \begin{equation} \frac{\mathrm{d}n}{dy}(p+p)= A_{pp} \cdot B_M \cdot (\sqrt{s} - 2m_\text{p}) \cdot F_{AA}(y) \; \; \; \; \; , \label{eq6} \end{equation} where $B_M=M/m_\text{p}=1$ for p+p reactions. Let us now relate the energy available for particle production per incoming nucleon pair to the baryon inelasticity $K$~\cite{blume2007} in the final state of the collision: \begin{equation} K=\frac{2\cdot E_{inel}}{\sqrt{s}-2m_\text{p}} \; \; \; , \label{k} \end{equation} where $E_{inel}$ is the total energy lost by the incoming baryon which remains available for particle production. Let us first assume that the available energy repartition between the different types of produced particles (that is, $\pi^{+}$, $\pi^{-}$, $\pi^0$, kaons, etc) remains the same between (isospin-corrected) p+p and Pb+Pb collisions\footnote{This assumption will be re-discussed in sections~\ref{IV.2} and~\ref{IV.3}.}. Then we have for the rapidity distribution of negative pions, respectively from Eqs.~(\ref{eq6}) and~(\ref{eq5}): \begin{equation} dn/dy(p+p) = B_M \cdot \tilde{A} \cdot 2E_{inel} \cdot F_{AA}(y) \; \; \; \; \; , \label{eq7} \end{equation} \begin{equation} dn/dy(A+A) = B_M \cdot \tilde{A} \cdot 2E_{inel} \cdot F_{AA}(y) \; \; \; \; \; , \label{eq8} \end{equation} where $\tilde{A}$ is now assumed to be a {\em constant} factor. From (\ref{eq5}), (\ref{eq6}), (\ref{eq7}), and (\ref{eq8}) we have: \begin{equation} A_{pp}=\tilde{A}\cdot K_{pp} \; \; \; \; \; , \label{eq9} \end{equation} \begin{equation} A_{AA}=\tilde{A}\cdot K_{AA} \; \; \; \; \; . \label{eq10} \end{equation} Thus under the assumption made above, the difference in normalization of pion rapidity distributions in proton-proton reactions and in a single fire-streak from the Pb+Pb collisions (Figs~\ref{fig3} and~\ref{fig5}) would come from differences in final state baryon inelasticity.
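Under the constant energy-sharing assumption, the chain from Eq.~(\ref{k}) to Eqs.~(\ref{eq9})-(\ref{eq10}) reduces to two one-line relations; a minimal sketch (energies in GeV):

```python
# Sketch of Eq. (k) and of the normalization ratio implied by
# Eqs. (eq9)-(eq10) under the constant energy-sharing assumption.
def inelasticity(E_inel, sqrt_s, m_p=0.938):
    """Baryon inelasticity K = 2*E_inel / (sqrt(s) - 2*m_p), Eq. (k)."""
    return 2.0 * E_inel / (sqrt_s - 2.0 * m_p)

def normalization_ratio(K_pp, K_AA):
    """A_pp / A_AA = K_pp / K_AA, from Eqs. (eq9)-(eq10)."""
    return K_pp / K_AA
```

With the inelasticities compiled below ($K_{pp}=0.547$, $K_{AA}=0.78$) this gives $A_{pp}/A_{AA}\approx 0.70$.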
\begin{table} \begin{tabular}{|c|c|c|c|} \hline Reaction & ~~~$p+p\rightarrow(p-\bar{p})X$~~~ & ~~~$p+p\rightarrow(B-\bar{B})X$~~~ & ~~~$Pb+Pb\rightarrow(p-\bar{p})X$~~~ \\ \hline Ref. & ~\cite{pprot,spshadrons} & ~\cite{pprot,spshadrons}& ~\cite{blume2007} \\ \hline $K$ & 0.522 & 0.547 & 0.78 \\ \hline \multicolumn{4}{|c|}{ ratio $K_{pp}/K_{AA}$ = 0.70 } \\ \hline \end{tabular} \caption{Compilation of our knowledge on baryon inelasticity in p+p and central Pb+Pb collisions at $\sqrt{s_{NN}}=17.27$~GeV. The value in the middle column includes both net protons and net neutrons as described in the text.} \label{XXX} \end{table} Here a lot of information is available at SPS energies. For {\em proton-proton reactions}, the common knowledge in the community is that the proton loses about half of its energy in the collision~\cite{strob}, which gives $K_{pp}\approx 0.5$. It is to be noted that the $p+p\rightarrow pX$ distribution, best known experimentally, may be subject to isospin effects if compared to Pb+Pb reactions where more neutrons participate than protons. Both statements can at present be verified with experimental data from the NA49~\cite{pprot} and NA61/SHINE~\cite{NA61_pp} collaborations. In particular, the NA49 reference~\cite{pprot} includes not only precise, very wide acceptance proton and antiproton data, double differential in ($x_F$,$p_T$), but also the neutron $x_F$ distribution at $\sqrt{s}=17.27$~GeV. The cited paper also includes a precise numerical interpolation of the $p$ and $\bar{p}$ data~\cite{spshadrons} which can be used to obtain a model-independent evaluation of net proton inelasticity. We underline again the superiority of using such a wide acceptance interpolation of experimental data rather than relying on a particular model-dependent event generator. We performed this evaluation and obtained $K=0.522$ as shown in Table~\ref{XXX}.
This was done by calculating numerically the average net proton energy in an inclusive inelastic p+p event and consequently obtaining $E_{inel}$ in Eq.~(\ref{k}): \begin{equation} E_{inel}=\frac{\sqrt{s}}{2}-\langle E_\text{net~proton}\rangle \; \; \; \; \; ; \text{~~~~~~~~~~~with} \label{einel} \end{equation} \begin{equation} \langle E_\text{net~proton}\rangle ~=~ \frac{\int_{0}^{1}\int_{0}^{p_T(\mathrm{max})}E(x_F,p_T)\cdot \left(\frac{d^2\sigma}{dx_Fdp_T}\right)_\text{net~proton} {dp_T~dx_F}}{\int_{0}^{1}\int_{0}^{p_T(\mathrm{max})}\left(\frac{d^2\sigma}{dx_Fdp_T}\right)_\text{net~proton} {dp_T~dx_F}} \; \; \; \; \; , \label{e_av_netp} \end{equation} where $E(x_F,p_T)$ is the net proton energy determined by its $x_F$ and $p_T$, and the net proton density is obtained by the subtraction of the quoted interpolated proton and antiproton distributions: \begin{equation} \left(\frac{d^2\sigma}{dx_Fdp_T}\right)_\text{net~proton} ~=~ \left(\frac{d^2\sigma}{dx_Fdp_T}\right)_{p} ~-~ \left(\frac{d^2\sigma}{dx_Fdp_T}\right)_{\overline{p}}\; \; \; \; \; . \label{d2ndxfdpt_netp} \end{equation} We note that the numerical integration in Eq.~(\ref{e_av_netp}) above was performed assuming $p_T(\mathrm{max})=2$~GeV/c, over a grid of 1000~$\times$~1000 sampling points. Subsequently, on the basis of the same data interpolation as well as of the published experimental neutron $x_F$ distribution, we estimated the (net proton)+(net neutron) spectrum assuming that at a given $x_F$ the neutron $p_T$ distribution has the same shape as that of the protons, an assumption that should have only a small influence on the final result. Following the considerations about antineutrons made in~\cite{pprot}, we subtracted the antiproton distribution multiplied by 1.66 (see~\cite{pprot}) in order to obtain the net neutron spectrum. We applied formulae strictly analogous to (\ref{einel})-(\ref{d2ndxfdpt_netp}), as well as the same integration sampling grid and limits.
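The grid integration of Eqs.~(\ref{einel})-(\ref{e_av_netp}) can be sketched as follows. The real net-proton density is the interpolated NA49 data, so \texttt{density} below is a hypothetical stand-in, and the kinematics (longitudinal momentum reconstructed from $x_F$ in the c.m.\ frame) is simplified accordingly:

```python
import math

# Sketch of the grid integration in Eq. (e_av_netp).  `density` is a
# hypothetical stand-in for the interpolated net-proton distribution
# d2sigma/dxF/dpT; sqrt_s in GeV, pT in GeV/c.  The paper uses a
# 1000 x 1000 grid; the default here is coarser for speed.
def mean_energy(density, sqrt_s=17.27, m=0.938,
                pt_max=2.0, n_xf=200, n_pt=200):
    """<E> = (sum of E * density) / (sum of density) over a uniform
    midpoint grid in (xF, pT), as in Eq. (e_av_netp)."""
    num = den = 0.0
    for i in range(n_xf):
        xf = (i + 0.5) / n_xf                 # midpoints of (0, 1)
        for j in range(n_pt):
            pt = (j + 0.5) * pt_max / n_pt
            pl = xf * sqrt_s / 2.0            # longitudinal momentum
            E = math.sqrt(m * m + pl * pl + pt * pt)
            w = density(xf, pt)
            num += E * w
            den += w
    return num / den
```

With the true interpolated density in place of the stand-in, $E_{inel}=\sqrt{s}/2-\langle E\rangle$ then follows from Eq.~(\ref{einel}).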
The final result for net baryons (protons+neutrons) in the final state of the p+p collision is $K_{pp}=0.547$, as shown in Table~\ref{XXX}. We note that this result is already free from isospin effects as it contains both isospin partners. We neglect the contribution of other baryons such as the $\Lambda$, due to their small production cross-section. For {\em central Pb+Pb collisions}, we expect that the lower acceptance coverage of existing experimental distributions may induce a stronger model dependence in the estimate of $K_{AA}$. On the other hand, the net proton distribution in Pb+Pb collisions should be weakly affected by isospin effects due to the mixed isospin content of the lead nucleus. All in all, we consider the estimate provided by C.~Blume~\cite{blume2007}, where the contribution of unmeasured baryons was estimated from the statistical hadron gas model~\cite{4becattini}, to be secure enough for our study. The latter gives $K_{AA}\approx 0.78$ at top SPS energy. From the above, we estimate from~(\ref{eq9}) and (\ref{eq10}): \begin{equation} A_{pp} / A_{AA} = K_{pp} / K_{AA} = 0.547 / 0.78 \approx 0.70 \; \; \; \; \; \; . \label{eq0.68} \end{equation} This is to be compared to $A_{pp}/A_{AA}=0.812$ established from Fig.~\ref{fig5} in section~\ref{III.5}. Thus we see that energy conservation-related considerations connected to changes in baryon inelasticity can explain a part of the normalization difference between the experimental pion rapidity spectrum in inelastic p+p collisions, and that obtained from a single fire-streak in Pb+Pb reactions. However, our result overpredicts the difference which we saw in Fig.~\ref{fig5}: the fire-streak fragmentation function matches the shape of the experimental $p+p\rightarrow\pi^-X$ spectrum, but the difference in the absolute normalization of the two distributions is {\em smaller} than what is expected solely from differences in inelasticity.
\subsection{Strangeness enhancement} \label{IV.2} It is very well known that the production of strange particles (mostly $K$ mesons~\cite{aduszkiewicz}, but also strange baryons~\cite{na57}) is significantly enhanced in Pb+Pb with respect to p+p collisions. In the following we refrain from discussing the dynamical origin of strangeness enhancement, a discussion carried out before in very well known papers~\cite{rafelski, smes}. We focus on the energy balance between strange and non-strange particle production. For simplicity we limit ourselves to pions and kaons, which dominate the yields of produced particles. The changes in baryon inelasticity must also be taken into account. Table~\ref{XXXX} displays our compilation of kaon and pion yields in central Pb+Pb as well as p+p collisions, taken together with mean pion and kaon energies in inelastic p+p events at the top SPS energy. The latter should be commented upon. The presented estimates are in our view completely model-independent as they are based solely on very detailed and wide acceptance two-dimensional ($x_F$,$p_T$) distributions from the NA49 experiment~\cite{ppaper,kaonpaper}. Precise numerical interpolations of these distributions have been included therein and remain available in~\cite{spshadrons}. Our estimates for mean energies are computed directly and numerically from these interpolated experimental distributions.
For this purpose we use a formula similar to~(\ref{e_av_netp}): \begin{equation} \langle E_{i}\rangle ~=~ \frac{\int_{0}^{1}\int_{0}^{p_T(\mathrm{max})}E_{i}(x_F,p_T)\cdot \left(\frac{d^2\sigma}{dx_Fdp_T}\right)_{i} {~~dp_T~dx_F}}{\int_{0}^{1}\int_{0}^{p_T(\mathrm{max})}\left(\frac{d^2\sigma}{dx_Fdp_T}\right)_{i} {~~dp_T~dx_F}} \; \; \; \; \; , \label{e_av_i} \end{equation} where $i$ denotes the particle type $(i=\pi^+,\pi^-,K^+,K^-)$, for which the production cross section $\left(\frac{d^2\sigma}{dx_Fdp_T}\right)_{i}$ has been measured and numerically interpolated over a very large phase space in~\cite{ppaper,kaonpaper}. $E_{i}(x_F,p_T)$ denotes the particle's energy at a given $(x_F,p_T)$, which is uniquely defined by its mass $(m_i=m_\pi~\text{or}~m_K)$. Thanks to the symmetry of the p+p collision we can limit the integration to positive $x_F$ only. We apply $p_T(\mathrm{max})=2$~GeV/c, and a grid of 1000~$\times$~1000 sampling points. Here we wish to emphasize again the value of these precisely interpolated data provided by~\cite{pprot,ppaper,kaonpaper}, as well as the advantage of our model-independent approach with respect to both model simulations and simple analytical parametrizations of experimental data.
\begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Reaction} & \multicolumn{4}{|c|}{total average yield per event}\\ \cline{2-5} & ~~~~~$\pi^+$~~~~~ & ~~~~~$\pi^-$~~~~~~ & ~~~~~$K^+$~~~~~ & ~~~~~$K^-$~~~~~ \\ \hline central Pb+Pb, & 560 & 602 & 97.8 & 54.0 \\ $\sqrt{s_{NN}}=17.27$~GeV & ~\cite{tof} & ~\cite{2.5na49}& ~\cite{2.5na49}& ~\cite{2.5na49} \\ \hline \multirow{3}{*}{inelastic p+p,}& 3.018 & 2.360 & 0.2267& 0.1303 \\ \multirow{3}{*}{$\sqrt{s_{NN}}=17.27$~GeV} & ~\cite{ppaper} &~\cite{ppaper} &~\cite{kaonpaper}& ~\cite{kaonpaper} \\ \cline{2-5} & \multicolumn{4}{|c|}{average energy per particle [MeV]}\\ \cline{2-5} & 905 & 781 & 1388 & 1107 \\ \hline \end{tabular} \caption{Charged pion and kaon yields in central Pb+Pb and inelastic p+p collisions at top SPS energy, put together with our estimates of mean pion and kaon energy in inelastic p+p collisions obtained numerically from interpolated experimental data as discussed in the text. The quoted values are taken from the references cited in the table.} \label{XXXX} \end{table} In the following we will assume \begin{equation} \begin{aligned} \pi^0 & \approx \frac{\pi^++\pi^-}{2} \; \; \; , \\ K^0 + & \overline{K}^0 \approx K^++K^- \; \; \; \end{aligned} \label{XA} \end{equation} for these particles' kinematical spectra and average yields; we consider these rough assumptions to be good enough for our present evaluation. On that basis, from Table~\ref{XXXX} we obtain the average total energy which an inelastic p+p collision will spend on pion, $K^+$, $K^-$ and ($K^0+\overline{K}^0$) production. These we denote as $E(pp\rightarrow\pi)$, where $\pi\equiv (\pi^++\pi^-+\pi^0)$, and then respectively $E(pp\rightarrow K^{+})$, $E(pp\rightarrow K^{-})$, and $E(pp\rightarrow K^{0\overline{0}})$ where $K^{0\overline{0}}\equiv (K^0+\overline{K}^0)$. 
\begin{equation} \begin{aligned} & E(pp\rightarrow\pi) = 3/2 \cdot (3.018 \cdot 905 + 2.360 \cdot 781) = 6862~\mathrm{MeV} \; \; \; , \\ & E(pp\rightarrow K^{+}) = 0.2267 \cdot 1388 = 315~\mathrm{MeV} \; \; \; , \\ & E(pp\rightarrow K^{-}) = 0.1303 \cdot 1107 = 144~\mathrm{MeV} \; \; \; ,\\ & E(pp\rightarrow K^{0\overline{0}}) =~~ 315 + 144 ~~~= 459~\mathrm{MeV} \; \; \; . \end{aligned} \label{XPP} \end{equation} As we consider the above values to be useful for future studies, we include them in Table~\ref{X5} together with values of kaon/pion ratios in p+p and central Pb+Pb reactions extracted from Table~\ref{XXXX} on the basis of assumptions~(\ref{XA}). In addition, we calculate the ratios of energy spent on kaons ($K^+$, $K^-$ and $K^0$$+$$\overline{K}^0$) relative to that spent on pions ($\pi^++\pi^-+\pi^0$) in p+p reactions and in central Pb+Pb collisions. These are respectively: \begin{table} \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}{*}{Reaction} & \multicolumn{4}{|c|}{kaon/pion ratios}\\ \cline{2-5} & ~~~~~$K^+/\pi$~~~~~ & ~~~~~$K^-/\pi$~~~~~~ & \multicolumn{2}{|c|}{$(K^0+\overline{K}^0)/\pi$} \\ \hline central Pb+Pb, & 0.0561 & 0.0310 & \multicolumn{2}{|c|}{0.0871} \\ $\sqrt{s_{NN}}=17.27$~GeV & & & \multicolumn{2}{|c|}{ } \\ \hline \multirow{3}{*}{inelastic p+p,}& 0.0281 & 0.0162 & \multicolumn{2}{|c|}{0.0443} \\ \cline{2-5} \multirow{3}{*}{$\sqrt{s_{NN}}=17.27$~GeV} & \multicolumn{4}{|c|}{average energy per particle type [MeV]}\\ & $E(pp\rightarrow\pi)$ & $E(pp\rightarrow K^{+})$ & $E(pp\rightarrow K^{-})$ & $E(pp\rightarrow K^{0\overline{0}})$ \\ \cline{2-5} & 6862 & 315 & 144 & 459 \\ \hline \end{tabular} \caption{Kaon over pion ratios in central Pb+Pb and inclusive inelastic p+p reactions, and average energies spent on pion and kaon production in a single inelastic p+p event. 
By pion ($\pi$) the summed $\pi$ mesons $(\pi^++\pi^-+\pi^0)$ are meant.} \label{X5} \end{table} \begin{equation} \begin{aligned} R_\text{energy}(pp\rightarrow& K^{+}/\pi) = \frac{E(pp\rightarrow K^{+})}{E(pp\rightarrow\pi)} = \frac{315~\mathrm{MeV}}{6862~\mathrm{MeV}} = 0.04590 \; \; \; ,\\ \end{aligned} \label{XPPB} \end{equation} \begin{equation} \begin{aligned} R_\text{energy}(pp\rightarrow& K^{-}/\pi) = \frac{E(pp\rightarrow K^{-})}{E(pp\rightarrow\pi)} = \frac{144~\mathrm{MeV}}{6862~\mathrm{MeV}} = 0.02099 \; \; \; ,\\ \end{aligned} \label{XPPC} \end{equation} \begin{equation} \begin{aligned} R_\text{energy}(pp\rightarrow& K^{0\overline{0}}/\pi) = \frac{E(pp\rightarrow K^{0\overline{0}})}{E(pp\rightarrow\pi)} = \frac{459~\mathrm{MeV}}{6862~\mathrm{MeV}} = 0.06689 \; \; \; ,\\ \end{aligned} \label{XPPCC} \end{equation} \begin{equation} \begin{aligned} R_\text{energy}(pp\rightarrow&\text{all~kaons}/\pi) = 0.04590 + 0.02099 + 0.06689 = 0.13378 \; \; \; ,\\ \end{aligned} \label{XPPD} \end{equation} \begin{equation} \begin{aligned} & R_\text{energy}(PbPb\rightarrow K^{+}/\pi) = \frac{\frac{K^+}{\pi}(PbPb)}{\frac{K^+}{\pi}(pp)~~~} \cdot R_\text{energy}(pp\rightarrow K^{+}/\pi) = 0.09164 \; \; \; \; \; \; \; ,\\ \end{aligned} \label{XB} \end{equation} \begin{equation} \begin{aligned} & R_\text{energy}(PbPb\rightarrow K^{-}/\pi) = \frac{\frac{K^-}{\pi}(PbPb)}{\frac{K^-}{\pi}(pp)~~~} \cdot R_\text{energy}(pp\rightarrow K^{-}/\pi) = 0.04017 \; \; \; \; \; \; \; ,\\ \end{aligned} \label{XC} \end{equation} \begin{equation} \begin{aligned} & R_\text{energy}(PbPb\rightarrow K^{0\overline{0}}/\pi) = \frac{\frac{K^0+\overline{K}^0}{\pi}(PbPb)}{\frac{K^0+\overline{K}^0}{\pi}(pp)~~~~~} \cdot R_\text{energy}(pp\rightarrow K^{0\overline{0}}/\pi) = 0.13152 \; \; ,\\ \end{aligned} \label{XCC} \end{equation} \begin{equation} \begin{aligned} & R_\text{energy}(PbPb\rightarrow\text{all~kaons}/\pi) = 0.09164 + 0.04017 + 0.13152 = 0.26333 \; \; \; \; \; \; \; .\\ \end{aligned} 
\label{XD} \end{equation} \vspace*{0.1cm} We note that in Eqs.~(\ref{XB})-(\ref{XCC}) above, we make the important assumption that the ratio of the average energy of one kaon to that of one pion remains constant between inelastic p+p and central Pb+Pb collisions. This assumption, which we consider good enough for our present evaluation, calls for experimental verification. However, we note that as this requires a precise knowledge of $d^2n/dydp_T(y,p_T)$ distributions over a very wide range of both $y$ and $p_T$, a model-independent evaluation of these quantities in Pb+Pb collisions seems difficult at the level of accuracy attainable for the p+p data, summarized by Eq.~(\ref{XPP}). Under this assumption we see that the kaon contribution to the overall energy balance, evaluated with respect to that of pion emission, changes by a factor of about two: from $13\%$ in inelastic p+p to $26\%$ in central Pb+Pb reactions. \subsection{Energy balance in particle emission} \label{IV.3} We will now estimate the basic balance of energy in the emission of strange and non-strange particles in the final state of p+p and Pb+Pb reactions. This we will do to investigate whether it can explain the differences in the absolute pion yield between the experimental spectrum in p+p collisions and the fire-streak fragmentation function which we obtained from the Pb+Pb data (sections~\ref{III} and~\ref{III.5}). In p+p collisions, the inelastic energy (difference between baryon energy in the initial and the final state) reads: \begin{equation} E_{inel} \approx \mathrm{(pion~energy)}+\mathrm{(kaon~energy)} \; \; \; ,\\ \label{YPP} \end{equation} where by ``$\approx$'' we mean that we neglect particles not considered in our discussion, i.e., mainly baryon and anti-baryon pairs as well as strange baryons (mainly $\Lambda$). We justify this assumption by the approximate character of our evaluation.
Furthermore, we state that our estimated overall energy balance in inelastic p+p collisions holds within 3.7\% even when we omit the above particles. The corresponding estimate, and a demonstration of even better consistency after the inclusion of non-strange baryon-antibaryon pairs, are presented in Appendix~A. Taking account of the quantitative relations described in sections~\ref{IV.1} and~\ref{IV.2} (formula~(\ref{XPPD})), Eq.~(\ref{YPP}) becomes: \begin{equation} E_{inel}(K=0.547) \approx \mathrm{(pion~energy)}\cdot(1 + 0.13378) \; \; \; ,\\ \label{YPPK} \end{equation} where $K$ is the baryon inelasticity obtained in section~\ref{IV.1}. In central Pb+Pb collisions, from formula~(\ref{XD}) the corresponding energy balance reads: \begin{equation} E_{inel}(K=0.78) \approx \mathrm{(pion~energy)}\cdot(1 + 0.26333) \; \; \; ,\\ \label{YPBPBK} \end{equation} where the left-hand side reflects the change in baryon inelasticity and the factor on the right-hand side the strangeness enhancement. Thus the inelastic energy ``lost'' by one incoming baryon and spent on pion production changes from p+p to central Pb+Pb collisions. It increases due to the enhancement of baryon inelasticity but decreases due to the different sharing between pions and particles containing strange quarks. The overall change of energy spent on pion production can thus be described as: \begin{equation} \frac{\text{Energy~spent~on~pions~in~Pb+Pb}}{\text{Energy~spent~on~pions~in~p+p}} = \frac{0.78 / (1 + 0.26333)}{0.547 / (1 + 0.13378)} = 1.280 = \frac{1}{0.781} \approx \frac{1}{0.70}\cdot 0.9 \; , \label{XUXU} \end{equation} where the last transformation states explicitly the terms induced by the change in inelasticity (section~\ref{IV.1}) and by the strangeness enhancement (section~\ref{IV.2}).
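The numerical chain of sections~\ref{IV.1}-\ref{IV.3}, from the values compiled in Tables~\ref{XXXX} and~\ref{X5} to Eq.~(\ref{XUXU}), can be verified in a few lines; this is a check of the arithmetic only, with all inputs being the values quoted above:

```python
# Arithmetic check of Eqs. (XPP), (XPPB)-(XD) and (XUXU), using the
# yields and mean energies quoted in the text and the assumptions (XA).
E_pi = 1.5 * (3.018 * 905 + 2.360 * 781)  # pi+ + pi- + pi0  [MeV]
E_Kp = 0.2267 * 1388                      # K+
E_Km = 0.1303 * 1107                      # K-
E_K0 = E_Kp + E_Km                        # K0 + K0bar, from (XA)

R_pp = (E_Kp + E_Km + E_K0) / E_pi        # Eq. (XPPD), ~0.134

# Scale each p+p energy ratio by the Pb+Pb over p+p kaon-to-pion
# yield ratio, Eqs. (XB)-(XCC).
R_AA = ((0.0561 / 0.0281) * E_Kp / E_pi
        + (0.0310 / 0.0162) * E_Km / E_pi
        + (0.0871 / 0.0443) * E_K0 / E_pi)  # Eq. (XD), ~0.263

# Eq. (XUXU): combined effect of inelasticity and strangeness.
K_pp, K_AA = 0.547, 0.78
ratio = (K_AA / (1.0 + R_AA)) / (K_pp / (1.0 + R_pp))  # ~1.280
A_pp_over_A_AA = 1.0 / ratio                            # ~0.781
```

The resulting $A_{pp}/A_{AA}\approx 0.781$ is the value compared in the next subsection with the factor 0.812 read off Fig.~\ref{fig5}.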
\subsection{Normalization of pion emission in p+p and Pb+Pb collisions} \label{IV.4} Now let us calculate the relative normalization of the pion rapidity distribution in p+p collisions, with respect to that of the fire-streak fragmentation function obtained from the Pb+Pb data (Fig.~\ref{fig5}). Eqs.~(\ref{YPPK}),~(\ref{YPBPBK}) quantify the fact that the amount of inelastic energy available for particle production, and its sharing between the emission of particles containing and not containing strange quarks, are both different in p+p and Pb+Pb collisions. Consequently, Eqs.~(\ref{eq5})-(\ref{eq6}),~(\ref{eq7})-(\ref{eq8}), and (\ref{eq9})-(\ref{eq10}) get rewritten in a new form which explicitly takes both issues into account. This gives respectively the formulae~(\ref{eq15})-(\ref{eq16}), (\ref{eq17})-(\ref{eq18}), and~(\ref{eq19})-(\ref{eq20}), presented below. \begin{equation} \begin{aligned} & \frac{dn}{dy}(Pb+Pb) = A_{_{AA}}(K_{_{AA}},EnergySharing_{_{AA}}) \cdot B_M \cdot (\sqrt{s} -2m_\text{p}) \cdot F_{_{AA}}(y) \; , \end{aligned} \label{eq15} \end{equation} \vspace*{-0.9cm} \begin{equation} \begin{aligned} & \frac{dn}{dy}(p+p) = A_{pp}(K_{pp},EnergySharing_{pp}) \cdot B_M \cdot (\sqrt{s} - 2m_\text{p}) \cdot F_{_{AA}}(y) \; \; \; , \\ \end{aligned} \label{eq16} \end{equation} \begin{equation} \begin{aligned} & \frac{dn}{dy}(p+p) = B_M \cdot \tilde{\tilde{A}} \cdot EnergySharing_{pp} \cdot 2E_{inel} \cdot F_{_{AA}}(y) \; \; \; , \\ \end{aligned} \label{eq17} \end{equation} \begin{equation} \begin{aligned} & \frac{dn}{dy}(Pb+Pb) = B_M \cdot \tilde{\tilde{A}} \cdot EnergySharing_{_{AA}} \cdot 2E_{inel} \cdot F_{_{AA}}(y) \; \; \; , \\ \end{aligned} \label{eq18} \end{equation} \begin{equation} \begin{aligned} & A_{pp}(K_{pp},EnergySharing_{pp}) = \tilde{\tilde{A}}\cdot EnergySharing_{pp}\cdot K_{pp} \; \; \; , \\ \end{aligned} \label{eq19} \end{equation} \begin{equation} \begin{aligned} & A_{_{AA}}(K_{AA},EnergySharing_{_{AA}}) = \tilde{\tilde{A}}\cdot 
EnergySharing_{_{AA}}\cdot K_{_{AA}} \; \; \; . \\ \end{aligned} \label{eq20} \end{equation} In the formulae above, the normalization of the pion $\frac{dn}{dy}$ distribution is now a function both of the baryon inelasticity $K$ and of the sharing of the available inelastic energy. The quantity $EnergySharing$ describes the part of this available energy spent on pions. $\tilde{\tilde{A}}$ is a constant factor. Following section~\ref{IV.3}, $EnergySharing$ is respectively: \begin{equation} \begin{aligned} & EnergySharing_{pp} \approx 1/(1 + 0.13378)\; \; \; , ~~~~\text{from Eq.~(\ref{YPPK}), for p+p collisions},\\ & EnergySharing_{_{AA}} \approx 1/(1 + 0.26333)\; \; \; , ~~~\text{ from Eq.~(\ref{YPBPBK}), for Pb+Pb collisions}. \end{aligned} \label{eq21} \end{equation} Thus the normalization ratio for the two distributions (\ref{eq16}) and (\ref{eq15}) is \begin{equation} \begin{aligned} & \frac{A_{pp}}{A_{_{AA}}}= \frac{EnergySharing_{pp}\cdot K_{pp}}{EnergySharing_{_{AA}}\cdot K_{_{AA}}}= 0.781 \; \; \; , \end{aligned} \label{eq22} \end{equation} which is a direct reflection of Eq.~(\ref{XUXU}). Let us underline that the normalization ratio of 0.781 given above is the {\em only difference} between the function with which we approximated the $\frac{dn}{dy}$ distribution of negative pions in p+p reactions (Eq.~(\ref{eq4}), consequently~(\ref{eq6}) and (\ref{eq16})) and the one which we obtained for the fire-streak in Pb+Pb collisions (Eq.~(\ref{fragmentation}), consequently~(\ref{eq5}) and~(\ref{eq15})). This value of 0.781 has been deduced solely from our estimates of the energy balance between pion, kaon and baryon emission in p+p and in Pb+Pb events. These latter estimates have been obtained directly from interpolated experimental data on $\pi^\pm$, $K^\pm$, net $p$, and $n$ production, with only a minimal set of basic assumptions in sections~\ref{IV.1},~\ref{IV.2}, and~\ref{IV.3}.
The value of 0.781 is now to be compared with the factor 0.812 which we found from the comparison of our function $f(y)$ to the isospin-corrected $\pi^-$ rapidity distribution in Fig.~\ref{fig5}, and subsequently stated in Eq.~(\ref{eq4.5}). This gives us a 4\% agreement which we consider very good, taking account of the uncertainties inherent in our study.\footnote{We note that the latter include both our assumptions and approximations and the uncertainties of the experimental p+p and Pb+Pb data which we used. For instance, the systematic errors of the experimental pion $dn/dy$ yields in Pb+Pb collisions reach 5-10\% depending on centrality~\cite{2.5na49}.} From the above, we find it justified to conclude that the agreement of shapes shown in Fig.~\ref{fig5} can now be re-interpreted as a {\em full overall consistency} of the experimental $\pi^-$ rapidity distribution in p+p collisions with the {\em absolutely normalized} fire-streak fragmentation function. Indeed, directly from Eqs.~(\ref{eq4}) and~(\ref{eq22}), the following becomes true: \hspace*{-0.0cm} \begin{center} {Experimental $\pi^-$ rapidity distribution in p+p collisions \\ $\approx$~ fire streak fragmentation function into $\pi^-$} \end{center} \vspace*{-1cm} \begin{equation} \label{eqtext} \end{equation} \vspace*{-0.1cm} \hspace*{-0.6cm} - up to the 4\% accuracy in normalization mentioned above.\ This occurs once the correction for isospin effects is taken into account (Eq.~(\ref{eq2})), and another correction for strangeness enhancement and baryon inelasticity differences between p+p and Pb+Pb reactions is included in the comparison (Eq.~(\ref{eq22})). We will further discuss these issues in section~\ref{V}.
\subsection{Comment on Eq.~(\ref{eq15}) } \label{IV.5} For completeness and clarity of the discussion in section~\ref{V}, below we rewrite formula~(\ref{eq15}) in the form evident from Eq.~(\ref{eq20}): \begin{equation} \frac{dn}{dy}= \tilde{\tilde{A}} \; \cdot \; EnergySharing \; \cdot \; K \; \cdot \; B_M \; \cdot \; (\sqrt{s} -2m_\text{p}) \; \cdot \; \exp\left( - \frac{[ y^2 + \epsilon^2]^\frac{r}{2}}{r \cdot \sigma_{y}^{r}} ~ \right)\; \; . \label{eq23} \end{equation} In the above we dropped all the reaction-specific indices and wrote explicitly the shape factor introduced in Eq.~(\ref{eq5}). The parameters $\epsilon$, $\sigma_y$, and $r$ are obtained from the fit to Pb+Pb collisions (section~\ref{II}), and $\tilde{\tilde{A}}=0.0907$ from Eq.~(\ref{eq20}). Formula~(\ref{eq23}) gives our fire-streak fragmentation function in central Pb+Pb collisions, at $b=0$. After the correction for strangeness suppression in p+p relative to Pb+Pb collisions and for the difference in baryon inelasticity (parametrized respectively by $EnergySharing$ and $K$), the same formula gives the blue curve which approximately describes the isospin corrected p+p data points in Fig.~\ref{fig5} (within 4\% accuracy as discussed in section~\ref{IV.4}). \vspace*{-0.2cm} \section{Discussion} \label{V} \vspace*{-0.2cm} In this section we will attempt to draw conclusions from the findings of the present study, partially in the context of those made in our earlier work~\cite{1}. Our initial concept~\cite{1}, with some similarity to the fire-streak picture~\cite{2}, was introduced in order to explain the role of geometry and local energy-momentum conservation in the centrality dependence of Pb+Pb collisions at SPS energies.
Simultaneously, our work~\cite{1} was inspired by, and meant to explain, our observations from spectator-induced electromagnetic effects on $\pi^+/\pi^-$ ratios and directed flow in heavy ion collisions~\cite{twospec07,Rybicki_v1,Rybicki2015,wpcf}, indicating that pions at higher rapidity are produced closer to the spectator system, as suggested by Fig.~\ref{fig1}. The result was that the full centrality dependence of pion rapidity distributions and total pion yields could be understood from three elements: (a) collision geometry, (b) local energy-momentum conservation, and (c) our simple fire-streak fragmentation function, producing pions proportionally to the available energy (Eq.~(\ref{fragmentation})). With the present work, however, a new element appears in the picture: the (exact or approximate) consistency of the isospin corrected experimental $\pi^-$ rapidity distribution in p+p reactions with the fire-streak fragmentation function, as shown in Fig.~\ref{fig5} and stated in section~\ref{IV.4}. This consistency emerges {\em only} when the normalization of the latter is corrected for the change in baryon inelasticity and the strangeness enhancement between p+p and Pb+Pb collisions. This brings specific implications, some of which we will point out below. \vspace*{-0.1cm} \subsection{Pion rapidity spectra} \vspace*{-0.1cm} In the present study, one component of our successful description of pion production in Pb+Pb reactions from Ref.~\cite{1} - the fire-streak fragmentation function - appears ``available'' in p+p collisions once the effects of baryon inelasticity and strangeness suppression are taken into account.
Thus one can think of the following simple ``prescription'' to follow in order to describe, or parametrize, the centrality dependence of pion rapidity distributions and their total yields in Pb+Pb reactions, starting from p+p collisions: \vspace*{0.2cm} \begin{center} pion $dn/dy$ distribution in p+p collisions (Fig.~\ref{fig3}) {\large $\Downarrow$} correction for isospin (Eq.{\small~(\ref{eq2})}) {\large $\Downarrow$} isospin corrected pion $dn/dy$ distribution in p+p (Eq.{\small~(\ref{eq23}))} {\large $\Downarrow$} correction for change in baryon inelasticity and strangeness enhancement (Eq.{\small~(\ref{eq23}))} {\large $\Downarrow$} fire-streak fragmentation function in Pb+Pb (Eq.{\small~(\ref{eq23}))} {\large $\Downarrow$} collision geometry {\Large +} local $E-\vec{p}$ conservation (\cite{1}, Fig.~\ref{fig1}) {\large $\Downarrow$} pion $dn/dy$ distributions in Pb+Pb as a function of centrality (Fig.~\ref{fig2})\\ \end{center} \vspace*{0.3cm} We underline that the scheme above may be followed both ``down'' and ``up''. For instance, our study in Ref.~\cite{1}, supplemented by the present analysis, follows it ``up'' from the centrality dependence of the Pb+Pb reactions to the pion spectrum in p+p collisions. The prescription established above will keep track of the whole shape evolution of the $dn/dy$ distribution from p+p through peripheral up to central Pb+Pb collisions, and of the relative increase of pion multiplicity as a function of decreasing impact parameter of the Pb+Pb collision. In our view, this ``correspondence'' between rapidity distributions in p+p and Pb+Pb interactions established by our prescription brings additional support to our simple picture of the longitudinal evolution of the Pb+Pb system.
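The first steps of this prescription (down to the fire-streak fragmentation function) can be sketched numerically. In the sketch below, $\epsilon$, $\sigma_y$ and $r$ are placeholder values, as the actual fit parameters are determined in section~\ref{II} and are not quoted here; the only quantitative input is the normalization factor 0.781 of Eq.~(\ref{eq22}).

```python
import math

# Placeholder shape parameters: the actual values of eps, sigma_y and r are
# fitted to Pb+Pb data in section II and are NOT the numbers used below.
EPS, SIGMA_Y, R = 0.8, 1.4, 2.5
A_RATIO = 0.781  # A_pp / A_AA from Eq. (22)

def shape(y):
    """Common shape factor of Eqs. (15), (16) and (23)."""
    return math.exp(-((y**2 + EPS**2) ** (R / 2.0)) / (R * SIGMA_Y**R))

def dndy_pp_isospin_corrected(y, a_pp=1.0):
    """Isospin-corrected pi- distribution in p+p (arbitrary normalization)."""
    return a_pp * shape(y)

def dndy_firestreak(y, a_pp=1.0):
    """Fire-streak fragmentation function: same shape, rescaled by Eq. (22)."""
    return (a_pp / A_RATIO) * shape(y)

# The two curves differ by a constant factor at every rapidity:
ratios = [dndy_pp_isospin_corrected(y) / dndy_firestreak(y)
          for y in (0.0, 1.0, 2.5)]
```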
In this picture, finite size volumes of deconfined primordial matter initially move following local energy-momentum conservation, and a number of mechanisms resulting in production of final state particles in Pb+Pb collisions (dressing up of quarks into hadrons, etc.) preserve some degree of similarity to p+p reactions. \vspace*{-0.1cm} \subsection{Differences between p+p and Pb+Pb collisions} \vspace*{-0.1cm} As a continuation of our paper~\cite{1}, the present work is aimed at pointing out possible common points and similarities in pion rapidity distributions for the two reactions. Its limitations should also be pointed out. Evidently, our work does not genuinely ``explain'' either strangeness enhancement or the changes in inelasticity $K$ between proton-proton and nucleus-nucleus collisions. Both of these we had to estimate from experimental data in section~\ref{IV} for the purpose of formula~(\ref{eq23}). Specifically, our ``correction'' for strangeness suppression in p+p or strangeness enhancement in Pb+Pb reactions, introduced by the estimated quantity $EnergySharing$ in Eq.~(\ref{eq23}), is in fact a simple ``translation'' of enhanced strange particle yields into the overall energy balance of particle production. The origin of this correction - the enhanced abundance of strange quarks in the deconfined matter produced in Pb+Pb collisions - is an independent dynamical phenomenon explained elsewhere~\cite{rafelski}. It evidently modifies the overall energy balance in particle emission but it is only parametrized in our study. As such, no claim can be made about bulk properties of heavy ion collisions being predictable {solely} from p+p reactions on the basis of the present work. {Also, in our view, our results do not point towards the applicability of the geometrical picture of many fire-streaks, as drawn in Fig.~\ref{fig2}, to proton-proton reactions.} This is in contrast to our work on pion $dn/dy$ distributions in Pb+Pb collisions~\cite{1}.
The fact that consistency can be found between the experimental pion rapidity distribution in p+p collisions and the fragmentation function of the {\em single} fire-streak, rather than a {\em sum} of fire-streaks, {would suggest a difference between the two reactions. While Pb+Pb data can be described by a superposition of many independent fire-streaks, only a single fire-streak would be formed in the p+p collision.} \vspace*{-0.2cm} \section{Summary} \label{VI} \vspace*{-0.2cm} In the present paper we investigated to what extent the phenomenological rapidity distribution of pions from the fire-streak in Pb+Pb collisions, extracted recently, is similar to the pion rapidity distribution in p+p collisions. With no tuning or adjustment to experimental data, our single fire-streak pion $\frac{dn}{dy}$ distribution obtained from Pb+Pb reactions reproduced the shape of the experimental pion rapidity spectrum in p+p interactions at the same energy. Isospin differences between Pb+Pb and p+p collisions have been taken into account. The absolute normalization of pion spectra between the two reactions could be fully (up to 4\% precision) explained by changes in the energy balance induced by baryon stopping and strangeness enhancement phenomena. From the above we conclude that once the above phenomena are taken into account, and the influence of Pb+Pb reaction geometry as well as local energy-momentum conservation are properly considered, an interesting correspondence emerges between absolutely normalized pion rapidity spectra in inelastic p+p collisions and pion rapidity distributions in centrality selected Pb+Pb reactions.\\ {\bf Acknowledgments}\\ We gratefully thank Adam Bzdak for pointing out to us the importance of extending our study to p+p reactions.
We acknowledge the work of Hans~Gerhard~Fischer on the release and especially the precise numerical interpolation of NA49 proton+proton data, which allowed a model-independent calculation of the energy balance in p+p collisions. We are indebted to Jan Rafelski for his remarks on the fire-streak model, {and to our referees for very valuable comments and constructive criticism.} This work was supported by the National Science Centre, Poland (grant number 2014/14/E/ST2/00018).
\section{Introduction} \input{intro.tex} \section{Background} \label{sec:rel-work} \input{rel-work.tex} \section{Extending Neural Entity Grid} \label{sec:neural-grids} \input{ext-neural-grids.tex} \section{Coherence Models for Asynchronous Conversations} \label{sec:coh-model-conv} \input{coh-model-conv} \section{Experiments on Monologue} \label{sec:exp} \input{experiments-mono} \section{Experiments on Conversation} \label{sec:exp2} \input{experiments-conv} \section{Conclusion} \label{sec:con} We presented a coherence model for asynchronous conversations. We first extended the existing neural grid model by lexicalizing its entity transitions. We then adapted the model to conversational discourse by incorporating the thread structure in its grid representation and feature computation. We designed a 3D grid representation for capturing spatio-temporal entity transitions in a conversation tree, and employed a 2D convolution to compose high-level features from this representation. Our lexicalized grid model yields state-of-the-art results on standard coherence assessment tasks in monologue and conversations. We also show a novel application of our model {in} forum thread reconstruction. Our future goal is to use the coherence model to generate new conversations. \subsection{Conversational Entity Grid} \label{subsec:conv-egrid} The conversation tree captures how topics flow in an asynchronous conversation. Our key hypothesis is that in a coherent conversation, entities exhibit certain local patterns in the conversation tree in terms of their distribution and syntactic realization. Figure \ref{fig:thread-tree}(c) shows how the grammatical roles of entity \emph{`registry'} in our example conversation change over the tree. For coherence assessment, we wish to model entity transitions along each of the conversation paths (top-to-bottom), and also their spatial relations across the paths (left-to-right).
The existing grid representation is insufficient to model the \emph{two-dimensional (2D) spatial} entity transitions in a conversation tree. \begin{figure*}[t] \centering \includegraphics[width=0.80\textwidth]{figures/new_model2.png} \vspace{-0.5em} \caption{\textbf{Conversational Neural Grid} model for assessing coherence in asynchronous conversations.} \label{fig:cnn_model} \end{figure*} {We propose a three-dimensional (3D) grid for representing entity transitions in an asynchronous conversation. The first dimension in our grid represents \emph{entities}, while the second and third dimensions represent \emph{depth} and \emph{path} of the tree, respectively. Figure \ref{fig:thread-tree}(d) shows an example representation for {an} entity `\emph{registry}'. Each column in the matrix represents transitions of the entity along a path, whereas each row represents transitions of the entity at a level of the conversation tree.} {Although illustrated with a tree structure, our method is applicable to general graph-structured conversations, where a post can reply to multiple previous posts. Our model relies on paths from the root to the leaf nodes, which can be extracted for any graph as long as we avoid loops.} \subsection{Modeling Entity Transitions} \label{subsec:model-tran} As shown in Figure \ref{fig:cnn_model}, given a 3D entity grid as input, the look-up layer (Eq. \ref{lookup}) of our neural grid model produces a 4D tensor $L$$\in$$\real^{I \times J \times P \times d}$, where $I$ is the total number of entities in the conversation, $J$ is the depth of the tree, $P$ is the number of paths in the tree, and $d$ is the embedding dimension. 
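To make the representation concrete, below is a minimal sketch of the 3D grid construction. It is simplified to one role per entity per post (the actual grid is built at the sentence level), and the toy conversation tree and role annotations are invented for illustration; roles follow the standard entity-grid encoding: `S' (subject), `O' (object), `X' (other), `-' (absent).

```python
from collections import defaultdict

# Toy conversation tree: post -> parent (None marks the root p1).
parents = {"p1": None, "p2": "p1", "p3": "p1", "p4": "p2", "p5": "p2"}

# Invented role annotations: role of each entity in each post.
roles = {
    "registry": {"p1": "S", "p2": "O", "p4": "S", "p5": "X"},
    "virus":    {"p1": "O", "p3": "S"},
}

def root_to_leaf_paths(parents):
    """Extract all root-to-leaf paths of the conversation tree."""
    children = defaultdict(list)
    for node, parent in parents.items():
        if parent is not None:
            children[parent].append(node)
    leaves = [n for n in parents if n not in children]
    paths = []
    for leaf in sorted(leaves):
        path, node = [], leaf
        while node is not None:
            path.append(node)
            node = parents[node]
        paths.append(path[::-1])  # root first
    return paths

def entity_grid_3d(parents, roles):
    """For each entity, a (depth x path) matrix of grammatical roles."""
    paths = root_to_leaf_paths(parents)
    depth = max(len(p) for p in paths)
    grid = {}
    for entity, role_by_post in roles.items():
        grid[entity] = [
            [role_by_post.get(p[j], "-") if j < len(p) else "-" for p in paths]
            for j in range(depth)
        ]
    return grid

grid = entity_grid_3d(parents, roles)
```

Each column of `grid["registry"]` traces the entity along one root-to-leaf path, and each row traces it across paths at one level of the tree.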
The convolution layer then uses a 2D {filter} $\mathbf{w} \in \real^{m.n.d}$ to convolve local patches of entity transitions \begin{equation} z_i = h(\mathbf{w}^T L_{i,j:j+m, p:p+n} + {b}_i) \end{equation} \noindent where $m$ and $n$ are the height and width of the filter, and $L_{i, j:j+m, p:p+n} \in \real^{m.n.d}$ denotes a concatenated vector containing $(m \times n)$ embeddings representing a 2D window of entity transitions. As we repeatedly apply the filter to each possible window with stride size $1$, we get a 2D feature map $Z^i$ of dimensions $(I.J+m-1) \times ( I.P+n-1)$. Employing $N$ different filters, we get $N$ such 2D feature maps, $[{Z}^1, \cdots, {Z}^N]$, based on which the max pooling layer extracts the most salient features: \begin{equation} \mathbf{p} = [\mu_{l \times w}({Z}^1), \cdots, \mu_{l \times w}({Z}^N)] \label{eq:max_pool_conv} \end{equation} \noindent where $\mu_{l \times w}$ refers to the $\max$ operation applied to each non-overlapping 2D window of $l \times w$ features in a feature map. The pooled features are then linearized and used for coherence scoring in the final layer of the network as described by Equation \ref{dense}. \subsection{Evaluation on Discrimination} The discrimination tasks are applicable to conversations also. We first present the dataset we use, then we describe how we create coherent and incoherent examples to train and test our models. \paragraph{Dataset:} Our conversational corpus contains discussion threads regarding \emph{computer troubleshooting} from the technology related news site {CNET}.\footnote{https://www.cnet.com/} This corpus was originally collected by \citet{louis2015conversation}, and it contains 13,352 threads. For our experiments, we selected 3,825 threads assuring that each contains at least 3 and at most 15 posts. We use 2,400 threads for training, 750 for testing and 675 for development purposes. Table \ref{table:corpora} shows some basic statistics about the resulting dataset. 
The threads contain roughly 29 sentences and 6 comments on average. \paragraph{Model Settings and Training:} \label{subsec:train} To validate the efficacy of our conversational grid model, we compare it with the following baseline settings: \begin{noindlist}\setlength\itemsep{-0.0em} \item \textbf{Temporal:} In the temporal setting, we construct an entity grid from the chronological order of the sentences in a conversation, and use it with our monologue-based coherence models. Models in this setting thus disregard the structure of the conversation and treat it as a monologue. \item \textbf{Path-level:} This is a special case of our model, where we consider each path (a column in our conversational grid) in the conversation tree separately. We construct an entity grid for a path and provide it as input to our monologue-based models. \end{noindlist} To train the models with pairwise ranking, we create 20 incoherent conversations for each original conversation by shuffling the sentences in their temporal order. For models involving conversation trees (path-level and our model), the tree structure remains unchanged for original and permuted conversations; only the positions of the sentences vary based on the permutation. {Since the shuffling is done globally at the conversation level, this scheme allows us to compare the three representations (temporal, path-level and tree-level) fairly with the same set of permutations.} \begin{table}[t!]
\resizebox{1.00\columnwidth}{!}{% \begin{tabular}{l|ccccc} & \#Thread & Avg Com & Avg Sen & \#Pairs (tree) & \#Pairs (path) \\ \midrule {Train} & 2,400 & 6.01 & 28.76 & 47,948 & 106,122\\ {Test} & 750 & 5.75 & 27.79 & 14,986 & 33,852\\ {Dev} & 675 & 6.27 & 30.70 & 13,485 & 28,897\\ \midrule {Total} & 3,825 & 5.98 & 28.77 & 76,419 & 168,871\\ \bottomrule \end{tabular} } \vspace{-0.5em} \caption{Statistics on the \textbf{CNET} dataset.} \vspace{-0.3em} \label{table:corpora} \end{table} {An incoherent conversation may have paths in the tree that match the original paths. We remove those matched paths when training the path-level model. See Table \ref{table:corpora} for the number of pairs used for training and testing our models. We evaluate path-level models by aggregating correct/wrong decisions for the paths -- if the model makes more correct decisions for the original conversation than the incoherent one, it is counted as a correct decision overall. Aggregating path-level \emph{coherence scores} ({\em e.g.,}\xspace\ by averaging or summing) would allow a coherence model to be rewarded for assigning a higher score to an original path (hence, a correct decision) while making wrong decisions for the rest; see supplementary document for an example. {Similar to the setting in Monologue, we did not train explicitly on the inverse-order task, but rather use the trained model from the standard setting.} \paragraph{Results and Discussions:} \label{subsec:discrim-results} {Table \ref{table:order-results} compares the results of our models on the two discrimination tasks. We observe more gains in conversation than in monologue for the lexicalized models -- 4.9\% to 7.3\% on the standard task, and 10\% to 13.6\% on the inverse-order task. Notice especially the huge gains on the inverse-order task.
This indicates that lexicalization helps to better adapt to new domains.} {A comparison of the results on the standard task across the representations shows that path-level models perform on par with the temporal models, whereas the tree-level models outperform others by a significant margin. The improvements are 2.7\% for randomly initialized word vectors and 4\% for Google embeddings.} Although the path-level model considers some conversational structure, it observes only a portion of the conversation in its input. The common topics (expressed by entities) of a conversation get distributed across multiple conversational paths. This limits the path-level model's ability to learn complex relationships between entities in a conversation. By encoding an entire conversation into a single grid and by modeling the spatial relations between the entities, our conversational grid model captures both local and global information (topic) of a conversation. {Interestingly, the improvements are higher on the inverse-order task for both path- and tree-level models. The inverse order yields more dissimilarity across the paths with respect to the original order, thus making them easier to distinguish.} \begin{table}[tb!] \resizebox{0.99\columnwidth}{!}{% \begin{tabular}{llc|cc} Conv. Rep & Model & Emb.
& \textbf{Std} ($F_1$) & {\textbf{Inv} ($F_1$)} \\ \midrule \multirow{3}{*}{\textbf{Temporal}} & \nobreak{Neural Grid}\ (N\&J) & random & 82.28 & 70.53\\ & \nobreak{ Neural Grid$_{l}$} & random & 86.63 & 80.40\\ & \nobreak{ Neural Grid$_{l}$} & Google & 87.17 & 80.76\\ \midrule \multirow{3}{*}{\textbf{Path-level}} &\nobreak{Neural Grid}\ (N\&J) & random & 82.39 & 75.68$^\dagger$\\ &\nobreak{ Neural Grid$_{l}$} & random & 88.13 & 88.38$^\dagger$ \\ &\nobreak{ Neural Grid$_{l}$} & Google & 88.44 & 89.31$^\dagger$ \\ \midrule \multirow{3}{*}{\textbf{Tree-level}} & \nobreak{Neural Grid}\ (N\&J) & random & 83.98$^\dagger$ & 77.33$^\dagger$ \\ & \nobreak{ Neural Grid$_{l}$} & random & 89.87$^\dagger$ & 89.23$^\dagger$ \\ & \nobreak{ Neural Grid$_{l}$} & Google & \textbf{91.29}$^\dagger$ & \textbf{90.40}$^\dagger$\\ \bottomrule \end{tabular} } \vspace{-0.2em} \caption{Discrimination results on \textbf{CNET}. Superscript $\dagger$ indicates a model is significantly superior to its temporal counterpart with p-value $<0.01$.} \label{table:order-results} \end{table} Examining the hyperparameter settings of the best models on this task (see supplementary document), we find that they use a filter width of 1. This indicates that to find the right order of the sentences in conversations, it is sufficient to consider entity transitions along the conversational paths in a tree. \subsection{Evaluation on Thread Reconstruction} One crucial advantage of our tree-level model over other models is that we can use it to build predictive models to uncover the thread structure of a conversation from its posts. Consider again the thread in Figure \ref{fig:thread-tree}. Our goal is to train a coherence model that can recover the tree structure in Figure \ref{fig:thread-tree}(b) from the sequence of posts $(p_1, p_2, \ldots, p_5)$. This task has been addressed previously \cite{yi:2008,Wang:2011:PTD}.
Most methods learn an edge-level classifier to decide whether a link exists between two posts using features like distance in position/time, cosine similarity, etc. To our knowledge, we are the first to use coherence models for this problem. However, our goal in this paper is not to build a state-of-the-art system for thread reconstruction, but rather to evaluate coherence models by showing their effectiveness in scoring candidate tree hypotheses. In contrast to previous methods, our approach therefore considers the whole thread structure at once, and computes coherence scores for all possible candidate trees of a conversation. The tree that receives the highest score is predicted as the thread structure of the conversation. \paragraph{Training:} We train our coherence model for thread reconstruction using {pairwise ranking} loss as before. For a given sequence of comments in a thread, we construct a set of valid candidate trees; a valid tree is one that respects the chronological order of the comments, {\em i.e.,}\xspace\ a comment can only reply to a comment that precedes it. The training set contains ordered pairs $(T_i, T_j)$, where $T_i$ is a true (gold) tree and $T_j$ is a valid but false tree. \paragraph{Experiments:} The number of valid trees grows exponentially with the number of posts in a thread, which makes the inference difficult. As a proof of concept that coherence models are useful for finding the right tree, we built a simpler dataset by selecting forum threads from the CNET corpus, ensuring that a thread contains at most 5 posts. The final dataset contains 1,200 threads with an average of 3.8 posts and 27.64 sentences per thread. We assess the performance of the models at two levels: ({\em i})~ \textbf{thread-level}, where we evaluate if the model could identify the entire conversation thread correctly, and ({\em ii})~ \textbf{edge-level}, where we evaluate if the model could identify individual replies correctly. For comparison, we use a number of simple but well-performing baselines: \begin{noindlist}\setlength\itemsep{-1.2em} \item{\bf All-previous} creates thread structure by linking a comment to its previous (in time) comment.\\ \item{\bf All-first} creates thread structure by linking all the comments to the initial comment.\\ \item{\bf COS-sim} creates thread structure by linking a comment to one of the previous comments with which it has the highest cosine similarity. We use a TF.IDF representation for the comments. \end{noindlist} \begin{table}[tb] \centering \resizebox{0.95\linewidth}{!}{ \begin{tabular}{l|cll} & \textbf{Thread-level} & \multicolumn{2}{c}{\textbf{Edge-level}} \\ \cmidrule(lr){2-2}\cmidrule(lr){3-4} & Acc & $F_1$ & ~Acc \\ \midrule All-previous & 27.00 & 52.00 & 61.83\\ All-first & 25.67 & 48.23 & 58.19\\ COS-sim & 27.66 & 50.56 & 60.30 \\ \midrule Conv. Entity Grid & \bf 30.33$^\dagger$ & \bf 53.59$^\dagger$ & \bf 62.81$^\dagger$ \\ \bottomrule \end{tabular} } \vspace{-0.2em} \caption{Thread reconstruction results; $^\dagger$ indicates significant difference from COS-sim (p$<.01$).} \label{tab:recon_results} \end{table} Table \ref{tab:recon_results} compares our best conversational grid model (tree-level with Google vectors) with the baselines. The low thread-level accuracy across all the systems proves that reconstructing an entire tree is a difficult task. Models are reasonably accurate at the edge level. Our coherence model shows promising results, yielding substantial improvements over the baselines.
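The search over candidate trees described above can be sketched as follows: every valid thread structure assigns to each post (after the first) a parent among the chronologically earlier posts, so $n$ posts admit $(n-1)!$ valid trees, i.e.\ 24 for 5 posts. The coherence scorer below is a stub standing in for the trained model.

```python
from itertools import product

def candidate_trees(n_posts):
    """Yield parent tuples: entry i-1 is the parent of post i (posts 0..n-1)."""
    return product(*(range(i) for i in range(1, n_posts)))

def best_tree(n_posts, score):
    """Predict the thread structure as the highest-scoring candidate tree."""
    return max(candidate_trees(n_posts), key=score)

def stub_score(parents):
    # Stub coherence scorer: rewards linking each post to its predecessor.
    return sum(1 for i, parent in enumerate(parents, start=1) if parent == i - 1)

trees = list(candidate_trees(5))   # 24 valid trees for 5 posts
best = best_tree(5, stub_score)    # the "all-previous" chain under this stub
```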
It delivers {2.7\%} improvements in thread-level and {2.5\%} in edge-level accuracy over the best baseline (COS-sim). Interestingly, our best model for this task uses a filter width of 2 (the maximum is 4 for 5 posts). This indicates that spatial (left-to-right) relations between entity transitions are important for finding the right thread structure of a conversation. \subsection{Evaluation Tasks and Dataset} We evaluate our models on two standard sentence ordering tasks: \textbf{discrimination} and \textbf{insertion}. In {discrimination} \cite{Barzilay:2008}, a coherence model is asked to distinguish an original (coherent) document from its incoherent renderings generated by random permutations of its sentences. In the {insertion} task \cite{Elsner:2008,Elsner:2011}, the models are judged based on their ability to locate the original position of a sentence in the document from which it was previously removed. To measure this, each sentence in the document is removed and reinserted at each position in turn, and the model is asked to evaluate each such candidate ordering of the document. The position for which the model assigns the highest coherence score to the document is proposed as the insertion point. The overall insertion score is then computed as the average fraction of sentences per document reinserted in their original position.
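The two evaluation protocols can be sketched as follows, with a toy scorer standing in for a trained coherence model:

```python
# Discrimination: the model succeeds when the original document outscores a
# permuted rendering. Insertion: each sentence is removed and reinserted at
# every position; the position with the highest coherence score is proposed,
# and the final score is the fraction of sentences restored correctly.

def discrimination_correct(original, permuted, score):
    return score(original) > score(permuted)

def insertion_score(document, score):
    correct = 0
    for i, sentence in enumerate(document):
        rest = document[:i] + document[i + 1:]
        candidates = [rest[:j] + [sentence] + rest[j:]
                      for j in range(len(document))]
        proposed = max(range(len(candidates)), key=lambda j: score(candidates[j]))
        correct += (proposed == i)
    return correct / len(document)

# Toy scorer that rewards sorted order, so [0, 1, 2, 3] is the coherent order.
doc = [0, 1, 2, 3]
score = lambda d: -sum(abs(d[k] - k) for k in range(len(d)))
```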
\paragraph{Dataset:} We use the same train/test split of the \textsc{wsj}\ dataset as used in \cite{dat-joty:2017} and other studies \cite{Elsner:2011,Lin:2011,Feng:2014}. In accordance with previous studies, we use $20$ random permutations of each article, and exclude permutations that match the original article; see \# Pairs in Table \ref{table:data} for the resulting number of (\emph{original}, \emph{permuted}) pairs. \citet{dat-joty:2017} randomly selected $10\%$ of the training pairs for development purposes, which we also use for hyperparameter tuning.\footnote{The development split was obtained from the authors via personal communication.} \end{comment} \begin{comment} \subsection{Model Settings and Training} Similar to \citet{dat-joty:2017}, we did not train our model on the insertion task; rather, we use the model that was trained for the discrimination task. \red{\paragraph{Applying multiple filters and batch normalization:} The existing model employs only one convolution operation at a time, and thus cannot extract and combine features from varying window sizes. In our extended model, we concurrently employ multiple filters of different window sizes. In addition, we apply \emph{batch normalization} to optimize network training \cite{Ioffe:2015:BNA}.} \end{comment} \paragraph{Results and Discussions:} {We present our results on the standard discrimination task and the inverse-order task in Table \ref{table:ordering}; see Std ($F_1$) and Inv ($F_1$) columns, respectively. Due to space limitations, we only show $F_1$ scores here, and report both accuracy and $F_1$ in the supplementary document.} We compare our lexicalized models (group III) with the unlexicalized models (group II) of \citet{dat-joty:2017}.\footnote{Our reproduced results for the neural grid model are slightly lower than their reported results ($\sim$ 1\%). We suspect this is due to the randomness in the experimental setup.} We also report the results of non-neural entity grid models \cite{Elsner:2011} in group I.
The extended versions use entity-specific features. \begin{table}[tb!] \resizebox{0.99\columnwidth}{!}{% \begin{tabular}{cl|ccc} \toprule & Model & Emb. & \textbf{Std} (${F_1}$) & {\textbf{Inv} ($F_1$)} \\ \midrule \multirow{2}{*}{I} & \nobreak{Grid}\ (E\&C) & - & 81.60 & 75.78 \\ & Ext. Grid (E\&C) & - & 84.95 & 80.34 \\ \midrule \multirow{2}{*}{II} & \nobreak{Neural Grid}\ (N\&J) & Random & 84.36 & 83.94 \\ & Ext. \nobreak{Neural Grid}\ (N\&J) & Random & {85.93} & 83.00 \\ \midrule \multirow{2}{*}{III} & \nobreak{ Neural Grid$_{l}$} & Random & 87.03$^\dagger$ & 86.88$^\dagger$ \\ & \nobreak{ Neural Grid$_{l}$} & Google & \textbf{88.56}$^\dagger$ & \textbf{88.23}$^\dagger$\\ \bottomrule \end{tabular} } \vspace{-0.3em} \caption{{Dis}crimination results on the \textbf{\textsc{wsj}} dataset. Superscript $^\dagger$ indicates a lexicalized model is significantly superior to the unlexicalized \nobreak{Neural Grid}\ (N\&J) model with p-value $<0.01$.} \label{table:ordering} \vspace{-0.3em} \end{table} We experimented with both \emph{random} and \emph{pre-trained} initialization for word embeddings in our lexicalized models. As can be seen in Table \ref{table:ordering}, both versions give significant improvements over the unlexicalized models on both the standard and the inverse-order discrimination tasks (2.7--4.3\% absolute). Our best model with Google pre-trained embeddings \cite{Mikolov.Sutskever:13} yields state-of-the-art results. {We also experimented with GloVe \cite{pennington2014glove}, which has more vocabulary coverage than word2vec -- GloVe covers $89.77\%$ of our vocabulary items, whereas word2vec covers $85.66\%$. However, GloVe did not perform as well, giving an $F_1$ score of $86\%$ in the standard discrimination task.
\citet{schnabel2015} also report similar results where word2vec was found to be superior to GloVe in most evaluation tasks.} Our model also outperforms the extended neural grid model that relies on an additional feature extraction step for entity features. These results demonstrate the efficacy of lexicalization in capturing fine-grained entity information without losing generalizability, thanks to distributed representations and pre-trained embeddings. \begin{comment} \begin{table}[tb!] \resizebox{0.98\columnwidth}{!}{% \begin{tabular}{cl|cccc} \toprule & Model & Emb. & Acc & $F_1$ \\ \midrule \multirow{2}{*}{I} & \nobreak{Grid}\ (E\&C) & - & 81.58 & 81.60 \\ & Ext. Grid (E\&C) & - & 84.95 & 84.95 \\ \midrule \multirow{2}{*}{II} & \nobreak{Neural Grid}\ (N\&J) & Random & 84.36 & 84.36 \\ & Ext. \nobreak{Neural Grid}\ (N\&J) & Random & {86.93} & {86.93} \\ \midrule \multirow{2}{*}{III} & \nobreak{ Neural Grid$_{l}$} & Random & 87.03$^\dagger$ & 87.03$^\dagger$ \\ & \nobreak{ Neural Grid$_{l}$} & Google & \textbf{88.56}$^\dagger$ & \textbf{88.56}$^\dagger$ \\ \bottomrule \end{tabular} } \caption{{Dis}crimination results on the \textbf{\textsc{wsj}} dataset. Superscript $^\dagger$ indicates a lexicalized model is significantly superior to the unlexicalized \nobreak{Neural Grid}\ (N\&J) model with p-value $<0.01$.} \label{table:ordering} \end{table} \end{comment} \begin{comment} \begin{table}[tb!] \resizebox{1.0\columnwidth}{!}{% \begin{tabular}{l|cc|cc|c} \toprule & & & \multicolumn{2}{c|}{\textbf{Dis.}} & \textbf{Ins.} \\ Model & Filter\# & Embed. & Acc & $F_1$ & \\ \midrule Random & - & - & 50.0 & 50.0 & 12.60 \\ \midrule \nobreak{Grid}\ (E\&C) & - & - & 81.58 & 81.60 & 22.13 \\ Ext.
Grid (E\&C) & - & - & 84.95 & 84.95 & 23.28 \\ \midrule \nobreak{Neural Grid}\ (N\&J) & single & - & 84.36 & 84.36 & {\bf 22.56} \\ \midrule \nobreak{ Neural Grid$_{l}$} & single & random & 86.48 & 86.48 & \red{21.29} \\ \nobreak{ Neural Grid$_{l}$} & multi & random & 87.53 & 87.53 & \\ \midrule \nobreak{ Neural Grid$_{l}$} & single & Google & 88.13 & 88.13 & - \\ \nobreak{ Neural Grid$_{l}$} & multi & Google & \textbf{89.00} & \textbf{89.00} & \\ \bottomrule \end{tabular} } \caption{Results on \textbf{Dis}crimination and \textbf{Ins}ertion tasks on the \textbf{\textsc{wsj}} (monologue) dataset.} \label{table:ordering} \end{table} \end{comment} \subsection{Neural Entity Grid} {Figure \ref{fig:neural-grid} depicts the neural grid model of \citet{dat-joty:2017}}. Given an entity grid $E$, they first transform each entry $E_{i,j}$ (a grammatical role) into a distributed representation of $d$ dimensions by looking up a shared embedding matrix $M$ $\in$ $\real^{|G| \times d}$, where $G$ is the vocabulary of possible grammatical roles, {\em i.e.,}\xspace\ $G=\{S,O,X,-\}$. Formally, the look-up operation can be expressed as: \vspace{-1.0em} \begin{equation} L = \Big[ M({E_{1,1}}) \cdots M({E_{i,j}}) \cdots M({E_{I,J}}) \Big] \label{lookup} \end{equation} \begin{figure}[t] \centering \includegraphics[width=0.48\textwidth]{figures/cnn-egrid-02.png} \vspace{-1em} \caption{Neural entity grid model proposed by \citet{dat-joty:2017}. The model is trained using a pairwise ranking approach with shared parameters for positive and negative documents.} \label{fig:neural-grid} \end{figure} \noindent where $M({E_{i,j}})$ refers to the row in $M$ that corresponds to grammatical role $E_{i,j}$, and $I$ and $J$ are the number of rows (sentences) and columns (entities) in the entity grid, respectively. The result of the look-up operation is a tensor $L \in \real^{I \times J \times d}$, which is fed to a convolution layer to model local entity transitions in the distributed space. 
The convolution layer of the neural network composes patches of entity transitions into high-level abstract features by treating entities independently ({\em i.e.,}\xspace\ 1D convolution). Formally, it applies a \emph{filter} $\mathbf{w} \in \real^{m\cdot d}$ to each local entity transition of length $m$ to generate a new abstract feature $z_i$: \begin{equation} z_i = h(\mathbf{w}^T L_{i:i+m, j} + b_i) \label{eq:conv} \end{equation} \noindent where $L_{i:i+m, j}$ denotes concatenation of $m$ vectors in $L$ for entity $e_j$, $b_i$ is a bias term, and $h$ is a nonlinear activation function. Repeated application of this filter to every possible $m$-length transition of different entities in the grid generates a \emph{feature map}, $\mathbf{z}^i = [z_1, \cdots, z_{I\cdot J-m+1}]$. This process is repeated $N$ times with $N$ different filters to get $N$ different feature maps, $[\mathbf{z}^1, \cdots, \mathbf{z}^N]$. A \emph{max-pooling} operation is then applied to extract the most salient features from each feature map: \begin{equation} \mathbf{p} = [\mu_l(\mathbf{z}^1), \cdots, \mu_l(\mathbf{z}^N)] \label{max_pool} \end{equation} \noindent where $\mu_l(\mathbf{z}^i)$ refers to the $\max$ operation applied to each non-overlapping window of $l$ features in the feature map $\mathbf{z}^i$. Finally, the pooled features are used in a linear layer to produce a \emph{coherence score}: \begin{equation} y = \mathbf{u}^T \mathbf{p} + b \label{dense} \end{equation} \noindent where $\mathbf{u}$ is the weight vector and ${b}$ is a bias term.
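The scoring pipeline of Eqs. \eqref{eq:conv}--\eqref{dense} can be sketched in a few lines of numpy. The dimensions, random weights, global max-pooling (a single pooling window per feature map) and per-filter bias are illustrative assumptions, not the authors' configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: embedding size d, filter length m, N filters.
G_vocab = {"S": 0, "O": 1, "X": 2, "-": 3}   # grammatical-role vocabulary
d, m, N = 4, 3, 5
M = rng.normal(size=(len(G_vocab), d))       # shared role embedding matrix
W = rng.normal(size=(N, m * d))              # N convolution filters
b = rng.normal(size=N)                       # one bias per filter (assumption)
u = rng.normal(size=N)                       # final linear layer weights

def coherence_score(grid):
    """Score an entity grid given as a list of role columns (one per entity):
    embedding look-up, 1D convolution with ReLU, max-pooling, linear layer."""
    # Look-up: flatten the grid column-wise into one sequence of role vectors.
    seq = np.vstack([M[G_vocab[r]] for col in grid for r in col])
    # Convolution with ReLU over every m-length window of the sequence.
    z = np.array([[max(W[n] @ seq[i:i + m].ravel() + b[n], 0.0)
                   for i in range(len(seq) - m + 1)] for n in range(N)])
    p = z.max(axis=1)            # one pooled feature per feature map
    return float(u @ p)          # linear scoring layer

def ranking_loss(pos_grid, neg_grid, margin=1.0):
    """Pairwise ranking loss used for training: zero once the more coherent
    grid outscores the less coherent one by at least the margin."""
    return max(0.0, margin - coherence_score(pos_grid)
               + coherence_score(neg_grid))
```

For two identical grids the margin is never met, so the loss equals the margin; for a pair where the coherent grid already wins by more than the margin, the loss is zero.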
The model is trained with a \emph{pairwise ranking} loss based on ordered training pairs $(E_i, E_j)$: \vspace{-1.0em} \begin{equation} \mathcal{L}(\theta)= \max \{0, 1 - \phi(E_i|\theta) + \phi(E_j|\theta)\} \label{loss} \end{equation} \noindent where entity grid $E_i$ exhibits a higher degree of coherence than grid $E_j$, and $y=\phi(E_k|\theta)$ denotes the transformation of input grid $E_k$ to a coherence score $y$ performed by the model with parameters $\theta$. {We will see later that such an ordering of documents (grids) can be obtained automatically by permuting the original document. Notice that the network shares its parameters ($\theta$) between the positive ($E_i$) and the negative ($E_j$) instances in a pair.} Since entity transitions in the convolution step are modeled in a continuous space, the model can effectively capture longer transitions than traditional grid models. Unlike traditional grid models that compute transition probabilities from a \emph{single} grid, convolution filters and role embeddings in the neural model are learned from all training instances, which helps the model to generalize well. {Since the abstract features in the feature maps are generated by convolving over role transitions of different entities in a document, the model implicitly considers relations between entities in a document, whereas transition probabilities in traditional entity grid models are computed without considering any such relation between entities. Convolution over the entire grid also incorporates \emph{global} information ({\em e.g.,}\xspace\ topic) of a discourse.} \subsection{Lexicalized Neural Entity Grid} Despite its effectiveness, the neural grid model presented above has a limitation. It does not consider any lexical information regarding the entities, and thus cannot distinguish between transitions of different entities.
Although the extended neural grid model proposed in \cite{dat-joty:2017} does incorporate entity features like named entity type and proper mention, it requires an explicit feature extraction step using tools such as a named entity recognizer. This can prevent the model from being transferred to resource-poor languages or domains. \input{figures/example-thread-grid} To address this limitation, we propose to lexicalize entity transitions. This is achieved by attaching the entity to its grammatical role. For example, if an entity $e_j$ appears as a subject (S) in sentence $s_i$, the grid entry $E_{i,j}$ will be encoded as \textsc{$e_j$-s}. This way, an entity \textsc{obama} as subject (\textsc{obama-s}) and as object (\textsc{obama-o}) will have separate entries in the embedding matrix $M$. We can initialize the word-role embeddings randomly, or with pre-trained embeddings for the word (\textsc{obama}). In another variation, we kept word and role embeddings separate and concatenated them after the look-up, thus forcing \textsc{obama-s} and \textsc{obama-o} to share a part of their representations. However, in our experiments, we found the former approach to be more effective. \subsection{Traditional Entity Grid Models} \label{egrids} Introduced by \citet{Barzilay:2008}, the \textbf{entity grid} model represents a text by a two-dimensional matrix. As shown in Table \ref{table:doc}, the rows correspond to sentences, and the columns correspond to entities (noun phrases). Each entry $E_{i,j}$ represents the syntactic role that entity $e_j$ plays in sentence $s_i$, which can be one of: subject (S), object (O), other (X), or absent ({--}). {In cases where an entity appears more than once with different grammatical roles in the same sentence, the role with the highest rank (S $\succ$ O $\succ$ X) is considered.} Motivated by the Centering Theory \cite{Grosz:1995}, the model considers \textbf{local entity transitions} as the deciding patterns for assessing coherence.
A {local entity transition} of length $k$ is a sequence of $\{$S,O,X,--$\}^k$, representing grammatical roles played by an entity in $k$ consecutive sentences. Each grid is represented by a vector of $4^k$ transition probabilities computed from the grid. To distinguish transitions of important entities from those of unimportant ones, the model considers the \emph{salience} of the entities, which is measured by their occurrence frequency in the document. With the feature vector representation, the coherence assessment task is formulated as a ranking problem in an SVM preference-ranking framework \cite{Joachims:2002}. \citet{Barzilay:2008} showed significant improvements in two out of three evaluation tasks when a coreference resolver is used to identify coreferent entities in a text. \citet{Elsner:2011} show improvements to the grid model by including non-head nouns as entities. Instead of employing a coreference resolver, they match the nouns to detect coreferent entities. They demonstrate further improvements by extending the grid to distinguish between entities of different types. They do so by incorporating entity-specific features like named entity, noun class and modifiers. \citet{Lin:2011} model transitions of discourse roles for entities as opposed to their grammatical roles. They instantiate discourse roles by discourse relations in the Penn Discourse Treebank \cite{PRASAD08.754}. In a follow-up work, \citet{Feng:2014} trained the same model using relations derived from deep discourse structures annotated with Rhetorical Structure Theory \cite{mann1988rhetorical}. \subsection{Other Existing Models} \label{other-models} \citet{Guinaudeau:2013} proposed a \textbf{graph-based} unsupervised method. They convert an entity grid into a bipartite graph consisting of two sets of nodes, representing sentences and entities, respectively. {The edges are assigned weights based on the grammatical role of the entities in the respective sentences.
They perform one-mode projections to transform the bipartite graph to a directed graph containing only sentence nodes. The coherence score of the document is then computed as the average \emph{out-degree} of sentence nodes.} \citet{Louis:2012:CMB} introduced a coherence model based on \textbf{syntactic patterns} by assuming that sentences in a coherent text exhibit certain syntactic regularities. They propose a local coherence model that captures the co-occurrence of structural features in adjacent sentences, and a global model based on a hidden Markov model, which learns the global syntactic patterns from clusters of sentences with similar syntax. \citet{li-hovy:EMNLP20142} proposed a \textbf{neural} framework to compute the coherence score of a document by estimating coherence probability for every window of three sentences. They encode each sentence in the window using either a recurrent or a recursive neural network. To get a document-level coherence score, they sum up the window-level log probabilities. \citet{li-jurafsky:2017} proposed two encoder-decoder models augmented with latent variables for both coherence evaluation and discourse generation. Their first model incorporates global discourse information (topics) by feeding the output of a sentence-level HMM-LDA model \cite{pmlr-v2-gruber07a} into the encoder-decoder model. Their second model is trained end-to-end with variational inference. {In our work, we take an entity-based approach, and extend the neural grid model proposed recently by \citet{dat-joty:2017}}.
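The one-mode projection and out-degree scoring of \citet{Guinaudeau:2013} described above can be sketched as follows; the unweighted projection variant is assumed here, and the weighted variants follow the same pattern using the role ranks as edge weights.

```python
def graph_coherence(grid):
    """Average out-degree coherence score (sketch).

    `grid` is a list of sentence rows over entity columns, with '-'
    marking an absent entity. In the projected directed graph, sentence i
    gets an edge to each later sentence k sharing at least one entity
    (unweighted projection; weighted variants would use role ranks)."""
    n_sent, n_ent = len(grid), len(grid[0])
    out_degree = [0] * n_sent
    for i in range(n_sent):
        for k in range(i + 1, n_sent):
            if any(grid[i][j] != "-" and grid[k][j] != "-"
                   for j in range(n_ent)):
                out_degree[i] += 1
    return sum(out_degree) / n_sent
```

A document whose sentences share no entities scores 0, while a chain of sentences all mentioning a common entity scores higher.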
\section{Introduction}\label{sec:one} We consider the problem of extracting a common signal from heterogeneous groups of data. This is exemplified by spatio-temporal array data on neuronal activity recorded repeatedly over time for 13 ferrets, the objective being to extract a common neuronal response to a visual stimulus. We will regard each 3D neuronal activity recording as a sample from a linear model with a mean component expressed in a basis expansion. If the mean components across all recordings are identical, the common mean component can be interpreted as the common signal and extracted via least squares estimation of the basis coefficient, say. However, the recordings are heterogeneous in the sense that the mean component cannot be regarded as fixed. Heterogeneity can, for instance, arise across recordings for a single animal due to slightly varying experimental conditions or to fatigue, and spatial heterogeneity is expected across animals due to dif\-fe\-ren\-ces in the cytoarchitecture. Various preprocessing techniques such as registration are used to alleviate heterogeneity, but preprocessing may only be partially successful, and human assessment, e.g. for the exclusion of outliers, was needed in \cite{roland2006}. Explicit modeling of heterogeneity is possible and studied in the field of functional data analysis, \cite{Scheipl:2014, Staicu:2010, Wang:2016}, but we will not pursue this more sophisticated modeling framework. Though heterogeneity may represent structured variation, it may have many different known as well as unknown origins, and our focus is on fast, robust estimation of a common signal. \cite{meinshausen2015} proposed the maximin method as a way to aggregate heterogeneous data within the framework of linear models. Their population quantity, called the maximin effect, is the common signal, and they proposed families of estimators, see (9) in \cite{meinshausen2015}. These maximin estimators are, however, difficult to compute.
Though they are given as solutions to convex minimization problems, the objective functions are nondifferentiable as well as nonseparable. An approach circumventing the computational difficulties was proposed by \cite{buhlmann2016}. Using a theoretical representation of the maximin effect combined with the plug-in principle, they proposed magging (maximin aggregation) as an estimator of the maximin effect. Though magging is computationally applicable to the neuronal activity recordings, we will demonstrate that it does not successfully extract a common signal. We propose the soft maximin estimator, which may be viewed as a computationally well-behaved approximation to maximin estimation and an alternative to magging. More importantly, it offers an entire range of estimators of independent interest interpolating magging and mean aggregation. By aggregating explained variances (or more generally convex group loss functions) using a type of soft minimum, we obtain the estimator as a solution to a minimization problem with a differentiable loss. We refer to this loss function as the soft maximin loss, and the estimator solves the soft maximin problem. Furthermore, to obtain a sparse solution across groups we consider an $\ell_1$-penalized version of this problem. For array data, such as the 3D neuronal activity recordings, we have previously demonstrated the efficiency of proximal gradient algorithms for sparse smoothing using tensor product bases, \cite{lund2017a}. In this paper we establish that the soft maximin loss is strongly convex under a full rank assumption on the design matrices (strongly convex group loss functions). We also show that the soft maximin loss has a Lipschitz continuous gradient when the design is identical across groups. Using this, it is possible to show convergence of a proximal gradient based algorithm when applied to the penalized soft maximin problem.
As in \cite{lund2017a} we can then exploit the array-tensor structure of the data to obtain a time- and space-efficient solution algorithm for this type of problem. An implementation is provided in the R package \verb+SMMA+ available from CRAN, \cite{lund2017b}. The paper is organized as follows: The model setup and the soft maximin estimator are introduced in Section \ref{sec:two} and a small 1D example with simulated data is presented. In Section \ref{sec:three} we establish properties of the soft maximin loss and the convergence of the NPG algorithm within this setup. We also discuss how to exploit the array-tensor structure with this algorithm, and illustrate our method on a 3D signal extraction example. Section \ref{sec:four} presents the application to the neuronal activity data, and in Section \ref{sec:five} we discuss soft maximin estimation and how it relates to alternative methods. \section{Soft maximin problem}\label{sec:two} We consider the linear model \begin{alignat}{4}\label{eq1} Y_{g, i} =X_{g, i}^\top B_g + \varepsilon_{g,i}, \quad g =1, \ldots, G, \ i = 1, \ldots, n_g \end{alignat} with $G$ groups, and with $X_{g,i}$ as well as $B_g$ $p$-dimensional vectors. Depending on the context, $X_{g,i}$ and $B_g$ may be regarded as fixed or they may be regarded as random as in \cite{meinshausen2015}. In any case, the errors, $\varepsilon_{g,i}$, are assumed uncorrelated with mean zero given $(X_{g,i}, B_g)_{g, i}$. Within this linear modeling framework, heterogeneity across the groups is captured by the variation in the $B_g$-coefficients. We let $Y_g = (Y_{g, 1},\ldots,Y_{g, n_g})^\top$ denote the group-specific response vector of length $n_g$, $X_g =(X_{g, 1} \ldots X_{g, n_g})^\top$ the corresponding $n_g\times p$ design matrix, and $\varepsilon_g = (\varepsilon_{g,1},\ldots,\varepsilon_{g,n_g})^\top$ the vector of error terms. The linear model for the $g$th group is then \begin{alignat}{4}\label{eq5} Y_g=X_gB_g+\varepsilon_g.
\end{alignat} A \emph{common signal} in this framework is represented by a single $\beta \in \mathbb{R}^p$ such that $X_g \beta$ is a good approximation of $X_gB_g$ across all $G$ groups. Following \cite{meinshausen2015}, the empirical explained variance of $\beta \in \mathbb{R}^p$ for group $g$ is defined as \begin{alignat}{4}\label{eq9} \hat V_g(\beta) \coloneqq \frac{1}{n_g}(2\beta^\top X_g^\top y_g-\beta^\top X^\top_gX_g\beta). \end{alignat} Clearly, $\hat{\beta}_g = \argmax_{\beta} \hat V_g(\beta)$ is the OLS estimator within group $g$. The maximin effects estimator proposed in \cite{meinshausen2015} is obtained by maximizing the minimum of \eqref{eq9} across groups. The resulting optimization problem is difficult given the nondifferentiability and nonseparability of the $\min$ function. We propose the soft maximin estimator obtained by maximizing a soft minimum of \eqref{eq9} across groups. For $x\in \mathbb{R}^G$ and $\zeta \neq 0$ consider the scaled log-sum exponential function \begin{alignat*}{4} \mathrm{lse}_\zeta(x) \coloneqq \frac {\log(\sum_g e^{\zeta x_g} )}{\zeta}. \end{alignat*} As argued below $\mathrm{lse}_{\zeta}$ behaves as a soft maximum (minimum) for large positive (negative) values of $\zeta$. Letting $\hat V(\beta) = (\hat V_1(\beta),\ldots, \hat V_G(\beta))^\top$ denote the vector of explained variances, we shall refer to \begin{alignat*}{4} l_{\zeta}(\beta) \coloneqq \mathrm{lse}_{\zeta}(-\hat V(\beta)) \end{alignat*} as the soft maximin loss function. Noting that $\mathrm{lse}_{-\zeta}(x)=-\mathrm{lse}_{\zeta}(-x)$, the soft maximin estimator is then defined for $\zeta > 0$ as \begin{alignat}{4} \beta_{smm}:=\argmax_{\beta\in\mathbb{R}^p} \mathrm{lse}_{-\zeta}(\hat V(\beta))=\argmin_{\beta\in\mathbb{R}^p} l_{\zeta}(\beta). \label{def:mm} \end{alignat} Note that l'H\^ospital's rule gives $\mathrm{lse}_{-\zeta}(x)\to\min\{x\}$ for $\zeta\to\infty$. 
For large $\zeta > 0$ we can therefore view the soft maximin estimator \eqref{def:mm} as an approximation to the maximin estimator proposed in \cite{meinshausen2015}. Note also that soft maximin estimation puts less weight on the groups with the smallest explained variance than maximin estimation. In particular, using that \begin{alignat*}{4} \frac {\log(\frac {1}{G}\sum_g e^{\zeta x_g} )}{\zeta}\to \frac {1}{G}\sum_g x_g \end{alignat*} for $\zeta \to 0$, we see that $\mathrm{lse}_{\zeta}(x)\sim \frac {1}{G}\sum_g x_g + \frac {\log(G)}{\zeta} $ for small $\zeta$. Thus the soft maximin loss can be seen as an interpolation between mean aggregation and max aggregation of minus the explained variances. \subsection{Smoothing}\label{subsec:2.1} As a main example of soft maximin aggregation we will consider smoothing of signals over a multivariate domain from $G$ groups. Thus \begin{alignat}{4}\label{eq6} Y_{g, i}= f_g(z_{g, i}) + \varepsilon_{g, i}, \quad z_{g, i}\in \mathbb{R}^d, \ i = 1,\ldots,n_g, \end{alignat} with $f_g$ a group-specific smooth function. If we represent $f_g $ using a basis expansion as \begin{alignat}{4}\label{eq7} f_g(z)=\sum_{m=1}^{p} \Theta_{g,m}\varphi_m(z), \end{alignat} for $\varphi_1, \ldots, \varphi_p$ a set of basis functions, we can collect the basis function evaluations into the $n_g\times p$ matrix $\Phi_g = (\varphi_m(z_{g, i}))_{i, m}$, in which case model \eqref{eq6} is given as the linear model \eqref{eq5} with $X_g = \Phi_g$ and $B_g = (\Theta_{g,1},\ldots,\Theta_{g,p})^\top$. \subsection{1-dimensional signal extraction}\label{subsec:1dim} To illustrate how soft maximin estimation works, we reproduce and extend the numerical example from \cite{buhlmann2016}. We simu\-late signals with three components: i) a common signal of interest $f(x)=\cos(10 (2 \pi) x) + 1.5 \sin(5 (2 \pi ) x)$ superimposed with ii) periodic signals with randomly varying frequency and phase and iii) additive white noise.
In particular, we simulate $G=50$ signals where for each $g\in \{1,\ldots,50\}$ \begin{alignat*}{4} Y_{g,i}=f(x_i)+ 50 \sum_{j\in J_g} \varphi_j (x_i + p_g)+\varepsilon_{g,i}, \quad i = 1,\ldots,2001. \end{alignat*} Here $J_g$ is a set of $7$ integers sampled uniformly from $ \{1,\ldots,101\} $, $\varphi_j$ is the $j$th Fourier basis function, $p_g\sim \mathrm{unif}(-\pi,\pi)$, and $\varepsilon_{g,i}\sim \mathcal{N}(0,10)$. We simulate observations for each $x_i= 0,1,\ldots, 2000$. \begin{figure}[H] \begin{center} {\includegraphics[scale=0.4]{1dsimtwocol.pdf}} \caption{True signal in red. From top left we have the magging estimate, the soft maximin estimates for $\zeta=2000$, $200$, and $20$, the mean aggregated estimate and the mean signal, which is simply the average across groups. The MSE for the magging estimate is $1.301 \times 10^{-4}$ and $1.953 \times 10^{-4}$ for the soft maximin estimate ($\zeta=2000$).} \label{fig:1} \end{center} \end{figure} With $\Phi$ containing the 101 first Fourier basis functions evaluated at $x_i= 0,1,\ldots, 2000$ we solved an $\ell_1$ penalized soft maximin problem (see \eqref{eq13} below) for a sequence of penalty parameters and for $\zeta = 20$, $200$, and $2000$. In addition, we aggregated the groupwise OLS estimates, $\hat{\beta}_1, \ldots, \hat{\beta}_{50}$, using magging as proposed in \cite{buhlmann2016} as well as by mean aggregation. The mean signal across groups was also computed. Figure \ref{fig:1} shows the results of the different estimation procedures. Both the magging estimate and the soft maximin estimate for $\zeta = 2000$ extracted the true common signal quite well, while the mean aggregated estimate resembled the mean signal showing little similarity to the common signal. We note that for larger $\zeta$ soft maximin behaved similarly to magging, while for smaller $\zeta$ soft maximin resembled mean aggregation as expected. 
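The limiting behaviour of $\mathrm{lse}_\zeta$ underlying this interpolation is easy to check numerically; the following sketch uses the standard max-shift for numerical stability.

```python
import numpy as np

def lse(x, zeta):
    """Scaled log-sum-exp: a soft maximum of x for large positive zeta
    and a soft minimum for large negative zeta."""
    m = np.max(zeta * x)                       # shift for numerical stability
    return (m + np.log(np.sum(np.exp(zeta * x - m)))) / zeta

x = np.array([1.0, 2.0, 5.0])
# zeta -> -infinity: lse approaches min(x); zeta -> +infinity: max(x).
# For small |zeta|: lse(x, zeta) ~ mean(x) + log(G)/zeta, with G = len(x).
```

Numerically, `lse(x, -100)` is close to `min(x)`, `lse(x, 100)` is close to `max(x)`, and for `zeta` near zero the mean-plus-`log(G)/zeta` expansion holds.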
\section{Penalized soft maximin aggregation}\label{sec:three} Here we formulate a general penalized soft maximin \emph{aggregation} problem. Instead of $-\hat V$ defined in \eqref{eq9} we consider a general set of group loss functions $h\coloneqq (h_1,\ldots,h_G)$ and the soft maximin aggregation loss $ s_\zeta:\mathbb{R}^p\to\mathbb{R}$, given by \begin{alignat*}{4} s_\zeta(\beta):=\mathrm{lse}_\zeta\circ h(\beta) = \frac{\log(\sum_{g=1}^G e^{\zeta h_g(\beta)})}{\zeta}, \quad \zeta>0. \end{alignat*} We are then interested in obtaining the penalized soft maximin aggregation estimator defined as the solution to the problem \begin{alignat}{4}\label{eq13} \min_{\beta\in \mathbb{R}^p} s_\zeta(\beta) +\lambda J(\beta), \quad \zeta>0, \end{alignat} where $J$ is a proper convex function and $\lambda\geq0 $ is the penalty parameter. When $h = -\hat V$ as in Section \ref{sec:two}, we refer to $s_\zeta = l_\zeta$ as the soft maximin loss and to \eqref{eq13} as the penalized soft maximin problem. Thus the term \emph{aggregation} is used to emphasize that we are considering general group loss functions $h_1,\ldots,h_G$. Solving \eqref{eq13} in a large-scale setting requires an efficient optimization algorithm for non-differentiable problems. We note that when $h=-\hat V$, in contrast to the hard maximin problem from \cite{meinshausen2015}, \eqref{eq13} is a convex nondifferentiable and also separable problem (see \cite{tseng2009}), implying that the coordinate descent algorithm is viable for problem \eqref{eq13}. Here, however, since we are particularly interested in solving \eqref{eq13} for data with array-tensor structure, we are going to consider modified versions of the proximal gradient algorithm. As demonstrated in \cite{lund2017a} this algorithm is very well suited to handle this particular setup and can outperform the coordinate descent algorithm.
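For reference, the basic fixed-step proximal gradient iteration with an $\ell_1$ penalty (whose proximal operator is soft-thresholding) can be sketched as follows; this is an illustrative variant, not the NPG algorithm analyzed below nor the \verb+SMMA+ implementation.

```python
import numpy as np

def soft_threshold(b, t):
    """Proximal operator of t * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(b) * np.maximum(np.abs(b) - t, 0.0)

def prox_grad(grad, beta0, L, lam, iters=500):
    """Fixed-step proximal gradient iteration with step size 1/L,
    valid when `grad` is Lipschitz continuous with constant at most L."""
    beta = np.asarray(beta0, dtype=float).copy()
    for _ in range(iters):
        beta = soft_threshold(beta - grad(beta) / L, lam / L)
    return beta
```

As a sanity check, for the separable quadratic loss $\tfrac12\Vert\beta - y\Vert_2^2$ (gradient $\beta - y$, $L = 1$) the iteration reproduces the closed-form lasso solution $\mathrm{soft\_threshold}(y, \lambda)$.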
The proximal gradient algorithm fundamentally works by iteratively applying the proximal operator \begin{alignat}{4}\label{eq:4.6} \mathrm{prox}_{\delta J}(\beta) = \argmin_{ \gamma \in \mathbb{R}^p} \Big\{\frac{1}{2\delta}\Vert \gamma - \beta \Vert_{2}^2 + J(\gamma)\Big\},\quad \delta>0 \end{alignat} to gradient-based proposal steps. For loss functions whose gradient is Lipschitz continuous with constant $L$, such an algorithm is guaranteed to converge to the solution as long as $\delta \in (0,2/L)$. In practice, $\delta$ is chosen as large as possible, and we are interested in finding the smallest possible Lipschitz constant $L$. With known $L$ and fixed $\delta \in (0,2/L)$, a proximal gradient algorithm consists of the following essential computations: \begin{enumerate} \item\label{gradeval} evaluation of the gradient of the loss \item\label{proxeval} evaluation of the proximal operator $\mathrm{prox}_{\delta J}$ \item\label{objeval} evaluation of the loss function and penalty function. \end{enumerate} The computational complexity in steps \ref{gradeval} and \ref{objeval} is dominated by matrix-vector products (see e.g. \eqref{eq9} for the soft maximin problem). The complexity in step \ref{proxeval} is determined by $J$. As noted in \cite{beck2009}, when $J$ is separable (e.g. the $\ell_1$-norm), $\mathrm{prox}_{\delta J}$ can be computed analytically or at low cost. If $L$ is not known (or if $\delta \geq 2/L$ for a known, but perhaps conservative, $L$) we cannot guarantee convergence with a fixed choice of $\delta$, but adding a backtracking step will ensure convergence of the iterates. This extra step will increase the per-step computational complexity of the algorithm. When the gradient is not globally Lipschitz, it is no longer guaranteed that iterating steps \ref{gradeval}-\ref{objeval} will yield a solution to \eqref{eq13} for any fixed $\delta$.
However, it is possible to show that the NPG algorithm will converge to a solution of \eqref{eq13} under some regularity conditions. \begin{algorithm} \caption{NPG minimizing $F = f + \lambda J$} \label{alg:1} \begin{algorithmic}[1] \REQUIRE $\beta^0$, $L_{\max}\geq L_{\min}>0$, $\tau>1$, $c>0$, $M\geq 0$. \FOR{$k=0$ to $K\in \mathbb{N}$} \STATE\label{start} choose $L_k\in [L_{\min},L_{\max}]$ \STATE\label{prox} solve $\beta =\mathrm{prox}_{ \lambda J/L_k}(\beta ^{(k)}- \frac{1}{L_k}\nabla f (\beta ^{(k)}))$ \label{alg:1_3} \IF{ $F(\beta)\leq \max_{[k- M]_+\geq i\geq k} F(\beta^{(i)})-c/2\Vert \beta-\beta^{(k)}\Vert^2$} \label{alg:1_4} \STATE $\beta^{(k+1)} = \beta$ \ELSE \STATE $L_k = \tau L_k$ and go to \ref{prox} \ENDIF \ENDFOR \end{algorithmic} \end{algorithm} We show that $s_\zeta$ does not have a Lipschitz continuous gradient in general, but convergence of the NPG algorithm can be established under general conditions on the group loss functions $h_1,\ldots,h_G$. Furthermore, in the special case where $h_g = - \hat{V}_g$ with all groups sharing the same design we establish that $s_\zeta$ has a globally Lipschitz continuous gradient, and we find a bound on the Lipschitz constant. The first result states that $s_\zeta$ inherits strong convexity from an individual group loss function $h_g$ given all $h_1,\ldots,h_G$ are convex and twice continuously differentiable. The proof is given in the appendix. \begin{thm_prop} \label{prop:one} Assume $h_1,\ldots, h_G$ are twice continuously differentiable. 
Define $w_{g,\zeta}(\beta) := e^{\zeta h_g(\beta) - \zeta s_\zeta(\beta)}$. Then $\sum_gw_{g,\zeta}(\beta) =1$ for all $\beta \in \mathbb{R}^p$ and \begin{alignat}{4}\label{eq8new} \nabla s_\zeta(\beta)&=&&\sum_{g=1}^Gw_{g,\zeta}(\beta)\nabla h_g(\beta)\\ \nabla^2 s_\zeta(\beta) &=&& \sum_{i=1}^G\sum_{j = i + 1}^G w_{i,\zeta}(\beta)w_{j,\zeta}(\beta) (\nabla h_i(\beta)-\nabla h_{j}(\beta))(\nabla h_i(\beta)-\nabla h_{j}(\beta))^\top\nonumber\\ &&&+ \sum_{g=1}^G w_{g,\zeta}(\beta) \nabla^2 h_g(\beta). \label{eq10new} \end{alignat} Furthermore, if $h_1,\ldots, h_G$ are convex with at least one $h_g$ strongly convex, then $s_\zeta$ and $e^{\zeta s_\zeta}$ are strongly convex. \end{thm_prop} Proposition \ref{prop:one} applies to the soft maximin loss with $h_g =-\hat{V}_g$. In this case $\nabla^2 h_g = 2X^\top_gX_g / n_g$, and $h_g$ is strongly convex if and only if $X_g$ has rank $p$. Proposition \ref{prop:one} therefore implies that if one of the matrices $X_g$ has rank $p$, then $l_{\zeta}$ is strongly convex. However, we also see from Proposition \ref{prop:one} that $\nabla^2 s_\zeta(\beta)$ is not globally bounded in general, even for the soft maximin loss. Consider, for instance, the case with $G = 2$ and $p = n_1 = n_2 = 2$ with $$X_1 = \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) \quad \textrm{and} \quad X_2 = \left(\begin{array}{cc} 0 & 0 \\ \sqrt{2} & 0 \end{array}\right). $$ Take also $y_1=y_2=0$. When $\beta_1=\beta_2 = \kappa$ it holds that $h_1(\beta) = h_2(\beta) = \kappa^2$ and thus $w_{1,\zeta}=w_{2,\zeta}=1/2$ for any $\zeta$, while \begin{align*} (\nabla h_1(\beta)-\nabla h_{2}(\beta))(\nabla h_1(\beta)-\nabla h_{2}(\beta))^\top & \\ = \left(\begin{array}{cc} \beta_1^2 & -\beta_1\beta_2 \\ -\beta_1\beta_2 & \beta_2^2 \end{array}\right) & = \left(\begin{array}{cc} \kappa^2 & -\kappa^2 \\ -\kappa^2 & \kappa^2 \end{array}\right) \end{align*} is unbounded as $|\kappa|\to\infty$. 
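Numerically, the weights $w_{g,\zeta}$ form a softmax over the scaled group losses and are best computed with the log-sum-exp trick to avoid overflow for large $\zeta$. The Python sketch below is our own illustration; the identity $\sum_g w_{g,\zeta} = 1$ from Proposition \ref{prop:one} implies the form $s_\zeta(\beta) = \zeta^{-1}\log\sum_g e^{\zeta h_g(\beta)}$ used here, and the code also checks the elementary bounds $\max_g h_g \le s_\zeta \le \max_g h_g + \log(G)/\zeta$:

```python
import numpy as np

def soft_maximin_loss(h, zeta):
    """s_zeta = log(sum_g exp(zeta * h_g)) / zeta, via the stable log-sum-exp trick."""
    m = np.max(zeta * h)
    return (m + np.log(np.sum(np.exp(zeta * h - m)))) / zeta

def soft_weights(h, zeta):
    """w_{g,zeta} = exp(zeta * h_g - zeta * s_zeta): a softmax over the group losses."""
    e = np.exp(zeta * h - np.max(zeta * h))
    return e / e.sum()

h = np.array([0.4, 1.3, 0.9])   # group losses h_g(beta) at some fixed beta
for zeta in (0.5, 5.0, 500.0):
    s = soft_maximin_loss(h, zeta)
    w = soft_weights(h, zeta)
    assert abs(w.sum() - 1.0) < 1e-12                       # convex combination
    assert h.max() <= s <= h.max() + np.log(h.size) / zeta  # soft max bounds
```

The bounds show that $s_\zeta$ approximates $\max_g h_g$ to within $\log(G)/\zeta$, quantifying the quality of the smooth approximation.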
The following result shows, on the other hand, that for soft maximin estimation with identical $X_g$-matrices across the groups, $\nabla l_\zeta$ is, in fact, Lipschitz continuous. The proof is in the appendix. \begin{thm_cor}\label{coro:one} Let $h_g =-\hat{V}_g, g\in\{1,\ldots,G\}$, with identical $n\times p$ design matrix $X$ across all $G$ groups. Then $\Vert \nabla^2 l_\zeta(\beta) \Vert$ is bounded by \begin{alignat}{4} \label{eq11new} L\Big(\frac{2}{n}\sum_{i=1}^G\sum_{j = i + 1}^G w_{i,\zeta}(\beta)w_{j,\zeta}(\beta) \Vert y_i-y_j\Vert_2^2+ 1\Big)\leq L\Big(\frac{2}{n}\sum_{i=1}^G\sum_{j = i + 1}^G \Vert y_i-y_{j}\Vert_2^2+1\Big), \end{alignat} where $L := 2\Vert X^\top X\Vert/n$ is the Lipschitz constant of $\nabla h_g$, implying that $ l_\zeta$ has a Lipschitz continuous gradient. \end{thm_cor} By Corollary \ref{coro:one}, if we have identical design across groups, we can obtain the soft maximin estimator by applying the fast proximal gradient algorithm from \cite{beck2009} to the optimization problem \eqref{eq13}. Furthermore, in this setting the corollary also gives an explicit upper bound on the Lipschitz constant. When $L$, the Lipschitz constant of the gradient of the group loss, is computable, this provides a way to find an efficient step size. Finally, in the general setup the following proposition shows that the non-monotone proximal gradient (NPG) algorithm (see \cite{wright2009} and \cite{chen2016}), which does not rely on a global Lipschitz property, solves the problem \eqref{eq13} under the assumptions of Proposition \ref{prop:one}. The proof of the proposition is given in the appendix. \begin{thm_prop}\label{prop:two} Assume $h_1,\ldots, h_G$ satisfy the assumptions in Proposition \ref{prop:one}. Let $(\beta^{(k)})_k$ be a sequence of iterates obtained by applying the NPG algorithm to \eqref{eq13}. Then $\beta^{(k)}\to \beta^\ast$ where $\beta^\ast$ is a critical point of $s_\zeta+\lambda J$. \end{thm_prop} In summary, given strong convexity, e.g. 
satisfied in the maximin setup when one $X_g$ has full rank, we can always solve the problem \eqref{eq13} using a proximal gradient based algorithm. Furthermore, for soft maximin estimation with identical design across groups we can even apply a standard version of this algorithm. This is particularly convenient in the array tensor setup described next, where the bound \eqref{eq11new} is easy to compute. \subsection{Array tensor smoothing} \label{subsec:atsmooth} Consider the situation where the observations in \eqref{eq6} are made in a $d$-dimensional grid $G$ times. That is, for each $g\in \{1,\ldots,G\}$ we have samples from all points in a product set \begin{alignat}{4}\label{eq9new} \mathcal{X}_1\times\mathcal{X}_2\times\ldots \times \mathcal{X}_{d} \end{alignat} where $ \mathcal{X}_j=\{x_{j,1},\ldots, x_{j,n_j}\}\subset \mathbb{R}$ with $x_{j,k_j}<x_{j,k_j+1}$ for $k_j = 1,\ldots,n_j-1$. We may organize such a sample as a $d$-dimensional (response) array $\bs{Y}_g$. Preserving this array structure when formulating the smoothing model in Section \ref{sec:two} leads to an estimation problem with array-tensor structure. In particular, when considering the smoothing model \eqref{eq6} with array data, the tensor structure arises if we use tensor product basis functions. Letting $n=\prod_{j=1}^{d}n_j$ and $p=\prod_{j=1}^{d}p_j$, we can use the tensor product construction to specify the multivariate basis functions appearing in \eqref{eq7} in terms of $d$ univariate functions as \begin{alignat}{4}\label{eq14} \varphi_{m} = \varphi_{1,m_1} \varphi_{2,m_2}\cdots \varphi_{d,m_d}. \end{alignat} Here $\varphi_{j,m_j} : \mathbb{R} \to \mathbb{R}$ for $j = 1, \ldots, d$ and $m_j = 1, \ldots, p_j$ are marginal basis functions. Evaluating each of the $ p_j$ univariate functions at the $n_j$ points in $\mathcal{X}_j$ results in an $n_j\times p_j$ marginal design matrix $\Phi_j = (\varphi_{j, m_j}(x_{j,k_j}))_{k_j,m_j}$. 
It follows that the tensor (Kronecker) product of these marginal design matrices, \begin{alignat}{4}\label{eq15} \Phi = \Phi_{d}\otimes \cdots\otimes \Phi_2 \otimes \Phi_{1}, \end{alignat} is a design matrix for the $g$th group in \eqref{eq6}. Organizing the corresponding basis coefficients in a $p_1\times \cdots\times p_d$ array $\bs{\Theta}_g=(\Theta_{j_1,\ldots,j_d,g})_{j_1=1,\ldots,j_d=1}^{p_1,\ldots,p_d}$ and using the rotated $H$-transform $\rho$, see \cite{currie2006}, we can write the model \eqref{eq6} for the $g$th group as \begin{alignat}{4}\label{eq12} \bs{Y}_g=\rho(\Phi_{d},\rho(\Phi_{d-1},\ldots, \rho(\Phi_{1}, \bs{\Theta}_g))) + \bs{E}_g \end{alignat} where $\bs{E}_g$ is an $n_1\times n_2\times\cdots\times n_d$ array containing the error terms. As detailed in \cite{currie2006}, using $\rho$ the matrix-vector products needed when evaluating the gradient and the loss in steps \ref{gradeval} and \ref{objeval} above can be computed without access to the (large) matrix $\Phi$, and this computation is very efficient. Furthermore, because of the tensor structure in \eqref{eq15}, the constant $L$ from Corollary \ref{coro:one} is easy to compute, see (30) in \cite{lund2017a}. Thus the upper bound in the corollary is computable, which in turn implies that we can run the proximal gradient algorithm without any backtracking. Note, however, that the sum on the left-hand side of \eqref{eq11new} is potentially much smaller than the sum on the right, since the weights are convex. Thus an efficient implementation could, for example, scale down this sum and then monitor convergence. This type of step size optimization may also be used in the NPG algorithm to enhance performance. Following \cite{lund2017a} we have implemented both a fast proximal gradient algorithm and an NPG algorithm in a way that exploits the array-tensor structure described above. 
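To illustrate why the array-tensor structure makes both the matrix-vector products and the bound from Corollary \ref{coro:one} cheap to evaluate, the $d=2$ identity $(\Phi_2\otimes\Phi_1)\mathrm{vec}(\bs{\Theta}) = \mathrm{vec}(\Phi_1\bs{\Theta}\Phi_2^\top)$ and the factorization of the spectral norm can be checked numerically. The following Python sketch uses arbitrary random marginal matrices and is our own illustration, not the \verb+SMMA+ code:

```python
import numpy as np

rng = np.random.default_rng(1)
n1, p1, n2, p2 = 6, 4, 5, 3
Phi1 = rng.standard_normal((n1, p1))
Phi2 = rng.standard_normal((n2, p2))
Theta = rng.standard_normal((p1, p2))   # coefficient array, d = 2

# Full Kronecker design applied to vec(Theta); column-major vec matches Phi2 (x) Phi1
full = np.kron(Phi2, Phi1) @ Theta.flatten(order="F")

# The same product computed from the small marginal matrices only
structured = (Phi1 @ Theta @ Phi2.T).flatten(order="F")
assert np.allclose(full, structured)

# The spectral norm factorizes over the tensor components, so the
# Lipschitz bound is computable without ever forming the full design:
lhs = np.linalg.norm(np.kron(Phi2, Phi1).T @ np.kron(Phi2, Phi1), 2)
rhs = np.linalg.norm(Phi1.T @ Phi1, 2) * np.linalg.norm(Phi2.T @ Phi2, 2)
assert np.allclose(lhs, rhs)
```

The structured product costs $O(n_1 p_1 p_2 + n_1 n_2 p_2)$ flops instead of $O(n_1 n_2 p_1 p_2)$, which is the essence of the efficiency gain.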
These implementations are available for 1D, 2D, and 3D array data in the R package \verb+SMMA+. The result is a computationally efficient numerical procedure for solving the soft maximin problem \eqref{eq13} with a small memory footprint. \subsection{3-dimensional signal extraction}\label{subsec:3dim} To demonstrate soft maximin estimation in a multi-dimensional setting we simulated $G = 50$ groups of 3-dimensional signals. The signals were generated in a way similar to the 1-dimensional example from Section \ref{subsec:1dim} and bear some resemblance to the neuronal activity imaging data. Specifically, we simulated signals with the common signal $f(x,y,t)=\varphi_{12.5,4}(x)\varphi_{12.5,4}(y)\varphi_{50,25}(t)$ ($\varphi_{\mu,\sigma^2}$ denotes the density of the $\mathcal{N}(\mu, \sigma^2)$ distribution) that we want to extract. This signal was superimposed with random cyclic components and white noise. The 4-dimensional raw data array was generated as \begin{alignat*}{4} Y_{i,j,k,g}&=f(x_i,y_j,t_k)\\ &+5 \sum_{l\in J_g} \varphi_l (x_i + p_g)\varphi_l (y_j + p_g)\varphi_l (t_k + p_g)+\epsilon_{i,j,k,g} \end{alignat*} with all components and quantities but $f$ as in Section \ref{subsec:1dim}, and with $x_i=1,2,\ldots,25$, $y_j=1,2,\ldots,25$ and $t_k=1,2,\ldots,101$. We note that, compared to the 1-dimensional example, the common signal is spatially as well as temporally localized. \begin{figure} \centering \includegraphics[scale=0.65]{3dsimdat.pdf} \caption{Three examples of 3D simulated signals at time $t_k=50$. The common signal is not visible.} \label{fig:two} \end{figure} Figure \ref{fig:two} shows the simulated signals for three different groups at time $t_k=50$, where $f$ attains its maximum. The common signal is visually undetectable in the individual signals. However, systematic fluctuations caused by the spatial part of the periodic random signal are visible and can be seen to differ between groups. 
To extract the common signal we used the array-tensor formulation from Section \ref{subsec:atsmooth} of the smoothing model from Section \ref{subsec:2.1}. Using B-splines as basis functions in each dimension, we obtained an array model with tensor design components $\Phi^x$, $\Phi^y$, and $\Phi^t$ given by the B-spline basis function evaluations. We solved the soft maximin problem \eqref{eq13} with the $\ell_1$-norm penalty and $\zeta =100$. \begin{figure} \begin{center} \includegraphics[scale=0.35]{CV3dsim.pdf} \caption{Generalization error for soft maximin. The dashed line indicates the minimum.} \label{fig:3new} \end{center} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{3dsimtemp.pdf} \caption{Temporal plots for $(x,y)=(12,12)$ with the true signal in red. Soft maximin estimate, model no. 7 and $\zeta=100$ (top left), magging estimate (top right), mean aggregated estimate (bottom left) and mean over trials (bottom right). The soft maximin MSE is $5.5 \times 10^{-4}$ and the magging MSE is $2.8 \times 10^{-3}$.} \label{fig:3} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{3dsimspa.pdf} \caption{Spatial plots for $t_k=50$. True signal (top left), soft maximin estimate (model no. 7), $\zeta=100$ (top right), magging estimate (bottom left), mean aggregated estimate (bottom right). } \label{fig:4} \end{figure} To obtain the magging estimates we also solved an $\ell_1$-norm penalized least squares estimation problem for each group, with the same design components and the same sequence of 10 penalty parameters as for the soft maximin problem, using the R package \verb+glamlasso+ \cite{lund2018a}. Given the $G$ estimates we aggregated them as described in \cite{buhlmann2016}. We note that the time to compute the soft maximin estimate was around 30 seconds, while it took around 140 seconds to compute the magging estimate. For the magging estimate the bulk of the computational time was spent estimating the group parameters. 
Finally, we computed the mean aggregated estimate across groups as well as the mean signal. To select the penalty parameter we performed the following variation of 10-fold cross-validation. In each fold we left out all observations in a randomly selected $5\times 5\times 101$ block and fitted the model on the remaining data for each of the 10 penalty values $\lambda_1,\ldots,\lambda_{10}$ from the original fit. We did this 10 times and then computed the average (over folds) soft maximin loss on the held-out observations for each $\lambda_m$. The result is shown in Figure \ref{fig:3new}. Figure \ref{fig:3} shows the resulting estimate along the temporal dimension for one spatial coordinate. Soft maximin (for the optimal model no. 7) with $\zeta=100$ was able to extract the common signal quite well. The magging estimate (likewise using model no. 7 for each group) also extracted the common signal, but with some additional fluctuations giving the estimate more variability. The mean aggregated estimate (model no. 7) was not able to clearly extract the common signal but rather extracted some spurious periodic fluctuations. Finally, the mean signal across the groups does not reveal the common signal at all. Figure \ref{fig:4} shows the same results but plotted in the two spatial dimensions for the single time point $t_k=50$. The figure confirms the findings from Figure \ref{fig:3}. \section{Brain imaging data}\label{sec:four} The neuronal activity recordings were obtained using voltage-sensitive dye imaging (VSDI) in an experiment previously described in \cite{roland2006}. The experiment consisted of a total of $G=275$ trials (groups) of recordings on 13 different ferrets. Each recording is a movie representing neuronal activity, which we have mapped into a 3-dimensional array for our analysis. In short, the experimental setup was as follows. Part of the visual cortex of a live ferret was exposed and stained with a voltage-sensitive dye. 
Changes in neuron cell membrane potentials affect the absorption and emission fluorescence of the dye, so neuronal activity can be recorded indirectly in terms of emitted fluorescent light. The recording used 464 channels organized in a two-dimensional (hexagonal) array producing images of \textit{in vivo} neuronal activity. In each trial a visual stimulus (a white square on a grey screen) was presented to the live ferret for 250 ms. Over the course of the trial, images were recorded every $0.6136$ ms, producing a movie of neuronal activity. For the purpose of our analysis, the 464 channels were mapped to a $25\times25$ array, yielding an image with $625$ pixels. Note that data for 161 pixels are then unobserved. Several sources of heterogeneity are potentially present in the data. We list some here. \begin{enumerate} \item\label{list:iv} The heart beat affects the light emission by expanding the blood vessels in the brain, creating a cyclic heart-rate-dependent artefact. A changing heart rate over trials for one animal (fatigue) as well as differences in heart rate between animals will cause heterogeneity in the data. \item\label{list:ii} Spatial inhomogeneities can arise due to differences in the cytoarchitectural borders between the animals, causing misalignment problems. \item\label{list:iii} The VSDI technique is very sensitive, see \cite{grinwald2002}. Even small changes in the experimental surroundings could affect the recordings and create heterogeneity. \item\label{list:v} There are differences between animals in how they respond to the visual stimulus. \end{enumerate} To alleviate the heart rate artefact, the raw VSDI recordings were preprocessed as follows. Two consecutive recordings were made in each trial; one with a visual stimulus and one without stimulus. 
These recordings were temporally aligned using electrocardiography (ECG) data, and the difference between the two aligned recordings was computed and normalized with the pixel-specific pre-stimulus standard deviation. We refer to the result as the preprocessed recordings. \begin{figure} \begin{center} \includegraphics[scale=0.5]{dattemptwocol.pdf} \caption{Temporal evolution in the raw (left) and preprocessed (right) VSDI recording from pixel $(14, 14)$ for trials 30 (top), 40 (middle), and 50 (bottom). Vertical lines indicate stimulus start (200 ms) and stop (450 ms).} \label{fig:5} \end{center} \end{figure} Figures \ref{fig:5} and \ref{fig:6} show examples of the raw recordings as well as the preprocessed recordings for three trials. Figure \ref{fig:5} shows the recordings in the temporal dimension for one pixel, while Figure \ref{fig:6} shows the recordings in the spatial dimension around the time of an expected maximal stimulus response. Following the onset of the visual stimulus (200 ms), the recordings are expected to show the result of a depolarization of neuron cells in the visual cortex, but we do not observe a clear stimulus response for all trials. While trial 40 shows clear evidence of depolarization, the other two trials do not. Visual inspection of Figure \ref{fig:5} also indicates the presence of systematic noise components, that is, artefacts as described in item \ref{list:iv} of the list above, which are most pronounced for the raw recordings. \begin{figure} \begin{center} \includegraphics[scale=0.5]{datspatwocol.pdf} \caption{The raw recordings (left) and the preprocessed recordings (right) for three different trials around the time of an expected maximal response. Trial 40 shows the strongest response to the stimulus whereas the other two trials show less response. 
The response is strongest in the preprocessed data.} \label{fig:6} \end{center} \end{figure} \subsection{Model fitting} For both the raw and the preprocessed recordings we extracted a common signal across trials and animals by soft maximin estimation, which we compared to mean aggregation and magging of the OLS estimates. The data consist of 275 spatio-temporal recordings, each with dimensions $25\times 25 \times 977$, that is, 625 pixels recorded over 977 time points (600 ms). We used 10 B-splines in each spatial dimension and 196 B-splines in the temporal dimension to obtain a linear array model with tensor design components $\Phi^x$, $\Phi^y$, and $\Phi^t$, as described in Section \ref{subsec:atsmooth}, given by the B-splines evaluated over the marginal domains. The resulting model has a total of $p = $ 19,600 parameters. The soft maximin problem \eqref{eq13} was solved for the entire data set using the $\ell_1$-penalty for 10 values of the penalty parameter $\lambda$ and $\zeta = 2$ and $\zeta = 100$, while the magging estimate was obtained by computing the OLS estimate for each trial and then applying maximin aggregation. The mean aggregated fit was computed likewise. All estimates were computed for the raw as well as for the preprocessed recordings. We note that computing the 10 soft maximin estimates took around 60 seconds (110 seconds) for the raw (preprocessed) recordings. The computation of one magging estimate took around 100 seconds (110 seconds) for the raw (preprocessed) recordings. All computations were carried out on a Macbook Pro with a 2.8 GHz Intel core i7 processor and 16 GB of 1600 MHz DDR3 memory. Movies of the estimates for both raw and preprocessed recordings are available as supplementary material. To choose the optimal penalty parameter we randomly excluded two $5\times 5 \times 977$ blocks of data for all trials and fitted the model on the remaining data using the 10 penalty values $\lambda_1,\ldots,\lambda_{10}$ from the original fit. 
The soft maximin loss was then computed on the excluded data blocks for each value of the \begin{figure} \begin{center} \includegraphics[scale = 0.5]{CVdattwocol.pdf} \caption{Validation estimates of the soft maximin loss with $\zeta=2$, applied to the raw recordings (left) and preprocessed recordings (right). Dashed lines indicate the minimum average soft maximin loss on held-out observations.} \label{fig:7} \end{center} \end{figure} \begin{figure}[H] \centering \includegraphics[scale = 0.5]{Stemptwocol.pdf} \caption{Temporal estimates for two different pixels using mean aggregation (black), soft maximin for $\zeta=2$ (red) and $\zeta=100$ (green), and magging (blue). For the raw recordings (top) model 8 was selected in the validation step, while for the preprocessed recordings (bottom) model 7 was selected. Vertical lines indicate stimulus start and stop.} \label{fig:8} \end{figure} \noindent penalty parameter. The entire procedure was repeated ten times, the average loss was computed, and the penalty parameter with the minimal average loss was selected. This resulted in model number 8 for the raw recordings and model number 7 for the preprocessed recordings, see Figure \ref{fig:7}. Figure \ref{fig:8} shows the soft maximin (model 8), mean aggregation and magging estimates in the temporal dimension for pixels $(14, 14)$ and $(10, 20)$. Mean aggregation and soft maximin estimation extract fairly clear signals for both the raw and preprocessed recordings, and a clear on-signal (stimulus start) and off-signal (stimulus stop) are picked up for these pixels. Soft maximin gives some smoothing but also some shrinkage compared to mean aggregation. The magging estimator extracts mostly noise for the preprocessed data, while showing a weak signal for pixel $(14, 14)$ for the raw recordings. We note that for the raw recordings both estimates display some variation, which is possibly periodic. 
In particular, for pixel $(10,20)$ a notable polarization before the stimulus is presented is picked up. This could be due to the heart rate artefact. \begin{figure} \centering \includegraphics[scale = 0.35]{Ssparaw.pdf} \caption{Spatial estimates at six different time points using the raw recordings and mean aggregation (col. 1), soft maximin for model no. 8 and $\zeta=2$ (col. 2) and $\zeta=100$ (col. 3), and magging (col. 4).} \label{fig:10} \end{figure} \begin{figure} \centering \includegraphics[scale = 0.35]{Sspapre.pdf} \caption{Spatial estimates at six different time points for the preprocessed recordings using mean aggregation (col. 1), soft maximin (model no. 8) with $\zeta=2$ (col. 2), with $\zeta=100$ (col. 3), and magging (col. 4).} \label{fig:11} \end{figure} Figures \ref{fig:10} and \ref{fig:11} show soft maximin, mean aggregation and magging estimates in the spatial dimensions for six different time points. For the preprocessed recordings, mean aggregation resulted in a signal with a clear stimulus response. Soft maximin provided a similar result with greater spatial localization but also shrinkage of the signal magnitude. The more compactly supported spatial area identified by soft maximin corresponds to the image representation of the center of the field of view. For the raw data, mean aggregation resulted in some spurious spatial fluctuations that were smoothed away by soft maximin. Magging was not able to extract a signal from either the raw or the preprocessed recordings. \section{Discussion}\label{sec:five} The maximin estimator with the $\ell_1$-penalty, as defined in \cite{meinshausen2015}, solves the minimization problem \begin{equation} \label{eq:maximin} \min_{\beta} \max_g\{ - \hat{V}_g(\beta) \} + \lambda \| \beta \|_1. 
\end{equation} Though the objective function is convex, it is nondifferentiable as well as nonseparable, and, contrary to the claim in Section 4 of \cite{meinshausen2015}, coordinate descent will not always solve \eqref{eq:maximin}. Two approximate approaches for solving \eqref{eq:maximin} were suggested in \cite{meinshausen2015}, the first consisting of a proposed smooth approximation of the term $\max_g \{ - \hat{V}_g(\beta)\}$. However, we did not find this approximation to work in practice, and we developed the soft maximin loss as a better alternative. We note that the solution path of \eqref{eq:maximin} is piecewise linear in $\lambda$, and it may thus be computed using a method like LARS, see \cite{roll2008}. A LARS-type algorithm, or coordinate descent applied to a smooth majorant such as the soft maximin loss, was also proposed to us by Meinshausen (personal communication) as a better alternative to those suggested in \cite{meinshausen2015}. In our experience, the LARS-type algorithm scales poorly with the size of the problem, and neither LARS nor coordinate descent can exploit the array-tensor structure. Magging, proposed in \cite{buhlmann2016} as yet another alternative to \eqref{eq:maximin} for estimation of maximin effects, is computationally straightforward and easy to parallelize, but, as we demonstrated, it is not necessarily faster than soft maximin aggregation. By the definition of the soft maximin loss, the parameter $\zeta$ controls the tradeoff in the estimation between groups with large explained variance and groups with small explained variance. The gradient representation \eqref{eq8new} shows explicitly how this tradeoff works in the NPG algorithm: the gradient of the soft maximin loss is a convex combination of the gradients of the groupwise squared error loss functions, with weights controlled by $\zeta$. 
The largest weights are on the groups with the smallest explained variances, and as $\zeta \to \infty$ the weights concentrate on the groups with minimal explained variance. Thus our proposed algorithm and its implementation in the R package \verb+SMMA+ provide a means for approximately minimizing \eqref{eq:maximin} and are, as such, an alternative to magging as an estimator of the maximin effect. More importantly, by introducing the tuning parameter $\zeta$ in the soft maximin loss we achieved not only an approximate solution of \eqref{eq:maximin} but also an interpolation between max aggregation and mean aggregation across groups. We have demonstrated via simulations and the application to VSDI recordings how soft maximin is able to extract a signal in the context of multivariate array data and how the choice of the tuning parameter $\zeta$ affects the extracted signal. The simulations showed that magging as well as soft maximin estimation can extract a signal even in the presence of large heterogeneous noise components, but for the VSDI recordings, magging was not successful. We expect that soft maximin aggregation will be practically useful in a number of different contexts as a way of aggregating explained variances across groups, in particular because it down-weights groups with a large explained variance, which might simply be outliers, while not going to the extreme of the maximin effect, which can kill the signal completely, as in the example of the VSDI recordings.
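The interpolation property can be seen directly from the weights $w_{g,\zeta}$: for small $\zeta$ they are nearly uniform, so the gradient behaves as under mean aggregation, while for large $\zeta$ they concentrate on the worst group, approaching max aggregation. A small numerical illustration in Python (with made-up group losses $h_g$, not data from the paper):

```python
import numpy as np

def soft_weights(h, zeta):
    """Softmax weights w_{g,zeta} over the group losses, computed stably."""
    e = np.exp(zeta * h - np.max(zeta * h))
    return e / e.sum()

h = np.array([0.3, 1.1, 0.9, 1.0])   # hypothetical group losses h_g = -V_g

w_small = soft_weights(h, 1e-6)      # near-uniform: ~ mean aggregation
w_large = soft_weights(h, 1e3)       # concentrated on argmax h: ~ max aggregation
assert np.allclose(w_small, 0.25, atol=1e-3)
assert w_large.argmax() == h.argmax() and w_large.max() > 0.999
```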
\section{Introduction: Kaluza-Klein theory}\label{sec1} The possibility that the universe is embedded in a $(4+d)$-dimensional world has gained the attention of many researchers. In the Randall-Sundrum theory \cite{RS}, matter and fields are restricted to a four-dimensional spacetime known as the brane, which is embedded in a five-dimensional spacetime (the bulk). In Space-Time-Matter (STM) theory \cite{stm}, all physical quantities, such as matter density and pressure, gain a geometrical interpretation. Among these higher-dimensional theories, the original Kaluza-Klein theory unifies gravity and electromagnetism \cite{1}, assuming that the fifth dimension is compact \cite{5}. Kaluza\rq{}s idea was that the universe has four spatial dimensions, with the extra dimension compactified to form a circle so small as to be unobservable \cite{17}. Klein\rq{}s contribution was to provide a reasonable physical basis for the compactification of the fifth dimension \cite{13}, \cite{18}. This school of thought later led to the eleven-dimensional supergravity theories of the 1980s and to the \lq\lq{}theory of everything\rq\rq{}, ten-dimensional superstring theory \cite{14}. With this unification, the vacuum $(4+1)$-dimensional Kaluza-Klein solutions, that is, $\hat G_{AB}=0$ (the indices $A,B,\ldots$ run over $0,\ldots,4$), reduce to the $(3+1)$-dimensional Einstein field equations with effective matter, and the curvature of the $(4+1)$ spacetime induces matter in the $(3+1)$-dimensional spacetime \cite{16}. In the context of Kaluza-Klein theory, the Einstein tensor has the usual definition $\hat G_{AB} \equiv \hat R_{AB}-1/2\hat R \hat g_{AB}$, where $\hat R_{AB}$ and $\hat R = \hat g_{AB} \hat R^{AB}$ are the five-dimensional Ricci tensor and scalar, respectively, and $\hat g_{AB}$ is the metric tensor in five dimensions \cite{14}. 
The $\mu \nu$ part of $\hat g^{AB}$, namely $g^{\mu \nu}$, is the contravariant four-dimensional metric tensor, while the electromagnetic potential and the scalar field are given by $A^{\mu}$ and $\phi$, respectively. The general correspondence between these components is given by \begin{equation}\label{eq1} \hat g^{AB}= \left( \begin{array}{cc} g^{\mu \nu} & -\kappa A^{\mu} \\ -\kappa A^{\nu} & \kappa^2 A^{\sigma}A_{\sigma}+\phi^2 \end{array} \right), \end{equation} where $\kappa$ is a coupling constant for the electromagnetic potential $A^{\mu}$ \cite{8},\cite{Y}. Many spherically symmetric solutions of Kaluza-Klein type are investigated in \cite{2} and \cite{7}. Among these solutions, the Gross-Perry-Sorkin (GPS) spacetime \cite{5},\cite{26} is an exact vacuum solution of the Einstein field equations in five-dimensional gravity; it is stationary and has no event horizon, and it represents a magnetic monopole, usually called the Kaluza-Klein monopole \cite{G},\cite{j}. As is well known, the theory of the magnetic monopole was formulated by Dirac in 1931 \cite{Dirac}. He showed that electric charge quantization can be explained by the existence of a magnetic monopole. In addition to their magnetic charge, monopoles are characterized by their peculiar topology: they carry one unit of Euler character, and consequently one can construct stationary dipole solutions from them \cite{M}. The Kaluza-Klein monopole plays an important role in M/string theory. As an example, a Ricci-flat eleven-dimensional Lorentzian metric can be obtained from the Kaluza-Klein monopole metric times six flat Euclidean dimensions, which, when reduced to ten-dimensional spacetime, can be interpreted as a D$6$-brane solution of type IIA string theory \cite{G}. In this paper, we consider a vacuum solution of Kaluza-Klein theory in five-dimensional spacetime which is closely related to the Taub-NUT and GPS metrics. 
The Taub-NUT solution has many interesting features; it carries a particular type of charge (the NUT charge), which has topological origins and can be regarded as a \lq\lq{}gravitational magnetic charge\rq\rq{} \cite{ortin}, \cite{UK}. We boost the magnetic monopole along the fifth dimension and investigate its properties in the four-dimensional spacetime by using the Kaluza-Klein reduction. The monopole turns into a dyon connected to a magnetically charged string. The plan of this paper is as follows. In section \ref{3}, we review a Taub-NUT-like Kaluza-Klein solution and investigate its physical properties in four dimensions. In section \ref{4}, we study the boosted Kaluza-Klein magnetic monopole and explore its physical properties. In the last section we draw our main conclusions. \section{ Kaluza-Klein Magnetic Monopole and Taub-NUT Solution}\label{3} In this section, we first review the main features of the Kaluza-Klein monopole before the boost. The Kaluza-Klein monopole, also known as the Gross-Perry-Sorkin solution, is a generalization of the self-dual Euclidean Taub-NUT solution. The Taub-NUT solution was first discovered by Taub (1951), and subsequently by Newman, Tamburino and Unti (1963), as a generalization of the Schwarzschild spacetime \cite{21}, \cite{22}. This solution is a non-radiating, analytic extension of the Taub universe, the anisotropic but spatially homogeneous vacuum solution of the Einstein field equations with topology $R^1 \times S^3$. The Taub metric is given by \begin{equation} {\rm d}s^2=-\frac{1}{V(t)}{\rm d}t^2+4b^2V(t)({\rm d}\psi+\cos \theta {\rm d}\phi)^2+(t^2+b^2)({\rm d}\theta^2+\sin^2 \theta\,{\rm d}\phi^2), \end{equation} where $V(t)=-1+ 2(mt+b^2)(t^2+b^2)^{-1}$, $m$ and $b$ are positive constants, and $\psi, \phi, \theta$ are Euler angles with the usual ranges \cite{k}. The Taub-NUT solution nowadays appears in the context of higher-dimensional theories of semi-classical quantum gravity \cite{20}. 
As an example, in the work by Gross and Perry \cite{5} and Sorkin \cite{4}, soliton solutions were obtained by embedding the Taub-NUT gravitational instanton inside the five-dimensional Kaluza-Klein manifold \cite{21}. One such solution, which obeys the Dirac quantization condition, is considered in \cite{5}. The Kaluza-Klein monopole of Gross-Perry-Sorkin is represented by the following metric \cite{5} \begin{align}\label{eq4} {\rm d}s^2=- {\rm d}t^2+V( {\rm d}x^5+4m(1-\cos\theta) {\rm d}\phi)^2+\frac{1}{V}( {\rm d}r^2+r^2 {\rm d} \theta^2 +r^2 \sin ^2\theta {\rm d}\phi^2), \end{align} where \begin{align} \frac{1}{V}=&1+\frac{4m}{r}. \end{align} The Taub-NUT instanton is obtained by putting ${\rm d}t=0$. For this solution, the coordinate singularity is located at $r=0$ and is called the NUT singularity. It can be removed if the extra coordinate $x^5$ is periodic with period $16\pi m=2\pi R$, where $R$ is the radius of the fifth dimension. Thus $m=\sqrt{\pi G}/(2e)$ \cite{28}. The gauge field $A_{\nu}$ is given by $A_{\phi}=4m(1-\cos\theta)$, and the magnetic field is $B=4m\bold{r}/r^3$, which is clearly that of a monopole and has a Dirac string singularity extending from $r=0$ to $\infty$. The magnetic charge of this monopole is $g=m/\sqrt{\pi G}$, which is one unit of Dirac charge. In this model, the total magnetic flux is constant. For this solution, the soliton mass is determined to be $M^2=m_{p}^2/16\alpha$, where $m_{p}$ is the Planck mass and $\alpha$ is the fine-structure constant. In our previous work \cite{ssh}, we presented a metric which is a vacuum five-dimensional solution, having some properties in common with the monopole of Gross-Perry-Sorkin, despite some differences. In this part, we briefly review the results obtained there (see \cite{ssh}), and will then extend them in the coming sections. 
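A short consistency check, included for the reader's convenience (this is the standard geometric form of the quantization argument; overall sign conventions may differ by orientation): the magnetic flux of the potential $A_{\phi}=4m(1-\cos\theta)$ through a sphere equals the period of $x^{5}$,

```latex
\Phi \;=\; \oint_{S^{2}} F
\;=\; \int_{0}^{2\pi}\!\!\int_{0}^{\pi} \partial_{\theta}A_{\phi}\,
{\rm d}\theta\,{\rm d}\phi
\;=\; 2\pi\bigl[A_{\phi}\bigr]_{\theta=0}^{\theta=\pi}
\;=\; 16\pi m\,,
```

so the Dirac string can be removed by a single-valued shift of the periodic coordinate $x^{5}$; this is why the period $16\pi m$ also enforces the Dirac quantization condition mentioned above.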
The metric is given by \begin{align}\label{55} {\rm d}s^2_{(5)}= - {\rm d}t^2+(1-\frac{2m}{r})\left( {\rm d}r^2+r^2 {\rm d}\theta^2 +r^2\sin^2 \theta {\rm d}\phi^2\right)+ \left(\frac{4m^2}{1-\frac{2m}{r}}\right)\left( {\rm d}\psi + \cos\theta {\rm d}\phi\right)^2, \end{align} where the extra coordinate is represented by $\psi$ \footnote{Note that not only the sign of $m$ is taken differently from (\ref{eq4}), but also the structure of the metric is different, leading to some essentially different results.}. The coordinates take on the usual ranges $r\geq0$, $0\leq\theta\leq\pi$, $0\leq \phi\leq 2\pi$ and $0\leq \psi \leq 2\pi$. It should be noted that the metric (\ref{55}) can be obtained from (\ref{eq4}) by replacing ${\rm d}x^5=2m\left({\rm d}\psi + {\rm d}\phi\right)$ and $m \rightarrow -m/2$. Note that for negative $m$ we still have a vacuum solution; in the next section, we will consider this case for some of our results. The Killing vectors associated with metric (\ref{55}) are given by \begin{align} &K_{0}=(1,0,0,0,0),\quad K_{1}=(0,0,0,0,1), \quad K_{2}=(0,0,0,1,0),\nonumber\\ &K_{3}=(0,0, -\sin\phi, -\cot\theta \cos \phi, \csc \theta \cos\phi),\nonumber\\ &K_{4}=(0,0, \cos\phi, -\cot\theta \sin\phi, \csc \theta \sin\phi), \end{align} which are the same as in the Taub-NUT metric discussed in \cite{25}, where the authors studied spinning particles in the Taub-NUT space. The gauge field $A_{\mu}$ and the scalar field $\phi$ deduced from the metric (\ref{55}) with the help of (\ref{eq1}) are $A_{\phi}=\cos\theta/\kappa$ and $ \phi^2=4m^2/(1-\frac{2m}{r})$, respectively. Moreover, the electromagnetic tensor is $F_{\theta \phi}=-F_{\phi \theta}= -\sin \theta/\kappa$, which corresponds to a radial magnetic field $B_{r}=1/\kappa r^2$ with a magnetic charge $Q_{M}=1/\kappa$. 
The total magnetic flux through any spherical surface centered at the origin can be calculated via \cite{26}, leading to the result $\Phi_{B}=2\pi/\kappa$, which is a constant (i.e. we have a point-like magnetic charge). The four-dimensional metric deduced from (\ref{55}) with the use of (\ref{eq1}) leads to the following asymptotically flat spacetime: \begin{align}\label{26} {\rm d}s^2_{\left(4\right)}=- {\rm d}t^2+\left(1-\frac{2m}{r}\right) {\rm d}r^2+r^2\left(1-\frac{2m}{r}\right)\left( {\rm d}\theta^2 +\sin ^2\theta {\rm d}\phi^2\right). \end{align} The four-dimensional metric (\ref{26}) has two curvature singularities, at $r=0$ and $r=2m$, unless $m<0$. If we calculate the surface area of an $S^2$ hypersurface at constant $t$ and $r$, we see that the surface area $A\left(r\right)$ becomes zero at $r=0$ as well as at $r=2m$. This means that the $r=2m$ hypersurface is in fact a point (i.e. a sphere with zero surface area). For $r>2m$ the signature of the metric is the proper $(-,+,+,+)$, but in the range $0<r<2m$ the signature of the metric is improper and non-Lorentzian, $(-,-,-,-)$; thus the patch $r<2m$ is excluded from the physical spacetime. Therefore, this spacetime is considered only in the range $r\geq2m$. Since the range $0<r<2m$ is removed from the spacetime, there remains only one curvature singularity, at $r=2m$. By computing the components of the energy-momentum tensor for the metric (\ref{26}), one can show that the effective matter field around the singularity cannot be considered as an ultra-relativistic quantum field (or radiation), in contrast to the Kaluza-Klein solitons described in \cite{23}. On the other hand, the gravitational mass was derived in two ways and shown to vanish ($M_{g}=0$). It turns out that the Kaluza-Klein monopole in isotropic coordinates takes the asymptotic form \begin{equation}\label{35} {\rm d}s^2=-(1+\frac{2m}{r})^{-1/2} {\rm d}t^2+(1+\frac{2m}{r})^{1/2}( {\rm d}r^2+r^2 {\rm d}\Omega^2). 
\end{equation} If in the process of compactification from $(4+1)$ to $(3+1)$ dimensions we use the ansatz \begin{equation} \hat G_{AB}= \phi^{\beta}\left( \begin{array}{cc} g_{\mu \nu}+\phi A_{\mu}A_{\nu} & \phi A_{\mu} \\ \phi A_{\nu} & \phi \end{array} \right), \end{equation} then the choice of $\beta$ for which $\phi$ does not appear explicitly is called the Einstein frame. Using the above equation leads to the four-dimensional metric \begin{align}\label{37} {\rm d}s^2_{\left(4\right)}=\phi^{-\beta}\left[- {\rm d}t^2+\left(1-\frac{2m}{r}\right) {\rm d}r^2+r^2\left(1-\frac{2m}{r}\right) {\rm d}\Omega^2\right]. \end{align} By choosing $\phi^{-\beta}=(1-\frac{2m}{r})^{-\frac{1}{2}}$, equation (\ref{37}) reduces to \begin{align}\label{38} {\rm d}s^2=-(1-\frac{2m}{r})^{-1/2} {\rm d}t^2+(1-\frac{2m}{r})^{1/2}( {\rm d}r^2+r^2 {\rm d}\Omega^2), \end{align} which is the same as (\ref{35}) if we replace $m$ by $-m$. \section{The Boosted Kaluza-Klein Magnetic Monopole}\label{4} In this section, we apply a boost to the Kaluza-Klein magnetic monopole, which satisfies the vacuum Einstein field equations. The proposed boost is along the extra dimension $\psi$, with boost parameter $\alpha$. We consider metric (\ref{55}) with the coordinates renamed $(t^{\prime}, r^{\prime},\theta ^{\prime}, \phi ^{\prime}, \psi ^{\prime})$, and define the boosted coordinates as $(t, r, \theta, \phi, \psi)$. 
Then we apply the following transformations \begin{eqnarray} t^{\prime}&= &t\cosh \alpha -\psi \sinh \alpha~, \\ \psi ^{\prime}&= &\psi \cosh \alpha-t \sinh \alpha. \end{eqnarray} With these transformations, the metric becomes \begin{align}\label{14} {\rm d}s^2&=-\left(\cosh ^2\alpha -\frac{4 m^2 \sinh ^2\alpha}{1-\frac{2 m}{r}}\right){\rm d}t^2+(1-\frac{2m}{r})\left({\rm d}r^2+r^2{\rm d}\theta^2\right) \nonumber\\&+\left( \frac{4 m^2 \cos ^2\theta}{1-\frac{2 m}{r}}+r^2 (1-\frac{2 m}{r}) \sin ^2\theta \right){\rm d}\phi^2+\left(-\sinh ^2\alpha +\frac{4 m^2 \cosh ^2\alpha}{1-\frac{2 m}{r}}\right){\rm d}\psi^2 \nonumber\\&+\sinh 2\alpha\left(1-\frac{4 m^2 }{1-\frac{2 m}{r}}\right){\rm d}t{\rm d}\psi+ \frac{8 m^2 }{1-\frac{2 m}{r}}\cos \theta \cosh \alpha {\rm d}\phi {\rm d}\psi-\frac{8m^2}{1-\frac{2m}{r}}\cos \theta \sinh \alpha{\rm d}\phi {\rm d}t. \end{align} This is no longer a static solution, because of the ${\rm d}\phi {\rm d}t$ term. The metric, however, remains stationary (i.e. $\partial {g_{AB}}/\partial {t}=0$). It should be stressed that for obtaining the main results of the present paper, which appear after the boost, it is not essential to choose a particular sign for $m$ (we will consider both possibilities in what follows). Let us rewrite (\ref{14}) with $m$ replaced by $-m$ (the negative-$m$ case) in the form \begin{align}\label{15} {\rm d}s^2&=-\left(\cosh ^2\alpha -\frac{4 m^2 \sinh ^2\alpha}{1+\frac{2 m}{r}}\right){\rm d}t^2+(1+\frac{2m}{r})\left({\rm d}r^2+r^2{\rm d}\theta^2\right) \nonumber\\&+\left( \frac{4 m^2 \cos ^2\theta}{1+\frac{2 m}{r}}+r^2 (1+\frac{2 m}{r}) \sin ^2\theta \right){\rm d}\phi^2+\left(-\sinh ^2\alpha +\frac{4 m^2 \cosh ^2\alpha}{1+\frac{2 m}{r}}\right){\rm d}\psi^2 \nonumber\\&+\sinh 2\alpha\left(1-\frac{4 m^2 }{1+\frac{2 m}{r}}\right){\rm d}t{\rm d}\psi+ \frac{8 m^2 }{1+\frac{2 m}{r}}\cos \theta \cosh \alpha {\rm d}\phi {\rm d}\psi-\frac{8m^2}{1+\frac{2m}{r}}\cos \theta \sinh \alpha{\rm d}\phi {\rm d}t. 
\end{align} The transformed scalar and gauge fields from (\ref{14}) are \begin{equation} \phi^2=\frac{4 m^2 \cosh ^2\alpha }{1-\frac{2 m}{r}}-\sinh ^2\alpha, \end{equation} \begin{equation} A_t=-\frac{\left(4 m^2 r+2 m-r\right) \sinh 2 \alpha}{2 \kappa \left(4 m^2 r \cosh ^2\alpha +(2 m-r) \sinh ^2\alpha\right)}, \end{equation} and \begin{equation} A_{\phi}=\frac{4 m^2 r \cosh \alpha \cos \theta}{\kappa \left(4 m^2 r \cosh ^2\alpha+(2 m-r) \sinh ^2\alpha\right)}, \end{equation} which lead to the following electromagnetic field components \begin{equation} F_{r\phi}=\frac{8 m^3 \sinh ^2\alpha \cosh \alpha \cos \theta }{\kappa \left(4 m^2 r \cosh ^2\alpha+(2 m-r) \sinh ^2\alpha\right)^2}=-r\sin \theta B_{\theta}, \end{equation} \begin{equation}\label{43} F_{rt}=\frac{4 m^3 \sinh 2 \alpha}{\kappa \left(4 m^2 r \cosh ^2\alpha+(2 m-r) \sinh ^2\alpha \right)^2}=E_r, \end{equation} \begin{equation}\label{br} F_{\theta \phi}=-\frac{4 m^2 r \cosh \alpha \sin \theta}{\kappa \left(4 m^2 r \cosh ^2\alpha+(2 m-r) \sinh ^2\alpha\right)}=r^2\sin \theta B_r. \end{equation} It can be seen that by setting $\sinh \alpha=0$, the boosted solution will reduce to the previous metric (\ref{55}). The results just obtained are valid for both signs of $m$. If we perform a Kaluza-Klein reduction, the four dimensional spacetime for the boosted metric (\ref{14}) becomes \begin{align}\label{44} {\rm d}s^2_{(4)}=&-\frac{4 m^2 r}{4 m^2 r \cosh ^2\alpha +(2 m-r) \sinh ^2\alpha}{\rm d}t^2+\left(1-\frac{2m}{r}\right)\left({\rm d}r^2+r^2{\rm d}\theta^2\right) \nonumber\\& +\frac{r \left(-\sinh ^2\alpha \left(4 m^2 \cos ^2\theta+(r-2 m)^2 \sin ^2\theta \right)-4 m^2 r (2 m-r) \cosh ^2\alpha \sin ^2\theta\right)}{4 m^2 r \cosh ^2\alpha +(2 m-r) \sinh ^2\alpha}{\rm d}\phi^2\nonumber\\&-\frac{4 m^2 r \sinh \alpha \cos \theta }{4 m^2 r \cosh ^2\alpha + (2 m-r)\sinh ^2\alpha }{\rm d}t{\rm d}\phi. 
\end{align} The four-dimensional solution (\ref{44}) is singular at \begin{equation}\label{24} r_1=-\frac{2 m \sinh ^2\alpha }{4 m^2 \cosh ^2\alpha -\sinh ^2\alpha},~~~~r_2=0. \end{equation} The Ricci scalar for metric (\ref{44}) can be calculated easily, and it diverges at the following locations in addition to $r_1$ and $r_2$: \begin{eqnarray} r_3&=&\frac{1}{7} \csc ^2\theta \left(6 \sin ^2\theta +\sqrt{\sin ^2\theta \left(64-43 \cos ^2\theta \right)}\right)~, \\ r_4&=&\frac{1}{7} \csc ^2\theta \left(6 \sin ^2\theta-\sqrt{\sin ^2\theta \left(64-43 \cos ^2\theta \right)}\right)~, \\ r_5&=&0,~~~ r_6=2,~~~ r_7=-\frac{2}{7}, \end{eqnarray} where $r_3$ to $r_7$ have been simplified by arbitrarily setting $\sinh \alpha=1$, $\cosh \alpha=\sqrt{2}$ and $m=1$. These points are curvature singularities, since the Ricci scalar and the nontrivial quadratic curvature invariant $R^{\mu \nu \alpha \beta}R_{\mu \nu \alpha \beta}$ diverge. With these assumptions, $r_1$ equals $-2/7$; therefore $r_7$ and $r_{1}$ are irrelevant, since they are negative. It is also easy to see that $r_4$ is a negative function of the coordinate $\theta$. Therefore, only $r_2=r_5=0$ and $r_3(\theta)$ are relevant. It should be noted that the singularity $r_{3}$ can be removed by choosing suitable values for the parameters $m$ and $\alpha$ (e.g. $ m \simeq 5/100$, $\alpha\simeq \pi/6$ lead to an imaginary value for $r_{3}$). 
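As a quick numerical sanity check of the statements above (our own illustrative script, not part of the original derivation, using the same arbitrary choice $\sinh\alpha=1$, $\cosh\alpha=\sqrt{2}$, $m=1$):

```python
import math

m, sh, ch = 1.0, 1.0, math.sqrt(2.0)  # m = 1, sinh(alpha) = 1, cosh(alpha) = sqrt(2)

# r_1 from Eq. (24); evaluates to -2/7 for these parameter values
r1 = -2.0 * m * sh**2 / (4.0 * m**2 * ch**2 - sh**2)

# r_4(theta) = (1/7) csc^2(theta) (6 sin^2(theta) - sqrt(sin^2(theta)(64 - 43 cos^2(theta))))
def r4(theta):
    s2 = math.sin(theta) ** 2
    return (6.0 * s2 - math.sqrt(s2 * (64.0 - 43.0 * math.cos(theta) ** 2))) / (7.0 * s2)

# sample r_4 over (0, pi); it stays negative, so it is not a physical radius
r4_values = [r4(k * math.pi / 100.0) for k in range(1, 100)]
```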
Also, the singularities of the four-dimensional metric deduced from (\ref{15}), again arbitrarily setting $\sinh \alpha=1$, $\cosh \alpha=\sqrt{2}$ and $m=1$, are given by \begin{eqnarray} r^{\prime}_1&=&-\frac{1}{7} \csc ^2\theta \left(6 \sin ^2\theta +\sqrt{\sin ^2\theta \left(64-43 \cos ^2\theta \right)}\right)~, \\ r^{\prime}_2&=&\frac{1}{7} \csc ^2\theta \left(-6 \sin ^2\theta+\sqrt{\sin ^2\theta \left(64-43 \cos ^2\theta \right)}\right),\\ r^{\prime}_3 &= &\frac{2}{7}, \quad r^{\prime}_4 = 0, \quad r^{\prime}_5 = -2, \end{eqnarray} in which $r^{\prime}_1$ is a negative function of $\theta$, and $r^{\prime}_5$ is also negative, thus not physical. In order to analyze the nature of the singularities, let us calculate the surface area of an $S^{2}$ hypersurface of constant $t$ and $r$ for metric (\ref{44}): \begin{equation} A\left(r\right)=\int \sqrt{\left|g^{\left(2\right)}\right|} {\rm d}x^2=8\pi \sqrt{\left|\frac{(2-r)r^2}{7 r+2}\right|} E\left(\frac{7 r^2}{4}-3 r\right), \end{equation} where $E$ stands for the elliptic integral. The surface area is zero at the singularities $r=0$ and $r=2$ (the integrand having been simplified by setting $\sinh \alpha=1$, $\cosh \alpha=\sqrt{2}$), which means that in our coordinate system $r=0$ and $r=2$ are points. By using the coordinate transformation $\tilde{r}^2=r^2(1-2m/r)$, these two singularities are mapped to $\tilde{r}=0$. Furthermore, the determinant of the metric (\ref{44}) is positive in the range $0<r<2m$ and negative in the range $r>2m$; thus the range $0<r<2m$ is removed from the spacetime because of its improper signature. The infinite redshift surface $r_{ir}$ for metric (\ref{44}) can exist if the condition $r_{ir}>0$ holds, i.e. \begin{equation} 4 m^2 \cosh ^2\alpha -\sinh ^2\alpha <0 \Rightarrow m^2<\frac{1}{4}\tanh^2 \alpha. 
\end{equation} The Killing horizon can be obtained from the condition $\xi^2=0$, where $\xi$ is the timelike Killing vector $ \xi^{\mu}=\partial x^{\mu}/\partial t$; therefore we have \begin{equation} g_{\mu \nu}\xi^{\mu}\xi^{\nu}=-\frac{4 m^2 r}{4 m^2 r \cosh ^2\alpha +(2 m-r) \sinh ^2\alpha}=0, \end{equation} which gives $r=0$. The event horizon can be obtained from $g^{rr}=0$, which also corresponds to $r=0$. There are therefore no Killing or event horizons in the physical spacetime. For the four-dimensional boosted Kaluza-Klein solution, we infer from (\ref{43}) and (\ref{br}) that the radial electric field $E_{r}$ and the radial magnetic field $B_{r}$ do not vanish, and consequently one can find the net electric and magnetic fluxes through any two-dimensional surface. The electric flux may be computed via \cite{caroll} \begin{align} Q_{\rm E}=-\int _{\partial \Sigma}{\rm d}^{n-2}z\sqrt{|\gamma^{\partial \Sigma}|}n_{\mu}\sigma_{\nu}F^{\mu \nu}, \end{align} where $\Sigma$ is a hypersurface, typically of constant $t$ and $r$, $|\gamma^{\partial \Sigma}|$ is the determinant of the induced metric on the boundary $\partial \Sigma$, and $n$ and $\sigma$ are the unit normal vectors to the boundary, given by \begin{align} n^{\mu}=(1,0,0,0), ~~~\sigma^{\mu}=(0,1,0,0), \end{align} hence \begin{align} Q_{\rm E}=\lim_{r\rightarrow \infty }\int_{s^2} E_rn^{t}\sigma^{r}g^{tt}g_{tt} r^2\sin \theta {\rm d}\theta {\rm d}\phi. \end{align} Evaluating the integral for metric (\ref{44}) at finite $r$ gives a function of $r$, which indicates that the charge is not point-like but extended. 
If we take the limit $r \rightarrow \infty$, the electric flux approaches the constant value \begin{align} Q_{\rm E}=\frac{4\pi}{\kappa}{\frac {{4m}^{3}\sinh 2\alpha }{ \left( \cosh^2 \alpha (4m^2-1)+1 \right)^2 }}, \end{align} which for the typical choice $\sinh \alpha=1$, $\cosh \alpha =\sqrt{2}$ and $m=1$ takes the constant value \begin{equation} Q_{\rm E}=\frac{32 \pi \sqrt{2}}{49 \kappa}. \end{equation} The magnetic flux for the boosted Kaluza-Klein magnetic monopole turns out to be \begin{align} \Phi_{{\rm B}}=&\oint_{s^2} \frac{1}{2}\tilde{ F}^{\alpha \beta}{\rm d}s_{\alpha \beta}=\oint_{s^2} \frac{1}{4}\eta^{\alpha \beta \mu \nu}F_{\mu \nu}{\rm d}s_{\alpha \beta}= \oint_{s^2} \frac{1}{2}|g^{(2)}| g^{tt}g^{rr}g^{\theta \theta}g^{\phi \phi}F_{\theta \phi}{\rm d}\theta{\rm d}\phi, \end{align} which gives \begin{equation} \Phi_{{\rm B}}=\frac{2\pi}{\kappa}\oint_{s^2}-\frac{r \sin \theta \cosh \alpha \left(\sinh ^2\alpha \left(8 m^2+r \cos 2 \theta (4 m-r)-4 m r+r^2\right)+8 m^2 r \sin ^2\theta (2 m-r) \cosh ^2\alpha\right)^2}{8 (2 m-r) \left(4 m^2 r \sin ^2\theta (2 m-r) \cosh ^2\alpha+\sinh ^2\alpha\left(3 m^2 \cos ^2\theta +\sin ^2\theta (r-2 m)^2\right)\right)^2}{\rm d}\theta, \end{equation} which again is a function of $r$. Taking the limit $r \rightarrow \infty$, one obtains \begin{equation} \Phi_{{\rm B}}=\frac{2\pi}{\kappa} \cosh \alpha. \end{equation} The electric and magnetic fluxes for the four-dimensional metric deduced from (\ref{15}) are valid for both signs of $m$. We conclude that the magnetic monopole gives rise to a dyon attached to a string after the boost \cite{kible}. To see this, we convert the magnetic fields $B_r$ and $B_{\theta}$ from spherical coordinates to Cartesian ones in the $(x,z)$ plane, that is, \begin{eqnarray} B_{x}&=&B_{r}\sin \theta+B_{\theta}\cos\theta~, \\ B_{z}&=&B_{r}\cos \theta-B_{\theta}\sin\theta, \end{eqnarray} using $x=r\sin\theta$, $z=r\cos\theta$. The magnetic fields $B_{x}$ and $B_{z}$ are shown in Fig. (\ref{B}). 
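The numerical values quoted above can be verified directly (our own illustrative script, not part of the original derivation; we set $\kappa=1$):

```python
import math

m, sh, ch, kappa = 1.0, 1.0, math.sqrt(2.0), 1.0  # sinh(a) = 1, cosh(a) = sqrt(2)
sinh_2a = 2.0 * sh * ch  # identity: sinh(2a) = 2 sinh(a) cosh(a)

# asymptotic electric flux: Q_E = (4 pi / kappa) 4 m^3 sinh(2a) / (cosh^2(a)(4 m^2 - 1) + 1)^2
QE = (4.0 * math.pi / kappa) * 4.0 * m**3 * sinh_2a / (ch**2 * (4.0 * m**2 - 1.0) + 1.0) ** 2

# quoted closed form: 32 sqrt(2) pi / (49 kappa)
QE_quoted = 32.0 * math.sqrt(2.0) * math.pi / (49.0 * kappa)

# asymptotic magnetic flux: Phi_B = (2 pi / kappa) cosh(a)
PhiB = 2.0 * math.pi * ch / kappa
```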
The vector field illustrates that there should be a dyon at the origin plus a string along the $z$ axis. For the boosted Kaluza-Klein magnetic monopole, we can obtain the conserved quantities of the spacetime associated with the Killing vectors $\xi^{\mu}=\delta ^{\mu}_{t}$ and $\xi^{\mu}=\delta ^{\mu}_{\phi}$, which correspond to time translation and axial symmetry, respectively. To do this, we use the following integral for the conserved quantities \cite{padm} \begin{equation} I=\frac{1}{8\pi G}\int_{s}\nabla^{n}\xi^{m} {\rm d}^2\Sigma_{mn}, \end{equation} where ${\rm d}^2\Sigma_{mn}$ is taken over a two-dimensional surface located at spatial infinity, which can be calculated via the metric components (\ref{44}). Substituting the time-translation Killing vector $\xi^{\mu}=\delta ^{\mu}_{t}$ into the integral, we get \begin{equation} M=\frac{m}{2G}\frac{\sinh 2\alpha}{4m^2\cosh^2\alpha-\sinh^2 \alpha}, \end{equation} which is the total mass of the system. Note, however, that in the case of axial symmetry, where the relevant Killing vector is $\xi^{\mu}=\delta ^{\mu}_{\phi}$, the integral gives $J=0$, where $J$ is the angular momentum. Also note that $M=0$ for $\alpha=0$, in agreement with the results of section \ref{3}. The boost therefore generates mass and electric charge. \begin{center} \begin{figure} \hspace{4.cm}\includegraphics[width=8.cm]{dyon}\caption{\label{B} \small Vector plots of $B_x$ and $B_z$ for $\cosh \alpha=\sqrt{2}$, $\sinh \alpha =1$, $m=1$ in the $x$-$z$ plane, corresponding to a dyon at the origin plus a magnetically charged string along the $z$ axis. } \end{figure} \end{center} \section{Conclusion} Inspired by the Taub-NUT solution, we considered a Kaluza-Klein vacuum solution in five dimensions, which describes a point-like magnetic monopole in four-dimensional spacetime. 
The source supporting the four-dimensional spacetime was shown to differ from that of an ultra-relativistic fluid, in contrast to the solution of Wesson and Leon \cite{23}; the pressure is anisotropic in both cases. We calculated the magnetic charge and showed that the total magnetic flux of the monopole through any spherical surface centered at the origin is constant, indicating that there is no extended magnetized source. The gravitational mass was derived in two ways and shown to vanish using both definitions. It was pointed out that the singularity which appears at finite $r$ is neither a horizon nor a surface of finite, non-vanishing surface area. In a more appropriate coordinate system, it was shown to be a curvature singularity at ${\tilde r}=0$. The main contribution of this paper is the study of the properties of the boosted Kaluza-Klein magnetic monopole. It was shown that the boosted solution acquires significantly different physical properties in $(3+1)$ dimensions, including the appearance of a magnetically charged string attached to a dyon with extended electric and magnetic charges. We considered both signs of $m$ in our calculations, and showed that the main results of the paper, which appear after the boost, do not essentially depend on the choice of sign. \\ {\bf Acknowledgements}\\ The authors would like to thank the anonymous referee for helpful comments. N.R. acknowledges the support of Shahid Beheshti University.
\section{Introduction}\label{sec:introduction} Dictionary learning and sparse coding have found wide applicability in image/signal processing, machine learning and computer vision in recent times. Example applications include, but are not limited to, image classification \cite{Mairal2009,qiu2014information}, image restoration \cite{Wright2010} and face recognition \cite{Wright2009}, among many others. The traditional dictionary learning (DL) and sparse coding (SC) formulation assumes that the input data lie in a vector space and posits a linear generative model for the data, approximating the data with a sparse linear combination of the dictionary atoms (elements). Thus, the objective function of the DL problem typically has a data fidelity term that minimizes the ``reconstruction error'' in the least squares sense. Sparsity is then enforced on the weights in the linear combination via a tolerance threshold on the $\ell_0$-norm of the weight vector. This, however, leads to an NP-hard problem, and the most popular approach for solving it (with no convergence guarantees) is the K-SVD based approach \cite{aharon2005k}. For a fixed dictionary, a convex approximation to the $\ell_0$-norm minimization to induce sparsity can be achieved using an $\ell_1$-norm constraint on the weight vector \cite{candes2006stable,donoho2012sparse,akhtar2016discriminative}. The problem of finding both the optimal dictionary and the sparse codes, however, remains a hard computational problem in general. For further discussion on this topic and the problem of complete dictionary recovery over a sphere, we refer the reader to \cite{Sun2015}, where the authors provide a provably convergent algorithm. 
In many application domains, however, the data do not reside in a vector space; instead they reside on a Riemannian manifold such as the Grassmannian \cite{chakraborty2015iccv,cetingul2009intrinsic}, the hypersphere \cite{MardiaBook,srivastava2007riemannian,salehian2015efficient}, the manifold of symmetric positive definite (SPD) matrices \cite{Moakher_simax05,LengletRDF_jmiv06,fletcher2007riemannian,sra2011generalized,Xie2013} and many others. Generalizing the DL \& SC problem from vector space inputs to input data residing on a Riemannian manifold is difficult because of the nonlinear structure of Riemannian manifolds \cite{Xie2013}. One could consider embedding the Riemannian manifold into a Euclidean space, but a problem with this approach is that there does not exist a canonical embedding for a general Riemannian manifold. This motivated researchers \cite{Xie2013,Harandi2012,sra2011generalized,Li2013,zhang2017analytic} to generalize the DL and SC problem to Riemannian manifolds. Though the formulation on a Riemannian manifold involves a ``reconstruction error'' term analogous to the vector space case, defining a sparsity inducing constraint on a manifold is nontrivial and should be done with caution. This is because a Riemannian manifold lacks a ``global'' vector space structure, since it does not have the concept of a global origin. Hence, as argued in \cite{Xie2013}, one way to impose the sparsity inducing constraint is via an affine constraint, i.e., the sparsity constraint is over an affine subspace defined by the tangent space at each data point on the manifold. We now briefly review a few representative algorithms for the DL \& SC problem on Riemannian manifolds. A popular solution to the DL problem is to make use of the tangent spaces, which are linear spaces associated with each point on a Riemannian manifold. This approach essentially involves a linear approximation in a smooth neighborhood of a point. 
Guo et al. \cite{Guo2013} use a Log-Euclidean framework, described at length in \cite{arsigny2007geometric}, to achieve a sparse linear representation in the tangent space at the Fr\'{e}chet mean of the data. Xie et al. \cite{Xie2013} developed a general dictionary learning formulation that can be used for data on any Riemannian manifold. In their approach to the SC problem, the authors use the Riemannian Exponential (Exp.) and Logarithm (Log.) maps to define a generative process for each data point involving a sparse combination of the Log.-mapped dictionary atoms residing on the manifold. This sparse combination is then realized on the manifold via the Exp.-map. Their formulation is a direct generalization of the linear sparsity condition, with the exception that the origin of the linear space is placed at the data point. Further, they impose an affine constraint requiring the entries of the weight vector to sum to one. This constraint implies the use of affine subspaces to approximate the data. For fixed weights, however, estimating the dictionary atoms is a hard problem, and a manifold line search method is used in their approach. In another method involving DL and SC on the manifold of SPD matrices, Cherian et al. \cite{Cherian2014} proposed an efficient optimization technique to compute the sparse codes. Most recently, the authors of \cite{schmitz2017wasserstein} introduced a novel nonlinear DL and SC method for histograms residing in a simplex. They use the well known Wasserstein distance along with an entropy regularization \cite{Cuturi} to reconstruct histograms that are Wasserstein barycenter approximations of the given data (histograms). They solve the resulting optimization for both the dictionary atoms and the weights using a gradient based technique. The authors point out that using the entropy regularization leads to a convex optimization problem. However, they did not discuss the sparsity of the ensuing Wasserstein barycenter dictionary based representation. 
The sparsity property is of significant importance in many applications, and the focus of our work here is on how to achieve sparsity without explicitly enforcing sparsity inducing constraints. Several recent works report the use of kernels to accomplish dictionary learning and sparse coding on Riemannian manifolds \cite{Harandi2015,Li2013,Harandi2012}. In these, the Riemannian manifold is embedded into a Reproducing Kernel Hilbert Space (RKHS), and the DL and SC problems are then formulated in the RKHS. Since an RKHS is a linear space, it is easier to derive simple and effective solutions for the DL and SC problems there. Recently, the authors of \cite{feragen2015geodesic} presented conditions that must be strictly satisfied by geodesic exponential kernels on general Riemannian manifolds. This important and significant result provides guidelines for designing kernel based approaches for general Riemannian manifolds. In this work, we present a novel formulation of the DL and SC problems for data residing on a statistical manifold, without explicitly enforcing a sparsity inducing constraint. The proposed formulation circumvents the difficulty of directly defining a sparsity constraint on a Riemannian manifold. Our formulation is based on an information theoretic framework and is shown to yield sparse codes. Further, we extend this framework to the manifold of SPD matrices. Note that SPD matrices can be identified with the space of zero mean Gaussian distributions, which is a statistical manifold. Several experimental results are presented that demonstrate the competitive performance of our proposed algorithm in comparison to the state-of-the-art. The rest of the paper is organized as follows: in Section \ref{sec3}, we first present the conventional DL and SC problem formulation in vector spaces and motivate the need for a new formulation of the DL and SC problem on Riemannian manifolds. This is followed by a brief summary of relevant mathematical background on statistical manifolds. 
Following this, we summarize the mathematical results of this paper and then present the details along with our algorithm for the DL and SC problem. In Section \ref{sec4}, we present several experimental results and comparisons to the state-of-the-art. Finally, in Section \ref{sec5}, we draw conclusions. \section{An Information Theoretic Formulation}\label{sec3} In the traditional SC problem, a set of data vectors $X=\{\mathbf{x}_i\}_{i=1}^N \subset \mathbf{R}^n$ and a collection of atoms $A=\{\mathbf{a}_j\}_{j=1}^r \subset \mathbf{R}^n$ are given. The goal is to express each $\mathbf{x}_i$ as a sparse linear combination of atoms in $A$. Let $A$ also denote the (overcomplete) dictionary matrix of size $n \times r$ whose $j^{th}$ column is $\mathbf{a}_j$. Let $W =[\mathbf{w}_1, \cdots, \mathbf{w}_N]$ be an $r \times N$ matrix where each $\mathbf{w}_i \in \mathbf{R}^r$ consists of the coefficients of the sparse linear combination. In the DL and SC problem, the goal is to minimize the following objective function: \begin{equation} \min_{A, \mathbf{w}_1, \cdots, \mathbf{w}_N} \sum_{i=1}^N \|\mathbf{x}_i - A \mathbf{w}_i\|^2 + \mathbf {Sp}(\mathbf{w}_i). \label{EQ:One} \end{equation} Here, $\mathbf{Sp}(\mathbf{w}_i)$ denotes the sparsity promoting term, which can be either an $\ell_0$ norm or an $\ell_1$ norm. Since both the dictionary $A$ and the coefficient matrix $W$ are unknown in the above optimization problem, it is hard to solve. As the problem is computationally intractable when the sparsity promoting term is an $\ell_0$ norm constraint, most existing approaches use a convex relaxation of the objective, with an $\ell_1$ norm in place of the $\ell_0$ norm constraint, when performing the sparse coding. Now, instead of the traditional DL \& SC setup where the data as well as the atoms are vector valued, we address the problem when each data point and each atom is a probability density, i.e., an element of a statistical manifold (see the formal definition below). 
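For concreteness, the $\ell_1$-relaxed version of the vector-space problem (\ref{EQ:One}), with the dictionary held fixed, can be solved by proximal gradient descent (ISTA). The sketch below is purely illustrative and is not the algorithm proposed in this paper; all names and parameter choices are our own.

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_sparse_code(x, A, lam=0.05, n_iter=500):
    # minimize ||x - A w||^2 + lam ||w||_1 for a fixed dictionary A (ISTA)
    step = 1.0 / (2.0 * np.linalg.norm(A, 2) ** 2)  # 1/L, L = Lipschitz const. of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * A.T @ (A @ w - x)
        w = soft_threshold(w - step * grad, step * lam)
    return w

# tiny synthetic demo: x is a 3-sparse combination of r = 20 atoms in R^8
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
w_true = np.zeros(20)
w_true[[0, 5, 10]] = [1.0, -1.0, 0.5]
x = A @ w_true
w_hat = ista_sparse_code(x, A)
```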
In this paper, we present a novel DL and SC framework for data residing on a statistical manifold. Before delving into the details, we briefly introduce some pertinent mathematical concepts from differential geometry and statistical manifolds, and refer the reader to \cite{do1992riemannian,amari1987differential} for details. \subsection{Statistical Manifolds: Mathematical Preliminaries} \label{sec21} Let $\mathcal{M}$ be a smooth ($C^{\infty}$) manifold \cite{do1992riemannian}. We say that $\mathcal{M}$ is $n$-dimensional if $\mathcal{M}$ is locally Euclidean of dimension $n$, i.e., locally diffeomorphic to $\mathbf{R}^n$. Equipped with the \emph{Levi-Civita connection} $\nabla$, the triplet $(\mathcal{M}, g, \nabla)$ is called a {\it statistical manifold} whenever both $\nabla$ and the dual connection $\nabla^*$ are torsion free \cite{do1992riemannian,amari1987differential}. A point on an $n$-dimensional {\it statistical manifold} $\mathfrak{D}$ (from here on, we will use the symbol $\mathfrak{D}$ to denote a {\it statistical manifold} unless specifically mentioned otherwise) can be identified with a (smooth) probability distribution function on a measurable topological space $\Omega$, denoted by $P(\mathbf{x};\bm{\theta})$ \cite{suzuki2014information,amari1987differential}. Here, each distribution function is parametrized by $n$ real variables $(\theta_1, \cdots, \theta_n)$. Thus, an open subset $S$ of a {\it statistical manifold} $\mathfrak{D}$ is a collection of probability distribution functions on $\Omega$, and the chart map is the mapping from $S$ to the {\it parameter space} $\Theta = \{\bm{\theta}\} \subset \mathbf{R}^n$. Let $\mu$ be a $\sigma$-finite additive measure defined on a $\sigma$-algebra of subsets of $\Omega$. Let $f(\mathbf{x};\bm{\theta})$ be the density of $P(\mathbf{x};\bm{\theta})$ with respect to the measure $\mu$, and assume the densities to be smooth $(C^{\infty})$ functions. 
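As a concrete, standard example (included here only for illustration; it is not specific to this paper): the family of univariate Gaussian densities

```latex
% Univariate Gaussians as a 2-dimensional statistical manifold,
% with parameters \bm{\theta} = (\mu, \sigma) \in \mathbf{R} \times \mathbf{R}_{+}.
f(x;\bm{\theta}) = \frac{1}{\sqrt{2\pi}\,\sigma}
  \exp\!\left(-\frac{(x-\mu)^{2}}{2\sigma^{2}}\right),
\qquad \bm{\theta} = (\mu,\sigma),
```

is a $2$-dimensional statistical manifold; in these coordinates, the Fisher information construction recalled next yields the well known metric $g = \mathrm{diag}(1/\sigma^{2},\, 2/\sigma^{2})$.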
Now, after giving $\mathfrak{D}$ a topological structure, we can define a Riemannian metric as follows. Let $l(\mathbf{x};\bm{\theta}) = \log f(\mathbf{x};\bm{\theta})$; then a Riemannian metric $g$ can be defined as $g_{ij}(\bm{\theta}) = E_{\bm{\theta}} \left[ \frac{\partial l(\mathbf{x};\bm{\theta})}{\partial \theta_i} \frac{\partial l(\mathbf{x};\bm{\theta})}{\partial \theta_j}\right]$, where $E_{\bm{\theta}}[\mathbf{y}]$ is the expectation of $\mathbf{y}$ with respect to $\bm{\theta}$. In general, $g=[g_{ij}]$ is symmetric and positive semi-definite. We can make $g$ positive definite by assuming the functions $\{\frac{\partial l(\mathbf{x};\bm{\theta})}{\partial \theta_i}\}_{i=1}^n$ to be linearly independent. This metric is called the {\it Fisher-Rao} metric \cite{rao2009fisher,shun2012differential} on $\mathfrak{D}$. \subsection{Summary of the mathematical results} In the next section, we propose an alternative formulation of the DL and SC problem. We first state a few theorems as background material that will be used subsequently. Then, we define the new objective function for the DL and SC problem posed on a statistical manifold in Section \ref{sec111}. Our key mathematical results are stated in Theorems \ref{thm4} and \ref{thm5} and Corollaries \ref{cor1} and \ref{cor2}. Using these results, we show that our DL \& SC framework, {\it which does not have an explicit sparsity constraint}, yields sparse codes. Then, we extend our DL and SC framework to the manifold of SPD matrices, $\mathcal{P}_n$, in Section \ref{sec112}. \subsection{Detailed mathematical results} Let $\mathfrak{D}$ denote the $n$-dimensional statistical manifold of probability densities, i.e., each point on $\mathfrak{D}$ is a probability density. We will use the following notation in the rest of the paper. \begin{itemize} \item Let $\mathfrak{G}$ be a dictionary with $r$ atoms $g_1$, $\cdots,$ $g_r$, where each $g_i \in \mathfrak{D}$. 
\item Let $\mathfrak{F} = \{f_i\}_{i=1}^N$ $\subset \mathfrak{D}$ be a set of data points. \item Let $w_{ij}$ be the nonnegative weight corresponding to the $i^{th}$ data point and the $j^{th}$ atom, $i \in \{1,\cdots, N\}$ and $j \in \{1, \cdots, r\}$. \end{itemize} Note that here we assume that each density $f$ or $g$ is parameterized by $\bm{\theta}$. There are many ways to measure the discrepancy between probability densities. One can choose an intrinsic metric and the corresponding distance on a statistical manifold to measure this discrepancy, such as the Fisher-Rao metric \cite{rao2009fisher,shun2012differential}, which, however, is expensive to compute. In this paper, we choose an extrinsic measure, namely the non-negative divergence called the Kullback-Leibler (KL) divergence. The KL divergence \cite{cover2012elements} between two densities $f_1$ and $f_2$ on $\mathfrak{D}$ is defined by \begin{equation} \text{KL}(f_1, f_2) = \int f_1(x) \log \frac{f_1(x)}{f_2(x)} dx. \end{equation} The Hessian of the KL-divergence is the Fisher-Rao metric defined earlier. In other words, the KL-divergence between two nearby probability densities can be approximated by half of the squared geodesic distance (induced by the Fisher-Rao metric) between them \cite{shun2012differential}. {\it The KL-divergence is not a distance as it is not symmetric and does not satisfy the triangle inequality}. It is a special case of a broader class of divergences called the $f$-divergences, as well as of the well-known class of Bregman divergences. We refer the reader to \cite{basu1998robust,liese2006divergences} for more details in this context. Given a set of densities $\mathfrak{F} = \{f_i\}$, the KL divergence from $\mathfrak{F}$ to a density $f$ can be defined by \begin{equation} \label{neq1} \text{KL}(\mathfrak{F}, f) = \displaystyle \max_i \text{KL}(f_i, f). 
\end{equation} We can define the {\it KL-center} of $\mathfrak{F}$, denoted by $f_m(\mathfrak{F})$, by \begin{equation} \label{eq2} f_m(\mathfrak{F}) = \argmin_f \text{KL}(\mathfrak{F},f). \end{equation} The symmetrized KL divergence, also called the J-divergence \cite{cover2012elements}, between two densities $f_1$ and $f_2$ is defined by \begin{equation} \label{eq11} \text{J}(f_1, f_2) = \frac{1}{2} \text{KL}(f_1, f_2) + \frac{1}{2} \text{KL}(f_2, f_1). \end{equation} In general, given the set $\mathfrak{F}=\{f_i\}$, define a mixture of densities as $f = \sum_i \alpha_i f_i$, $\sum_i \alpha_i =1$, $\alpha_i \geq 0$, $ \forall i$. It is evident that the set of $\{\alpha_i\}$ forms a simplex, which is denoted here by $\Delta$. Then, the Jensen-Shannon divergence (JSD) of the set $\mathfrak{F}$ with the mixture weights $\{\alpha_i\}$ is defined as \begin{equation} \label{eq3} \text{JSD}(\{f_i\}) = H(\sum_i \alpha_i f_i) - \sum_i \alpha_i H(f_i), \end{equation} where $H(f) = -\int f(x) \log f(x) dx$ is the Shannon entropy of the density $f$. The following lemma is then easy to verify. \begin{lemma} $\text{JSD}(\{f_i\})$ is concave in $\{\alpha_i\}$ and attains its minimum at an extreme point of the simplex $\Delta$. \end{lemma} \begin{proof} We refer the reader to \cite{ericthesis} for a proof of this Lemma. \end{proof} In \cite{ericthesis}, it was shown that one can compute the {\it KL-center} of $\mathfrak{F}$, $f_m(\mathfrak{F})$, in Equation \ref{eq2} using the following theorem: \begin{theorem} \label{thm2} The {\it KL center} of $\mathfrak{F}$, $f_m(\mathfrak{F})$, is given by \begin{eqnarray*} f_m(\mathfrak{F}) &=& \sum_i \hat{\alpha}_i f_i \\ \text{where } \hat{\bm{\alpha}} &=& \argmax_{\bm{\alpha}} \text{JSD}(\{f_i\}). \end{eqnarray*} \end{theorem} \begin{proof} We refer the reader to \cite{ericthesis} for a proof of this theorem. \end{proof} Observe that the $\text{KL}(\mathfrak{F}, f)$ defined in Eq. 
\ref{neq1} has the positive-definiteness property, i.e., $\text{KL}(\mathfrak{F}, f) \geq 0$ for any $\mathfrak{F}$ and $f$, and $\text{KL}(\mathfrak{F}, f) = 0$ if and only if $f \in \mathfrak{F}$. Both of these properties are evident from the definition of the KL divergence between two densities. {\bf Coding theory interpretation:} It should be noted that the above result is the same as the well-known redundancy-capacity theorem of coding theory presented in \cite{gallager1968information,davisson1980source,ryabko1994fast,akhtar2016discriminative}. The theorem establishes the equivalence of the minimax excess risk (i.e., the redundancy) for estimating a parameter $\theta$ from a family/class $\Theta$ of sources, the Bayes risk associated with the least favorable prior, and the channel capacity when statistical modeling is viewed as communication over a noisy channel. In \cite{akhtar2016discriminative}, a stronger result was shown, namely that the capacity is also a lower bound for ``most'' sources in the class. The results in \cite{ericthesis}, however, approached this problem from a geometric viewpoint, i.e., one of finding barycenters of probability distributions using the KL-divergence as the ``distance'' measure. Our work presented here takes a similar geometric viewpoint of the problem at hand, namely the DL-SC problem. Moving on, we now define the $\ell_p$ KL divergence, denoted by $\text{KL}_p(\mathfrak{F}, f)$, $p > 0$, as: \begin{equation} \label{neq2} \text{KL}_p(\mathfrak{F}, f) = \|\left(\text{KL}(f_1, f), \cdots, \text{KL}(f_N, f)\right)^t\|_p, \end{equation} where $\|\cdot\|_p$ is the $\ell_p$ norm of the vector and $\mathfrak{F} = \{f_i\}_{i=1}^N$. The following property of $\text{KL}_p(\mathfrak{F}, f)$ is easy to prove. \begin{lemma} $\text{KL}_p(\mathfrak{F}, f)$ as defined in Eq. \ref{neq2} is a well-defined {\it statistical divergence} for any $p > 0$. Furthermore, the KL divergence as defined in Eq. 
\ref{neq1} is a special case of $\text{KL}_p$ when $p =\infty$. \end{lemma} Without any loss of generality, we will assume $p=1$ and refer to the $\ell_1$ KL-center simply as the KL-center for the rest of the paper (unless mentioned otherwise). Now, given the set of densities $\mathfrak{F} = \{f_i\}_{i=1}^N$ and a set of weights $\{\alpha_i\}_{i=1}^N$, we can define the {\it weighted KL-center}, denoted by $f_m(\mathfrak{F}, \{\alpha_i\})$, as follows: \begin{equation} \label{eq4} f_m(\mathfrak{F}, \{\alpha_i\}) = \argmin_f \sum_i\alpha_i\text{KL}(f_i, f). \end{equation} We point out, however, that the $\ell_{\infty}$ KL-center cannot be generalized to a corresponding weighted KL center in this way. The weighted KL-center defined above has the following nice property: \begin{lemma} The weighted KL-center as defined in Eq. \ref{eq4} is a generalization of the KL-center in Eq. \ref{eq2} (with $p = 1$). The KL-center can be obtained from the weighted KL-center by substituting $\alpha_i = 1/N$, for all $i$. \end{lemma} \begin{theorem} \label{thm3} Given $\mathfrak{F}$ and $\{\alpha_i\}$ as above, $f_m(\mathfrak{F}, \{\alpha_i\}) = \sum_i \alpha_i f_i$. \end{theorem} \begin{proof} For simplicity, assume that each $f_i$ is discrete and can take on $k$ discrete values, $x_1, \cdots, x_k$. Then, consider the minimization of $\sum_i\alpha_i\text{KL}(f_i, f)$ with respect to $f$ subject to the constraint that $f$ is a density, i.e., for the discrete case, $\sum_j f(x_j)=1$. By using a Lagrange multiplier $\lambda$, we get, \begin{eqnarray*} \frac{\partial}{\partial f(x_j)} \left\{\sum_i\alpha_i\text{KL}(f_i, f) + \lambda \left(\sum_j f(x_j) - 1\right)\right\} &=& 0 , \:\forall j\\ \implies \left(\lambda - \frac{\sum_i \alpha_i f_i(x_j)}{f(x_j)}\right) &=& 0, \: \forall j \\ \implies f(x_j) = \frac{\sum_i \alpha_i f_i(x_j)}{\lambda}, \: \forall j. 
\end{eqnarray*} Now, taking $\frac{\partial}{\partial \lambda} \left\{\sum_i\alpha_i\text{KL}(f_i, f) + \lambda \left(\sum_j f(x_j) - 1\right)\right\}$ and equating it to $0$ recovers the constraint $\sum_j f(x_j) = 1$, which forces $\lambda = \sum_i \alpha_i = 1$ and hence $f(x_j) = \sum_i \alpha_i f_i(x_j)$, $\forall j$. Thus, $f = \sum_i \alpha_i f_i$. We can easily extend this to the case of continuous $f_i$ by replacing summation with integration to obtain a similar result. \end{proof} \subsubsection{DL and SC on a {\it statistical manifold}} \label{sec111} Now, we will formulate the DL and SC problems on a {\it statistical manifold}. The idea is to express each data point $f_i$ as a sparse weighted combination of the dictionary atoms, $\{g_j\}$. Given the above hypothesis, our objective function is given by: \begin{align} \label{eq5} \displaystyle \argmin_{\mathfrak{G}^*, W^*} \:\:\:\:E & = \sum_{i=1}^N \text{KL}\left(f_i, \hat{f}_i\right) \\ \text{subject to} & \: \: \: w_{ij} \geq 0, \forall i,j \\ & \sum_j w_{ij} = 1, \forall i. \end{align} where $\hat{f}_i = \sum_{j=1}^r w_{ij}g_j$, $\forall i$. In the above objective function, $\hat{f}_i$ is the {\it weighted KL-center} of $\{g_j\}_{j=1}^r$ with weights $\{w_{ij}\}_{j=1}^r$ (by Theorem \ref{thm3}). The constraints $w_{ij} \geq 0$ and $\sum_j w_{ij} = 1$ are required to make $\hat{f}_i$ a probability density. Note that we can view $\hat{f}_i$ as a density reconstructed from the dictionary elements $\{g_j\}$ and the weights $\{w_{ij}\}$. {\it We will now prove one of our key results}, namely, that the minimization of the above objective function with respect to $\{w_{ij}\}$ yields a sparse set of weights. \begin{theorem} \label{thm4} Let $\mathfrak{G} = \{g_j\}$ and $W = [w_{ij}]$ be the solution of the objective function $E$ in Equation \ref{eq5}. 
Then, \begin{align*} \left(\forall j \right), \text{KL}(f_i, g_j) \geq r_i , \text{ where } r_i = \sum_k w_{ik} \text{KL}(f_i, g_k). \end{align*} \end{theorem} \begin{proof} Consider the random variables $X_1, \cdots, X_N$ with the respective densities $f_1, \cdots, f_N$. Since each dictionary element $g_j$ is ``derived'' from $\{f_i\}$, we can view each $g_j$ as being associated with a random variable $Y_j$ such that $Y_j = \tilde{g}_j\left(\{X_i\}\right)$, i.e., $Y_j$ is a transformation of the random variables $\{X_i\}$. We now have, \begin{align*} E &= \sum_{i=1}^N \text{KL}\left(f_i, \sum_{j=1}^r w_{ij}g_j\right) \\ &= \sum_{i=1}^N \left[\int f_i(x) \log(f_i(x))\, dx - \int f_i(x) \log\left( \sum_j w_{ij} g_j(x)\right) dx\right] \end{align*} Using Jensen's inequality (concavity of the logarithm), we have \begin{align*} E &\leq \sum_{i=1}^N \left[\int f_i(x) \log(f_i(x))\, dx - \int f_i(x) \sum_j w_{ij} \log(g_j(x))\, dx\right] \\ &= \sum_{i=1}^N E_{X_i}\left[ \log(f_i) - \sum_j w_{ij} \log(g_j)\right] \end{align*} where $E_{X}[h(X)]$ is the expectation of $h(X)$, a transformation of the random variable $X$. So, using $\sum_j w_{ij} = 1$, \begin{align*} E &\leq \sum_{i=1}^N E_{X_i} [\log(f_i)] - \sum_{i=1}^N \sum_j w_{ij} E_{X_i}[\log(g_j)] \\ &= \sum_{i=1}^N \sum_{j=1}^r w_{ij} E_{X_i} [\log(f_i)] - \sum_{i=1}^N \sum_{j=1}^r w_{ij} E_{X_i}[\log(g_j)] \\ &= \sum_{i=1}^N \sum_j w_{ij} E_{X_i} [\log(f_i) - \log(g_j)] \\ &= \sum_{i=1}^N \sum_j w_{ij} \text{KL}(f_i, g_j) \end{align*} So, $E \leq \sum_{i=1}^N \sum_j w_{ij} \text{KL}(f_i, g_j)$. Since both $E$ and this upper bound attain their minimum value ($0$) together, we can minimize $ \sum_{i=1}^N \sum_j w_{ij} \text{KL}(f_i, g_j)$ instead of $E$. 
Using a Lagrange multiplier $r_i$ for each constraint $\sum_j w_{ij} = 1$, and $\gamma_{ij}$ for each constraint $w_{ij} \geq 0$, we get the following Lagrangian $$ \sum_{i=1}^N \sum_j w_{ij} \text{KL}(f_i, g_j) + \sum_{i=1}^N r_i (1 - \sum_j w_{ij}) - \sum_{i, j} \gamma_{ij} w_{ij} $$ Minimizing the above function and adding the KKT complementary slackness conditions $$ \gamma_{ij} w_{ij} = 0 $$ we get, \begin{align} \label{eq6} \text{KL}(f_i, g_j) = \left\{\begin{array}{lr} r_i + \gamma_{ij}, & \text{if } w_{ij} = 0\\ r_i, & \text{if } w_{ij} > 0 \end{array}\right. \end{align} As each $\gamma_{ij} \geq 0$, this concludes the proof. \end{proof} A straightforward Corollary of the above theorem is as follows: \begin{corollary} \label{cor1} The objective function $E$ is bounded above by $\sum_{i=1}^N r_i$, i.e., $E \leq \sum_{i=1}^N r_i$. \end{corollary} \begin{proof} From Theorem \ref{thm4}, we know that $E \leq \sum_{i=1}^N \sum_j w_{ij} \text{KL}(f_i, g_j)$. From Equation \ref{eq6}, we get $\sum_j w_{ij} \text{KL}(f_i, g_j) = r_i$, $\forall i$. Thus the Corollary holds. \end{proof} \begin{figure}[!ht] \centering \includegraphics[scale=0.55]{sdl_proof} \caption{Illustrative figure for Theorem \ref{thm5}. Blue and brown circles are atoms with non-zero and zero weights respectively.}\label{fig0.5} \end{figure} We can see that the dictionary elements $g_j$ for which the associated weights are positive are all at exactly the same divergence $r_i$ from the density $f_i$. Corollary \ref{cor1} implies that solving the objective function in Equation \ref{eq5} yields a ``tight cluster'' structure around each $f_i$, as minimizing $E$ is equivalent to minimizing each $r_i$. \begin{corollary} \label{cor2} Let $f_i$ be well approximated by a single dictionary element $g_{l}$. Further assume that $g_l$ is a convex combination of a set of dictionary atoms, i.e., $g_l = \sum_{k=1}^{r_1} w_{ij_k} g_{j_k}$. Without loss of generality (WLOG), assume that $w_{ij_k} > 0$, $\forall k$. 
Let $r_i = \text{KL}(f_i, g_l)$ and $\hat{r}_i = \text{KL}(f_i, g_{j_k})$, $\forall k=1, \cdots, r_1$. Then, $r_i < \hat{r}_i$. \end{corollary} \begin{proof} Using the hypothesis in Theorem \ref{thm4}, we have, \begin{eqnarray*} r_i &=& \int f_i(x)\log(f_i(x)) dx - \int f_i(x) \log(g_l(x)) dx \\ & < & \left[ \int f_i(x) \log(f_i(x)) dx - \int f_i(x) \sum_{k=1}^{r_1} w_{ij_k} \log(g_{j_k}(x)) dx\right] \\ &=& \sum_{k=1}^{r_1} w_{ij_k} \text{KL}(f_i, g_{j_k}) \\ &=& \hat{r}_i \end{eqnarray*} Hence, $r_i < \hat{r}_i$. Using Corollary \ref{cor1}, we can see that, in order to represent $f_i$, the objective is to minimize $r_i$. Thus, by Corollary \ref{cor1}, a sparse set of weights, i.e., the one concentrated on $g_l$, is preferable to the set of non-zero weights spread over $\{g_{j_k}\}$. \end{proof} Now, we will state and prove the second key result, namely, a theorem which states that our proposed algorithm yields a non-zero number of atoms whose corresponding weights are zero, i.e., $k$-sparsity for some $k > 0$. \begin{theorem} \label{thm5} Let $\mathfrak{S}_i = \left\{w_{ij} | w_{ij}=0\right\}$; then, with probability $1$, the cardinality of $\mathfrak{S}_i$ satisfies $|\mathfrak{S}_i| >0$, for all $i$. \end{theorem} \begin{proof} Let $\mu^*$ be the probability measure on $\mathfrak{D}$ and let $\mathcal{B}(f, r)$ denote a closed ball of radius $r$ centered at $f \in \mathfrak{D}$. We assume that the measure is bounded, i.e., $\exists$ constants $\kappa_1>0$ and $\kappa_2>0$ such that $ \left(\kappa_1 r\right)^n \leq \mu^*\left(\mathcal{B}(f, r)\right) \leq \left(\kappa_2 r\right)^n $, for all $0<r\leq 1$. Let us assume $\mathfrak{G}$ is an $\epsilon$-separated set for some $0<\epsilon \leq 1$. Furthermore, assume that $\mathfrak{F}$ has finite variance, i.e., $\exists C$ such that $\forall i, j$, $\text{KL}\left(f_i, f_j\right) \leq C$; we will call this closed ball of radius $C$ the \emph{data ball}. 
Let $r^*$ denote the largest residual at the optimum, i.e., $ r^* = \max_i \text{KL}\left(f_i, \hat{f}_i\right)$ (see Figure \ref{fig0.5}). Now, for a given $i$, consider $\mathcal{B}\left(f_i, r^*\right)$. From Theorem \ref{thm4}, we know that if for some $j$, $w_{ij} >0$, then $\text{KL}(f_i, g_j) = r^*$; else, $\text{KL}(f_i, g_j) > r^*$. Thus, $\mathfrak{S}_i$ can be rewritten as $\mathfrak{S}_i = \left\{ g_j | \text{KL}(f_i, g_j) > r^*\right\}$. Let $N\left(r^*, C\right)$ be the number of $g_j$s in $\mathcal{B}(f_i, C) \setminus \mathcal{B}(f_i, r^*)$. Then, $N\left(r^*, C\right)$ follows a Poisson distribution with rate $\lambda = \mu^*\left(\mathcal{B}(f, \epsilon/2)\right)$. Hence, it is easy to see that $|\mathfrak{S}_i| = E\left[N\left(r^*, C\right)\right]$. Now, \begin{align*} E\left[N\left(r^*, C\right)\right] &= \mu^*\left(\mathcal{B}(f, \epsilon/2)\right) \left(\frac{2(C-r^*)}{\epsilon}\right)^n \\ &\geq \left(\frac{\kappa_1 \epsilon}{2}\right)^n \left(\frac{2(C-r^*)}{\epsilon}\right)^n \\ &= \left(\kappa_1 (C-r^*)\right)^n \end{align*} Since we are reconstructing $f_i$ as a convex combination of the $g_j$s, the only case in which $r^* = C$ occurs is when all the $f_i$s lie on the boundary of the \emph{data ball}. Let $\mathcal{T} = \left\{f_i| f_i \in \mathfrak{F}, f_i \text{ is on the boundary of the \emph{data ball}}\right\}$; clearly, $\mu^*\left(\mathcal{T}\right) = 0$. Hence, with probability $1$, $C-r^*>0$. Now, as $\kappa_1>0$, we can say that with probability $1$, $|\mathfrak{S}_i| = E\left[N\left(r^*, C\right)\right] >0$. Since $i$ is arbitrary, the claim holds. This completes the proof. \end{proof} \emph{Theorem \ref{thm5} states that our proposed algorithm yields $k$-sparse atoms, for some $k>0$}. 
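To make the sparsity mechanism in Theorem \ref{thm4} concrete, a small numerical illustration (our own, with synthetic discrete densities and a hypothetical dictionary) of the linear surrogate used in its proof: the Jensen upper bound $\sum_j w_j\,\text{KL}(f_i,g_j)$ is linear in $w$ on the simplex, so it is minimized at a vertex, i.e., by a $1$-sparse weight vector.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Discrete KL divergence KL(p, q) = sum_x p(x) log(p(x)/q(x))."""
    p, q = np.asarray(p) + eps, np.asarray(q) + eps
    return float(np.sum(p * np.log(p / q)))

rng = np.random.default_rng(1)
G = rng.dirichlet(np.ones(8), size=5)   # hypothetical dictionary: 5 atoms on 8 bins
f = rng.dirichlet(np.ones(8))           # one data density

# Jensen upper bound from the proof of Theorem 4:
#   KL(f, sum_j w_j g_j) <= sum_j w_j KL(f, g_j),
# and the right-hand side is linear in w on the simplex, so it is
# minimized at a vertex of the simplex, i.e., by a 1-sparse w.
costs = np.array([kl(f, g) for g in G])
w = np.zeros(5)
w[np.argmin(costs)] = 1.0               # vertex minimizer of the bound

# the true objective at the sparse w is no worse than the bound at uniform w
w_unif = np.full(5, 1 / 5)
print(kl(f, G.T @ w) <= costs @ w_unif)   # -> True
```

This only illustrates the surrogate step of the proof; the full algorithm minimizes the bound jointly over the dictionary and all the weight vectors.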
\vspace*{12pt} {\bf Comment on using the Hellinger distance:} On the space of densities, one can define the Hellinger distance (denoted by $d_{L2}$) as follows. Given $f, g \in \mathfrak{D}$, one can use the square root parametrization to map these densities onto the unit Hilbert sphere; let the resulting points be denoted by $\bar{f}$, $\bar{g}$. Then, one can define the $\ell_2$ distance between $\bar{f}$ and $\bar{g}$ (the Hellinger distance between $f$ and $g$) as $d^2_{L2}\left(f,g\right) = \frac{1}{2}\:\int \left(\bar{f}-\bar{g}\right)^2$. One can easily see that the above expression is equal to $d^2_{L2}\left(f,g\right) = \left( 1 - \int \left(\bar{f}\bar{g}\right) \right)$. This metric is the chordal metric on the hypersphere, and hence an {\it extrinsic} metric. We now replace the $\text{KL}$ divergence by the Hellinger distance in our objective function in Eq. \ref{eq5}. The modified objective function is given in Eq. \ref{eq115}. \begin{align} \label{eq115} \displaystyle \argmin_{\mathfrak{G}^*, W^*} \:\:\:\:E & = \sum_{i=1}^N d^2_{L2}\left(f_i, \hat{f}_i\right) \\ \text{subject to} & \: \: \: w_{ij} \geq 0, \forall i,j \\ & \sum_j w_{ij} = 1, \forall i. \end{align} One can easily show that the above analysis of sparsity also holds when we replace the $\text{KL}$ divergence by the Hellinger distance (as done in Eq. \ref{eq115}). The following theorem, stated without proof, summarizes this result. \begin{theorem} \label{thm114} Let $\mathfrak{G} = \{g_j\}$ and $W = [w_{ij}]$ be the solution of the objective function $E$ in Equation \ref{eq115}. Then, \begin{align*} \left(\forall j\right), d^2_{L2}(f_i, g_j) \geq r_i , \text{ where } r_i = \sum_k w_{ik} d^2_{L2}(f_i, g_k). \end{align*} Analogous to Corollary \ref{cor2}, it can then be easily shown that the set of weights is sparse. \end{theorem} \subsubsection{DL and SC on the {\it manifold of SPD matrices}} \label{sec112} Let the manifold of $n\times n$ SPD matrices be denoted by $\mathcal{P}_n$. 
We will use the following notation for the rest of the paper. On $\mathcal{P}_n$, \begin{itemize} \item Let $\mathcal{C}$ be a dictionary with $r$ atoms $C_1$, $\cdots,$ $C_r$, where each $C_i \in \mathcal{P}_n$. \item Let $\mathcal{X} = \{X_i\}_{i=1}^N$ $\subset \mathcal{P}_n$ be a set of data points. \item Let $w_{ij}$ be the nonnegative weight corresponding to the $i^{th}$ data point and the $j^{th}$ atom, $i \in \{1,\cdots, N\}$ and $j \in \{1, \cdots, r\}$. \end{itemize} We now extend the DL and SC formulation to $\mathcal{P}_n$. Note that a point $C \in \mathcal{P}_n$ can be identified with a Gaussian density with zero mean and covariance matrix $C$. Hence, it is natural to extend our information theoretic DL \& SC framework from a statistical manifold to $\mathcal{P}_n$. Recall that the symmetrized $\text{KL}$ divergence between two densities $f$ and $g$ was defined in Equation \ref{eq11}. Using the square root of this divergence, one can define a ``distance'' between two SPD matrices on $\mathcal{P}_n$ (the quotes on distance are used because the symmetrized KL does not satisfy the triangle inequality required of a distance measure). Similar to Equation \ref{eq4}, we can define the {\it symmetrized weighted KL center}, denoted by $M_{\text{KL}}$, as the minimizer of the weighted sum of symmetrized KL divergences. Given $\mathcal{X} = \{X_i\}_{i=1}^N$, we can define the {\it symmetrized KL-center} of $\mathcal{X}$ as follows \cite{wang2005dti} $$ M_{\text{KL}}(\mathcal{X}) = \sqrt{B^{-1}}\sqrt{\sqrt{B}A\sqrt{B}}\sqrt{B^{-1}} $$ where $A = \frac{1}{N} \sum_i X_i$, $B = \frac{1}{N} \sum_i X_i^{-1}$. We can extend the above result to define the {\it symmetrized weighted KL-center} via the following Lemma. 
\begin{lemma} On $\mathcal{X} = \{X_i\}_{i=1}^N$ with weights $\{w_i\}_{i=1}^N$, the {\it symmetrized weighted KL-center}, $M_{\text{KL}}(\mathcal{X}, \{w_i\})$, is given by $$ M_{\text{KL}}(\mathcal{X}, \{w_i\}) = \sqrt{B^{-1}}\sqrt{\sqrt{B}A\sqrt{B}}\sqrt{B^{-1}} $$ where $A = \frac{1}{\sum_j w_j} \sum_i w_i X_i$, $B = \frac{1}{\sum_j w_j} \sum_i w_i X_i^{-1}$ \end{lemma} Analogous to Equation \ref{eq5}, we can define our formulation for DL and SC on $\mathcal{P}_n$ as follows: \begin{align} \label{eq15} \displaystyle \argmin_{\mathcal{C}^*, W^*} \:\:\:\:E & = \sum_{i=1}^N \text{J}(X_i, \hat{X}_i) \\ \text{where} & \: \: \: \hat{X}_i = M_{\text{KL}}(\mathcal{C}, \{w_{ij}\}_{j=1}^r) \\ \text{subject to} & \: \: \: w_{ij} \geq 0, \forall i,j \\ & \sum_j w_{ij} = 1, \forall i. \end{align} Here $\text{J}(X, \hat{X})$ is the {\it symmetrized KL}, also known as the J-divergence, between the zero-mean Gaussian densities with covariance matrices $X$ and $\hat{X}$, and is given by: $$ \text{J}(X,\hat{X}) = \frac{1}{4} \,\mathrm{tr}\left[X^{-1}\hat{X} + \hat{X}^{-1}X - 2I_n\right] $$ \begin{algorithm} \KwIn{$\mathcal{X} = \{X_i\}_{i=1}^N$ $\subset \mathcal{P}_n$, $\eta > 0$, $\epsilon>0$} \KwOut{$\mathcal{C} = \{C_j\}_{j=1}^r$ $\subset \mathcal{P}_n$, $W = [w_{ij}] \geq 0$} Initialize $\mathcal{C}$ by using the k-means algorithm on $\mathcal{P}_n$\; Initialize the entries of $W$ with random non-negative numbers drawn from $[0,1]$\; \For{$i = 1, \cdots, N$}{ Normalize the vector $w(i,.)$ so that it sums to $1$ \; } $flag \leftarrow 1$\; Compute the objective function, $E$, using Equation \ref{eq15}\; $E^\text{old} \leftarrow E$\; $\text{iter} \leftarrow 1$\; $\lambda(1) \leftarrow 1$\; $Y^W \leftarrow W$\; $Y^C_j \leftarrow C_j$, $\forall j$\; \While{flag $=1$}{ Perform an alternating step optimization by alternating between $\mathcal{C}$ and $W$ using the accelerated gradient descent method \cite{bubeck2014theory}\; $\lambda(\text{iter}+1) \leftarrow \frac{1+\sqrt{1+4\lambda(\text{iter})^2}}{2}$\; $\gamma(\text{iter}) \leftarrow 
\frac{1-\lambda(\text{iter})}{\lambda(\text{iter}+1)}$\; $nY^W \leftarrow W - \eta\,\frac{\partial E}{\partial W}$\; $W \leftarrow (1-\gamma(\text{iter}))nY^W + \gamma(\text{iter})Y^W$\; Using $W$, update $C_j$ using the following steps, $\forall j$\; $nY^C_j \leftarrow Exp_{C_j} \left(-\eta \nabla E(C_j)\right)$\; $C_j \leftarrow Exp_{nY^C_j}\left(\gamma(\text{iter})Log_{nY^C_j}Y^C_j\right)$\; Recompute the objective function, $E$, using the new $\mathcal{C}$ and $W$, using Eq. \ref{eq15}\; \If{$|E - E^\text{old}| < \epsilon$}{ {\it flag} $\leftarrow 0$\; } $\text{iter} \leftarrow \text{iter} + 1$ } \caption{{The SDL algorithm}} \label{alg1} \end{algorithm} Now, we present an algorithm for DL and SC on $\mathcal{P}_n$ that will henceforth be referred to as the {\it information theoretic dictionary learning and sparse coding} (SDL) algorithm. We use an alternating step optimization procedure, i.e., we first learn $W$ with $\mathcal{C}$ held fixed, and then learn $\mathcal{C}$ with $W$ held fixed. We use the well-known Nesterov accelerated gradient descent \cite{bubeck2014theory}, adapted to Riemannian manifolds, for the optimization. The algorithm is summarized in the Algorithm block \ref{alg1}. In the algorithm, after the initialization steps up to line $13$, we perform an alternating step optimization between $\mathcal{C}$ and $W$. Lines $15$-$18$ update $W$ using the accelerated gradient descent. In line $20$, we take a Riemannian gradient descent step, mapping the scaled negative gradient onto the manifold (to get $nY^C_j$) using the Riemannian exponential map ($Exp$) \cite{do1992riemannian}. Then, we update $C_j$ using the Riemannian accelerated gradient descent steps by first lifting $Y^C_j$ onto the tangent space anchored at $nY^C_j$ (using the Riemannian inverse exponential map, $Log$) and then mapping it back onto the manifold using the $Exp$ map. Then, we recompute the error using the updated $\mathcal{C}$ and $W$ and iterate. 
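As an illustration, a short numerical sketch (our own, on synthetic random SPD matrices) of the two closed-form ingredients the SDL algorithm relies on for $\mathcal{P}_n$: the J-divergence between zero-mean Gaussians, with the trace taken so that the divergence is a scalar, and the symmetrized weighted KL-center from the Lemma above; `scipy.linalg.sqrtm` computes the matrix square root.

```python
import numpy as np
from scipy.linalg import sqrtm

def j_div(X, Y):
    """J-divergence between zero-mean Gaussians N(0, X) and N(0, Y):
    (1/4) tr(X^{-1} Y + Y^{-1} X - 2 I)."""
    n = X.shape[0]
    return 0.25 * (np.trace(np.linalg.solve(X, Y) + np.linalg.solve(Y, X)) - 2 * n)

def kl_center(mats, wts):
    """Symmetrized weighted KL-center sqrt(B^-1) sqrt(sqrt(B) A sqrt(B)) sqrt(B^-1),
    with A, B the weighted means of the matrices and of their inverses."""
    wts = np.asarray(wts, dtype=float)
    wts = wts / wts.sum()
    A = sum(w * X for w, X in zip(wts, mats))
    B = sum(w * np.linalg.inv(X) for w, X in zip(wts, mats))
    sB = np.real(sqrtm(B))
    sBinv = np.linalg.inv(sB)
    return sBinv @ np.real(sqrtm(sB @ A @ sB)) @ sBinv

rng = np.random.default_rng(2)
def rand_spd(n):
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)      # well-conditioned random SPD matrix

wts = np.array([0.4, 0.3, 0.2, 0.1])
mats = [rand_spd(3) for _ in range(4)]
C = kl_center(mats, wts)
print(bool(np.all(np.linalg.eigvalsh(C) > 0)))   # the center is again SPD -> True
```

A quick sanity check on the closed form: setting the gradient of the weighted J-divergence sum to zero gives the stationarity condition $MBM = A$, which the formula above satisfies by construction.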
\section{Experimental Results}\label{sec4} In this section, we present experimental results on several real data sets demonstrating the performance of our algorithm, {\it SDL}. We present two sets of experiments showing the performance in terms of (1) reconstruction error and achieved sparsity on a statistical manifold and (2) classification accuracy and achieved sparsity on the manifold of $n \times n$ SPD matrices, $\mathcal{P}_n$. Though the objective of a DL and SC algorithm is to minimize the reconstruction error, following the common trend in the literature of using classification accuracy as a measure, we report classification accuracy on popular datasets for data on $\mathcal{P}_n$. Since the main thrust of the paper is a novel DL and SC algorithm on a statistical manifold, we also present reconstruction error experiments in support of the algorithm's performance. All the experimental results reported here were obtained on a desktop with a single 3.33 GHz Intel-i7 CPU and 24 GB RAM. We did not compare our work with the algorithm proposed in \cite{Xie2013} since their publicly available code makes comparisons computationally infeasible for even moderately large data. \subsection{Experimental results on the statistical manifold} In order to demonstrate the performance of {\it SDL} on the {\it MNIST data} \cite{lecun1998gradient}, we randomly chose $100$ images from each of the $10$ classes. We then represent each image as a probability vector as follows. We consider the image graph $Z = \left(x, y, I(x,y)\right)_{(x,y)}$ and take $Z$ as the random vector in $\mathbf{R}^2 \times \mathbf{Z}_{+}$. The probability mass function (p.m.f.) of $Z$ is given by: $\text{Pr}\left(Z = \left(x_0,y_0,I(x_0,y_0)\right)\right) = \frac{I(x_0,y_0)}{\sum_{x,y}I(x,y)}$. Now, each image is mapped to a probability vector (or discrete density) and we use our formulation of DL and SC to reconstruct the images. Note that the reconstruction is up to a scale factor. 
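As a minimal sketch (our own illustration, not the released experimental code) of the image-to-density mapping just described:

```python
import numpy as np

def image_to_pmf(img):
    """Map a grayscale image I to the discrete density
    Pr(Z = (x0, y0, I(x0, y0))) = I(x0, y0) / sum_{x,y} I(x, y)."""
    img = np.asarray(img, dtype=float)
    total = img.sum()
    if total <= 0:
        raise ValueError("image must have positive total intensity")
    return img / total

img = np.array([[0, 1, 2],
                [3, 4, 5],
                [0, 1, 2]])             # stand-in for a grayscale digit image
p = image_to_pmf(img)
print(round(p.sum(), 12))               # -> 1.0
```

The normalization discards the overall intensity scale, which is why the reconstructions are only defined up to a scale factor.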
For comparison, we used two popular methods, namely (i) the K-SVD based method in \cite{aharon2005k} (we set the number of atoms to twice the number of classes and chose a $50\%$ sparsity level) and (ii) the Log-Euclidean sparse coding (LE-SC) method \cite{Guo2013}. Both of these methods assume that the data lie in a vector space. As the objective functions of these methods differ, we use the {\it mean squared error} (MSE) as the metric to measure the reconstruction error. We also report the sparsity achieved by these methods. \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|c|} \hline \multicolumn{2}{|c|}{SDL} & \multicolumn{2}{c|}{K-SVD \cite{aharon2005k}} & \multicolumn{2}{c|}{LE-SC \cite{Guo2013}} \\ \cline{1-6} MSE & $\varsigma (\%)$ & MSE & $\varsigma (\%)$ & MSE & $\varsigma (\%)$ \\ \hline \textbf{0.052} & 97.8 & 0.070 & 98.3 & 0.098 & \textbf{98.5} \\ \hline \end{tabular} \end{center} \caption{Comparison results on MNIST data} \label{tab0} \end{table} From Table \ref{tab0}, it is evident that, although K-SVD and LE-SC perform better in terms of sparsity, SDL achieved the best reconstruction error while retaining sparse codes. Some reconstruction results are also shown in Fig. \ref{fig0}. The results clearly indicate that SDL gives ``sharper'' reconstructions compared to the two competing methods. This is because the formulation of SDL respects the geometry of the underlying data while the other two methods do not. 
\begin{figure}[!ht] \centering \begin{minipage}[b]{0.45\textwidth} \centering \includegraphics[width=0.8\textwidth]{montage.png} \end{minipage} \caption{Reconstruction of MNIST data (left to right) (a) original data (b) SDL (c) K-SVD (d) LE-SC} \label{fig0} \end{figure} \subsection{Experimental results on $\mathcal{P}_n$} Now, we demonstrate the effectiveness of our proposed method, {\it SDL}, compared to state-of-the-art algorithms, using the sparse codes as features for classification problems on the manifold $\mathcal{P}_n$ of SPD matrices. We report the classification accuracy to measure the performance in the context of the classification experiments. Moreover, we also report a measure of sparsity, denoted by $\varsigma$, which captures the percentage of the elements of $W$ that are $\leq 0.01$. We performed comparisons to three state-of-the-art methods, namely, (i) Riemannian sparse coding for SPD matrices (Riem-SC) \cite{Cherian2014}, (ii) sparse coding using the kernel defined by the symmetric Stein divergence (kStein-SC) \cite{Harandi2012}, and (iii) Log-Euclidean sparse coding (LE-SC) \cite{Guo2013}. For the LE-SC, we used the highly cited SPAMS toolbox \cite{mairal2009online} to perform the DL and SC on the tangent space. We tested our algorithm on three commonly used (in this context) and publicly available data sets, namely, (i) the Brodatz texture data \cite{brodatz1966textures}, (ii) the Yale ExtendedB face data \cite{KCLee05}, and (iii) the ETH80 object recognition data \cite{eth80}. The data sets are described in detail below. From each data set, we first extract $\mathcal{P}_n$-valued features. Then, {\it SDL} learns the dictionary atoms and the sparse codes. For the Riem-SC and the kStein-SC, in contrast, we ran k-means on $\mathcal{P}_n$ and used the cluster centers as the dictionary atoms. 
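A generic sketch of such a $\mathcal{P}_n$-valued region-covariance feature. The per-pixel feature choice here (intensity and absolute gradients) and the use of the sample covariance are our own simplifications for illustration; the descriptors used in the experiments below use richer feature vectors and the second-moment matrix $FF^t$. Both yield SPD matrices after the $\sigma I$ regularization.

```python
import numpy as np

def region_covariance(img, sigma=1e-3):
    """Covariance descriptor of per-pixel features (I, |dI/dx|, |dI/dy|);
    sigma * I is added to guarantee positive definiteness (an SPD matrix)."""
    img = np.asarray(img, dtype=float)
    gy, gx = np.gradient(img)                    # image derivatives
    F = np.stack([img.ravel(),
                  np.abs(gx).ravel(),
                  np.abs(gy).ravel()])           # 3 features x (num pixels)
    C = np.cov(F)                                # 3 x 3 feature covariance
    return C + sigma * np.eye(3)

rng = np.random.default_rng(3)
block = rng.random((32, 32))                     # stand-in for a 32x32 texture block
C = region_covariance(block)
evals = np.linalg.eigvalsh(C)
print(C.shape, bool(np.all(evals > 0)))          # (3, 3) True
```

Each image block thus becomes a single point on $\mathcal{P}_3$, and a set of blocks becomes the manifold-valued input to the DL and SC stage.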
For the Log-Euclidean sparse coding, we used the Riemannian inverse exponential map \cite{do1992riemannian} at the Fr{\'e}chet mean (FM) of the data and performed a Euclidean DL and SC on the tangent space at the FM. For classification, we used the $\nu$-SVM \cite{scholkopf2002learning} with the sparse codes taken as the features. The SVM parameters were learned using a cross-validation scheme. \ \\ \vspace{0.2cm} \ \\ {\bf Brodatz texture data:} This dataset contains $111$ texture images. We used the same experimental setup as was used in \cite{sivalingam2010tensor}. Each image is of size $256 \times 256$ and we first partitioned each image into $64$ non-overlapping blocks of size $32\times 32$. From each block, we computed a $5\times 5$ covariance matrix $FF^t$ by summing over the block, where $F = (I, |\frac{\partial I}{\partial x}|, |\frac{\partial I}{\partial y}|, |\frac{\partial^2 I}{\partial x^2}|, |\frac{\partial^2 I}{\partial y^2}|)^t$. The matrix $FF^t$ is symmetric positive semidefinite. To make this matrix an SPD matrix, we add $\sigma I$ to it, where $\sigma$ is a small positive real number. Thus, the covariance descriptor from each image lies on $\mathcal{P}_5$. For this data, we consider each image as a class, resulting in a $111$-class classification problem. As DLM is computationally very expensive, this $111$-class classification is infeasible using that method; hence, we also randomly selected $16$ texture images and performed classification on $16$ classes to facilitate this comparison. We took the number of dictionary atoms ($r$) to be $555$ and $80$ for the $111$ classes and $16$ classes respectively. \begin{figure}[!ht] \centering \begin{minipage}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{brodatz.jpg} \end{minipage} \caption{Brodatz data samples. 
} \label{fig1} \end{figure} \ \\ \vspace{0.2cm} \ \\ {\bf Yale face data:} The Yale ExtendedB face data set contains $16128$ face images acquired from $28$ human subjects under varying pose and illumination conditions. We randomly fixed a pose and, for that pose, considered all the illuminations, leading to $252$ face images taken from $28$ human subjects. We used a similar experimental setup to that described in \cite{chakraborty2015iccv}. From each face image, we constructed a SIFT descriptor \cite{sift} and took the first $4$ principal vectors of this descriptor. Thus, each image is identified with a point on the Grassmann manifold of appropriate dimension. Then, inspired by the isometric mapping between the Grassmannian and $\mathcal{P}_n$ \cite{huang2015projection}, we constructed the covariance descriptor from the aforementioned principal vectors. Here, we used $84$ dictionary atoms. \begin{figure}[!ht] \centering \begin{minipage}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{yale_faces.png} \end{minipage} \caption{Yale face data samples. } \label{fig2} \end{figure} \ \\ {\bf ETH80 object recognition data:} This dataset contains $8$ different objects, each having $10$ different instances from $41$ different views, resulting in $3280$ images. We first segmented the objects from each image using the provided ground truth. We used both texture and edge features to construct the covariance matrix. For the texture features, we used three texture filters \cite{laws1980rapid}. The filter bank is $[H_1H_1^t, H_2H_2^t, H_3H_3^t]$, where $H_1 = [1,2,1]^t$, $H_2 = [-1,0,1]^t$, $H_3 = [-1,2,-1]^t$. In addition to the three texture features, we used the image intensity gradient and the magnitude of the smoothed image using a Laplacian of Gaussian filter. We used $40$ dictionary atoms for this data. \begin{figure}[!ht] \centering \begin{minipage}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{eth80.png} \end{minipage} \caption{ETH80 data samples.
} \label{fig3} \end{figure} \ \\ Performance comparisons are depicted in Tables \ref{tab1} and \ref{tab2}. All three of these methods are intrinsic, i.e., the DL and SC are tailored to the underlying manifold $\mathcal{P}_n$. In order to compute the reconstruction error, we used the intrinsic affine-invariant metric on $\mathcal{P}_n$. From the tables, we can see that SDL yields the best sparsity amongst the three manifold-valued methods (excluding LE-SC). Furthermore, on the Yale face data set, SDL is the most computationally efficient algorithm compared to Riem-SC and kStein-SC. In terms of reconstruction error, our proposed method outperforms its competitors. Note that, for kStein-SC, computing the reconstruction error is not meaningful since it solves the DL and SC problem in a Hilbert space after a kernel mapping. \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{1em}{Data} & \multicolumn{2}{c|}{SDL} & \multicolumn{2}{c|}{Riem-SC \cite{Cherian2014}} & \multicolumn{2}{c|}{kStein-SC \cite{Harandi2012}} & \multicolumn{2}{c|}{LE-SC \cite{Guo2013}} \\ \cline{2-9} & acc. (\%) & $\varsigma (\%)$ & acc. (\%) & $\varsigma (\%)$ & acc. (\%) & $\varsigma (\%)$ & acc.
(\%) & $\varsigma (\%)$ \\ \hline Brodatz & \textbf{95.02} & 95.96 & 66.21 & 61.54 & 88.57 & 95.40 & 60.25 & \textbf{98.52} \\ Yale & \textbf{68.68} & 92.95 & 59.98 & 69.10 & 53.55 & 97.58 & 8.35 & \textbf{99.78} \\ Eth80 & \textbf{96.10} & 88.07 & 91.46 & 78.64 & 45.79 & 97.97 & 45.79 & \textbf{97.97} \\ \hline \end{tabular} \end{center} \caption{Comparison results on three data sets in terms of classification accuracy and amount of sparsity} \label{tab1} \end{table*} \begin{table*} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline \multirow{2}{1em}{Data} & \multicolumn{2}{c|}{SDL} & \multicolumn{2}{c|}{Riem-SC \cite{Cherian2014}} & \multicolumn{2}{c|}{kStein-SC \cite{Harandi2012}} & \multicolumn{2}{c|}{LE-SC \cite{Guo2013}} \\ \cline{2-9} & recon. err. & Time(s) & recon. err. & Time(s) & recon. err. & Time(s) & recon. err. & Time(s) \\ \hline Brodatz & \textbf{0.55} & 599.33 & 1.24 & 539.63 & N/A & 527.57 & 3.21 & \textbf{5.46} \\ Yale & \textbf{0.001} & 47.25 & 0.005 & 293.91 & N/A & 131.97 & 0.25 & \textbf{9.16} \\ Eth80 & \textbf{0.005} & 153.60 & 0.017 & 240.96 & N/A & 213.93 & 0.16 & \textbf{3.62} \\ \hline \end{tabular} \end{center} \caption{Comparison results on three data sets in terms of reconstruction error and computation time} \label{tab2} \end{table*} We also depict the comparative performance as a function of the number of dictionary atoms for the four algorithms in Fig. \ref{fig4} (for the Brodatz data) and in Fig. \ref{fig5} (for the Yale face data set). Here, we show the comparative performance in terms of classification accuracy, reconstruction error and required CPU time. For both of these data sets, we can see the superior performance of SDL over its competitors in terms of classification accuracy and sparsity. As the objective of any DL algorithm is to reconstruct the samples, we also show the reconstruction error, thereby depicting the competitive performance of SDL relative to the other algorithms.
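The $5\times 5$ covariance descriptor used above for the Brodatz data can be sketched as follows (a minimal numpy version; the function name, the use of `np.gradient` finite differences, and the choice $\sigma = 10^{-3}$ are our assumptions, not the authors' implementation):

```python
import numpy as np

def brodatz_descriptor(block, sigma=1e-3):
    """Build the 5x5 descriptor F F^t summed over an image block, where each
    pixel contributes (I, |dI/dx|, |dI/dy|, |d2I/dx2|, |d2I/dy2|), then add
    sigma * I so the result is an SPD matrix on P_5."""
    Ix, Iy = np.gradient(block)            # first-order derivatives
    Ixx = np.gradient(Ix, axis=0)          # second-order derivatives
    Iyy = np.gradient(Iy, axis=1)
    # per-pixel features as the columns of F (5 x num_pixels)
    F = np.stack([block, np.abs(Ix), np.abs(Iy),
                  np.abs(Ixx), np.abs(Iyy)]).reshape(5, -1)
    return F @ F.T + sigma * np.eye(5)     # symmetric positive definite

block = np.random.default_rng(0).random((32, 32))
C = brodatz_descriptor(block)
assert np.all(np.linalg.eigvalsh(C) > 0)   # C lies on P_5
```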
\begin{figure*} \centering \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{brodatz_res1.png} \end{minipage} \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{brodatz_res2.png} \end{minipage} \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{brodatz_res3.png} \end{minipage} \caption{Brodatz data results: \emph{Left: } Sparsity, \emph{Middle: } Classification accuracy and \emph{Right: } Reconstruction error with varying number of dictionary atoms.} \label{fig4} \end{figure*} \begin{figure*} \centering \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{yale_faces_res1.png} \end{minipage} \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{yale_faces_res2.png} \end{minipage} \begin{minipage}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{yale_faces_res3.png} \end{minipage} \caption{Yale face data results: \emph{Left: } Sparsity, \emph{Middle: } Classification accuracy and \emph{Right: } Reconstruction error with varying number of dictionary atoms.} \label{fig5} \end{figure*} \section{Conclusions}\label{sec5} In this paper, we presented an information theoretic dictionary learning and sparse coding algorithm for data residing on a statistical manifold. In the traditional dictionary learning approach on a vector space, the goal is to express each data point as a sparse linear combination of the dictionary atoms. This is typically achieved via the use of a data fidelity term and a term to induce sparsity on the coefficients of the linear combination. In this paper, we proposed an alternative formulation of the DL and SC problem for data residing on statistical manifolds, where we do not have an explicit sparsity constraint in our objective function. Our algorithm, SDL, expresses each data point, which is a probability distribution, as a weighted KL-center of the dictionary atoms. 
We presented a proof that our proposed formulation yields sparsity without explicit enforcement of this constraint, and this result holds true when the KL-divergence is replaced by the Hellinger distance between probability densities. Further, we presented an extension of this formulation to data residing on $\mathcal{P}_n$. A Riemannian accelerated gradient descent algorithm was employed to learn the dictionary atoms, and an accelerated gradient descent algorithm was employed to learn the sparse weights, in a two-stage alternating optimization framework. The experimental results demonstrate the effectiveness of the SDL algorithm in terms of reconstruction and classification accuracy as well as sparsity. \begin{center} {\bf Acknowledgements} \end{center} This research was funded in part by the NSF grants IIS-1525431 and IIS-1724174 to BCV. We thank Dr. Shun-ichi Amari for his insightful comments on a preliminary draft of this manuscript. \bibliographystyle{splncs03}
\section{Introduction} Since the introduction of generative adversarial networks (GANs) \cite{goodfellow2014generative}, researchers have delved deeply into improving the quality of generated images. Recently, a number of new approaches have been proposed for high-quality image generation, e.g., ProgressiveGAN~\cite{karras2017progressive}, SplittingGAN~\cite{grinblat2017class}, SGAN~\cite{huang2017stacked}, and WGAN-GP~\cite{salimans2016improved}. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{images/intro.PNG} \caption{Multiple generators specialized in particular data clusters}\label{fig:intro} \end{figure} We propose a novel GAN model equipped with multiple generators, each of which specializes in learning a certain modality of the dataset (see Fig.~\ref{fig:intro}). In addition to the generators, we employ an auxiliary network that determines which generator will be trained by a given training instance. We name this auxiliary network the \textit{gating networks}, following the precedent~\cite{jacobs1991adaptive}. Ensembling multiple neural networks coupled with gating networks was first introduced to achieve higher performance in multi-speaker phoneme recognition~\cite{hampshire1990meta}. In their method, the design of the loss function caused the neural networks to cooperate. Later research introduced a new loss function that stimulates competition among neural networks, where the involved neural networks attempt to specialize in a certain task rather than redundantly learn the same feature~\cite{jacobs1991adaptive}. The algorithm is now called the \textit{mixture of experts} in various machine learning domains. Reminiscent of their work, we name our proposed GAN approach MEGAN, short for mixture of experts GAN. The gating networks in our proposed MEGAN are responsible for selecting one particular generator that would perform best given a certain condition.
The gating networks consist of two submodules, an \textit{assignment module} and \textit{Straight-Through Gumbel-Softmax}~\cite{jang2016categorical}, which we discuss in detail in Section~\ref{sec:gating}. Although MEGAN inherits the idea of multiple generators and the gating networks, we do not adopt the loss function proposed in~\cite{jacobs1991adaptive} but instead utilize adversarial learning to leverage the latest success of GANs. Our work has two contributions. First, we build a mixture of experts GAN algorithm that is capable of encouraging generators to learn the different modalities existing in our data. Second, we utilize the newly discovered Gumbel-Softmax reparameterization trick and develop a \textit{load-balancing} regularization to further stabilize the training of MEGAN. We evaluate our model using various criteria, notably achieving an MS-SSIM score of 0.2470 for CelebA, which suggests that MEGAN generates more diverse images compared to other baseline models. Our generated samples also achieve a competitive inception score of 8.33 in an unsupervised setting. \section{Related Work} Several studies on GANs have been proposed to stabilize the learning process and improve the quality of generated samples. Some of these studies incorporated novel distance metrics to achieve better results. For instance, the original GAN~\cite{goodfellow2014generative} suffers from the vanishing gradient problem arising from the sigmoid cross-entropy loss function used by the discriminator. LSGAN~\cite{mao2017least} solves this problem by substituting the cross-entropy loss with the least-squares loss function. WGAN~\cite{arjovsky2017wasserstein} adopts the Earth mover's distance, which enables stable training and mitigates the infamous mode-collapse problem. WGAN-GP progresses one step further by adopting a gradient penalty term for stable training and higher performance.
Meanwhile, BEGAN~\cite{berthelot2017began} aims to match auto-encoder loss distributions using a loss derived from the Wasserstein distance, instead of matching the data distributions directly. In addition, DRAGAN~\cite{kodali2017train} prevents mode collapse using a no-regret algorithm. Other algorithms such as AdaGAN~\cite{tolstikhin2017adagan} and MGAN~\cite{hoang2017multi} employ ensembling approaches, using multiple generators to learn complex distributions. Based on the idea of boosting in the context of ensemble models, AdaGAN trains generators sequentially, adding a new individual generator into a mixture of generators at each step. While AdaGAN gradually decreases the importance of generators as more generators are added into the model, within the framework of our proposed MEGAN, we pursue an equal balance between generators by explicitly regularizing the model, which avoids the problem of the model being dominated by a particular generator. MGAN adopts a predefined mixture weight of generators and trains all generators simultaneously; in contrast, our proposed MEGAN dynamically selects generators through the gating networks and trains the generators one at a time. MGAN's fixed mixture model is suboptimal compared to our trainable mixture model. Our proposed MEGAN is different from these models in that each generator can generate images on its own and learn different and salient features. MAD-GAN~\cite{ghosh2017multi} is strongly related to our work. Having multiple generators and a carefully designed discriminator, MAD-GAN overcomes the mode-collapsing problem by explicitly forcing each generator to learn different mode clusters of a dataset. Our MEGAN and MAD-GAN are similar in that both models allow the generators to specialize in different submodalities. However, MEGAN is differentiated from MAD-GAN in two aspects. First, all generators in MEGAN share the same latent vector space, while the generators of MAD-GAN are built on separate latent vector spaces.
Second, the generators of MAD-GAN can theoretically learn identical mode clusters; however, the gating networks built into our MEGAN ensure by design that each generator learns different modes. \section{Categorical Reparameterization} \label{sec:categorical} Essentially, GANs generate images when given latent vectors. Given $n$ generators and a latent vector $\mathbf{z}$, our model aims to select the particular generator that will produce the best-quality image. This essentially raises the question as to how a categorical decision is made. The Gumbel-Max trick~\cite{gumbel1954statistical,maddison2014sampling} allows one to sample a one-hot vector $\mathbf{s}$ based on the underlying probability distribution $\pi_i$: \begin{align} \begin{split} \mathbf{s} &= one\_hot(\argmax_{i}[a_{i}+\log\pi_{i}]) \\ a_{i} &= -\log({-\log(u_{i})}) \end{split}\end{align} where $u_i$ is sampled from Uniform(0,1). However, the $\argmax$ operator in the Gumbel-Max trick is a stumbling block when training via backpropagation because it gives zero gradients passing through the stochastic variable and precludes gradients from flowing further. A recent finding~\cite{jang2016categorical} suggests an efficient detour that allows backpropagation even in the presence of discrete random variables via a categorical reparameterization trick. The Gumbel-Softmax function generates a sample $\mathbf{y}$ that approximates $\mathbf{s}$ as follows: \begin{equation} \begin{split} \mathbf{y}_i &= \frac{\exp((\log\pi_i+a_i)/\tau)}{\sum_{j=1}^n\exp((\log\pi_j+a_j)/\mathbf{\tau})} \end{split}\end{equation} where $\mathbf{y}_i$ is the $i$-th component of the vector $\mathbf{y}$, and $\mathbf{\tau}$ is the temperature that determines how closely the function approximates the sample $\mathbf{s}$. It is noteworthy that, in practice, we directly predict $\log\pi_i$ through the \textit{assignment module} that we will discuss in Section~\ref{sec:gating}.
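The Gumbel-Max and Gumbel-Softmax sampling above can be sketched as follows (a minimal numpy version; the function name and the toy logits are ours):

```python
import numpy as np

def gumbel_softmax(log_pi, tau, rng):
    """Soft sample y approximating a categorical draw with logits log_pi;
    small tau pushes y toward a one-hot vector."""
    u = rng.uniform(size=log_pi.shape)
    a = -np.log(-np.log(u))                 # Gumbel(0, 1) noise
    y = np.exp((log_pi + a) / tau)
    return y / y.sum()

rng = np.random.default_rng(0)
log_pi = np.log(np.array([0.1, 0.2, 0.7]))
y = gumbel_softmax(log_pi, tau=0.1, rng=rng)
assert np.isclose(y.sum(), 1.0)             # a valid distribution
# the Gumbel-Max sample s is simply one_hot(argmax(log_pi + a))
```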
\subsection{Straight-Through Gumbel-Softmax} The Gumbel-Softmax method approximates discrete sampling by gradually annealing the temperature $\tau$. This is problematic in our setting because, when the temperature is high, a Gumbel-Softmax distribution is not categorical, leading all generators to be engaged in producing a fake image for a given latent vector $\mathbf{z}$. Our objective is to choose the single most appropriate generator. Therefore, we do not use the Gumbel-Softmax but adopt the following Straight-Through Gumbel-Softmax (STGS). The STGS always generates discrete outputs (even when the temperature is high) while allowing the gradients to flow. In practice, the STGS calculates $\mathbf{y}$ but returns $\mathbf{y\textsubscript{hard}}$: \begin{equation}\label{eq:stgs} \begin{split} \mathbf{y\textsubscript{hard}} &= \mathbf{y} + (one\_hot(\argmax_{i}(\mathbf{y}_i)) - \mathbf{y}_d) \end{split}\end{equation} where $\mathbf{y}_d$ is a variable having the same value as $\mathbf{y}$ but detached from the computation graph. With this trick, the gradients flow through $\mathbf{y}$ and allow the networks to be trained with the annealing temperature. \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth]{images/main_networks} \caption{\textbf{The proposed architecture of MEGAN}; \textbf{(a)} shows the overview of our main networks. Given a latent vector $\mathbf{z}$, each of the $n$ generators produces an output ${o_i}$. The latent vector \textbf{z} and $n$ feature vectors (denoted in yellow) extracted from the generators are given as input to the gating networks that produce a one-hot vector $\mathbf{g}$, as shown in the middle. The image chosen by the one-hot vector (marked as ``Fake Image'') will be fed into the discriminator, which measures the adversarial loss with regard to both real and fake classes. \textbf{(b)} illustrates an in-depth view of the gating networks.
The gating networks output a one-hot vector $\mathbf{g}$.}\label{fig:mainnet} \end{figure*} \section{Mixture of Experts GAN} In this section, we illustrate the details of our proposed MEGAN and discuss how the generators become specialized in generating images with particular characteristics, following the notion of the mixture of experts~\cite{jacobs1991adaptive}. \subsection{Proposed Network Architecture} Let ${G_i}$ denote a generator in a set $\mathbf{G}=\{{G_1}, {G_2}, \cdots, {G_n}\}$, and let $\mathbf{z}\sim \mathcal{N}(0,1)$ be a random latent vector. A latent vector $\mathbf{z}$ is fed into each generator, yielding images $\mathbf{o}_i\in \mathbf{O} = \{\mathbf{o}_1, \mathbf{o}_2, \cdots, \mathbf{o}_n\}$ and their feature vectors $\mathbf{f}_i\in \mathbf{F} = \{\mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_n\}$. Each feature vector is produced in the middle of ${G_i}$. In our experiments, we used the ReLU activation map from the second transposed convolution layer of each generator as the representative feature vector. The latent vector $\mathbf{z}$ and all of the feature vectors $\mathbf{f}_i$ are then passed to the gating networks to measure how well $\mathbf{z}$ fits each generator. The gating networks produce a one-hot vector $\mathbf{g} = \langle{g_1},{g_2}, \cdots, {g_n}\rangle$ where ${g_i} \in \{0,1\}$. We formulate the entire process as follows: \begin{gather} (\mathbf{f}_i, \mathbf{o}_i) = \mathit{G}_i(\mathbf{z})\label{eq:gen_feature}\\ \mathbf{g} = \textbf{GN}(\mathbf{z}, \mathbf{f}_1, \mathbf{f}_2, \cdots, \mathbf{f}_n)\label{eq:gating}\\ \mathbf{FI} =\sum_{i=1}^{n}{g}_i\mathbf{o}_i\label{eq:mainnet} \end{gather} where $\mathbf{FI}$ denotes the generated fake image that will be delivered to the discriminator, and \textbf{GN} denotes the gating networks. Fig.~\ref{fig:mainnet} provides an overview of the proposed networks.
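The forward pass of Eqs.~\eqref{eq:gen_feature}--\eqref{eq:mainnet} can be sketched as follows (numpy, forward pass only; in an autodiff framework the straight-through trick of Eq.~\eqref{eq:stgs} returns the soft sample plus a detached correction so that gradients flow; the toy generator outputs are ours):

```python
import numpy as np

def stgs_hard(y):
    """Return the strict one-hot vector g from a soft Gumbel-Softmax
    sample y (the forward pass of the Straight-Through estimator)."""
    g = np.zeros_like(y)
    g[np.argmax(y)] = 1.0
    return g

y = np.array([0.1, 0.7, 0.2])                  # soft sample from the STGS
g = stgs_hard(y)                               # -> [0., 1., 0.]
# three fake generator outputs o_i; FI = sum_i g_i * o_i picks exactly one
outputs = np.stack([np.full((2, 2), float(i)) for i in range(3)])
FI = np.tensordot(g, outputs, axes=1)          # equals outputs[1]
assert np.allclose(FI, outputs[1])
```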
\subsection{Gating Networks}\label{sec:gating} In the context of the mixture of experts, the gating networks play a central role in the specialization of submodules~\cite{jacobs1991adaptive}. We use auxiliary gating networks that assign each $\mathbf{z}$ to a single generator so as to motivate each generator to learn different features. Concretely, we aim to train each generator ${G_i}$ to (1) be in charge of the images with certain characteristics potentially corresponding to a particular area in the entire feature space, and (2) learn the specialized area accordingly. The gating networks consist of two distinct modules, an \textit{assignment module} that measures how well the latent vector $\mathbf{z}$ fits each generator and an \textit{STGS module} that samples a generator based on the underlying distribution given by the assignment module. \paragraph{Assignment Module} The assignment module takes the feature vectors $\mathbf{f}_i\in\mathcal{R}^{k}$ and first encodes each of them into a hidden state $\mathbf{h}_i\in\mathcal{R}^{m}$ of smaller dimension ${m}$: \begin{equation} \begin{split} \mathbf{h}_i =ReLU(W^i\mathbf{f}_i) \end{split}\end{equation} where $W^i$ denotes a linear transformation for feature vector $\mathbf{f}_i$. Encoding each feature vector reduces the total complexity significantly, because $k$, the dimension of a feature map $\mathbf{f}_i$, is typically large, e.g., $k = 8192$ in our implementation. The reduced dimension $m$ is a hyperparameter that we set to 100. The $\mathbf{h}_i$ are then concatenated along with the latent vector. The merged vector is then passed to a three-layer perceptron, which consists of batch normalizations and ReLU activations: \begin{equation} \begin{split} \textit{\textbf{l}} = MLP([\mathbf{z}, \mathbf{h}_1, \mathbf{h}_2, \dots, \mathbf{h}_n]) \end{split}\end{equation} where the resulting $\textit{\textbf{l}}\in\mathcal{R}^{n}$ is a logit vector, the input for the STGS.
$\textit{\textbf{l}}$ also corresponds to $\log\pi_i$ explained in Section~\ref{sec:categorical}. \paragraph{STGS Module} \textit{\textbf{l}} is an unnormalized density that determines which generator most adequately fits the latent vector. The STGS samples a one-hot vector with \textit{\textbf{l}} as the underlying distribution. We denote the sampled one-hot vector by $\mathbf{g}$, which corresponds to y\textsubscript{hard} illustrated in Eq.~\eqref{eq:stgs}. It strictly yields one-hot vectors. Thus, with the STGS, we can select one particular generator among many, enabling each generator to focus on a sub-area of the latent vector space decided by the gating networks. It is noteworthy that the assignment module is updated by the gradients flowing through the STGS module. \subsection{Load-Balancing Regularization} We observed that the gating networks converge too quickly, often resorting to only a few generators. The networks tend to be strongly affected by the first few data samples and favor generators chosen in their initial training stages over others. The fast convergence of the gating networks is undesirable because it leaves little room for other generators to learn in the later stages. Our goal is to distribute the data space across all the generators involved. To prevent the networks from resorting to a few generators, we force the networks to choose the generators with equal frequency within a mini-batch. Thus, we introduce a regularization to the model as follows: \begin{align}\label{load_balance} \begin{split} \mathcal{L}_{LB} &= \sum_{i=1}^n\norm{p_i-\frac{1}{n}}_2^2, \\ p_i &= \frac{\sum_{j=1}^b\mathds{1}[g_{i}^j=1]}{b}, \end{split}\end{align} where $\mathcal{L}_{LB}$ indicates \textit{the load-balancing} loss, $b$ is the mini-batch size, and ${g_{i}^j}$ is the $i$-th element of the one-hot vector $\mathbf{g}$ for the $j$-th sample of a training mini-batch. $\mathit{p_i}$ is the probability that a certain generator will be chosen.
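The load-balancing loss in Eq.~\eqref{load_balance} can be sketched as follows (a minimal numpy version over a toy mini-batch; the names are ours):

```python
import numpy as np

def load_balancing_loss(G):
    """L_LB = sum_i (p_i - 1/n)^2, where p_i is the fraction of the
    mini-batch assigned to generator i; G is a (batch x n) matrix whose
    rows are the one-hot assignment vectors g."""
    b, n = G.shape
    p = G.sum(axis=0) / b                       # empirical selection frequencies
    return np.sum((p - 1.0 / n) ** 2)

# toy batch of 4 one-hot assignments over n = 2 generators
G_balanced = np.array([[1, 0], [0, 1], [1, 0], [0, 1]], float)
G_skewed = np.array([[1, 0], [1, 0], [1, 0], [1, 0]], float)
assert load_balancing_loss(G_balanced) == 0.0   # equal usage: no penalty
assert load_balancing_loss(G_skewed) == 0.5     # (1-0.5)^2 + (0-0.5)^2
```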
The indicator function $\mathds{1}[g_{i}^j=1]$ returns 1 if and only if $g_{i}^j=1$. Concretely speaking, we train the model with mini-batches and, for all the data in a mini-batch, we count every assignment to each generator. Thus, the regularization loss pushes $\mathbf{g}$ to select the generators equally. \begin{algorithm2e}[t] \DontPrintSemicolon \caption{\strut Mini-batch training algorithm of MEGAN.\label{alg:gan}} \hrule \KwIn{Real Samples: $\{x_1, x_2, \cdots\}$; Mini-batch Size: $m$} \KwOut{Generators: $\{G_1, \cdots, G_n\}$} \hrule $G_1, G_2, \cdots, G_n \gets$ $n$ generative neural networks\; $D \gets$ one discriminator neural network\; $\lambda \gets$ a weight for \textit{the load-balancing} regularization \; \textit{iter} $\gets$ 0\; \While{until converge} { $X \gets \{x_1, \cdots, x_m\}$, a mini-batch of real samples\; $Z \gets \{\mathbf{z}_1, \cdots, \mathbf{z}_m\}$, a mini-batch of latent vectors\; $\mathbf{\tau} \gets 0.5\exp(-0.001\times{\textit{iter}})$\; \For{each $\mathbf{z}_i \in Z$} { \For{j in $(1,2,\cdots,n)$}{ $\mathbf{f}_{i,j}$, $\mathbf{o}_{i,j}$ = $G_j(\mathbf{z}_i)$\; } $\textit{\textbf{l}}_i \gets AssignModule(\mathbf{z}_i, \mathbf{f}_{i,1}, \mathbf{f}_{i,2}, \cdots, \mathbf{f}_{i,n})$\; $\mathbf{g}_i = \langle\mathit{g_{i,1}}, \cdots, \mathit{g_{i,n}}\rangle = STGS(\textit{\textbf{l}}_i, \mathbf{\tau})$\; Generate a fake image $\mathbf{FI}_i$ by $\sum_{j=1}^{n}\mathit{g}_{i,j}\mathbf{o_{i,j}}$\; } Train the discriminator $D$ using $\mathcal{L}_{adv}$\; Train the generators $G_1, \cdots, G_n$ using $\mathcal{L}_{adv}$ \; Train the gating networks using $\mathcal{L}_{adv}$ + $\lambda\mathcal{L}_{LB}$ \; \textit{iter} += 1 } \Return $\{G_1, G_2, \cdots, G_n\}$\; \hrule \end{algorithm2e} \subsection{Total Loss} The total loss of our model is as follows: \begin{equation}\label{eq:total_loss} \begin{split} \mathcal{L} = \mathcal{L}_{adv} + \lambda\mathcal{L}_{LB} \end{split}\end{equation} where $\mathcal{L}_{adv}$ is
any adversarial loss computed through an existing GAN model. We do not specify $\mathcal{L}_{adv}$ in this section because it may vary based on the GAN framework used for training. We set $\lambda$ to control the impact of the load-balancing regularization. \subsection{Discussions} In this section, we discuss a couple of potential issues and difficulties in the mixture model. \paragraph{Mechanism of Specialization} In MEGAN, what forces the generators to specialize? We presume it is the implicit dynamics between the multiple generators and the STGS. No explicit loss function exists to teach the generators to be specialized. Nevertheless, they should learn how to generate realistic images because the STGS isolates a generator from the others by categorical sampling. The gating networks learn the type of $\mathbf{z}$ that best suits a certain generator and keep assigning similar ones to that generator. The generators learn that specializing in a particular subset of the data distribution helps to fool the discriminator by generating more realistic images. As the training iterations proceed, the generators converge to different local clusters of the data distribution. \paragraph{Effect of Load-Balancing on Specialization} Another important aspect in training MEGAN is determining the hyperparameter $\lambda$ for the load-balancing regularization. A desired outcome from the assignment module is a logit vector \textit{\textbf{l}} with high variance among its elements, while maintaining the training of generators in a balanced manner. Although the load-balancing regularization is designed to balance workloads between generators, it slightly nudges the assignment module to yield a logit vector closer to a uniform distribution. Thus, we observe that when an extremely large value is set for $\lambda$ (e.g., 1000), the logit values follow a uniform distribution.
This is not a desired consequence, because a uniform distribution of $\textit{\textbf{l}}$ means that the gating networks have failed to properly perform the generator assignment, and the specialization effect of the generators is minimized. To prevent this, we suggest two solutions. The first solution is to obtain an optimal value of $\lambda$ for which training is stable and the logit values are not too uniform. It is a simple but reliable remedy, as finding the optimal $\lambda$ is not demanding. The second possible solution is to increase $\lambda$ when the logit values follow a uniform distribution. Most of our experiments were performed with the first method, in which we fix $\lambda$, because a stable point could be found quickly, and it allows us to focus more on the general capability of the model. \paragraph{Data Efficiency} Some may claim that our model lacks data efficiency because each generator focuses on a small subset of a dataset. When trained with our algorithm, a single generator is exposed to a smaller number of images, because the generators specialize in a certain subset of the images. However, it also means that each generator can focus on learning fewer modes. Consequently, we observed that our model produces images with improved quality, as described in detail in Section~\ref{sec: experimental results}. \section{Experiment Details} In this section, we describe our experiment environments and objectives. All the program code is available at https://github.com/heykeetae/MEGAN. \subsection{Experiment Environments} \label{sec:ex_en} We describe the detailed experiment environments in this section, such as the baseline methods and datasets. \paragraph{Underlying GANs} We apply our algorithm on both the DCGAN and WGAN-GP (DCGAN layer architecture) frameworks, chosen based on their stability and high performance. The experiments consist of visual inspections, visual expertise analysis, quantitative evaluations, and user studies for generalized qualitative analyses.
\begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{images/visual_celeb.PNG} \caption{\textbf{Visual Inspection}; CelebA dataset, 64x64 samples from MEGAN with each block of four images generated by the same generator. Noticeable differences between each block indicate that different generators produce images with different features.}\label{fig:celeb} \end{figure} \paragraph{Baseline algorithms} We compared our quantitative results with several state-of-the-art GAN models, namely BEGAN, LSGAN, WGAN-GP, improved GAN (-L+HA)~\cite{salimans2016improved}, MGAN and SplittingGAN. AdaGAN could not be included in our evaluation because the official code for AdaGAN does not provide stable training on the datasets we used. \paragraph{Datasets} We used three datasets for our evaluation: CIFAR-10, CelebA, and LSUN. CIFAR-10 has 60,000 images from 10 different object classes. CelebA has 202,599 facial images of 10,177 celebrities. LSUN contains various scenic images, but we evaluated on the church-outdoor subset, which consists of 126,227 images. \paragraph{Evaluation Metric} We evaluated our model on two standard metrics to quantitatively measure the quality of the generated images: the inception score (IS) and the multiscale structural similarity (MS-SSIM). The IS is calculated through the Inception network and returns high scores when \textit{various} and \textit{high-quality} images are generated. MS-SSIM is also a widely used measure of the similarity of two different images. We generated 2,000 images and checked their average pairwise MS-SSIM scores. The lower the score, the better the algorithm in terms of diversity. \paragraph{User Study} We also conducted web-based user studies. In our test website,\footnote{\url{http://gantest.herokuapp.com} - a test run can be made by entering the following key: 5B3309} randomly selected real and fake images are displayed, and users are asked to downvote images that they think are fake.
Nine images were provided per test, and users repeated the test 100 times. Regarding the CelebA dataset, we observed that the participants were good at detecting generated facial images of the same race as themselves. Therefore, we diversified the ethnicity of our user groups by having thirty participants from three different continents. \paragraph{Hyperparameters} We tested the following hyperparameter setups: the number of generators $n$ as 3, 5, and 10; the mini-batch size $b = 64$; annealing temperature $\tau = 0.5\exp(-0.001\times{iter})$~\cite{jang2016categorical}, where $iter$ denotes the iteration number as in Algorithm~\ref{alg:gan}; load-balancing parameter $\lambda=100$; and feature vector $\mathbf{f}_i$ of dimension $k = 8192$ for CIFAR-10 and $16384$ for both LSUN and CelebA. \section{Experimental Results} \label{sec: experimental results} \begin{figure}[t] \centering \includegraphics[width=0.49\textwidth]{images/visual_lsun.PNG} \caption{\textbf{Visual Inspection}; LSUN-Church outdoor dataset, 64x64 samples from MEGAN with each block of four images generated by the same generator. Distinguishable features include the church architectural style, the location, and the cloud cover.}\label{fig:lsun} \end{figure} \subsection{Evaluation on Specialization} We describe our results based on various visual inspections. \begin{figure}[t] \centering \includegraphics[width=0.5\textwidth]{images/visual.PNG} \caption{\textbf{Visual Expertise Analysis}; 2,000 images are generated by MEGAN on the CIFAR-10 dataset, and feature vectors of these images are extracted from the \textit{relu4\_2} layer of the VGG-19 network. All 2,000 feature vectors are visualized in a two-dimensional space by the t-SNE algorithm.}\label{fig:visual} \end{figure} \paragraph{Visual Inspection} Throughout our evaluations, each generator is found to learn different contexts and features, for up to the 10 generators that we inspected.
The decision to assign a particular subset of data to a particular generator is typically based on visually recognizable features, such as background colors or the shape of the primary objects. Figs.~\ref{fig:celeb} and~\ref{fig:lsun} show samples drawn from the different generators of MEGAN trained with 10 generators on CelebA and LSUN-Church outdoor, respectively. Each block of four images is from the same generator. We chose six generators that have learned the most conspicuous and distinctive features, readily captured even by the human eye. All four images in a block share some features in common, while having at least one characteristic that distinguishes them from the other blocks. For instance, the top-left celebrities in Fig.~\ref{fig:celeb} have black hair without noticeable facial expressions. In contrast, the top-right celebrities have light-colored hair with smiling faces. Among the samples from LSUN in Fig.~\ref{fig:lsun}, we also detected distinguishing patterns specific to each generator. \paragraph{Visual Expertise Analysis} If the model learns properly, a desirable outcome is that each generator produces images with different features. We generated 2,000 CIFAR-10 images from MEGAN trained for 20 epochs, fed them to a pretrained VGG19 network, and extracted the feature vectors from the \textit{relu4\_2} layer. Subsequently, the 8,192-dimensional feature vectors were reduced to two-dimensional vectors using the t-SNE algorithm~\cite{maaten2008visualizing}. Fig.~\ref{fig:visual} shows the results. Each two-dimensional vector is represented as a dot in the figure, and samples from the same generator are of the same color. The colored shades indicate the clusters of images that are generated by the same generator. We tested MEGAN with 5 generators and 10 generators, confirming that each generator occupies its own region in the feature vector space.
Note that the clusters overlap in the figure owing to the dimensionality reduction performed for visualization; in the original 8,192-dimensional space, they may overlap much less. \begin{table}[t] \begin{center} \caption{\textbf{\label{table:cifar}Inception Score on CIFAR-10} (trained without labels)} \begin{tabular}{lc} \toprule \textbf{Method} & \textbf{Score} \\ \midrule DCGAN & $6.16 \pm 0.06$ \\ Improved GAN (-L+HA)& $6.86 \pm 0.07$ \\ WGAN-GP (Resnet) & $7.86 \pm 0.07$ \\ SplittingGAN & $7.90 \pm 0.09$ \\ AdaGAN & Not properly trained\\ \textbf{MGAN} & $\textbf{8.33}\ \mathbf{\pm}\ \textbf{0.10}$\\ \textbf{MEGAN (DCGAN)} & $\textbf{8.33}\ \mathbf{\pm}\ \textbf{0.09}$ \\ \bottomrule \end{tabular} \end{center} \end{table} \begin{table}[t] \begin{center} \caption{\label{table:celeb}\textbf{MS-SSIM Scores on CelebA and LSUN}} \begin{tabular}{lccc} \toprule \textbf{Method} & $n$ &\textbf{CelebA} & \textbf{LSUN} \\ \midrule BEGAN &1& $0.4636 \pm 0.019$ & $0.1969 \pm 0.024$ \\ DRAGAN &1& $0.3711 \pm 0.020$ & $0.1733 \pm 0.015$ \\ LSGAN &1& $0.3487 \pm 0.019$ & $0.1067 \pm 0.016$ \\ \\[-0.9em] MGAN &5& $0.2611 \pm 0.021$ & $0.1142 \pm 0.015$\\ MGAN &10& $0.2816 \pm 0.012$ & $0.1055 \pm 0.013$\\ \\[-0.9em] MEGAN &3& $0.2818 \pm 0.020$ & $0.1024 \pm 0.015$\\ MEGAN &5& $\mathbf{0.2470 \pm 0.024}$ & $\mathbf{0.0997 \pm 0.021}$\\ MEGAN &10& $0.2665 \pm 0.027$ & $0.1085 \pm 0.014$\\ \bottomrule \end{tabular} \end{center} \caption{\label{table:user}\textbf{User study results}} \scalebox{0.85}{ \begin{tabular}{p{1.98cm}rclp{0.01cm}rcl} \toprule & & \textbf{CelebA} & & & & \textbf{LSUN} & \\ \midrule \\[-0.9em] & Avg & Min & Max & & Avg & Min & Max \\ \\[-0.9em] \hline \\[-0.8em] BEGAN & \textbf{0.70} & 0.50 & 0.89 & & 0.73 & 0.49 &0.93 \\ DRAGAN& 0.91 & 0.71 &0.98 & & 0.81 & 0.50 & 0.96 \\ LSGAN & 0.88 & 0.71 & 0.96 & & 0.59 & 0.36 & 0.91 \\ WGAN-GP& 0.82 & 0.70 & 0.96 & &0.58 & 0.33 & 0.85 \\ \begin{tabular}[c]{@{}c@{}}MEGAN\\ ($n=3$)\end{tabular} & 0.76 & 0.58 & 0.95 & &
\textbf{0.49} & 0.20 & 0.71 \\ \begin{tabular}[c]{@{}c@{}}MEGAN\\ ($n=5$)\end{tabular} & 0.73 & 0.57 & 0.93 & & 0.61 & 0.40 & 0.81 \\ \begin{tabular}[c]{@{}c@{}}MEGAN\\ ($n=10$)\end{tabular} & 0.74 & 0.60 & 0.92 & & 0.58 & 0.28 & 0.92 \\ \bottomrule \end{tabular}} \end{table} \subsection{Quantitative Analysis} We present our quantitative experimental results based on the inception score, MS-SSIM, and the user study. \paragraph{CIFAR-10} Table~\ref{table:cifar} lists the inception scores (Section~\ref{sec:ex_en}) of various models on CIFAR-10. MEGAN trained on the DCGAN architecture records an inception score of 8.33, on par with MGAN, with a slightly smaller standard deviation (0.09 for MEGAN vs. 0.10 for MGAN). The official code for AdaGAN does not provide stable training for the CIFAR-10 dataset, so its inception score is not comparable to the other baseline methods. \paragraph{CelebA and LSUN-Church outdoor} The MS-SSIM scores (Section~\ref{sec:ex_en}) measured for CelebA and LSUN-Church outdoor are reported in Table~\ref{table:celeb}. As the MS-SSIM scores of the baseline models are not reported in their papers, we evaluated them ourselves after generating many samples with their official code. In this experiment, MEGAN is trained to minimize the WGAN-GP loss function, which was found to perform best in our preliminary experiments. MEGAN outperforms all baseline models in terms of the diversity of generated images, as shown by its lowest MS-SSIM scores. Notably, MEGAN with five generators achieves the lowest MS-SSIM scores on both datasets. \paragraph{User Study} Table~\ref{table:user} shows the results of the web-based user study on the CelebA and LSUN-Church outdoor datasets. The score is computed by dividing the number of downvoted fake images by the total number of fake images shown to users. Thus, a low score indicates that users struggle to distinguish generated images from real images.
For both datasets, MEGAN records competitive performance, and on LSUN-Church outdoor in particular it outperforms all the baseline models. In conjunction with the previous MS-SSIM results, MEGAN's low detection rates indicate that it can generate more diverse images of better quality than the baseline methods. BEGAN achieves the lowest detection rate on the CelebA dataset, but at the expense of low diversity of the generated images, as indicated by its high MS-SSIM score in Table~\ref{table:celeb}. \section{Conclusion} This paper proposed a novel generative adversarial network model, MEGAN, for learning the complex underlying modalities of datasets. Both our quantitative and qualitative analyses suggest that our method is suitable for various datasets. Future work involves extending our algorithm to other variants of GANs and a broader range of generative models. \bibliographystyle{named}
\section{I. Introduction} Bose-Einstein condensation (BEC) became one of the most intriguing phenomena after its experimental achievement in a low-density gas of $^{87}$Rb atoms confined in an optical trap at nearly absolute zero temperature\cite{Wieman} by Wieman and Cornell in 1995. After this breakthrough, BEC was also achieved in other atoms such as $^{23}$Na \cite{sodium}, $^{52}$Cr \cite{cromium}, $^{7}$Li \cite{lith}, and $^{133}$Cs \cite{cesium}. A beautiful Abrikosov vortex lattice (triangular lattice) forms in a trapped rotating BEC, as reported by several experimental groups, including the MIT group \cite{sodium} in 1995, the JILA group \cite{35, 57} in 1999, and the ENS group \cite{56} in 2000. This vortex lattice has also been established theoretically by several researchers \cite{wbao rot, spectral}. The vortex lattice has many diverse phases, which have been studied over the last two decades. In a two-dimensional system, one can observe phase transitions in an interacting system at finite temperature ($T > 0$). One remarkable phenomenon is the BKT phase transition, the transition from bound vortex-antivortex pairs to free vortices, named after Berezinskii, Kosterlitz and Thouless \cite{pitsing}. Richard J. Fletcher \textit{et al.} have shown that the BKT transition smoothly converges onto BEC \cite{BKTBEC}. It is quite an unconventional phase transition in that it does not break any continuous symmetry. Another phenomenon studied in the literature is the Tkachenko oscillation, the oscillation of vortex centers in a rapidly rotating condensate \cite{Tkachenko}; the curvilinear rows of vortex centers pass through the center of the cloud and fit a sine curve very well. At very low temperature, thermal fluctuations are very small but quantum fluctuations are large, so in this situation microscopic fluctuations can produce macroscopic phase transitions.
Greiner \textit{et al.} studied a BE condensate in a 3D optical lattice, where they observed a phase transition from the superfluid to the Mott insulator phase \cite{Greiner}; the transition is governed by the lattice potential, with an increase in lattice depth driving the transition. Toshihiro Sato \textit{et al.} have shown a phase transition from the Abrikosov vortex lattice to a pinned vortex lattice \cite{pin}. For a fast-rotating BEC in a single planar condensate with dipole-dipole interactions, when the s-wave interaction becomes attractive and exceeds a critical value, a phase transition occurs that transforms the triangular lattice into a square lattice \cite{sqr}. In nature we sometimes encounter non-uniform rotation, such as the rotation of the planets around the Sun: as we move away from the Sun, the rotation frequency decreases. In superfluid systems, non-uniform rotation has received little attention. In this article, we study a single-species atomic BEC in a non-uniformly rotating system. We consider a superfluid confined in a container, with rotation applied in such a way that it is maximum at the center of the condensate and gradually decreases away from the center. \section{II. Theoretical Framework \& Numerical Technique} The low-energy interaction between Bose atoms is constant in momentum space, $U_0 = \frac{4\pi \hbar^2 a}{m}$, where $m$ is the mass of an atom and $a$ the s-wave scattering length ($a$ is positive for repulsive interactions and negative for attractive ones); the Fourier transform of this interaction in coordinate space is a delta-function potential, the contact interaction. With this interaction, the condensate is governed by a nonlinear Schr\"odinger-like equation, known as the GP equation, which was first investigated in superfluid systems \cite{pitsing}.
It is common practice in the field of BEC to solve the GP equation numerically to study the different properties of the condensate \cite{bao, stu GP}. The time-dependent GP equation has been studied in different geometrical dimensions with isotropic and anisotropic trapping potentials and with different interactions between the atoms, such as spin-orbit and dipole-dipole interactions \cite{sadhan1, sadhan prog, c sadhan, spino, poschl, morse, dipole}. Rotating BEC has been studied taking into account dipolar and spin-orbit interactions \cite{rot dipole, rot spin}. The GP equation of a condensate rotating about the z-axis with angular velocity $\Omega$ is \small \begin{eqnarray} i\hbar\frac{\partial\psi(\textbf{x},t) }{\partial t}=\left(-\frac{\hbar^2}{2m}\nabla^2 + V(\textbf x) + NU_0|\psi|^2 - \Omega L_z\right )\psi(\textbf{x},t) \end{eqnarray} \normalsize where $N$ is the number of atoms in the condensate and $L_z = xp_y - yp_x = -i\hbar (x\partial _y - y\partial _x)$ is the $z$ component of the angular momentum. We consider harmonic trapping $V(\textbf{x}) = \frac{1}{2} m({\omega}_x^2x^2+{\omega}_y^2y^2)$, where $\omega_x, \omega_y$ are the trap frequencies in the $x, y$ directions; the trapping frequency along the z-axis is very high, so that the condensate is confined to the xy-plane. The wave function is normalized as \begin{equation} \int_{R_d} |\psi(\textbf{x},t)|^2 d\textbf{x} = N \end{equation} We consider a cylindrically symmetric condensate, for which $\omega_x = \omega_y$. To make the GP equation numerically convenient, we transform its variables into dimensionless form, taking $t \rightarrow {\omega}t, \textbf{x} \rightarrow \frac {\textbf{x}} {x_s}$, $\Omega \rightarrow \Omega/\omega, \psi(\textbf{x},t) \rightarrow {x^{3/2}_s} \psi(\textbf{x},t) $\cite{bao, wbao rot}, where $x_s$ is the characteristic length of the condensate.
Plugging these values into equation (1) and multiplying by $1/m\omega^2{x^2_s}$, we get \small \begin{eqnarray} i\varepsilon \frac{\partial\psi(\textbf{x},t) }{\partial t}=\left(-\frac{\varepsilon^2}{2}\nabla^2 + V(\textbf x) + \delta {\varepsilon}^{5/2}|\psi|^2 - \Omega L_z\right )\psi(\textbf{x},t) \end{eqnarray} \normalsize \begin{equation} \varepsilon = \frac {\hbar}{m{\omega}x_s^2} = \left(\frac {a_0} {x_s}\right)^2 \end{equation} \begin{equation} \delta = \frac {NU_0} {a_0^3\hbar\omega} , \quad a_0 = \sqrt \frac{\hbar} {m\omega} \end{equation} The coefficient of the nonlinearity, i.e., the interaction strength parameter, is \begin{equation} g = \delta {\varepsilon}^{5/2} = {\frac {4\pi aN}{a_0}} {\left(\frac {a_0} {x_s}\right)^5} \end{equation} So the GP equation takes the form in dimensionless variables \begin{equation} i\frac{\partial\psi }{\partial t}=\left(-\nabla ^2 + \frac{r^2}{2} + g|\psi|^2 - \Omega L_z\right )\psi \end{equation} A typical set of parameters used in experiments with $^{87}$Rb is $m = 1.44 \times 10^{-25}$ kg, $\omega = 20\pi$ rad/s, $a = 5.1 \times 10^{-9}$ m and $\hbar = 1.05 \times 10^{-34}$ J\,s. If one chooses $x_s = a_0$, then $g = \delta$, and the relation between $g$ and $N$ for these parameters is $N \approx 53.155g$. In our case, the rotation is maximum at the center and decreases radially outward. For a uniformly rotating condensate, the angular momentum operator ($L_z$) is multiplied only by the angular velocity ($\Omega$), whereas in our case there is an additional term $\lambda e^{-{\frac{r^2}{2}}}$ along with the angular velocity $\Omega$.
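As an aside, the quoted relation $N \approx 53.155g$ follows directly from the definitions above; the short sketch below (plain Python, using only the experimental constants quoted in the text) recovers it.

```python
import math

# Experimental parameters for 87Rb as quoted in the text
m = 1.44e-25          # atomic mass (kg)
omega = 20 * math.pi  # trap frequency (rad/s)
a = 5.1e-9            # s-wave scattering length (m)
hbar = 1.05e-34       # reduced Planck constant (J s)

# Harmonic-oscillator length a_0 = sqrt(hbar / (m omega))
a0 = math.sqrt(hbar / (m * omega))

# With x_s = a_0 the (a_0/x_s)^5 factor is 1, so g = 4*pi*a*N/a_0,
# and hence N/g = a_0 / (4*pi*a)
N_over_g = a0 / (4 * math.pi * a)
print(N_over_g)  # ~53.16, consistent with N ≈ 53.155 g
```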
To describe such a system, the corresponding GP equation takes the dimensionless form \begin{equation} i\frac{\partial\psi }{\partial t}=\left[-\nabla ^2 + \frac{r^2}{2} + g|\psi|^2 - (\Omega + \lambda e^{-{\frac{r^2}{2}}}) L_z\right ]\psi \end{equation} where $\lambda e^{-{\frac{r^2}{2}}}$ is the non-uniform rotation term, which decreases radially from the center of the condensate. \begin{figure}[htb] \begin{center} \includegraphics[width=3.7cm]{norot.jpg} \includegraphics[width=3.7cm]{Picture2.jpg} \includegraphics[width=2.5cm]{71.jpg} \includegraphics[width=2.5cm]{72.jpg} \includegraphics[width=2.5cm]{73.jpg} \includegraphics[width=2.5cm]{74.jpg} \includegraphics[width=2.5cm]{75-1.jpg} \includegraphics[width=2.5cm]{76.jpg} \caption{(color online) Surface plots of the ground state density of the condensate $|\psi(x,y)|^2$ for the system with interaction parameter $g = 1000$. A Gaussian-type distribution is observed for the non-rotating case (top-left) and a triangular vortex lattice for uniform rotation, $\lambda=0$ (top-right). From the second row, in text sequence, the density profile of the condensate is shown for values of $\lambda$ from 1 to 6 in increments of 1, at the fixed value $\Omega = 0.70$. A sharp circular ring-shaped pattern appears at the boundary at around $\lambda=5$.} \label{fig:1} \end{center} \end{figure} The system is in a stationary state with constant rotation about the z-axis. We solved the time-dependent GP equation by the backward Euler method to obtain the stationary state for suitable interaction parameters $g = 500,\; 1000,\; 2000$.
We start with a Gaussian function $\psi(x,y) = \exp[{-{\frac{(x^2+y^2)}{2}}}]$ as the initial guess and obtain the solution after one million iterations with a time step of 0.001, for a square geometry of area $20 \times 20$ in natural units. After obtaining the state $\psi(x,y)$, we calculate the energy and chemical potential ($\mu$) of the system. \begin{eqnarray} E = \int _{R_d} ( \frac{1}{2}|\nabla \psi| ^2 &+& V(\textbf x)|\psi|^2 + \frac{g}{2}|\psi|^4 \\ &-&(\Omega+\lambda e^{-{\frac{r^2}{2}}}) \Re (\psi^*L_z\psi) ) d\textbf{x} \nonumber \end{eqnarray} where $\Re$ denotes the real part of a function. \begin{equation} \mu = E + \frac{g}{2} \int _{R_d} |\psi|^4 d\textbf{x} \end{equation} \section{III. Results \& Discussions} In FIG. 1, FIG. 2 and FIG. 3 we plot the density profile for a range of $\lambda$ values at a fixed value of the angular velocity $\Omega$, for interaction parameters $g=1000$, 2000 and 500, respectively, for a system size of 20$\times$20. \begin{figure}[htb] \begin{center} \includegraphics[width=2.5cm]{26.png} \includegraphics[width=2.5cm]{25.png} \includegraphics[width=2.5cm]{24.png} \includegraphics[width=2.5cm]{23.png} \includegraphics[width=2.5cm]{22.png} \includegraphics[width=2.5cm]{21.png} \caption{(colour online) Surface plots of the ground state density of the condensate $|\psi(x,y)|^2$ for the system with interaction parameter $g =2000$ for different $\lambda$ values. The density profile of the condensate is shown in text sequence for values of $\lambda$ from 12 to 17 in increments of 1, at the fixed value $\Omega = 0.70$. A sharp circular ring-shaped pattern appears at the boundary at around $\lambda=17$.
} \label{fig:2} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=2.5cm]{56.png} \includegraphics[width=2.5cm]{55.png} \includegraphics[width=2.5cm]{54.png} \includegraphics[width=2.5cm]{53.png} \includegraphics[width=2.5cm]{52.png} \includegraphics[width=2.5cm]{51.png} \caption{(colour online) Surface plots of the ground state density of the condensate $|\psi(x,y)|^2$ for the system with interaction parameter $g =500$ for different $\lambda$ values. The density profile of the condensate is shown in text sequence for values of $\lambda$ from 5 to 10 in increments of 1, at the fixed value $\Omega = 0.90$. A sharp circular ring-shaped pattern appears at the boundary at around $\lambda=10$. } \label{fig:2} \end{center} \end{figure} The triangular symmetry of the vortex lattice is destroyed in the presence of non-uniform rotation, irrespective of the value of the non-uniform rotation parameter $\lambda$. As we increase $\lambda$, we find that at a particular value, which we denote $\lambda_c$, the vortices arrange themselves in a ring-shaped pattern at the boundary of the condensate and remain there upon further increase of $\lambda$ at a fixed value of $\Omega$. We thus have two phases: a disordered phase below the critical non-uniform rotation parameter $\lambda_c$, and a ring-shaped arrangement above $\lambda_c$.
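The non-uniform rotation term $(\Omega + \lambda e^{-r^2/2})L_z\psi$ of equation (8) is straightforward to evaluate on a grid. The sketch below is an illustrative finite-difference fragment (not the backward Euler code used in this work): it applies $L_z = -i(x\partial_y - y\partial_x)$ with central differences and checks it on a single-vortex state $\psi = (x+iy)e^{-r^2/2}$, for which $L_z\psi = \psi$ in units with $\hbar = 1$.

```python
import numpy as np

def Lz(psi, x, y, h):
    """Apply L_z = -i (x d/dy - y d/dx) with central differences.

    np.roll gives periodic boundaries; psi decays fast enough at the
    box edge for the wrap-around error to be negligible here."""
    dpsi_dx = (np.roll(psi, -1, axis=0) - np.roll(psi, 1, axis=0)) / (2 * h)
    dpsi_dy = (np.roll(psi, -1, axis=1) - np.roll(psi, 1, axis=1)) / (2 * h)
    return -1j * (x * dpsi_dy - y * dpsi_dx)

n, L = 128, 10.0          # illustrative grid; the 20x20 box would use L = 20
h = L / n
c = np.linspace(-L / 2, L / 2 - h, n)
x, y = np.meshgrid(c, c, indexing="ij")  # axis 0 is x, axis 1 is y

# Single-vortex state: L_z psi = psi exactly (hbar = 1)
psi = (x + 1j * y) * np.exp(-(x**2 + y**2) / 2)
err = np.max(np.abs(Lz(psi, x, y, h) - psi))  # O(h^2) discretization error

# Effective rotation profile of equation (8): maximal at the center
Omega, lam = 0.70, 5.0
Omega_eff = Omega + lam * np.exp(-(x**2 + y**2) / 2)
rotation_term = Omega_eff * Lz(psi, x, y, h)
```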
\begin{figure}[htb] \begin{center} \includegraphics[width=2.0cm]{60.jpg} \includegraphics[width=2.0cm]{65.jpg} \includegraphics[width=2.0cm]{70.jpg} \includegraphics[width=2.0cm]{75.jpg} \includegraphics[width=2.0cm]{80.jpg} \includegraphics[width=2.0cm]{85.jpg} \includegraphics[width=2.0cm]{90.jpg} \includegraphics[width=2.0cm]{95.png} \caption{(colour online) Surface plots of the ground state density function $|\psi(x,y)|^2$ in 2D for $g = 1000$ and different values of $\Omega$ above $\lambda_c$ (from $\Omega = 0.60$ to $\Omega = 0.95$ in increments of $0.05$, in text sequence) } \label{fig:3} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=2.0cm]{260131.jpg} \includegraphics[width=2.0cm]{265166.png} \includegraphics[width=2.0cm]{270.jpg} \includegraphics[width=2.0cm]{275215.png} \includegraphics[width=2.0cm]{280243.jpg} \includegraphics[width=2.0cm]{285273.jpg} \includegraphics[width=2.0cm]{290308.jpg} \includegraphics[width=2.0cm]{295348.png} \caption{(colour online) Surface plots of the ground state density function $|\psi(x,y)|^2$ in 2D for $g = 2000$ and different values of $\Omega$ above $\lambda_c$ (from $\Omega = 0.60$ to $\Omega = 0.95$ in increments of $0.05$, in text sequence) } \label{fig:4} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=3.0cm]{57521.jpg} \includegraphics[width=3.0cm]{580.jpg} \includegraphics[width=2.5cm]{58548.jpg} \includegraphics[width=2.50cm]{590100.jpg} \includegraphics[width=2.5cm]{595115.png} \caption{(colour online) Surface plots of the ground state density function $|\psi(x,y)|^2$ in 2D for $g = 500$ and different values of $\Omega$ above $\lambda_c$ (from $\Omega = 0.75$ to $\Omega = 0.95$ in increments of $0.05$, in text sequence) } \label{fig:5} \end{center} \end{figure} \begin{figure}[htb] \begin{center} \includegraphics[width=4.0cm]{5me.jpg} \includegraphics[width=4.0cm]{1me.jpg} \includegraphics[width=5.0cm]{2me.jpg}
\caption{(colour online) Variation of the chemical potential ($\mu$) and energy with $\lambda$ for [$\Omega = 0.95$, $g = 500$], [$\Omega = 0.80$, $g = 1000$] and [$\Omega = 0.70$, $g = 2000$]} \label{energy-mu} \end{center} \end{figure} \begin{figure} \includegraphics[width=7.2cm]{phase.jpg} \caption{(colour online) {\bf Phase diagram:} Below the line we observe the disordered lattice and above the line the circular ring-shaped pattern} \label{phase} \end{figure} In FIG. 4, FIG. 5 and FIG. 6 we plot the density profile of the condensate for different values of $\Omega$ at a suitable value of $\lambda$ (above $\lambda_c$). We see that the ring-shaped pattern remains the same as we increase $\Omega$. These studies show that there is a phase diagram associated with the transition of the vortex lattice from disordered to ring-shaped. We have drawn this diagram in FIG. \ref{phase}: the region of high rotation and high non-uniform rotation parameter is the ring-shaped pattern region, and below the line is the disordered region. To complete the study, we have calculated the energy and chemical potential as functions of the non-uniform rotation parameter. We find that at the phase transition point there is a change in these physical quantities, due to the different mass distribution of the condensate. All these studies suggest that non-uniform rotation gives rise to a rich phase structure of the vortex lattice. At present there are no experimental results for such non-uniform rotation, but they may well be obtained in the near future.
\section{Introduction} By integrating the classical Newtonian equations of motion, \gls{md} simulations naturally sample the microcanonical (NVE) ensemble due to conservation laws.\cite{fre011,lei151} For comparison with experiment, it is often desirable to sample constant-temperature ensembles such as the canonical (NVT) or isothermal-isobaric (NPT) ensembles. In analogy with experiment, these ensembles could be generated by sampling a subspace of a much larger microcanonical system that serves as a heat bath, but such an approach is usually too computationally expensive to implement in practice. Instead, various thermostatting algorithms are typically applied to change the Hamiltonian dynamics in a manner such that the intended ensemble is sampled. Many such algorithms have been proposed, and some of the more well-known choices include: \begin{itemize} \item Simple velocity rescaling, pioneered by \citet{woo711} for thermal equilibration, rescales the velocities of all particles at the end of each timestep (it can also be applied with a less frequent rescaling period) by a factor $\lambda$ to achieve a target instantaneous temperature: $\lambda = \left(\frac{{\gls{Ktar}}}{K}\right)^{\frac{1}{2}}$ with $\gls{Ktar}= \frac{1}{2}N_{\text{DOF}}k_{\text{B}}T_{\text{target}}$, where $N_{\text{DOF}}$ is the number of \acrlongpl{dof} in the system. \item The Gaussian thermostat supplements Newton's second law with a force intended to keep the kinetic energy constant:\cite{eva831,nos842,eva902} $\mathbf{\dot{p}}_i = -\nabla U_i - \alpha \mathbf{p}_i$, where $\alpha$ is a Lagrange multiplier determined using Gauss' principle of least constraint to be $\alpha = \left. \left(\sum_{i=1}^{N} \mathbf{F}_i \cdot \mathbf{p}_i /m_i\right) \middle/ \left(\sum_{i=1}^{N} \mathbf{p}_i^2/m_i\right) \right.$.
\item Langevin dynamics supplements Newton's second law with terms describing Brownian motion:\cite{sch781} $\mathbf{\dot{p}}_i = -\nabla U_i - \gamma \mathbf{p}_i + \mathbf{\eta}$, where $\gamma$ represents a frictional dissipative force and $\mathbf{\eta}(t,T,\gamma,m_i)$ is a stochastic term representing random collisions. \item The Berendsen thermostat takes the Langevin equation, removes the stochastic term, and modifies the frictional dissipative force to yield similar temperature time dependence as with the stochastic term present:\cite{ber841} $\mathbf{\dot{p}}_i = -\nabla U_i + \gamma \mathbf{p}_i \left( \frac{\gls{Ktar}}{K} -1 \right)$, where $\gls{Ktar}= \frac{1}{2}N_{\text{DOF}}k_{\text{B}}T_{\text{target}}$. In practice, this is implemented as a smoother version of the simple velocity rescaling technique, in which the velocities of all particles are rescaled at the end of each timestep by a factor $\lambda$, with $\lambda=\left[1+\frac{\Delta t}{\tau_T}\left(\frac{\gls{Ktar}}{K}-1\right)\right]^{\frac{1}{2}}$. $\tau_T$ represents a time damping constant; if it is set equal to the timestep, the Berendsen algorithm recovers simple velocity rescaling, and as the time damping constant approaches infinity, the Berendsen algorithm recovers conventional microcanonical dynamics. \item The \gls{csvr} thermostat is a velocity rescaling algorithm in which the velocities of all particles are rescaled at the end of each timestep by a factor $\lambda$ designed such that the kinetic energy exhibits the distribution of the canonical ensemble.\cite{bus071,hey831} To this end, $\lambda = \left(\frac{{\gls{Ktar}}}{K}\right)^{\frac{1}{2}}$, where $\gls{Ktar}$ is stochastically drawn from the probability density function $P(\gls{Ktar}) \propto \gls{Ktar}^{\sfrac{N_{\text{DOF}}}{2}-1} e^{-\beta \gls{Ktar}}$.
This algorithm can be adjusted to yield a smoother evolution in a similar manner as the Berendsen algorithm smoothes simple velocity rescaling.\cite{bus071} \item The \gls{nosehoover} thermostat extends the classical Lagrangian to include the additional coordinate $s$ and its time-derivative:\cite{nos841,hoo851} $\mathcal{L} = s^2\sum_{i=1}^N \frac{\mathbf{p}_i^2}{2m_i} - U + \frac{1}{2}Q\dot{s}^2 - k_{\text{B}}T_{\text{target}}L\ln s$, where $Q$ is the effective ``mass'' associated with $s$ and $L$ is set by the number of \acrlongpl{dof}. A single \gls{nosehoover} thermostat may be used, or chains of thermostats may be implemented to improve ergodicity and to take into account additional conservation laws.\cite{mar921} \end{itemize} There exist numerous additional thermostats (e.g., the Andersen thermostat\cite{and801}), and small changes can be made to the listed thermostats, such as implementing the originally global \gls{nosehoover} thermostat in a local ``massive'' manner by pairing a separate \gls{nosehoover} thermostat to each \acrlong{dof}.\cite{tob931} The reader is referred to a non-comprehensive list of reviews and textbooks for additional information.\cite{mor981,hun051,fre011,tuc101} Simple velocity rescaling and the Gaussian thermostat aim to sample the isokinetic ensemble (NVK). 
However, they are often presented as equivalent to the canonical ensemble with respect to position-dependent equilibrium properties, with justification for this based on the argument that the configurational part of the isokinetic ensemble's partition function is exactly equal to that of the canonical ensemble's.\cite{hai831,eva832,nos842,min031,col101} Meanwhile, the Berendsen thermostat does not correspond to a known ensemble but is rather supposed to sample a configurational phase space intermediate to the canonical and microcanonical ensembles.\cite{ber841,mor001,mor031} In the 1990s, it was found that the simple velocity rescaling and Berendsen thermostat algorithms introduce an artifact:\cite{lem941,har981} the ``flying ice cube effect,'' as coined by \citet{har981}, describes a violation of the equipartition theorem observed when using these algorithms in which kinetic energy drains from high-frequency modes such as bond stretching into low-frequency modes such as \gls{com} translation. This was shown to affect systems' structural, thermodynamic, and dynamic properties.\cite{har981} As it can be proven that the equipartition theorem holds in the canonical ensemble, microcanonical ensemble, and isokinetic ensemble (see Appendix),\cite{cal851,cag881,shi061,uli081,sib131} a simulation exhibiting the flying ice cube effect is not ergodically sampling any of these ensembles, neither in configurational phase space nor in momentum phase space. 
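The operational difference between the deterministic and canonical rescaling schemes is compact enough to state in code. The sketch below (reduced units and an arbitrary system size, chosen purely for illustration) pins the kinetic energy to $\gls{Ktar}$ exactly under simple rescaling, while the \gls{csvr} target is drawn from the Gamma distribution $P(\gls{Ktar}) \propto \gls{Ktar}^{\sfrac{N_{\text{DOF}}}{2}-1} e^{-\beta \gls{Ktar}}$, whose mean recovers $\frac{1}{2}N_{\text{DOF}}k_{\text{B}}T_{\text{target}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
kB, T, n_dof = 1.0, 1.0, 300            # reduced units; illustrative system size
K_target = 0.5 * n_dof * kB * T          # (1/2) N_DOF kB T_target

def simple_rescale(v, masses):
    """Simple velocity rescaling: multiply by lambda = sqrt(K_target/K)."""
    K = 0.5 * np.sum(masses * v**2)
    return v * np.sqrt(K_target / K)

def csvr_draw(rng, size=None):
    """CSVR target kinetic energy: P(K) ~ K^(n/2 - 1) exp(-K / (kB T)),
    i.e. a Gamma(shape = n_dof/2, scale = kB T) distribution."""
    return rng.gamma(shape=0.5 * n_dof, scale=kB * T, size=size)

masses = np.ones(n_dof)
v = simple_rescale(rng.normal(size=n_dof), masses)
K_after = 0.5 * np.sum(masses * v**2)    # equals K_target by construction

K_mean = csvr_draw(rng, size=100_000).mean()  # ≈ K_target on average
```

Both schemes leave the *direction* of every velocity unchanged; the artifact discussed above concerns how repeated application redistributes kinetic energy among modes, not the rescaling factor itself.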
Nonetheless, simple velocity rescaling and the Berendsen thermostat continue to be commonly used,\cite{hun051,coo081} with \citet{coo081} stating, ``By far the most commonly used algorithm for constant temperature MD of biomolecules is the Berendsen heat bath, due to its ease of implementation and availability in standard software packages.'' Use of the Berendsen thermostat can be approximated by tracking citations of its canonical reference,\cite{ber841} which have continued to grow over time (Fig.~\ref{fig:thermostatcitations}). \begin{figure} \centering \includegraphics[width=3.46in]{figs/ThermostatPaperCitations.png} \caption{\label{fig:thermostatcitations} Citations of \citet{ber841} and \citet{bus071} over time. Data provided by Web of Science, extracted on \DTMdate{2018-05-04}.} \end{figure} Some technical aspects of the flying ice cube effect are as of yet still unclear. Since \citet{har981}, there has been continued discussion about whether the flying ice cube effect may occur with other thermostats.\cite{lin081,gog121} The \gls{csvr} thermostat rescales velocities to yield the canonical ensemble's distribution of kinetic energies, similar to how simple velocity scaling yields the isokinetic ensemble's distribution of kinetic energies and the Berendsen thermostat yields a kinetic energy distribution intermediate to the two ensembles. If all velocity rescaling algorithms always lead to the flying ice cube effect, then it may be suspected that the same flying ice cube artifact occurs when using the \gls{csvr} thermostat,\cite{bas131} which would be worrisome because the \gls{csvr} thermostat has been quickly adopted into widespread use (Fig.~\ref{fig:thermostatcitations}). In addition, since the Gaussian thermostat has been shown to be similar to simple velocity rescaling,\cite{nos911} it may be suspected that the Gaussian thermostat exhibits the artifact as well. 
Given the widespread use of these algorithms in \gls{md} simulations, more understanding is warranted, and we will show that neither the \gls{csvr} thermostat nor the Gaussian thermostat brings about the flying ice cube effect. In the present work we refer to the flying ice cube effect as the term was originally used to describe the violation of the equipartition theorem as caused by velocity rescaling procedures.\cite{har981} Other \gls{md} simulation methods that fail to conserve energy in the microcanonical ensemble can also bring about equipartition theorem violations.\cite{lin081} These methods include approximate treatment of long-range electrostatic interactions, certain multiple timestep algorithms, constraining molecular geometries with too loose a tolerance, not updating neighbor lists frequently enough, and using too large a timestep.\cite{chi001,lin081,eas101} In some cases these issues are also referred to as flying ice cube effects,\cite{sag991,wag131,yan132} but these are not related to the artifact with which we are concerned. In this work, we have revisited the simple model system of united-atom diatomic ethane molecules that \citet{har981} first used to illustrate the flying ice cube effect. By explicitly calculating the partitioning of kinetic energies between translational, rotational, and vibrational \acrlongpl{dof}, we are able to determine which thermostats and conditions lead to the violation of equipartition, as well as the manner and degree to which they do so. We go on to rationalize these findings by illustrating how simple velocity rescaling violates balance, while the \gls{csvr} thermostat satisfies detailed balance. We end by illustrating some severe errors that are directly caused by these subtleties related to thermostatting. \section{Simulation Details} Diatomic ethane molecule simulations were conducted with the open-source LAMMPS code.\cite{pli951} LAMMPS input scripts are available.
\bibnote{We used the \DTMdate{2016-11-17} release of LAMMPS to conduct our simulations. The Gaussian thermostat was not implemented in LAMMPS, so we wrote an extension that integrates the equations of motion given by \citet{min031}. This extension was later incorporated into the LAMMPS code and made publicly available starting with the \DTMdate{2017-01-06} update as part of the ``fix nvk'' command.} Except where stated otherwise, the simulations consisted of cubic simulation boxes with \gls{pbc}, set up by placing the ethane molecules on a simple cubic lattice, equilibrated with a Langevin thermostat for at least \SI{50}{\nano\second}, switched to the target thermostat for at least a further \SI{50}{\nano\second} of equilibration, and finally run with the target thermostat for at least \SI{50}{\nano\second} of production. We verified that all simulations were conducted for sufficient time periods for the energies to equilibrate and be well sampled. The velocities of the particles in microcanonical simulations were rescaled once after Langevin equilibration such that the total energy was equal to the average total energy seen in the Langevin simulation. For the simulations in which the \gls{com} linear momentum was fixed to zero (stated in the figure captions), the system's linear momentum was zeroed every timestep, followed by a rescaling of velocities to maintain the same total kinetic energy as before the zeroing had occurred to prevent energy leakage. The equations of motion were integrated with a standard velocity Verlet algorithm using half-step velocity calculations. The timestep used was \SI{0.5}{\femto\second}, which was found to give adequate energy conservation in the microcanonical ensemble. Thermostat parameters were as follows, except where stated otherwise. Simple velocity rescaling was done every timestep. The \gls{nosehoover} chain consisted of three thermostats.
The Berendsen, \gls{nosehoover}, and \gls{csvr} thermostats were used with time damping constants ($\tau_T$) of \SI{100}{\femto\second}, and the \gls{nosehoover} thermostat used effective thermostat masses of $Q_1=N_{\text{DOF}} k_{\text{B}} T {\tau_T}^2$ and $Q_{i>1}=k_{\text{B}} T {\tau_T}^2$.\cite{mar921} When doing simulations in the microcanonical ensemble, the total energy was set such that a simulation temperature equal to the canonical ensemble simulations' target temperature was achieved. The target simulation temperature was set to \SI{350}{\kelvin}, well above the critical temperature of ethane.\cite{mar982} Kinetic energies of each diatomic molecule were partitioned into translational, rotational, and vibrational kinetic energies, as shown in the Appendix. In all figures that plot kinetic energies, the error bars shown represent $\pm1$ standard error of the mean. This was calculated by dividing the data from the production timesteps into \num{20} consecutive blocks, averaging the data for each block, and computing the standard error over the \num{20} data values.\cite{fre011} Error bars are not shown when they would be smaller than the symbols or the line widths. Bonded parameters for the united-atom ethane molecule were taken from \citet{har981} (harmonic bond potential $U(r)=k(r-r_0)^2$ with $r_0=$\SI{1.54}{\angstrom} and $k=$\SI{240}{\kcal\per\mol\per\angstrom\squared}) and non-bonded parameters were taken from \citet{mar982} (Lennard-Jones potential with $\epsilon=$\SI{0.195}{\kcal\per\mole}, $\sigma=$\SI{3.75}{\angstrom}, truncated and shifted at \SI{14}{\angstrom}, and no charges). Details on the simulations of benzene in \acrshort{mof}-5 can be found in the Appendix. \section{Results and Discussion} \subsection{Examining equipartition under different thermostats} It is instructive to reconsider the simple case previously examined by \citet{har981}: that of a single ethane molecule moving in one-dimensional space along its bond axis. 
In the microcanonical ensemble under perfect energy conservation, the translational kinetic energy will remain constant at its set initial energy and the vibrational kinetic energy will oscillate. In the canonical ensemble, equipartition states that the translational and vibrational \acrlong{dof} should each have an average kinetic energy of $\frac{1}{2}k_{\text{B}} T$. As expected, the Langevin thermostat satisfies the equipartition theorem (see Fig.~\ref{fig:1partsequipartition}). In agreement with the work of \citet{har981}, we find that simple velocity rescaling and the Berendsen thermostat bring about a violation of equipartition in the kinetic \acrlongpl{dof}, with all kinetic energy flowing to translational motion, in the plainest illustration of the flying ice cube effect. We find that the \gls{csvr} thermostat correctly partitions the energies. \begin{figure} \centering \includegraphics[width=3.32in]{figs/avg-energy-per-degreeoffreedom-publication-noPE.png} \caption{\label{fig:1partsequipartition} Partitioning of the kinetic energies obtained from one-dimensional \gls{md} simulations of a single ethane molecule using various thermostats. Both atoms were given a starting velocity of \SI{100}{\meter\per\second} along the same direction as the bond vector. For the thermostats shown, the same energy partitionings were observed regardless of initial bond length and initial \gls{com} momentum. The microcanonical, \gls{nosehoover} thermostat, and Gaussian thermostat results are not shown here since we found that the energy partitionings are dependent on the initial conditions, indicative of these thermostats' well-known lack of ergodicity, which is more manifest for small systems.\cite{fre011,nos842,tox901,nos911,mar921,tuc011,hes031,hun051}} \end{figure} We next consider the more complex case of a large number of ethane molecules interacting in three dimensions with anharmonic Lennard-Jones potentials.
Each diatomic ethane molecule now has three translational modes, two rotational modes, and one vibrational mode, so the equipartition theorem states that these modes' kinetic energies should be equal to $\frac{3}{2}k_{\text{B}} T$, $\frac{2}{2}k_{\text{B}} T$, and $\frac{1}{2}k_{\text{B}} T$, respectively, with a correction of $\frac{3}{2}k_{\text{B}} T/N_{\rm{molecs}}$ to the translational kinetic energy in cases where the \gls{com} momentum is constrained. In Fig.~\ref{fig:50partsequipartition}, we show that the Langevin, \gls{nosehoover}, \gls{csvr}, and Gaussian thermostats all exhibit correctly equipartitioned energies, as does the microcanonical ensemble. As in the case of the single ethane molecule in one dimension, the simple velocity rescaling and Berendsen thermostat algorithms lead to a violation of equipartition, with translational and rotational modes having too much kinetic energy and vibrational modes having too little. \begin{figure} \centering \includegraphics[width=6.87in]{figs/avg-energy-per-degreeoffreedom-presentation.png} \caption{\label{fig:50partsequipartition} Partitioning of the kinetic energies obtained from \gls{md} simulations of 50 ethane molecules in a \SI{30}{\angstrom} cubic simulation box using various thermostats.
In all simulations shown, the \gls{com} momentum was fixed to zero.} \end{figure} \subsection{Equivalence of simple velocity rescaling and the Gaussian thermostat} Since the thermostatting under simple velocity rescaling does not take place within the equations of motion, this ad hoc temperature control algorithm was initially difficult to investigate theoretically, and its validity was considered questionable.\cite{nos842,eva902} The algorithm's use was justified on the basis of empirical arguments, such as that simple velocity rescaling and the Gaussian thermostat give similar static and dynamic properties for the Lennard-Jones fluid.\cite{hai831} It was eventually proven that simple velocity rescaling is analytically equivalent to the Gaussian thermostat within an error of $\mathcal{O} \left(\text{timestep}\right)$ when the velocity rescaling time period is set equal to the timestep,\cite{nos911} which gave support for the legitimacy of using simple velocity rescaling to sample the isokinetic ensemble. However, we have shown that the Gaussian thermostat exhibits correct energy equipartitioning while simple velocity rescaling does not. We prove in the Appendix that the isokinetic ensemble should satisfy the equipartition theorem. Thus, it is clear that simple velocity rescaling does not actually sample the isokinetic ensemble. The equivalence of simple velocity rescaling and the Gaussian thermostat under small timesteps leads to the expectation that the flying ice cube effect will gradually disappear under simple velocity rescaling as the timestep is decreased. We confirm this expectation in Fig.~\ref{fig:timestep}. However, Fig.~\ref{fig:timestep} shows that the timestep needs to be reduced by over three orders of magnitude from typical simulation timesteps before the flying ice cube effect is no longer discernible.
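The drift of kinetic energy into translation under per-step rescaling can be reproduced with a minimal toy model. The sketch below is an illustration in reduced units with arbitrary parameters, not the paper's LAMMPS ethane setup: two unit masses joined by a harmonic bond in one dimension, integrated with velocity Verlet, with all velocities rescaled to a fixed target kinetic energy every step.

```python
import math

# Toy 1D model of the flying ice cube effect (reduced units; all
# parameter values are arbitrary illustrative choices).
m, k, r0 = 1.0, 1.0, 1.0
dt = 0.1   # ~1/31 of the bond's vibrational period 2*pi/sqrt(2k/mu)

def force(x1, x2):
    # U(r) = k (r - r0)^2, so F2 = -2k (r - r0) and F1 = -F2
    f2 = -2.0 * k * ((x2 - x1) - r0)
    return -f2, f2

def kinetic_split(v1, v2):
    # exact decomposition K_total = K_trans + K_vib for equal masses
    vcm = 0.5 * (v1 + v2)
    mu = 0.5 * m                       # reduced mass
    k_trans = 0.5 * (2.0 * m) * vcm**2
    k_vib = 0.5 * mu * (v2 - v1)**2
    return k_trans, k_vib

# start with both translational and vibrational kinetic energy, bond at r0
x1, x2 = 0.0, r0
v1, v2 = 0.2, 0.8
k_target = sum(kinetic_split(v1, v2))

f1, f2 = force(x1, x2)
for step in range(50_000):
    # velocity Verlet step
    v1 += 0.5 * dt * f1 / m; v2 += 0.5 * dt * f2 / m
    x1 += dt * v1;           x2 += dt * v2
    f1, f2 = force(x1, x2)
    v1 += 0.5 * dt * f1 / m; v2 += 0.5 * dt * f2 / m
    # simple velocity rescaling: force total kinetic energy to k_target
    kt, kv = kinetic_split(v1, v2)
    lam = math.sqrt(k_target / (kt + kv))
    v1 *= lam; v2 *= lam

kt, kv = kinetic_split(v1, v2)
print(f"translational fraction of kinetic energy: {kt / k_target:.3f}")
```

Because each rescaling multiplies the translational and vibrational kinetic energies by the same factor while the stored potential energy is untouched, the translational fraction ratchets toward one over the run, mirroring the single-molecule behavior described above.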
Of course, such a decreased timestep requires an equivalent three orders of magnitude increase in CPU time; if the timestep between integrations is so small, the forces on the particles should not need to be recalculated every timestep, and so one could envision implementing a multiple-time-step algorithm to mitigate the increase in CPU time. We also note that under the Berendsen thermostat, lowering the timestep does not correct the energy partitioning. \begin{figure} \centering \includegraphics[width=6.87in]{figs/avg-energy-per-degreeoffreedom-plot_both.png} \caption{\label{fig:timestep} Partitioning of the kinetic energies obtained from \gls{md} simulations performed under the same conditions as in Fig.~\ref{fig:50partsequipartition} but changing the timestep, using (left) simple velocity rescaling and (right) the Berendsen thermostat with the time damping constant maintained at \SI{100}{\femto\second}. Lines are a guide to the eye.} \end{figure} \subsection{Violation of balance causes the flying ice cube effect} The mechanism underlying the flying ice cube effect can be elucidated graphically for the first test case we examined, that of a single ethane molecule. In Fig.~\ref{fig:balanceviolation}, we show this system's phase space, putting translational kinetic energy on the $x$-axis and vibrational kinetic energy on the $y$-axis. \begin{figure} \centering \includegraphics[width=3.43in]{figs/balance-violation.png} \caption{\label{fig:balanceviolation} Kinetic phase space of a single ethane molecule moving in one-dimensional space along its bond axis under simple velocity rescaling. $\gls{Ktar}= k_{\text{B}}T_{\text{target}}$, $K_{\rm{trans}}=\frac{1}{2}\left(m_1+m_2\right)\left(\frac{m_1v_{1,x}+m_2v_{2,x}}{m_1+m_2}\right)^{2}$, and $K_{\rm{vib}}=\frac{1}{2}\left(\frac{m_1m_2}{m_1+m_2}\right)\left(v_{2,x}-v_{1,x}\right)^2$. Solid lines show a particular path in phase space between labeled points, referred to in the text.
Dotted lines are guides useful to understanding the velocity rescaling moves. Dashed lines show the boundaries of phase space accessible by any sequence of \gls{md} and velocity rescalings from lines $\overline{AB}$, $\overline{CD}$, and $\overline{AG}$, with the accessible phase spaces shaded.} \end{figure} During microcanonical \gls{md}, the system can only explore phase space on a vertical line between $y=0$ and $y=U_{\rm{max}}$ because a constant total energy and translational kinetic energy is maintained, with energy exchanges only allowed between vibrational kinetic energy and potential energy. Consider a \gls{md} simulation initially on such a vertical line in phase space, $\overline{AB}$. Under simple velocity rescaling, if a rescaling move is conducted at point $B$, the system will move to point $C$; this occurs because the translational and vibrational energies are both scaled by the same factor $\lambda^2$ such that their sum is equal to the target kinetic energy, moving the system to the intersection of the lines $y=\frac{y_B}{x_B}x$ and the target isokinetic line ($y=-x+K_{\rm{target}}$). Since points $B$ and $C$ have the same configuration with zero potential energy, \gls{md} will now explore line $\overline{CD}$. Let us examine whether we can reach point $B$ by rescaling from line $\overline{CD}$ back to a line with the same translational energy of line $\overline{AB}$. With a single rescaling, we would need to rescale from point $E$ to point $F$. From point $F$, \gls{md} will explore phase space on line $\overline{AG}$, where the lengths of lines $\overline{FG}$ and $\overline{CE}$ are equal, with both representing the stored potential energy of the system prior to the rescaling. Obviously, line $\overline{EF}$ must have a smaller slope than line $\overline{BC}$; accordingly, $y_G$ will necessarily be smaller than $y_B$. Hence, with a single velocity rescaling, point $B$ cannot be reached. 
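This single-rescaling argument can be checked numerically. The sketch below uses reduced units and arbitrary illustrative energies for point $B$ and $K_{\rm{target}}$ (chosen with $x_B+y_B>K_{\rm{target}}$ so that the rescaling at $B$ moves the system down onto the isokinetic line):

```python
# Numerical check of the single-rescaling construction above.
# Rescaling scales both kinetic energies by the same factor onto the
# isokinetic line y = -x + K_target; potential energy stored at the
# moment of rescaling is unchanged by the rescaling.
k_target = 1.0
x_b, y_b = 0.3, 0.9            # point B (zero potential energy)

# rescale at B -> C (both kinetic energies scaled by lambda^2)
lam2 = k_target / (x_b + y_b)
x_c, y_c = lam2 * x_b, lam2 * y_b

# choose E on line CD so that one rescaling restores x = x_B
s = x_b / x_c                  # required scaling factor
y_e = k_target / s - x_c       # E must land on the isokinetic line
pe = y_c - y_e                 # potential energy stored at E
y_f = s * y_e                  # F = rescaled E
y_g = y_f + pe                 # top of line AG reachable by MD

print(f"y_B = {y_b:.4f}, y_G = {y_g:.4f}")  # y_G < y_B: B is unreachable
```

Working through the algebra gives $y_B-y_G=(x_B+y_B-K_{\rm{target}})^2/(x_B+y_B)>0$, so a single rescaling can never return the system to point $B$, in agreement with the graphical argument.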
Multiple velocity rescalings from line $\overline{CD}$ allow us to reach a point with greater vibrational kinetic energy than point $G$. However, all phase space reachable by any number of velocity rescalings from line $\overline{CD}$ is bounded by the red dashed line in Fig.~\ref{fig:balanceviolation} (see Appendix for derivation). Continuing to rescale will continue to shrink the volume of accessible phase space, as rescaling from lines $\overline{AB}$ to $\overline{CD}$ to $\overline{AG}$ lowers the boundary from the blue to the red to the green dashed lines; eventually, accessible phase space will be confined only to the point with all kinetic energy in the translational mode. Notably, the decrease in accessible phase space becomes smaller as velocity rescaling occurs closer to the isokinetic line. In a simulation, this occurs when the timestep between velocity rescalings is reduced. This explains why the flying ice cube effect is reduced under simple velocity rescaling by decreasing the timestep (Fig.~\ref{fig:timestep}). \subsubsection{Monte Carlo perspective} We can view the combination of \gls{md} and velocity rescaling moves as a Monte Carlo simulation. Hence, our previous example shows that simple velocity rescaling violates the condition of balance.\cite{man991,fre011} In contrast, the \gls{csvr} thermostat can explicitly be proven to sample the desired distribution by considering the condition of detailed balance. Let us assume that we do a large and random number of \gls{md} steps between velocity rescaling moves. We define $A$ as the set of all configurations of the system with a total energy $E_A$.
The flow of configurations from set $A$ to set $B$ is given by: \begin{align} & K\left(A \rightarrow B\right) = \nonumber \\ & \quad P\left(E_A\right) \sum_{{\bf r}_1^n} \sum_{{\bf p}_1^n} \sum_{{\bf r}_2^n} \sum_{{\bf p}_2^n} p \left({\bf r}_1^n,{\bf p}_1^n | E_A\right) \delta \left(E\left({\bf r}_1^n,{\bf p}_1^n\right)-E_A\right) \alpha \left({\bf r}_1^n,{\bf p}_1^n \rightarrow {\bf r}_2^n,{\bf p}_2^n\right) \delta \left(E\left({\bf r}_2^n,{\bf p}_2^n\right)-E_B\right) \label{eq:detailedbalance1} \end{align} where ${\bf r}_1^n,{\bf p}_1^n$ is the configuration with position vector ${\bf r}_1^n$ and momentum vector ${\bf p}_1^n$, $p\left({\bf r}_1^n,{\bf p}_1^n | E_A\right)$ is the probability to find the configuration ${\bf r}_1^n,{\bf p}_1^n$ from all configurations with energy $E_A$ during \gls{md}, and $\alpha \left({\bf r}_1^n,{\bf p}_1^n \rightarrow {\bf r}_2^n,{\bf p}_2^n\right)$ is the \latin{a priori} probability to velocity rescale from configuration ${\bf r}_1^n,{\bf p}_1^n$ to configuration ${\bf r}_2^n,{\bf p}_2^n$. 
Recognizing that velocity rescaling does not alter positions: \begin{equation} K\left(A \rightarrow B\right) = P\left(E_A\right) \sum_{{\bf r}^n} \sum_{{\bf p}_1^n} \sum_{{\bf p}_2^n} p \left({\bf r}^n,{\bf p}_1^n | E_A\right) \delta \left(E\left({\bf r}^n,{\bf p}_1^n\right)-E_A\right) \alpha \left({\bf r}^n,{\bf p}_1^n \rightarrow {\bf r}^n,{\bf p}_2^n\right) \delta \left(E\left({\bf r}^n,{\bf p}_2^n\right)-E_B\right) \label{eq:detailedbalance2} \end{equation} Next, recognizing that velocity rescaling can only give one configuration in momentum space with $E\left({\bf r}^n,{\bf p}_2^n\right)=E_B$ from starting configuration ${\bf r}^n,{\bf p}_1^n$, and that the acceptance probabilities only involve the kinetic energy: \begin{equation} K\left(A \rightarrow B\right) = P\left(E_A\right) \sum_{{\bf r}^n} \sum_{{\bf p}^n} p \left({\bf r}^n,{\bf p}^n | E_A\right) \delta \left(E\left({\bf r}^n,{\bf p}^n\right)-E_A\right) \alpha \left(K=E_A-U\left( {\bf r}^n \right) \rightarrow E_B-U\left( {\bf r}^n \right)\right) \label{eq:detailedbalance3} \end{equation} where $\alpha \left(K=E_A-U\left( {\bf r}^n \right) \rightarrow E_B-U\left( {\bf r}^n \right)\right)$ is the \latin{a priori} probability to velocity rescale to the configuration having kinetic energy $K=E_B-U\left( {\bf r}^n \right)$ given we start with a configuration having kinetic energy $K=E_A-U\left( {\bf r}^n \right)$.
Then, recognizing that momentum and position are decoupled, i.e., the number of possible states in momentum space only depends on the total kinetic energy but does not depend on the details of the potential energy surface, and each of these possible states in momentum space are equally likely: \begin{equation} K\left(A \rightarrow B\right) = P\left(E_A\right) \sum_{{\bf r}^n} \omega \left( E_A-U \left( {\bf r}^n \right) \right) p \left({\bf r}^n,{\bf p}^n | E_A\right) \alpha \left(K=E_A-U\left( {\bf r}^n \right) \rightarrow E_B-U\left( {\bf r}^n \right)\right) \label{eq:detailedbalance4} \end{equation} where $\omega\left( K\right)$ is the number of configurations in momentum space for a given kinetic energy $K$ (equivalent to the ideal gas microcanonical partition function). Finally, by making the substitutions $p\left({\bf r}^n,{\bf p}^n | E_A\right) = \Omega_{NVE_A}^{-1}$ and $P\left(E_A\right) = \frac{e^{-\beta E_A} \Omega_{NVE_A}}{Z_{NVT}}$: \begin{equation} K\left(A \rightarrow B\right) = \frac{e^{-\beta E_A}}{Z_{NVT}} \sum_{{\bf r}^n} \omega \left(E_A-U \left( {\bf r}^n \right) \right) \alpha \left(K=E_A-U\left( {\bf r}^n \right) \rightarrow E_B-U\left( {\bf r}^n \right)\right) \label{eq:detailedbalance5} \end{equation} The two flows, $K\left(A \rightarrow B\right)$ and $K\left(B \rightarrow A\right)$, are equal if we impose as condition for the \latin{a priori} probabilities: \begin{align} \frac{\alpha \left(K=E_A-U\left( {\bf r}^n \right) \rightarrow E_B-U\left( {\bf r}^n \right)\right)} {\alpha \left(K=E_B-U\left( {\bf r}^n \right) \rightarrow E_A-U\left( {\bf r}^n \right)\right)} &= \frac{e^{-\beta E_B} \omega \left( E_B-U \left( {\bf r}^n \right) \right)} {e^{-\beta E_A} \omega \left( E_A-U \left( {\bf r}^n \right) \right)} \nonumber \\ &= \frac{e^{-\beta \left(E_B-U \left( {\bf r}^n \right)\right)} \left( E_B-U \left( {\bf r}^n \right) \right)^{\sfrac{N_{\text{DOF}}}{2}-1}} {e^{-\beta \left(E_A-U \left( {\bf r}^n \right)\right)} \left( E_A-U \left( {\bf 
r}^n \right) \right)^{\sfrac{N_{\text{DOF}}}{2}-1}} \label{eq:detailedbalance6} \end{align} in which we used the known expression for the ideal gas microcanonical partition function.\cite{tuc101} Eq.~\ref{eq:detailedbalance6} is satisfied by the \gls{csvr} thermostat, which rescales velocities to the target kinetic energy distribution given by the gamma distribution: \begin{equation} \label{eq:gamma} P(K) =\frac{e^{-\beta K} K^{\sfrac{N_{\text{DOF}}}{2}-1}}{\int_0^{\infty} dK K^{\sfrac{N_{\text{DOF}}}{2}-1} e^{-\beta K}} =\frac{e^{-\beta K} K^{\sfrac{N_{\text{DOF}}}{2}-1}} {\beta^{-\sfrac{N_{\text{DOF}}}{2}}\Gamma\left(\sfrac{N_{\text{DOF}}}{2}\right)} \end{equation} Hence, the \gls{csvr} thermostat satisfies detailed balance. \subsubsection{Velocity rescaling to other kinetic energy distributions} We have seen that simple velocity rescaling violates balance and brings about the flying ice cube effect, while the \gls{csvr} thermostat satisfies detailed balance and does not exhibit the artifact. One key difference between these algorithms is that simple velocity rescaling restricts the rescaling factor ($\lambda$) to be less than one when the system's instantaneous temperature is greater than the target temperature and greater than one when the instantaneous temperature is less than the target temperature. It is this restriction which allowed us to show graphically that simple velocity rescaling moves decrease accessible phase space. It is instructive to consider the effects of relaxing this restriction while rescaling velocities to a non-canonical kinetic energy distribution. 
This procedure would not render any areas of phase space inaccessible, but the rescaling would be to a distribution that is not necessarily invariant under Hamiltonian dynamics.\cite{and801,man991} To change the target kinetic energy distribution, we modified the \gls{csvr} thermostat's value of $N_{\text{DOF}}$ in Eq.~\ref{eq:gamma} from the actual number of \acrlongpl{dof} ($N_{\text{DOF},0}$) while simultaneously adjusting $\beta$ from its initial value ($\beta_0$) such that $\beta=\beta_0\frac{N_{\text{DOF}}}{N_{\text{DOF},0}}$ in order to maintain a constant average kinetic energy. The resulting kinetic energy distributions are shown in the top of Fig.~\ref{fig:bussidof} and include distributions that are sharper ($N_{\text{DOF}}>N_{\text{DOF},0}$) and broader ($N_{\text{DOF}}<N_{\text{DOF},0}$) than the canonical distribution. In the limit of $N_{\text{DOF}}\to\infty$, this method closely approximates simple velocity rescaling or the Berendsen thermostat, depending on the time damping constant used. \begin{figure} \centering \includegraphics[width=3.46in]{figs/KE-dists-300-subset.png} \\ \begin{minipage}[b]{\textwidth} \includegraphics[width=3.67in]{figs/avg-energy-per-degreeoffreedom-plot-V24-title.png} ~ \includegraphics[width=3.33in]{figs/avg-energy-per-degreeoffreedom-plot-V25-title.png} \end{minipage} \caption{\label{fig:bussidof} (top) Probability density function of kinetic energies following $P(K) =\frac{e^{-\beta K} K^{\sfrac{N_{\text{DOF}}}{2}-1}} {\beta^{-\sfrac{N_{\text{DOF}}}{2}}\Gamma\left(\sfrac{N_{\text{DOF}}}{2}\right)}$, where $\beta$ is chosen such that the average kinetic energy (temperature) is the same for all choices of $N_{\text{DOF}}$ via $\beta=\beta_0\frac{N_{\text{DOF}}}{N_{\text{DOF},0}}$, $N_{\text{DOF},0}=300$, and $\beta_0=\left(k_{\text{B}}\times350\text{ K}\right)^{-1}$. 
(bottom) Partitioning of the kinetic energies obtained from \gls{md} simulations of 50 ethane molecules in a \SI{30}{\angstrom} cubic simulation box using the \gls{csvr} thermostat, modified such that the target distribution of kinetic energies was set to those shown in the top part of the figure for the proper $N_{\text{DOF},0}$ value. (bottom left) Here, the \gls{com} momentum was fixed at zero and $N_{\text{DOF},0}$ was set to 297. (bottom right) Here, the \gls{com} momentum was not fixed after the Langevin thermostat equilibration, allowing the \gls{com} momentum to drift, and $N_{\text{DOF},0}$ was set to 300. Lines are a guide to the eye.} \end{figure} The energy partitionings that resulted from setting these target kinetic energy distributions are shown for simulations in the bottom of Fig.~\ref{fig:bussidof}. It can be seen that with sharper distributions, the flying ice cube effect is observed, with more kinetic energy partitioned in low-frequency modes and less in high-frequency modes. Interestingly, the opposite effect is observed with broader distributions, with more kinetic energy partitioned in high-frequency modes and less in low-frequency modes. When the \gls{com} momentum is not constrained to zero, a more drastic effect is observed, such that rotational kinetic energy decreases both with decreasing $N_{\text{DOF}}$ as energy flows to the higher-frequency vibrational modes and with increasing $N_{\text{DOF}}$ as almost all energy flows to the lower-frequency translational modes. Only at the canonical kinetic energy distribution ($N_{\text{DOF}}=297$ and $N_{\text{DOF}}=300$ for the constrained and not-constrained \gls{com} momentum simulations, respectively) is proper equipartitioning observed. 
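The family of target distributions used above can be sketched by directly sampling the gamma distribution of Eq.~\ref{eq:gamma} (a minimal illustration, not the modified thermostat implementation; the sample count and random seed are arbitrary):

```python
import random
import statistics

# Sample kinetic energies from the gamma distribution of Eq. (gamma):
# shape N_DOF/2, scale 1/beta, with beta = beta0 * N_DOF / N_DOF0 so
# that the mean kinetic energy (temperature) is fixed while the
# distribution's width changes. Reduced units: beta0 = 1, N_DOF0 = 300.
random.seed(1)
beta0, n_dof0 = 1.0, 300

for n_dof in (30, 300, 3000):
    beta = beta0 * n_dof / n_dof0      # keeps <K> = N_DOF0 / (2 beta0)
    shape, scale = n_dof / 2.0, 1.0 / beta
    samples = [random.gammavariate(shape, scale) for _ in range(20_000)]
    mean = statistics.fmean(samples)
    rel_width = statistics.pstdev(samples) / mean   # ~ sqrt(2 / N_DOF)
    print(f"N_DOF={n_dof:5d}  <K>={mean:7.2f}  std/mean={rel_width:.3f}")
```

Every choice of $N_{\text{DOF}}$ gives the same mean kinetic energy, while the relative width follows $\sqrt{2/N_{\text{DOF}}}$; this $\mathcal{O}(1/\sqrt{N_{\text{DOF}}})$ narrowing is also relevant to the finite-size behavior discussed later.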
\subsection{Conditions affecting the flying ice cube effect's conspicuousness} Artifacts relating to the flying ice cube effect do not always appear when the simple velocity rescaling or Berendsen thermostat algorithms are used.\cite{mar021,mud041,bas131} Indeed, when the flying ice cube effect was first found,\cite{lem941,har981} fewer alternatives to these thermostatting algorithms were available than at present, e.g., the \gls{csvr} thermostat had not yet come into popular use, and so protective measures were recommended to lower the likelihood of the artifact occurring under these faulty thermostats.\cite{har981} Here, we investigate these recommendations and other conditions which we found affect the conspicuousness of the flying ice cube effect for our system of interacting diatomic ethane molecules. One recommendation given in \citet{har981} was to lower the thermostat's coupling strength, either by less frequent rescaling under simple velocity rescaling or by increasing the time damping constant under the Berendsen thermostat. Decreasing the coupling strength allows the system's natural dynamics to bring about energy equipartitioning faster than the thermostat can disturb it. In Fig.~\ref{fig:dampingconst}, we show that this recommendation does indeed reduce the violation of equipartition. However, the flying ice cube artifact was not fully resolved until these time parameters were larger than \SI{100}{\pico\second}, a value much greater than the \SI{0.5}{\pico\second} time damping constant above which \citet{ber841} showed that energy fluctuations under the Berendsen thermostat are similar to energy fluctuations in the microcanonical ensemble and thus concluded that the thermostat has little influence on the dynamics.
This discrepancy may be partially explained by the use of the rigid SPC water model\cite{ber811} to evaluate the Berendsen thermostat in \citet{ber841}, as a rigid molecule lacks the high-frequency vibrational modes that lead most directly to the flying ice cube effect. Meanwhile, we found that energy equipartitioning held under the \gls{csvr} thermostat regardless of the value of the time damping constant. At the weakest coupling strengths shown in Fig.~\ref{fig:dampingconst}, it can be seen that the desired temperature was not well established in these \SI{100}{\nano\second} simulations. Varying the coupling strength does not come without its risks. Fig.~\ref{fig:dampingconst} shows an anomalous data point when simple velocity rescaling is performed every \SI{500}{\femto\second}. Further investigation allowed us to characterize this anomaly as a resonance effect associated with bond vibration. The characteristic period of the \ce{CH3-CH3} harmonic bond is \SI{38.4}{\femto\second}. When the time rescaling period is set close to an integer multiple of half this characteristic period, large-amplitude bond vibrations occur, becoming stronger when the time rescaling period more exactly matches the multiple. These resonance effects become weaker as the multiple grows, which explains why the vibrational energy at the time rescaling period of \SI{1000}{\femto\second} is greater than at \SI{2000}{\femto\second}. We also observed resonance effects when rescaling close to the other multiples of half the bond's characteristic period that we tested. We will shortly show that altering the coupling strength can bring about resonance effects under the Berendsen thermostat as well.
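The quoted \SI{38.4}{\femto\second} period follows directly from the bond parameters: for $U(r)=k(r-r_0)^2$ the force constant is $2k$, so $\omega=\sqrt{2k/\mu}$, with $\mu$ the reduced mass of the two \ce{CH3} groups (taken here as approximately 15.035 g/mol each). A quick unit-conversion check:

```python
import math

# Characteristic vibrational period of the CH3-CH3 harmonic bond with
# k = 240 kcal/mol/A^2 (for U(r) = k (r - r0)^2 the force constant is 2k).
N_A = 6.02214076e23            # Avogadro constant, 1/mol
kcal = 4184.0                  # J per kcal
m_ch3 = 15.035e-3 / N_A        # kg, approximate united-atom CH3 mass
mu = m_ch3 / 2.0               # reduced mass of the diatomic, kg
k = 240.0 * kcal / N_A / 1e-20 # J/m^2, converted from kcal/mol/A^2
omega = math.sqrt(2.0 * k / mu)
period_fs = 2.0 * math.pi / omega * 1e15
print(f"bond vibration period: {period_fs:.1f} fs")
```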
\begin{figure} \centering \includegraphics[width=6.87in]{figs/avg-energy-per-degreeoffreedom-plot-dampingconst-subset-plot_three_inset.png} \caption{\label{fig:dampingconst} Partitioning of the kinetic energies obtained from \gls{md} simulations performed under the same conditions as in Fig.~\ref{fig:50partsequipartition}, but changing (left) the time rescaling period for simple velocity rescaling, (middle) the time damping constant for the Berendsen thermostat, and (right) the time damping constant for the \gls{csvr} thermostat, all three with the timestep maintained at \SI{0.5}{\femto\second}. The inset shown in the simple velocity rescaling graph shows additional data near the time rescaling period of \SI{500}{\femto\second}, at which point a resonance artifact associated with the \ce{CH3-CH3} bond's characteristic vibrational frequency can be observed.} \end{figure} Another precautionary measure recommended in \citet{har981} was to periodically zero the \gls{com} momentum, as it represents the lowest-frequency \acrlong{dof} into which most kinetic energy flows. The Newtonian equations of motion preserve \gls{com} momentum, but numerical errors cause this preservation to be inexact. Constraint of the \gls{com} momentum to zero is often used to safeguard against these numerical errors: a safeguard we used throughout this paper except where stated. In Fig.~\ref{fig:50partsequipartitionberendsen}, we show that releasing this constraint does indeed significantly worsen the flying ice cube effect, though equipartition is violated both with and without the constraint. We further explored the effects of allowing the \gls{com} momentum to vary by replacing the \gls{pbc} with reflecting walls, which we found eliminates the flying ice cube effect completely, with no violation of the equipartition theorem.
In both of these cases, \gls{com} momentum is not conserved, though with opposite results (in the former case, \gls{com} momentum can build up, while in the latter case, it cannot). We hypothesize that reflecting walls eliminate the flying ice cube effect because the additional collisions with the walls give additional opportunities for energy to be transferred between kinetic modes, which act more quickly than the Berendsen thermostat can incorrectly partition the energy. To test this hypothesis, we made the walls softer so that a smaller redistribution of intramolecular kinetic energy would take place upon collision. Instead of reflecting walls, we used wall-particle interactions with a softer 9-3 Lennard-Jones potential,\cite{ste731} $U(r)=\epsilon\left[\frac{2}{15}\left(\frac{\sigma}{r}\right)^{9}-\left(\frac{\sigma}{r}\right)^{3}\right]$ with arbitrary $\epsilon$ and $\sigma$ values of \SI{0.195}{\kcal\per\mole} and \SI{3.75}{\angstrom}, respectively, and a shifted cutoff of \SI{14}{\angstrom}. We found that with this softer wall, energy equipartitioning holds less well than with the harder wall, giving some support to our hypothesis. We note further that the presence of the reflecting wall did not significantly change the distribution of total kinetic energies, i.e., the wall did not bring about equipartition indirectly through bringing about a more proper kinetic energy distribution. \begin{figure} \centering \includegraphics[width=6.53in]{figs/avg-energy-per-degreeoffreedom-paper-berendsenvariants-nomolecnumbers-plot_both.png} \caption{\label{fig:50partsequipartitionberendsen} Partitioning of the kinetic energies obtained from \gls{md} simulations of 50 ethane molecules in a \SI{30}{\angstrom} cubic simulation box under different conditions using (left) simple velocity rescaling and (right) the Berendsen thermostat.
In each, the first simulation from the left is the same simulation as shown in Fig.~\ref{fig:50partsequipartition} and provides a basis for comparison. The second simulation shows the effects of letting the \gls{com} momentum drift (COM: free) as opposed to fixing it to zero (COM: fixed). The third and fourth simulations show the effects of hard (PBC: reflecting) and soft (PBC: 9-3 Lennard-Jones) wall boundaries, respectively, as opposed to \gls{pbc} (Walls: PBC). Note that the dashed lines meant as a guide to the eye do not include the \gls{com} momentum constraint correction of $\frac{\frac{3}{2}k_{\text{B}} T}{N_{\rm{molecs}}}$ that is reflected in the first simulation.} \end{figure} Finally, we found that increasing the size of the simulation box reduces the flying ice cube effect, as can be seen in Fig.~\ref{fig:numparts}. As with decreasing the timestep (Fig.~\ref{fig:timestep}), here too we find that simple velocity rescaling recovers equipartition more easily than the Berendsen thermostat. We conjecture that this finite size effect occurs because the canonical ensemble's distribution of kinetic energy becomes more sharply peaked with an increasing number of particles, i.e., the ratio of the standard deviation to the mean of the canonical kinetic energy distribution (the gamma distribution given in Eq.~\ref{eq:gamma}) scales as $\mathcal{O} \left(\frac{1}{\sqrt{N_{\text{DOF}}}}\right)$ at constant temperature. Thus, as the number of particles increases, simple velocity rescaling and the Berendsen thermostat become more similar to the \gls{csvr} thermostat.
\begin{figure} \centering \includegraphics[width=6.53in]{figs/avg-energy-per-degreeoffreedom-vsNumPart-plot_both.png} \caption{\label{fig:numparts} Partitioning of the kinetic energies obtained from \gls{md} simulations performed under the same conditions as in Fig.~\ref{fig:50partsequipartition} but changing the number of ethane molecules, using (left) simple velocity rescaling and (right) the Berendsen thermostat. The simulation with 50 ethane molecules took place in a \SI{30}{\angstrom} cubic simulation box, and the other simulations had their simulation boxes enlarged to maintain the same density. Note that the dashed lines, meant as a guide to the eye, do not include the \gls{com} momentum constraint correction of $\frac{\frac{3}{2}k_{\text{B}} T}{N_{\rm{molecs}}}$, which is responsible for the slight deviation of the total kinetic energy from $\frac{6}{2}k_{\text{B}}T$ that is more evident for the simulations with fewer molecules.} \end{figure} \subsection{Sampling configurational \acrlongpl{dof}} So far, we have exclusively used kinetic \acrlongpl{dof} to show that the simple velocity rescaling and Berendsen thermostat algorithms cause the violation of equipartition. These methods are sometimes used only to sample configurational \acrlongpl{dof}, justified on the grounds that the isokinetic ensemble samples the same configurational phase space as the canonical ensemble.\cite{hai831,eva832,nos842,min031,col101} Since we have proven that the violation of equipartition is incompatible with sampling the isokinetic ensemble, it follows that this justification is invalid. We now wish to show this explicitly. To do so, we will examine the \gls{rdf}, which depends solely on configurational \acrlongpl{dof}. In Fig.~\ref{fig:rdf} (top-left), we show the \glspl{rdf} of the supercritical ethane simulations whose kinetic energy partitionings are shown in Fig.~\ref{fig:50partsequipartition}. 
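As a reminder of what the \gls{rdf} measures, a minimal pair-distance histogram estimator of $g(r)$ for a cubic periodic box can be sketched as follows (illustrative only; the function and its conventions are ours, not the analysis code used for the figures):

```python
import math

def rdf(positions, box, r_max, n_bins):
    """Minimal radial distribution function for a cubic periodic box.

    positions: list of (x, y, z) tuples; box: cubic edge length.
    Returns bin centers and g(r), normalized against the ideal-gas
    pair count at the same density, so g(r) -> 1 for uncorrelated
    particles (r_max should not exceed box/2 for minimum imaging).
    """
    n = len(positions)
    dr = r_max / n_bins
    hist = [0] * n_bins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a, b in zip(positions[i], positions[j]):
                d = b - a
                d -= box * round(d / box)  # minimum-image convention
                d2 += d * d
            r = math.sqrt(d2)
            if r < r_max:
                hist[int(r / dr)] += 2  # count the pair once per particle
    rho = n / box ** 3
    centers, g = [], []
    for k in range(n_bins):
        r_lo, r_hi = k * dr, (k + 1) * dr
        shell = 4.0 / 3.0 * math.pi * (r_hi ** 3 - r_lo ** 3)
        g.append(hist[k] / (n * rho * shell))
        centers.append(r_lo + 0.5 * dr)
    return centers, g
```

Because only positions enter this calculation, any thermostat-induced distortion of the \gls{rdf} is direct evidence that configurational sampling is wrong.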
The \gls{nosehoover}, \gls{csvr}, Langevin, and Gaussian thermostat simulations exhibit identical \glspl{rdf}, but the simple velocity rescaling and Berendsen thermostat simulations show a subtly different \gls{rdf}. Although the difference is slight, it is sufficient to demonstrably disprove the claims that simple velocity rescaling samples the same configurational phase space as the canonical ensemble and that the Berendsen thermostat samples a configurational phase space intermediate between the canonical and microcanonical ensembles.\cite{mor001,mor031} \begin{figure} \centering \begin{minipage}{0.49\textwidth} \includegraphics[width=3.4in]{figs/rdf-vapor-broken-title.png} \end{minipage} \begin{minipage}{0.49\textwidth} \includegraphics[width=3.34in]{figs/rdf-liquid-broken-title.png} \end{minipage} \\ \includegraphics[width=5.14in]{figs/avg-energy-per-degreeoffreedom-presentation-limited-title.png} \caption{\label{fig:rdf} (top-left) \Acrfull{rdf} of the \ce{CH3-CH3} distance obtained from the \gls{md} simulations of 50 ethane molecules in a \SI{30}{\angstrom} cubic simulation box with a target temperature set to \SI{350}{\kelvin} using various thermostats. These simulations were the same as the ones whose kinetic energy partitionings are shown in Fig.~\ref{fig:50partsequipartition}. (top-right) \gls{rdf} of the \ce{CH3-CH3} distance obtained from \gls{md} simulations of 235 ethane molecules in a \SI{30}{\angstrom} cubic simulation box with a target temperature set to \SI{256}{\kelvin} using various thermostats. These conditions were chosen such that the simulation would take place under saturated liquid conditions.\cite{mar982} For both sets of simulations, \gls{com} momentum was fixed to zero throughout. The \glspl{rdf} of both sets of simulations done using the Langevin and \gls{csvr} thermostats were indistinguishable from the \gls{rdf} using the \gls{nosehoover} thermostat within the line width. 
(bottom) Partitioning of the kinetic energies obtained from the saturated liquid simulations. The results of the simulations using the Langevin and \gls{csvr} thermostats were indistinguishable from the dashed lines of equipartition within the line width.} \end{figure} We next turn to saturated liquid phase ethane simulations, for which we show \glspl{rdf} under various thermostats in Fig.~\ref{fig:rdf} (top-right). The \gls{nosehoover}, Langevin, \gls{csvr}, and Gaussian thermostats all give identical results typical of a simple diatomic liquid.\cite{cha871} The simple velocity rescaling algorithm once again shows a subtle difference, but the Berendsen thermostat shows a very different \gls{rdf} more reminiscent of the solid phase than the liquid phase,\cite{cha871} and visualization of the Berendsen thermostat system shows that the ethane molecules have indeed packed into a volume smaller than available in the simulation box. Examination of the kinetic energy partitionings in Fig.~\ref{fig:rdf} (bottom) shows that most of the kinetic energy is in vibrational modes, which is unexpected since that is the opposite of the usual flying ice cube result. The Berendsen thermostat's results are heavily dependent on the choice of time damping constant, with the \gls{rdf} indicating a solid-like phase for time damping constants approximately from \SIrange{10}{150}{\femto\second} (Fig.~\ref{fig:rdfSI}). This effect of intermediate time damping constants giving larger deviations than small or large ones has been observed before in simulations of bulk water, where the effect was attributed to the intermediate time constant matching a characteristic time scale on which dynamical correlations are most pronounced.\cite{mud041} It appears clear that the Berendsen thermostat is not immune to the resonance artifacts that we have also seen with simple velocity rescaling (Fig.~\ref{fig:dampingconst}). 
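For context on the role of the time damping constant, the Berendsen weak-coupling rescaling factor applied each step is $\lambda = \left[1 + \frac{\Delta t}{\tau}\left(\frac{T_0}{T} - 1\right)\right]^{1/2}$; at $\tau = \Delta t$ it reduces to simple velocity rescaling, and as $\tau \to \infty$ it approaches unity. A one-function sketch (illustrative only):

```python
def berendsen_lambda(T_inst, T_target, dt, tau):
    """Velocity scaling factor of the Berendsen weak-coupling thermostat.

    T_inst: instantaneous temperature; T_target: target temperature;
    dt: timestep; tau: time damping constant (same units as dt).
    """
    return (1.0 + (dt / tau) * (T_target / T_inst - 1.0)) ** 0.5
```

Intermediate values of `tau` thus interpolate between the two limits, which is exactly the regime in which the resonance-like artifacts described above appear.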
\subsection{Contemporary use of the simple velocity rescaling and Berendsen thermostat algorithms} Ours is not the first publication to warn against the use of simple velocity rescaling and the Berendsen thermostat.\cite{har981,coo081,shi131} Nonetheless, as we have stated, these algorithms continue to be widely used (Fig.~\ref{fig:thermostatcitations}). As we have just shown, for some systems the improper velocity rescaling algorithms may not greatly affect the system properties, and there is a slew of studies in which these thermostats are tested for specific systems, with some showing artifacts and others showing indistinguishability.\cite{che961,mar021,mud041,mor081,ros091,spi111,bas131} However, slight changes to a system could introduce artifacts in an unpredictable fashion. Rather than testing for the correctness of simple velocity rescaling or the Berendsen thermostat in every specific system, we advocate for the cessation of their use. We find no reason to use simple velocity rescaling or the Berendsen thermostat instead of the \gls{csvr} thermostat given their similar ease of implementation, likely similar speeds of equilibration,\cite{bus081} and our study's finding that the \gls{csvr} thermostat does not lead to the flying ice cube effect. As a case study on the dangers of continuing to use these thermostat algorithms, we examine a highly-cited study in depth, the replication of which initially led us to examine the flying ice cube phenomenon. 
In 2007, a flexible force field intended for use with \acrshort{mof}-5 was parameterized,\cite{taf071} and it was shortly thereafter used to study the confined transport of guest molecules within the framework.\cite{ami071} The authors were able to replicate the experimental diffusion coefficient of confined benzene, but they found that this replicability only held when the \gls{mof} was allowed to be flexible; when the \gls{mof} atoms were held rigid, the benzene diffusion coefficient increased by an order of magnitude. The conclusions of this manuscript are often invoked to question the validity of the rigid framework assumption that is commonly used in many \gls{mof} molecular simulation studies. The finding continues to be accepted since it is known that the effect of framework flexibility on guest diffusion is complex,\cite{smi081} though surprise has been expressed\cite{see091} since a rigid lattice more typically leads to a decrease in the diffusion coefficient for tight-fitting molecules.\cite{smi081} In addition, using a different flexible force field for \acrshort{mof}-5,\cite{dub071} it was found that flexibility had little effect on the diffusion coefficient, increasing it by less than a factor of 1.5.\cite{for091} As the reader now anticipates, \citet{ami071} used the Berendsen thermostat, which was the default option in the Tinker simulation package at the time (the default has since been changed to the \gls{csvr} thermostat).\cite{pon871} As we show in Fig.~\ref{fig:diffcoeff}, the result of \citet{ami071} was completely an artifact of the Berendsen thermostat. Using the same force field, no dependence of the benzene diffusion coefficient on the framework flexibility is observed when a \gls{nosehoover} or \gls{csvr} thermostat is used. 
Apparently, when the Berendsen thermostat is coupled to fewer \acrlongpl{dof} during rigid framework simulations, the flying ice cube effect becomes more noticeable and kinetic energy is drawn into the translational modes of the guest benzene molecules, accounting for the result observed by \citet{ami071}. We also found that changing the time damping constant of the Berendsen thermostat had a large effect on the diffusion coefficient (Fig.~\ref{fig:diffcoeffSI}). \begin{figure} \centering \includegraphics[width=3.48in]{figs/Diffusion-Publication-tau100.png} \caption{\label{fig:diffcoeff} Self-diffusion coefficient of benzene in \acrshort{mof}-5 at a loading of 10 molecules per unit cell as a function of inverse temperature. Data are shown for flexible and rigid frameworks, and using the Berendsen and \gls{nosehoover} chain thermostats (use of the \gls{csvr} thermostat gives diffusion coefficients that are statistically indistinguishable from use of the \gls{nosehoover} thermostat). With the Berendsen thermostat, it appears that the framework flexibility has a large effect on the calculated diffusion coefficient, replicating the main finding of \citet{ami071}. However, it is seen that this result is a flying ice cube artifact, as no flexibility effect is seen with the \gls{nosehoover} thermostat. Error bars represent $\pm1$ standard error of the mean using block averaging,\cite{fre011} and are not shown for the data from \citet{ami071} or if they would be smaller than the symbol size.} \end{figure} As an aside, it is now known that bulk-like vapor and liquid phases of benzene exist in \acrshort{mof}-5 below a critical temperature.\cite{bra152} It is actually improper to calculate the diffusion coefficient at a loading that is within the vapor-liquid phase envelope, e.g., \numrange{3}{67} molecules per unit cell at \SI{300}{\kelvin} in this system,\cite{bra152} since there is not a single homogeneous phase present at these conditions. 
Here, we are not attempting to calculate correct diffusion coefficients of benzene in \acrshort{mof}-5, but rather to compare results with the prior work of \citet{ami071}, which conducted the simulations at a loading of 10 molecules per unit cell. The importance of framework flexibility on the simulated diffusion coefficient is expected to be independent of the choice of loading. Other errors, varying in severity, are likely present in many of the thousands of studies that have used simple velocity rescaling or the Berendsen thermostat. Occasionally, one of these errors is explicitly pointed out,\cite{ley081,won101} but negative replications are not commonly published,\cite{bak161} so the extent to which these articles contain data contaminated by the flying ice cube artifact cannot be estimated. \section{Concluding Remarks} In this work, we have shown that rescaling velocities to a non-canonical distribution of kinetic energies, as is done with the simple velocity rescaling and Berendsen thermostat algorithms, causes the flying ice cube effect whereby the equipartition theorem is violated. Thus, simple velocity rescaling does not sample the isokinetic ensemble, and the Berendsen thermostat does not sample a configurational phase space intermediate between the canonical and microcanonical ensembles; justifications for their use do not hold. The flying ice cube effect is brought about by a violation of balance causing systematic redistributions of kinetic energy; this violation is lessened as the timestep between simple velocity rescalings is decreased, eventually making simple velocity rescaling equivalent to the Gaussian thermostat. Equipartition violation is completely avoided when velocities are rescaled to the canonical distribution of kinetic energies, as is done under the \gls{csvr} thermostat, because detailed balance is obeyed. 
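The distinction drawn above between the two kinds of rescaling can be sketched schematically: simple velocity rescaling pins the kinetic energy to a fixed target, whereas canonical-style rescaling draws the target from the gamma distribution of Eq.~\ref{eq:gamma}. This toy illustration (function names ours) omits the time correlation of successive targets present in the full \gls{csvr} update:

```python
import random

def rescale_factor_simple(ke_inst, ke_target):
    """Simple velocity rescaling: force the instantaneous kinetic
    energy exactly onto the fixed target value."""
    return (ke_target / ke_inst) ** 0.5

def rescale_factor_canonical(ke_inst, n_dof, kT, rng=random):
    """Canonical-style rescaling: draw the target kinetic energy from
    the canonical (gamma) distribution instead of pinning it."""
    ke_target = rng.gammavariate(n_dof / 2.0, kT)
    return (ke_target / ke_inst) ** 0.5
```

Multiplying all velocities by the returned factor gives a post-rescaling kinetic energy that is a delta function in the first case and canonically distributed (mean $\frac{N_{\text{DOF}}}{2}k_{\text{B}}T$) in the second.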
We have identified several simulation parameters which affect the prominence of the flying ice cube effect under simple velocity rescaling and the Berendsen thermostat. These include the timestep, the thermostat's coupling strength, the frequency of collisions within the simulation (e.g., with a wall), and the system size. However, most of these parameters cannot be adjusted in a manner that eliminates the flying ice cube effect without making simulations prohibitively expensive for relevant systems of contemporary interest. Another reason not to attempt to tune these simulation parameters to allow the use of incorrect thermostatting algorithms is the existence of additional resonance artifacts that occur when the thermostat coupling strengths are set to particular values that are difficult to predict \latin{a priori}. Finally, we have demonstrated several severe simulation artifacts that the flying ice cube effect can bring about in a system's structural and dynamic properties. These include incorrect \glspl{rdf}, phase properties, and diffusion coefficients. We have highlighted one case in which the flying ice cube effect has been wholly responsible for the main finding of a highly-cited study. Many more such cases are likely present in the literature. We strongly advocate for discontinuing use of the simple velocity rescaling and Berendsen thermostat algorithms in all \gls{md} simulations for both equilibration and production cycles. The results of past studies that have used these two algorithms should be treated with caution unless they are shown to be replicable with a more reliable thermostat. In situations where velocity rescaling methods are desirable, such as for fast equilibration of a system,\cite{hu041} the \gls{csvr} thermostat should be used instead. \begin{acknowledgement} This research was supported as part of the Center for Gas Separations Relevant to Clean Energy Technologies, an Energy Frontier Research Center funded by the U.S. 
Department of Energy, Office of Science, Basic Energy Sciences under Award DE-SC0001015. M.M.\ was supported by the Deutsche Forschungsgemeinschaft (DFG, priority program SPP 1570). This research used resources of the National Energy Research Scientific Computing Center, a DOE Office of Science User Facility supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. E.B.\ thanks the responders on the LAMMPS mailing list for useful discussion and for giving advice regarding the LAMMPS source code (Axel Kohlmeyer, Steven J.\ Plimpton, and Aidan P.\ Thompson were particularly helpful) and Sai Sanigepalli for helping to implement the Tinker simulations. Special thanks go to Rochus Schmid for insightful discussion on the roles of thermostatting and for providing assistance in implementing the Tinker simulations. \end{acknowledgement}
\section*{Appendix} This is the full version of \url{http://dx.doi.org/10.4230/LIPIcs.ICALP.2018.281}. \section{Details of Section \ref{sec:unifproblems}} \subsection{Uniformizations of recognizable relations} Here we present the result stating that it is decidable whether a given recognizable relation has a uniformization by a subsequential transducer for any given synchronization parameter. \begin{theorem} Given a regular source language with finite $\mathit{shift}$ and a regular target language, the resynchronized uniformization problem is decidable. \end{theorem} Let $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ denote a regular source language with finite $\mathit{shift}$ and $T \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ a regular target language. Note that the usual presentation of a recognizable relation as $\bigcup_{i=1}^n U_i \times V_i$, where each $U_i$ and $V_i$ are regular languages over $\Sigma_\mathbbmtt{i}$ and $\Sigma_\mathbbmtt{o}$, respectively, is clearly representable as a regular language over $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$ with finite $\mathit{shift}$, namely as $\bigcup_{i=1}^n U_i \cdot V_i$. In \cite{conf/stacs/FigueiraL14}, it is shown that $1^*2^*$ is an effective canonical representative of the class of regular languages with finite $\mathit{shift}$. \begin{proof} Let $S$ and $T$ be as above. We show the theorem in two steps. First, we effectively compute the regular language $T' = \{ w \mid w \in T \text{ and } \llbracket w \rrbracket \in \llbracket S \rrbracket\}$, that is, the language that contains every $T$-controlled word that describes a pair from $\llbracket S \rrbracket$. Secondly, we show that $S$ has a $T$-controlled uniformization by an sDFA if, and only if, $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T' \rrbracket)$ and $T'$ has a subset uniformization by an sDFA, which is decidable by Theorem~\ref{thm:sunif}. 
For the first part, let $\mathcal A$ be a DFA that recognizes the $1^*2^*$-controlled canonical representation of $S$. Consider an NFA that, on reading a word $w \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$, works as follows. First, it guesses a state $q \in Q_\mathcal A$, then it simulates $\mathcal A$ on $\pi_\mathbbmtt{i}(w)$ from $q_0$ and $\mathcal A$ on $\pi_\mathbbmtt{o}(w)$ from $q$. It accepts if $\delta_\mathcal A^*(q_0,\pi_\mathbbmtt{i}(w)) = q$ and $\delta_\mathcal A^*(q,\pi_\mathbbmtt{o}(w)) \in F_\mathcal A$. The intersection of this language with $T$ is our desired language $T'$. For the second part, assume $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T' \rrbracket)$ and $T'$ has a subset uniformization by an sDFA. Since $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T' \rrbracket)$, every subset uniformization of $T'$ is also a $T$-controlled uniformization of $S$. For the other direction, assume $S$ has a $T$-controlled uniformization by an sDFA, say $U$. Obviously $\llbracket U \rrbracket \subseteq_\mathit{u} \llbracket S \rrbracket$ and $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket U \rrbracket)$. First, we show $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T' \rrbracket)$. Towards a contradiction, assume there is some $u \in \mathrm{dom}(\llbracket S \rrbracket)\setminus \mathrm{dom}(\llbracket T' \rrbracket)$. There exists a $T$-controlled $w \in U$ such that $\pi_\mathbbmtt{i}(w) = u$. By construction, $w \in T'$, thus $u \in \mathrm{dom}(\llbracket T' \rrbracket)$, a contradiction. Thus, $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T' \rrbracket) = \mathrm{dom}(\llbracket U \rrbracket)$. Secondly, since $U \subseteq T$ and $\mathrm{dom}(\llbracket U \rrbracket) = \mathrm{dom}(\llbracket T' \rrbracket)$, it is clear that $U$ is a subset uniformization of $T' \subseteq T$. 
\end{proof} \subsection{Parikh-injective synchronization languages} { \renewcommand{\thetheorem}{\ref{thm:parikh}} \thmparikh* \addtocounter{theorem}{-1} } \begin{proof}[Proof of Proposition~\ref{thm:parikh}] We show that every $T$-controlled uniformization of $S$ is in fact a subset uniformization of $S$. Towards a contradiction, assume that $U$ is a $T$-controlled uniformization, but $U \not\subseteq S$. Since $U$ is $T$-controlled, $U$ is $L$-controlled. There is $w \in U \setminus S$ with $\llbracket w \rrbracket \in \llbracket S \rrbracket$ and $w' \in S \setminus U$ with $\llbracket w \rrbracket = \llbracket w' \rrbracket$. Let $w = u \otimes v$ and $w' = u' \otimes v'$. Since $\llbracket w \rrbracket = \llbracket w' \rrbracket$ and both $v$ and $v'$ are $L$-controlled, it follows that $\Pi_L(v) = \Pi_L(v')$. If $v \neq v'$, we obtain a contradiction because $L$ is Parikh-injective. Thus, $v = v'$ and $u \neq u'$, because $w \neq w'$. This contradicts $\llbracket w \rrbracket = \llbracket w' \rrbracket$. Hence, $U \subseteq S$, i.e., $U$ is a subset uniformization. \end{proof} \section{Details of Section \ref{sec:regular}} \subsection{State transformation trees.} Analogously, we define output state transformation trees, where the roles of input and output are reversed compared to input state transformation trees. \begin{definition}[Output state transformation tree]\label{def:outputstt} Given $i \geq 0$, $p \in Q_\mathcal B$, $q \in Q_\mathcal A$ and an output word $y \in \Sigma_\mathbbmtt{o}^*$, the \emph{state transformation tree} $\mathrm{STT}^i(y,p,q)$ is a tree over $Q_\mathcal B \times Q_\mathcal A$ defined inductively. \begin{itemize}[topsep=0pt] \item For $i = 0$, the tree $\mathrm{STT}^0(y,p,q)$ is built up as follows. 
\noindent Let $\mathrm{Reach}_0 \subseteq Q_\mathcal B \times Q_\mathcal A$ be the smallest set such that $(p',q') \in \mathrm{Reach}_0$ if there is some $x \in \Sigma_\mathbbmtt{i}^*$ with $|x| \leq |y|$ such that $\delta_\mathcal A^*(q,(x,y)) = q'$ and $\delta_\mathcal B^*(p,x) = p'$. Then $\mathrm{STT}^0(y,p,q) = (p,q)({r_1}\dots{r_n})$ for $\mathrm{Reach}_0 = \{r_1,\dots,r_n\}$. \item For $i > 0$, the tree $\mathrm{STT}^i(y,p,q)$ is built up as follows. \noindent Let $\mathrm{Reach}_1 \subseteq \Sigma_\mathbbmtt{o}^* \times Q_\mathcal B \times Q_\mathcal A$ be the smallest set such that $(y'',p',q') \in \mathrm{Reach}_1$ if \begin{itemize} \item $y = y'y''$ with $y''\in \Sigma_\mathbbmtt{o}^+$ for a $y' \in \Sigma_\mathbbmtt{o}^+$ such that there is an $x \in \Sigma_\mathbbmtt{i}^+$ with $|x| = |y'|$, and \item $\delta_\mathcal A^*(q,(x,y')) = q'$ and $\delta_\mathcal B^*(p,x) = p'$. \end{itemize} For $(y'',p',q') \in \mathrm{Reach}_1$, let $\mathrm{Reach}_{(y'',p',q')} \subseteq \Sigma_\mathbbmtt{o}^* \times Q_\mathcal B \times Q_\mathcal A$ be the smallest set such that $(y'',p'',q') \in \mathrm{Reach}_{(y'',p',q')}$ if $\delta_\mathcal B^*(p',w) = p''$ for some $w \in \Sigma_\mathbbmtt{o}^+$. Furthermore, let the tree $t_{(y'',p',q')}^{i-1}$ be defined as $(p',q')(\mathrm{STT}^{i-1}{r_1}\dots\mathrm{STT}^{i-1}{r_n})$ for $\mathrm{Reach}_{(y'',p',q')} = \{r_1,\dots,r_n\}$. Then the tree $\mathrm{STT}^i(y,p,q)$ is defined as \begingroup \setlength{\abovedisplayskip}{.5\columnsep } \setlength{\belowdisplayskip}{.5\columnsep } \begin{equation*} \mathrm{STT}^0(y,p,q) \circ (p,q)(t_{s_1}^{i-1}\dots t_{s_n}^{i-1}) \end{equation*} \endgroup for $\mathrm{Reach}_1 = \{s_1,\dots,s_n\}$. \end{itemize} \end{definition} Now that we have defined output state transformation trees, we need to introduce one more concept before we can define output profiles. 
Ultimately, given a uniformizer, our goal is to replace large segments that cause lag with (short) segments that have the same profile. Towards defining profiles for output words, it turns out that we need to store additional information compared to input profiles. Intuitively, a difference arises because waiting a long time before output is produced (i.e., causing large input lag) means that lots of information about the input is known before output is produced; whereas producing large output segments (i.e., causing large output lag) means that output has been produced without prior knowledge of the input. Therefore, we introduce the concept of annotated output state transformation trees, which model the possible interactions between input segments and the given output segment in more detail compared to output state transformation trees. More specifically, for an input segment $x$, we collect vertices that can be reached by prefixes of $x$. A formal definition is given below, and an intuitive example is given in Ex.~\ref{ex:outputSTT}. \begin{definition}[Annotated output state transformation tree]\label{def:anntree} Let $i \geq 0$, $p \in Q_\mathcal B$, $q \in Q_\mathcal A$, $y \in \Sigma_\mathbbmtt{o}^*$, and let $t = (V_t,E_t,v_t,\val{t})$ denote the reduced output state transformation tree $\mathit{red}\bigl(\mathrm{STT}^i(y,p,q)\bigr)$. For $v \in V_t$, the \emph{annotated output state transformation tree} $\mathrm{annSTT}^i(y,p,q,v)$ is a tree over $(Q_\mathcal B \times Q_\mathcal A \times {V_t}) \cup (Q_\mathcal B \times Q_\mathcal A \times V_t \times 2^{V_t})$ defined inductively. \begin{itemize}[topsep=0pt] \item For $i = 0$, the tree $\mathrm{annSTT}^0(y,p,q,v)$ is built up as follows. 
\noindent Let $\mathrm{Reach}_0 \subseteq Q_\mathcal B \times Q_\mathcal A \times V_t \times 2^{V_t}$ be the smallest set such that $(p',q',v',S) \in \mathrm{Reach}_0$ if there is some $x \in \Sigma_\mathbbmtt{i}^*$ with $|x| \leq |y|$ such that \begin{itemize} \item $\delta_\mathcal A^*(q,(x,y)) = q'$ and $\delta_\mathcal B^*(p,x) = p'$, and \item $x$ leads from $v$ to $v'$ w.r.t.\ $y$ and $0$, and \item $v'' \in S$ if there is $x'\sqsubseteq x$ such that $x'$ leads from $v$ to $v''$ w.r.t.\ $y$ and $0$. \end{itemize} Then $\mathrm{annSTT}^0(y,p,q,v) = (p,q,v)({r_1}\dots{r_n})$ for $\mathrm{Reach}_0 = \{r_1,\dots,r_n\}$. \item For $i > 0$, the tree $\mathrm{annSTT}^i(y,p,q,v)$ is built up as follows. \noindent Let $\mathrm{Reach}_1 \subseteq \Sigma_\mathbbmtt{o}^* \times Q_\mathcal B \times Q_\mathcal A \times V_t \times 2^{V_t}$ be the smallest set such that $(y'',p',q',v',S) \in \mathrm{Reach}_1$ if there is some $x \in \Sigma_\mathbbmtt{i}^+$ with $|x| < |y|$ such that \begin{itemize} \item $y = y'y''$ with $y''\in \Sigma_\mathbbmtt{o}^+$ for $y' \in \Sigma_\mathbbmtt{o}^+$ with $|x| = |y'|$, and \item $\delta_\mathcal A^*(q,(x,y')) = q'$ and $\delta_\mathcal B^*(p,x) = p'$, and \item $x$ leads from $v$ to $v'$ w.r.t.\ $y$ and $i$, and \item $v'' \in S$ if there is $x'\sqsubseteq x$ such that $x'$ leads from $v$ to $v''$ w.r.t.\ $y$ and $i$. \end{itemize} For $(y'',p',q',v',S) \in \mathrm{Reach}_1$, let $\mathrm{Reach}_{(y'',p',q',v',S)} \subseteq \Sigma_\mathbbmtt{o}^* \times Q_\mathcal B \times Q_\mathcal A \times V_t$ be the smallest set such that $(y'',p'',q',v'') \in \mathrm{Reach}_{(y'',p',q',v',S)}$ if $\delta_\mathcal B^*(p',w) = p''$ and $w$ leads from $v'$ to $v''$ for some $w \in \Sigma_\mathbbmtt{o}^+$. 
\noindent Furthermore, let $t_{(y'',p',q',v',S)}^{i-1}$ be the tree \begingroup \setlength{\abovedisplayskip}{.5\columnsep } \setlength{\belowdisplayskip}{.5\columnsep } \begin{equation*} (p',q',v',S)(\mathrm{annSTT}^{i-1}{r_1}\dots\mathrm{annSTT}^{i-1}{r_n}) \end{equation*} \endgroup for $\mathrm{Reach}_{(y'',p',q',v',S)} = \{r_1,\dots,r_n\}$. Finally, the tree $\mathrm{annSTT}^i(y,p,q,v)$ is defined as \begingroup \setlength{\abovedisplayskip}{.5\columnsep } \setlength{\belowdisplayskip}{.5\columnsep } \begin{equation*} \mathrm{annSTT}^0(y,p,q,v) \circ (p,q,v)(t_{s_1}^{i-1}\dots t_{s_n}^{i-1}) \end{equation*} \endgroup for $\mathrm{Reach}_1 = \{s_1,\dots,s_n\}$. \end{itemize} \end{definition} Next, we give an example to illustrate the difference between output STTs and annotated output STTs. \begin{example}\label{ex:outputSTT} Given an alphabet $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$ with $\Sigma_\mathbbmtt{i} = \{a,b\}$ and $\Sigma_\mathbbmtt{o} = \{c\}$, an automatic relation $S_1$ over $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$ is given by a DFA $\mathcal A_1$ depicted in Fig.~\ref{fig:dfaA}, and an automatic relation $T_1$ over $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$ is given by a DFA $\mathcal B_1$ depicted in Fig.~\ref{fig:dfaB}. Note that $S_1$ is $(12)^*(1^*+2^*)$-controlled, i.e., canonical; hence, the notion of state transformation tree is meaningful w.r.t.\ $\mathcal A_1$ and $\mathcal B_1$. Consider the output word $cc \in \Sigma_\mathbbmtt{o}^*$; the reduced variant of the output state transformation tree $\mathrm{STT}^0(cc,p_0,q_0)$ is depicted in Fig.~\ref{fig:stt}. Additionally, its edges are labeled with the respective associated words. Also, its vertices are named, so that they can be referred to in the annotated output state transformation tree $\mathrm{annSTT}^0(cc,p_0,q_0)$ depicted in Fig.~\ref{fig:annstt}. 
Compared to $\mathit{red}(\mathrm{STT}^0(cc,p_0,q_0))$ we can see that $v_3$ was duplicated with annotation $(v_3,\{v_1,v_3\})$ and $(v_3,\{v_2,v_3\})$, respectively. This has happened because both $ab$ and $ba$ lead from $v_0$ to $v_3$, but $a$ (prefix of $ab$) leads from $v_0$ to $v_1$ and $b$ (prefix of $ba$) leads from $v_0$ to $v_2$. \end{example} \input{fig-output-stt} We are ready to define profiles based on state transformation trees, but beforehand we introduce some terminology to speak more conveniently about state transformation trees. We now formally define the concept of associated words. Examples can be found in Fig.~\ref{subfig:tree}, Fig.~\ref{fig:stt}, and Fig.~\ref{fig:annstt}. \begin{definition}[Associated words]\label{def:associatedwords} Let $t = (V_t,E_t,v_t,\val{t})$ be an input STT. Given $v,v' \in V_t$ such that $(v,v') \in E_t$ and $v$ is on an even level, let $\val{t}(v) = (p,q)$ and $\val{t}(v') = (p',q')$. We say that \emph{$y \in \Sigma_\mathbbmtt{o}^*$ leads from $v$ to $v'$} w.r.t.\ $x$ and $i$ if $t|_v = \mathrm{STT}^i(x,p,q)$ and there are $x',x'' \in \Sigma_\mathbbmtt{i}^*$ such that $x=x'x''$, and $\delta_\mathcal A^*(q,(x',y)) = q'$, and $\delta_\mathcal B^*(p,y) = p'$, and $\{ t|_{v''} \mid v'' \in \mathit{children}_t(v')\} = \{ \mathrm{STT}^{i-1}(x'',p'',q') \mid \delta_\mathcal B^*(p',w) = p'' \text{ for some } w \in \Sigma_\mathbbmtt{i}^+\}$. Given $v',v'' \in V_t$ such that $(v',v'') \in E_t$ and $v'$ is on an odd level, let $\val{t}(v') = (p',q')$ and $\val{t}(v'') = (p'',q')$. We say that \emph{$w \in \Sigma_\mathbbmtt{i}^+$ leads from $v'$ to $v''$} if $\delta_\mathcal B^*(p',w) = p''$. Analogously, we define these properties for output and annotated output STTs. \end{definition} For convenience, we introduce the following definition. \begin{definition}[ann] Let $t_{\mathit{ann}}$ be an annotated output state transformation tree based on the reduced state transformation tree $t$. 
We define a function $\mathit{ann}\colon V_{t_{\mathit{ann}}} \to V_t$ with $\mathit{ann}(v) = u$ if the third component of $v$'s label is $u$. \end{definition} We state some simple observations about annotated output state transformation trees used in the upcoming proofs. \begin{lemma}\label{lemma:basic} Let $t_{\mathit{ann}}$ be an annotated output state transformation tree based on the reduced state transformation tree $t$. \begin{enumerate} \item If $x \in \Sigma_\mathbbmtt{i}^*$ leads from $v$ to $v'$ w.r.t.\ $z \in \Sigma_\mathbbmtt{o}^*$ and $i \geq 0$ in $t_{\mathit{ann}}$, then $x$ leads from $\mathit{ann}(v)$ to $\mathit{ann}(v')$ w.r.t.\ $z$ and $i$ in $t$. \item If $y \in \Sigma_\mathbbmtt{o}^*$ leads from $v$ to $v'$ in $t_{\mathit{ann}}$, then $y$ leads from $\mathit{ann}(v)$ to $\mathit{ann}(v')$ in $t$. \item Given $v \in V_{t_{\mathit{ann}}}$, if the first two components of its label are $(p,q)$, then $\mathit{ann}(v) \in V_t$ is labeled $(p,q)$. \end{enumerate} \end{lemma} \subsection{Profiles.} We previously defined input profiles; now we define output profiles. \begin{definition}[Output profile] Given $y \in \Sigma_\mathbbmtt{o}^*$, we define its \emph{profile} $P_y$ as $(\tau_y,\mathrm{annSTT}_y^{\lceil n/2 \rceil})$, where $\mathrm{annSTT}_y^{\lceil n/2 \rceil} =$ \begingroup \setlength{\abovedisplayskip}{.5\columnsep } \setlength{\belowdisplayskip}{.5\columnsep } \begin{equation*} \bigcup_{(p,q) \in Q_\mathcal B \times Q_\mathcal A}\!\!\! \{\mathit{red}\bigl(\mathrm{annSTT}^{\lceil n/2 \rceil}(y,p,q,v)\bigr) \mid v = \mathit{root}\bigl({\mathit{red}\bigl(\mathrm{STT}^{\lceil n/2 \rceil}(y,p,q)\bigr)}\bigr)\}. \end{equation*} \endgroup \end{definition} We prove some properties of profiles. 
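Before turning to the proofs, note that the first component of a profile, the state transformation function $\tau$, can be viewed concretely: representing $\tau$ as a finite map on $Q_\mathcal B$, concatenation of words corresponds to composition of maps, which is associative and has the identity map as neutral element. A toy sketch (names and the dictionary representation are ours):

```python
def compose(tau1, tau2):
    """Profile composition on the first component:
    tau_{x1 x2}(p) = tau_{x2}(tau_{x1}(p)), i.e., apply tau1, then tau2.
    Each tau is a dict mapping every state to the state reached after
    consuming the corresponding word."""
    return {p: tau2[q] for p, q in tau1.items()}
```

Since function composition is associative and the identity map is neutral, the state-transformation components alone already form a monoid; the lemma below extends this to full profiles.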
{ \renewcommand{\thetheorem}{\ref{lemma:monoid}} \lemmamonoid* \addtocounter{theorem}{-1} } \begin{proof}[Proof of Lemma~\ref{lemma:monoid}] Given $x_1, x_2 \in \Sigma_\mathbbmtt{i}^+$, we show that the profile $P_{x_1x_2}$ of $x_1x_2 \in \Sigma_\mathbbmtt{i}^*$ can be computed from $P_{x_1}$ and $P_{x_2}$. The state transformation function $\tau_{x_1x_2}$ is defined as the composition of the functions $\tau_{x_1}$ and $\tau_{x_2}$, i.e., $\tau_{x_1x_2}(p) = \tau_{x_2}(\tau_{x_1}(p))$. Recall Fig.~\ref{fig:STT} for an easier understanding of the following. Let $m = \lceil n/2 \rceil$. In order to compute the set $\mathrm{STT}_{x_1x_2}^{m}$ from $\mathrm{STT}_{x_1}^{m}$ and $\mathrm{STT}_{x_2}^{m}$, we first need to make an observation. For any $x \in \Sigma_\mathbbmtt{i}^*$, $p \in Q_\mathcal B$, $q \in Q_\mathcal A$, and $i \leq m$, the tree $\mathrm{STT}^i(x,p,q)$ can be obtained from the tree $\mathrm{STT}^m(x,p,q)$ by removing all non-trivial subtrees rooted at a vertex with height $2i+1$. Here, non-trivial is used to describe subtrees with more than one vertex, meaning leaves at height $2i+1$ are not removed. The same observation holds for the reduced variant; in the following, by tree we always mean the reduced variant. For any $p \in Q_\mathcal B$ and $q \in Q_\mathcal A$, the tree $\mathrm{STT}^m(x_1x_2,p,q)$ can be obtained from the tree $\mathrm{STT}^m(x_1,p,q) = (V,E,v_0,\val{})$ by performing the following action for each pair $(u,v) \in E$ such that $v$ is a leaf. Let $v$ be at height $2i+1$ for some $i \geq 0$ (note that leaves only occur at odd heights), let $\val{}(u) = (p_i',q_i)$ and $\val{}(v) = (p_{i+1},q_{i+1})$, and let $j$ be such that $m = (i+1)+j$. If $i$ is $m$, then we remove $v$, because this indicates that already $m+1$ output segments were used to consume $x_1$; however, at most $m+1$ output segments may be used to consume $x_1x_2$. Otherwise, we add new children to $v$.
For all $p_{i+1}' \in Q_\mathcal B$ such that there is $w \in \Sigma_\mathbbmtt{i}^+$ with $\delta_\mathcal B^*(p_{i+1},w) = p_{i+1}'$ we add $\mathrm{STT}^j(x_2,p_{i+1}',q_{i+1})$ as a subtree to $v$. In any case, we add new children to $u$. Consider the tree $\mathrm{STT}^{j+1}(x_2,p_{i+1},q_{i+1})$; let it be of the form $(p_{i+1},q_{i+1})(t_1\dots t_k)$, then we add $t_1, \dots, t_k$ as subtrees to $u$. The meaning of this operation is to extend the output segment that leads from $u$ to $v$ beyond consuming (the remainder of) $x_1$ and also consuming parts of $x_2$. Hence, we are able to define a natural concatenation operation between input profiles. Given $x_1, x_2 \in \Sigma_\mathbbmtt{i}^*$, let $P_{x_1}P_{x_2} = P_{x_1x_2}$. Thus, the set of input profiles is equipped with a concatenation operation and a neutral element (the profile of the empty word), i.e., the set of input profiles is a monoid with concatenation. Given $y_1, y_2 \in \Sigma_\mathbbmtt{o}^+$, the profile $P_{y_1y_2}$ of $y_1y_2$ can be computed from $P_{y_1}$ and $P_{y_2}$ in the same way as described above for input profiles. This allows us to define a concatenation operation for output profiles as for input profiles; consequently, the set of output profiles is a monoid with concatenation. \end{proof} { \renewcommand{\thetheorem}{\ref{lemma:ramsey}} \lemmaramsey* \addtocounter{theorem}{-1} } \begin{proof}[Proof of Lemma~\ref{lemma:ramsey}] Ramsey's Theorem yields that for any number of colors $c$ and any number $r$, there exists a number $K \in \mathbbm{N}$ such that if the edges of a complete graph with at least $K$ vertices are colored with $c$ colors, then the graph must contain a complete subgraph with $r$ vertices such that all edges have the same color, see e.g.~\cite{diestel2000graduate}. Let $x \in \Sigma_\mathbbmtt{i}^*$ with the factorization $x = x_1x_2\dots x_n$, where $x_1,\dots,x_n \in \Sigma_\mathbbmtt{i}$.
Consider the complete graph $G = (V,E,\mathit{col})$ with edge-coloring $\mathit{col}: E \rightarrow \mathit{Cols}$, where $V := \{1,\dots,n\}$, $E := V \times V$, $\mathit{Cols}$ is the finite set of profiles and $\mathit{col}(e) := P_{x[i,k]}$ for each $e = (i,k) \in E$. If there exist $i < j < k \leq n$ such that the edges $(i,j)$, $(j,k)$ and $(i,k)$ have the same color, i.e., the respective profiles are the same, then $x$ has a factorization that contains a non-empty idempotent factor. As a consequence of Ramsey's Theorem, if $|x|$ is equal to or larger than the Ramsey number $R(3,|\mathit{Cols}|)$, then $x$ contains a non-empty idempotent factor. \end{proof} \subsection{Proof of Theorem~\ref{thm:regular}.} Recall, the proof of Theorem~\ref{thm:regular} is split into two parts. \subsection*{Part I} The goal is to show that if $S$ has a $T$-controlled uniformizer, then $S$ has a $T_k$-controlled uniformizer for a computable $k$; this is the statement of Lemma~\ref{lemma:shortregular}. We introduce the following terminology used in the proofs of Lemmata~\ref{lemma:longeroutput}~and~\ref{lemma:shortregular}. \begin{definition} We say that \emph{$y$ fully traverses $x$ in $\mathcal A_q$} if $|y| \leq |x|$ and $\delta_\mathcal A^*(q,(x,y))$ is not a sink state; respectively, we say that \emph{$x$ fully traverses $y$ in $\mathcal A_q$} if $|x| \leq |y|$ and $\delta_\mathcal A^*(q,(x,y))$ is not a sink state. Situations where $x$ and $y$ are of different length and $\delta_\mathcal A^*(q,(x,y))$ does not lead to a sink state can occur when $x \in \mathrm{dom}(L(\mathcal A_q))$. \end{definition} Before we can prove Lemma~\ref{lemma:shortregular}, we need to prove two auxiliary lemmata, namely Lemmata~\ref{lemma:longeroutput}~and~\ref{lemma:pumping}.
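The combinatorial core of the proof of Lemma~\ref{lemma:ramsey} above, namely coloring each pair $(i,k)$ with the profile of the factor $x[i,k]$ and searching for a monochromatic triangle, can be sketched by a brute-force search. The following is a hypothetical illustration in which colors are arbitrary values standing in for profiles.

```python
from itertools import combinations

def monochromatic_triangle(n, col):
    """Search for i < j < k with col(i,j) == col(j,k) == col(i,k).

    col(i, k) plays the role of the profile P_{x[i,k]}; a monochromatic
    triangle witnesses a factorization with a non-empty idempotent factor.
    """
    for i, j, k in combinations(range(1, n + 1), 3):
        if col(i, j) == col(j, k) == col(i, k):
            return (i, j, k)
    return None

# Toy two-coloring by parity of the edge length; by Ramsey's theorem a
# monochromatic triangle must exist once n is large enough.
print(monochromatic_triangle(6, lambda i, k: (k - i) % 2))  # first witness: (1, 3, 5)
```

Here equal colors on $(i,j)$, $(j,k)$, $(i,k)$ mirror equal profiles of $x[i,j]$, $x[j,k]$, and $x[i,k]$, which is exactly what yields an idempotent factor in the lemma.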
First, we prove Lemma~\ref{lemma:longeroutput}, stating that there exists a bound $b$ such that it suffices to consider uniformizers where each output block increases the amount that the output sequence is ahead by at most $b$. \begin{restatable}{lemma}{lemmalongeroutput}\label{lemma:longeroutput} There is a computable $b \geq 0$ such that if $S$ has a $T$-controlled uniformization by an sDFA, then $S$ has a $T$-controlled uniformization $U$ by an sDFA which satisfies the following property for each $w \in U$. For each $i < j$ such that $w[i]$ and $w[j]$ are consecutive shifts and $w[i+1] \in \Sigma_\mathbbmtt{o}$ it holds that $|\pi_\mathbbmtt{o}(w[1,j])| - |\pi_\mathbbmtt{i}(w[1,j])| \leq \mathit{max}\{|\pi_\mathbbmtt{o}(w[1,i])| - |\pi_\mathbbmtt{i}(w[1,i])|,0\} + b$. \end{restatable} \begin{proof}[Proof of Lemma~\ref{lemma:longeroutput}] Let $\beta$ be the smallest bound on the length of representatives of output profiles; then we choose $b$ to be $\mathit{max}\{\beta,\gamma+1\}$. Assume $U$ is a $T$-controlled uniformization given by a sequential DFA $\mathcal U$ that does not satisfy the property stated in the lemma. Recall, $T \subseteq T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^*)^n$. If the bound on the length of output blocks stated in the lemma is violated, then we are in a situation where the lag has exceeded $\gamma$; thus, it can be violated at most $\lceil n/2 \rceil$ times, because after the lag has exceeded $\gamma$ there are at most $n$ shifts, i.e., at most $\lceil n/2 \rceil$ output blocks. Let $m = \lceil n/2 \rceil$. We construct a $T$-controlled uniformization $U'$ recognized by a sequential DFA $\mathcal U'$ based on $\mathcal U$ that, for every input word, repairs the first violation of the output block length bound. Applying the construction presented below at most $m$ times yields a uniformization according to the statement of the lemma.
The computation of $\mathcal U'$ differs from that of $\mathcal U$ from the point at which the following situation occurs: Consider an arbitrary $w \in \mathit{Pref}(U)$ of length $\ell_2$ such that there is a position $\ell_1 < \ell_2$ such that $w[\ell_1]$ and $w[\ell_2]$ are consecutive shifts, $w[\ell_1+1] \in \Sigma_\mathbbmtt{o}$ and it holds that $|\pi_\mathbbmtt{o}(w[1,\ell_2])| - |\pi_\mathbbmtt{i}(w[1,\ell_2])| > \mathit{max}\{|\pi_\mathbbmtt{o}(w[1,\ell_1])| - |\pi_\mathbbmtt{i}(w[1,\ell_1])|,0\} + b$, i.e., the output block $w[\ell_1+1,\ell_2]$ has increased the lag caused by output symbols being ahead by more than $b$. Let $\ell$ be the smallest position $\ell_1 < \ell \leq \ell_2$ such that $|\pi_\mathbbmtt{o}(w[1,\ell])| - |\pi_\mathbbmtt{i}(w[1,\ell])| > 0$, that is, $\ell$ is the position such that $w[\ell,\ell_2]$ is the greatest part of the block $w[\ell_1+1,\ell_2]$ that is ahead of the input. Note that the violation is caused because $|w[\ell,\ell_2]| > b$. We prove that we can replace the output $w[\ell,\ell_2]$ by some output of length at most $b$ chosen as follows. Let $w[\ell,\ell_2]$ be $y \in \Sigma_\mathbbmtt{o}^+$, consider the profile $P_y$ and let $z$ be a representative of $P_y$. We show that we can replace $y$ by $z$. Let $w$ have the factorization $xy$. Since $P_y = P_z$, we have $\tau_y = \tau_z$, and $\mathrm{annSTT}^m_y = \mathrm{annSTT}^m_z$. Let $\delta_\mathcal A^*(q_\mathcal A^0,(\pi_\mathbbmtt{i}(x),\pi_\mathbbmtt{o}(x))) = q$, $\delta_\mathcal B^*(q_\mathcal B^0,x) = p$, and let \begin{itemize} \item $t_y$ denote $\mathit{red}(\mathrm{STT}^{\lceil n/2 \rceil}(y,p,q))$, and \item $t_z$ denote $\mathit{red}(\mathrm{STT}^{\lceil n/2 \rceil}(z,p,q))$, and \item $t^{\mathit{ann}}_y$ denote $\mathrm{annSTT}^{\lceil n/2 \rceil}(y,p,q,\ro{t_y})$, and \item $t^{\mathit{ann}}_z$ denote $\mathrm{annSTT}^{\lceil n/2 \rceil}(z,p,q,\ro{t_z})$. \end{itemize} Clearly, $t_y = t_z$ and $t^{\mathit{ann}}_y = t^{\mathit{ann}}_z$.
Assume we have already defined $\mathcal U'$ up to the point where the violation as stated above occurs, and until then, $\mathcal U$ and $\mathcal U'$ have worked exactly the same way. We show that $\mathcal U'$ can continue the computation successfully after replacing $y$ with $z$ by showing that there exists a sequentially computable run satisfying the following properties. Let $\delta_\mathcal U^*(q_\mathcal U^0,xy) = s$, and $\delta_{\mathcal U'}^*(q_{\mathcal U'}^0,xz) = r$. For the next input symbols until $z$ is fully traversed in $\mathcal A_q$, we inductively (on the number of input blocks) define the computation of $\mathcal U'_r$ satisfying the following properties: For $w_1,\dots, w_i \in \Sigma_\mathbbmtt{i}^+$ with $|w_1\dots w_i| < |z|$ and $o_1,\dots, o_i \in \Sigma_\mathbbmtt{o}^+$ such that $w_1o_1\dots w_io_i \in \mathit{Pref}(U'_{r})$, there exists a path $v_0'v_1v_1'\dots v_iv_i'$ in $t^{\mathit{ann}}_z$ with $v_0' = \ro{t^{\mathit{ann}}_z}$ such that \begin{enumerate} \item $w_j$ leads from $v_{j-1}'$ to $v_j$ in $t^{\mathit{ann}}_z$ for all $1 \leq j \leq i$, and \item $o_j$ leads from $v_j$ to $v_{j}'$ in $t^{\mathit{ann}}_z$ for all $1 \leq j \leq i$, and \item there exist $\bar w_1,\dots, \bar w_i \in \Sigma_\mathbbmtt{i}^+$ and a path $u_0'u_1u_1'\dots u_iu_i'$ in $t_y$ with $u_0' = \ro{t_y}$ such that \begin{enumerate} \item $\mathit{ann}(v_j) = u_j$ and $\mathit{ann}(v_j') = u_j'$ for all $0 \leq j \leq i$, and \item $\bar w_j$ leads from $u_{j-1}'$ to $u_j$ in $t_y$ for all $1 \leq j \leq i$, and \item $o_j$ leads from $u_j$ to $u_{j}'$ in $t_y$ for all $1 \leq j \leq i$, and \item $\bar w_1o_1\dots \bar w_io_i \in \mathit{Pref}(U_{s})$.
\end{enumerate} \end{enumerate} \noindent Additionally, for $w_{i+1} \in \Sigma_\mathbbmtt{i}^*$ with $|w_1\dots w_{i+1}| \leq |z|$ such that $w_1o_1\dots w_io_iw_{i+1} \in \mathit{Pref}(U'_{r})$ and $w_1\dots w_{i+1}$ fully traverses $z$ in $\mathcal A_q$, there exists a leaf node $v_{i+1}$ in $t^{\mathit{ann}}_z$ with $(v_{i}',v_{i+1}) \in E_{t^{\mathit{ann}}_z}$ such that \begin{enumerate} \setcounter{enumi}{3} \item $w_{i+1}$ leads from $v_{i}'$ to $v_{i+1}$ in $t^{\mathit{ann}}_z$, and \item there exist $\bar w_{i+1} \in \Sigma_\mathbbmtt{i}^*$ and a leaf node $u_{i+1}$ in $t_y$ with $(u_{i}',u_{i+1}) \in E_{t_y}$ such that \begin{enumerate} \item $\mathit{ann}(v_{i+1}) = u_{i+1}$, and \item $\bar w_{i+1}$ leads from $u_{i}'$ to $u_{i+1}$ in $t_y$, and \item $\bar w_1o_1\dots \bar w_io_i\bar w_{i+1} \in \mathit{Pref}(U_{s})$. \end{enumerate} \end{enumerate} To be clear, the formulation $w_j$ leads from $v'_{j-1}$ to $v_j$ in $t_z^{\mathit{ann}}$ is used to mean that $w_j$ leads from $v'_{j-1}$ to $v_j$ w.r.t.\ $z''$ and $j'$, where $j' = \lceil n/2 \rceil - (j-1)$ and $z'' \in \Sigma_\mathbbmtt{o}^*$ such that $z$ has a factorization $z'z''$ with $|z'| = |w_1\dots w_{j-1}|$. Analogously, the formulation $\bar w_j$ leads from $u'_{j-1}$ to $u_j$ in $t_y$ is used to mean that $\bar w_j$ leads from $u'_{j-1}$ to $u_j$ w.r.t.\ $y''$ and $j'$, where $j' = \lceil n/2 \rceil - (j-1)$ and $y'' \in \Sigma_\mathbbmtt{o}^*$ such that $y$ has a factorization $y'y''$ with $|y'| = |\bar w_1\dots \bar w_{j-1}|$.
Assume we have already defined the computation for some $k \leq i$ satisfying conditions $1$.--$3$., i.e., for $w_1,\dots, w_k \in \Sigma_\mathbbmtt{i}^+$ with $|w_1\dots w_k| < |z|$ and $o_1,\dots, o_k \in \Sigma_\mathbbmtt{o}^+$ such that the computation yields $w_1o_1\dots w_ko_k \in \mathit{Pref}(U'_{r})$, there exists a path $v_0'\dots v_k'$ in $t^{\mathit{ann}}_z$ with $v_0' = \ro{t^{\mathit{ann}}_z}$ such that $w_1o_1\dots w_ko_k$ leads from $v_0'$ to $v_k'$ in $t^{\mathit{ann}}_z$, there are $\bar w_1, \dots, \bar w_k \in \Sigma_\mathbbmtt{i}^+$ and a path $u_0'\dots u_k'$ in $t_y$ with $u_0' = \ro{t_y}$ such that $\mathit{ann}(v_0')\dots \mathit{ann}(v_k') = u_0'\dots u_k'$, $\bar w_1o_1\dots \bar w_ko_k$ leads from $u_0'$ to $u_k'$ in $t_y$, and $\bar w_1o_1\dots \bar w_ko_k \in \mathit{Pref}(U_s)$. To determine which part of the next (up to) $|z|-|w_1\dots w_k|$ input symbols will be $w_{k+1}$, we do the following after each read input symbol: Assume that after the $m$th input symbol the sequence $a_1\dots a_{m}$ has been read and let $a_1\dots a_m$ lead from $v_k'$ to $v$ in $t^{\mathit{ann}}_z$. Let $\mathit{ann}(v) = u$; note that, by construction, as stated in Lemma \ref{lemma:basic}, $a_1\dots a_m$ leads from $u_k'$ to $u$ in $t_y$. We distinguish two cases. \begin{description} \item $v$ is not a leaf, i.e., $w_1\dots w_{k}a_1\dots a_m$ does not fully traverse $z$ in $\mathcal A_q$. If there exists a $w' \in \Sigma_\mathbbmtt{i}^+$ such that $w'$ leads from $u_k'$ to $u$ in $t_y$ and there is an $o \in \Sigma_\mathbbmtt{o}^+$ such that $\bar w_1o_1\dots \bar w_ko_kw'o \in \mathit{Pref}(U_{s})$, then let $w_{k+1} = a_1\dots a_m$, $\bar w_{k+1} = w'$, $o_{k+1} = o$, $v_{k+1} = v$, $u_{k+1} = u$ and $w_1o_1\dots w_ko_kw_{k+1}o_{k+1} \in \mathit{Pref}(U'_{r})$. That is, $\mathcal U'_r$ produces output $o_{k+1}$ after reading $w_1\dots w_{k+1}$.
Let $v_{k+1}'$ be the node such that $o_{k+1}$ leads from $v_{k+1}$ to this node in $t^{\mathit{ann}}_z$, and let $u_{k+1}' = \mathit{ann}(v_{k+1}')$. By Lemma \ref{lemma:basic}, $o_{k+1}$ also leads from $u_{k+1}$ to $u_{k+1}'$ in $t_y$. It is easy to see that conditions $1$.--$3$.\ are satisfied. Otherwise, if there exists no $w' \in \Sigma_\mathbbmtt{i}^+$ such that $w'$ leads from $u_k'$ to $u$ in $t_y$ and there is an $o \in \Sigma_\mathbbmtt{o}^+$ such that $\bar w_1o_1\dots \bar w_ko_kw'o \in \mathit{Pref}(U_{s})$, then we additionally consider the next input symbol. \item $v$ is a leaf, i.e., $w_1\dots w_{k}a_1\dots a_m$ fully traverses $z$ in $\mathcal A_q$. Then $k = i$, and let $w_{i+1} = a_1\dots a_m$, $v_{i+1} = v$ and $u_{i+1} = u$. Since $w_1o_1\dots w_io_iw_{i+1} \in \mathit{Pref}(U'_{r})$, we show that conditions $4$.--$5$.\ can be satisfied. Clearly, by construction of STTs, $u_{i+1} = u$ is also a leaf and condition $4$.\ is satisfied. Towards a contradiction, assume condition $5$.\ cannot be satisfied, meaning that for all $w' \in \Sigma_\mathbbmtt{i}^*$ such that $w'$ leads from $u_{i}'$ to $u_{i+1}$ it holds that $\bar w_1o_1\dots \bar w_io_iw' \notin \mathit{Pref}(U_{s})$. Since $\bar w_1o_1\dots \bar w_io_i \in \mathit{Pref}(U_{s})$, for each such $w'$ there exists a factorization $w' = x_1x_2$ such that $\bar w_1o_1\dots \bar w_io_ix_1 \in \mathit{Pref}(U_{s})$. Recall, $t^{\mathit{ann}}_z = t^{\mathit{ann}}_y$; thus there exists at least one $w'$ such that $w'$ leads from $v_i'$ to $v_{i+1}$ in $t^{\mathit{ann}}_y$ and a factorization of $w'$ into $x_1x_2$ such that $\bar w_1o_1\dots \bar w_io_ix_1 \in \mathit{Pref}(U_{s})$. This means there exists some node $\tilde v$ in $t^{\mathit{ann}}_y$ such that $x_1$ leads from $v_{i}'$ to $\tilde v$ in $t^{\mathit{ann}}_y$ and $x_1$ leads from $u_{i}'$ to $\mathit{ann}(\tilde v)$ in $t_y$.
Let $\mathit{ann}(\tilde v) = \tilde u$ and let $\val{t^{\mathit{ann}}_z}(v_{i+1}) = (p_{i+1},q_{i+1},u_{i+1},V_{i+1})$; then, by construction of annotated STTs, we obtain $\tilde u \in V_{i+1}$. Therefore, we know that since $w_{i+1}$ leads from $v_i'$ to $v_{i+1}$ in $t^{\mathit{ann}}_z$ there exists a factorization of $w_{i+1}$ into $\tilde x_1\tilde x_2$ such that $\tilde x_1$ leads from $u_i'$ to $\tilde u$ in $t_z$. Consequently, $\tilde x_1$ leads from $v_{i}'$ to $\hat v$ in $t^{\mathit{ann}}_z$ for some $\hat v$ in $t^{\mathit{ann}}_z$ such that $\mathit{ann}(\hat v) = \tilde u$. Let $\tilde x_1 = a_1\dots a_{m'}$ for some $m' < m$. Thus, after reading $a_1\dots a_{m'}$ output would be produced. Therefore, after reading $a_1\dots a_m$ the node $v_{i+1}$ would not be reached, because $\hat v$ and $v_{i+1}$ refer to the same number of produced output blocks, and thus $v_{i+1}$ is not reachable from $\hat v$ in $t^{\mathit{ann}}_z$. Contradiction. Hence, there exists some $w' \in \Sigma_\mathbbmtt{i}^*$ such that $w'$ leads from $u_{i}'$ to $u_{i+1}$ and $\bar w_1o_1\dots \bar w_io_iw' \in \mathit{Pref}(U_{s})$, meaning condition $5$.\ is satisfied. \end{description} We have proved the claim of the induction. It is left to show that $xy\bar w_1o_1\dots \bar w_io_i\bar w_{i+1}\tilde z \in \llbracket S \rrbracket$ if, and only if, $xzw_1o_1\dots w_io_iw_{i+1}\tilde z \in \llbracket S \rrbracket$ for all $\tilde z \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$, implying that $U'$ is a $T$-controlled uniformization. Recall, $\delta_\mathcal A^*(q_0^\mathcal A,(\pi_\mathbbmtt{i}(x),\pi_\mathbbmtt{o}(x))) = q$ and $\delta_\mathcal B^*(q_0^\mathcal B,xy) = p$; since $\tau_y = \tau_z$, also $\delta_\mathcal B^*(q_0^\mathcal B,xz) = p$.
Now, we show that $\delta_\mathcal A^*(q,(w_1\dots w_{i+1},z))$ $=$ $\delta_\mathcal A^*(q,(\bar w_1\dots \bar w_{i+1},y))$ and $\delta_\mathcal B^*(p,w_1o_1\dots w_io_iw_{i+1})$ $=$ $\delta_\mathcal B^*(p,\bar w_1o_1\dots \bar w_io_i\bar w_{i+1})$. This is easy to see, because $w_1o_1\dots w_io_iw_{i+1}$ leads from $\ro{t^{\mathit{ann}}_z}$ to a leaf $v_{i+1}$ in the annotated STT $t^{\mathit{ann}}_z$ and $\bar w_1o_1\dots \bar w_io_i\bar w_{i+1}$ leads from $\ro{t_y}$ to the leaf $\mathit{ann}(v_{i+1})$ in the STT $t_y$, and by Lemma \ref{lemma:basic} this implies that the induced state transformations are equal, i.e., $U'$ is a $T$-controlled uniformization. Furthermore, given a word from $U'$, the first output block (after the lag has exceeded $\gamma$ at some point) increases the output lag by at most $b$. Applying this construction a total of $\lceil n/2 \rceil$ times yields a uniformization according to the statement of the lemma. \end{proof} The proof of the above lemma yields that $b$ can be chosen as $\mathit{max}\{\beta,\gamma+1\}$, where $\beta$ is the smallest bound on the length of representatives of output profiles. The second auxiliary lemma states that it suffices to consider uniformizers that either produce output before the input sequence contains an idempotent factor, or, if they do not produce output until then, then neither do they when pumping the idempotent factor. \begin{lemma}\label{lemma:pumping} If $S$ has a $T$-controlled uniformization $U$ by an sDFA, then $S$ has a $T$-controlled uniformization $U'$ by an sDFA such that for each $u \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ and $x,x_1,x_2 \in \Sigma_\mathbbmtt{i}^*$ such that $|\pi_\mathbbmtt{i}(u)| = |\pi_\mathbbmtt{o}(u)|$, $|xx_1x_2| > \gamma$, $x_2$ is idempotent, and $P_{x_1} = P_{x_2}$ it holds that if $uxx_1x_2 \in \mathit{Pref}(U')$, then $uxx_1x_2^i \in \mathit{Pref}(U')$ for each $i \in \mathbbm N$.
\end{lemma} \begin{proof} Let $U$ be a $T$-controlled uniformization recognized by an sDFA $\mathcal U$ such that there is $u \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ and $x,x_1,x_2 \in \Sigma_\mathbbmtt{i}^*$ with $|\pi_\mathbbmtt{i}(u)| = |\pi_\mathbbmtt{o}(u)|$, $|xx_1x_2| > \gamma$, $x_2$ idempotent, and $P_{x_1} = P_{x_2}$ such that $uxx_1x_2 \in \mathit{Pref}(U)$ and $uxx_1x_2^i \notin \mathit{Pref}(U)$ for some $i \in \mathbbm N$. Since $uxx_1x_2^i \notin \mathit{Pref}(U)$, there exists some $j < i$ and some prefix of $x_2$, say $x_2'$, such that $uxx_1x_2^jx_2' \in \mathit{Pref}(U)$ and $\delta_\mathcal U^*(q_0^\mathcal U,uxx_1x_2^jx_2') \in Q_\mathcal U^\mathbbmtt{o}$, i.e., $\mathcal U$ produces output after reading $uxx_1x_2^jx_2'$. Now we show that there exists a $T$-controlled uniformization $U'$ recognized by an sDFA $\mathcal U'$ such that $\mathcal U'$ produces output after reading $uxx_1x_2'$. Since $x_2$ is idempotent and $P_{x_1} = P_{x_2}$, also $P_{x_1} = P_{x_1x_2^j}$. Thus, similarly to the proof of Lemma \ref{lemma:shortregular}, we can show that $\mathcal U'$ can sequentially determine the output that has to be produced in a computation on $uxx_1x_2'w$ for some $w \in \Sigma_\mathbbmtt{i}^*$ by behaving like $\mathcal U$ on $uxx_1x_2^jx_2'w$ and replacing outputs that consume $x_1x_2^j$ (w.r.t.\ $\mathcal A$) by equal outputs that consume only $x_1$ (w.r.t.\ $\mathcal A$) in the sense that the induced state transformations on $\mathcal A$ and $\mathcal B$ are equal. \end{proof} Recall, in Asm.~\ref{asm:bounds}, we have fixed bounds $r_1$ as in Lemma \ref{lemma:ramsey} and $r_2$ as in Lemma \ref{lemma:longeroutput}. Also, $r_1,r_2 > \gamma$. For a uniformizer according to Lemma~\ref{lemma:longeroutput}, the lemma yields that the next output block is of length at most $r_2$ if there is currently no lag caused by output that is behind.
However, if there is currently lag because the output is behind, say $\ell$ symbols, then Lemma~\ref{lemma:longeroutput} yields that the next output block is of length at most $\ell+r_2$. This value can become arbitrarily large as the lag can generally not be bounded. Our goal is to show that if lag caused by output that is behind exceeds $r_1$, then the length of the next output block can be bounded by $r_1$. Recall, since $T \subseteq T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^*)^n$ and $r_1 > \gamma$, it can happen at most $\lceil n/2 \rceil$ times that lag caused by output that is behind exceeds $r_1$. Thus, proving the above statement then gives us the key lemma, stated below. Recall, $T_i$ is defined as $T \cap \left( T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^{\leq i})^n \right)$ for $i \geq 0$. { \renewcommand{\thetheorem}{\ref{lemma:shortregular}} \lemmashortregular* \addtocounter{theorem}{-1} } \begin{proof}[Proof of Lemma~\ref{lemma:shortregular}] We now show that it is sufficient to consider $T_k$ for $k = r_1+r_2$ in order to find a $T$-controlled $\llbracket S \rrbracket$-uniformization if there exists one. Let $U$ be a $T$-controlled uniformization that satisfies the conditions of Lemmata \ref{lemma:longeroutput} and \ref{lemma:pumping}, recognized by a sequential DFA $\mathcal U$. We show how we can obtain $U'$ that is a $T_k$-controlled $\llbracket S \rrbracket$-uniformization recognized by a sequential DFA $\mathcal U'$ by modifying $\mathcal U$. Recall, $T \subseteq T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^*)^n$.
We construct $\mathcal U'$ such that a computation of $\mathcal U'$ differs from that of $\mathcal U$ from the point at which the following situation occurs: There is $u \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ and $x,x_1,x_2 \in \Sigma_\mathbbmtt{i}^*$ such that $|\pi_\mathbbmtt{i}(u)| = |\pi_\mathbbmtt{o}(u)|$, $|xx_1x_2| > \gamma$, $x_2$ is idempotent, $P_{x_1} = P_{x_2}$ and $uxx_1x_2 \in \mathit{Pref}(U)$. Since $|\pi_\mathbbmtt{i}(u)| = |\pi_\mathbbmtt{o}(u)|$ and $|xx_1x_2| > \gamma$, we know that $uxx_1x_2 \in \mathit{Pref}(T)$, but $uxx_1x_2 \notin \mathit{Pref}(T_{\leq \gamma})$. Thus, for each $z \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ with $uxx_1x_2z \in U$ it holds that $\mathit{shift}(z) < n$ and the number of output blocks in $z$ is at most $\lceil n/2 \rceil$. Let $\ell$ be the maximal size of an output block that $\mathcal U$ can produce. Hence, $|\pi_\mathbbmtt{o}(z)|$ is at most $\lceil n/2 \rceil \ell$. We choose the smallest $m$ such that $|xx_1x_2^m| \geq \lceil n/2 \rceil \ell$. Consider an arbitrary input word $w \in \Sigma_\mathbbmtt{i}^*$ such that $\pi_\mathbbmtt{i}(u)xx_1x_2w$ is in the domain of $\llbracket S \rrbracket$. In order to determine the computation of $\mathcal U'$ on $\pi_\mathbbmtt{i}(u)xx_1x_2w$, we consider the computation of $\mathcal U$ on $\pi_\mathbbmtt{i}(u)xx_1x_2^{m}w$. Note that by Lemma \ref{lemma:pumping}, after having read $\pi_\mathbbmtt{i}(u)xx_1x_2$, $\mathcal U$ produces no output while reading $x_2^{m-1}$. Assume we have already defined $\mathcal U'$ up to the point where $\pi_\mathbbmtt{i}(u)xx_1x_2$ was read, and until then, $\mathcal U$ and $\mathcal U'$ have worked the same way. We show that $\mathcal U'$ can continue the computation successfully. Let $\delta_\mathcal U^*(q_0^\mathcal U,\pi_\mathbbmtt{i}(u)xx_1x_2^{m}) = s$, and let $\delta_{\mathcal U'}^*(q_0^{\mathcal U'},\pi_\mathbbmtt{i}(u)xx_1x_2) = r$.
The computation of $\mathcal U'_r$ on $w$ will be based on the computation of $\mathcal U_s$ on $w$ such that we can sequentially define the output blocks that $\mathcal U'_r$ has to produce. Since $x_2$ is idempotent and $P_{x_1} = P_{x_2}$, also $P_{x_1x_2} = P_{x_1x_2^m}$. Furthermore, also $P_{xx_1x_2} = P_{xx_1x_2^m}$. Hence, we have $\mathrm{STT}^{\lceil n/2 \rceil}_{xx_1x_2} = \mathrm{STT}^{\lceil n/2 \rceil}_{xx_1x_2^m}$. We are interested in the unique tree $t_{xx_1x_2^m} \in \mathrm{STT}^{\lceil n/2 \rceil}_{xx_1x_2^m}$ with root label $(p,q)$, where \begin{itemize} \item $p \in Q_\mathcal B$ is the state such that $\delta_\mathcal B^*(q_0^\mathcal B,uxx_1x_2^m) = p$, and \item $q \in Q_\mathcal A$ is the state such that $\delta_\mathcal A^*(q_0^\mathcal A,(\pi_\mathbbmtt{i}(uxx_1x_2^mw_0),\pi_\mathbbmtt{o}(uxx_1x_2^mw_0))) = q$, where $w_0$ is the prefix of the input sequence $w$ such that either $\mathcal U_s$ produces output after reading $w_0$, or $w_0 \in U_{s}$, i.e., $w = w_0$ and no output was produced. \end{itemize} Note that, since $P_{xx_1x_2} = P_{xx_1x_2^m}$, we have $\tau_{xx_1x_2} = \tau_{xx_1x_2^m}$; thus also $\delta_\mathcal B^*(q_0^\mathcal B,uxx_1x_2) = p$. Let $t_{xx_1x_2}$ denote the same tree w.r.t.\ $xx_1x_2$, which has to exist since $P_{xx_1x_2} = P_{xx_1x_2^m}$.
Now, we are ready to inductively (on the number of produced output blocks) define the computation of $\mathcal U'_r$ (on $w$ as above) satisfying the following properties: For $w_0 \in \Sigma_\mathbbmtt{i}^*$, $w_1,\dots, w_{i-1} \in \Sigma_\mathbbmtt{i}^+$, $y_1,\dots,y_{i-1} \in \Sigma_\mathbbmtt{o}^+$ and $y_i \in \Sigma_\mathbbmtt{o}^*$ such that $w_0y_1 \dots w_{i-1}y_i \in \mathit{Pref}(U'_r)$, there exists a path $v_0'v_1\dots v_{i-1}'v_i$ in $t_{xx_1x_2}$ with $v_0' = \ro{t_{xx_1x_2}}$ and $v_i$ is a leaf in $t_{xx_1x_2}$ such that \begin{enumerate} \item $y_j$ leads from $v_{j-1}'$ to $v_j$ in $t_{xx_1x_2}$ for all $1 \leq j \leq i$, and \item $w_j$ leads from $v_j$ to $v_{j}'$ in $t_{xx_1x_2}$ for all $1 \leq j < i$, and \item there exist $\bar y_1,\dots, \bar y_{i-1} \in \Sigma_\mathbbmtt{o}^+$ and $\bar y_i \in \Sigma_\mathbbmtt{o}^*$ such that \begin{enumerate} \item $\bar y_j$ leads from $v_{j-1}'$ to $v_j$ in $t_{xx_1x_2^m}$ for all $1 \leq j \leq i$, and \item $w_j$ leads from $v_j$ to $v_{j}'$ in $t_{xx_1x_2^m}$ for all $1 \leq j < i$, and \item $w_0\bar y_1\dots w_{i-1}\bar y_i \in \mathit{Pref}(U_{s})$. \end{enumerate} \end{enumerate} To be clear, the formulation $y_j$ leads from $v'_{j-1}$ to $v_j$ in $t_{xx_1x_2}$ is used to mean that $y_j$ leads from $v'_{j-1}$ to $v_j$ w.r.t.\ $x''$ and $j'$, where $j' = \lceil n/2 \rceil - (j-1)$ and $x'' \in \Sigma_\mathbbmtt{i}^*$ such that $xx_1x_2$ has a factorization $x'x''$ with $|x'| = |y_1\dots y_{j-1}|$. Analogously, the formulation $\bar y_j$ leads from $v'_{j-1}$ to $v_j$ in $t_{xx_1x_2^m}$ is used to mean that $\bar y_j$ leads from $v'_{j-1}$ to $v_j$ w.r.t.\ $x''$ and $j'$, where $j' = \lceil n/2 \rceil - (j-1)$ and $x'' \in \Sigma_\mathbbmtt{i}^*$ such that $xx_1x_2^m$ has a factorization $x'x''$ with $|x'| = |\bar y_1\dots \bar y_{j-1}|$. For the base case $k = 1$, we have already defined $w_0$ above and $v_0'$ as $\ro{t_{xx_1x_2}}$; we have to define $y_1$, $\bar y_1$, and $v_1$.
The sequence $w_0$ was chosen such that either $\mathcal U_s$ produces output after reading $w_0$, or $w_0 \in U_s$, i.e., the input sequence ends. In the former case, let $\bar y_1$ be the output that is produced by $\mathcal U_s$; in the latter case, let $\bar y_1$ be $\varepsilon$. We choose as $v_1$ the vertex $v$ such that $\bar y_1$ leads from $v_0'$ to $v$ in $t_{xx_1x_2^m}$. This vertex has to exist, because $1 \leq \lceil n/2 \rceil$ and by choice of $m$ we have $|\bar y_1| \leq |xx_1x_2^m|$. Since $t_{xx_1x_2^m} = t_{xx_1x_2}$, there exists some $z \in \Sigma_\mathbbmtt{o}^*$ such that $z$ leads from $v_0'$ to $v$ in $t_{xx_1x_2}$; let $y_1$ be such a $z$ and let $w_0y_1 \in \mathit{Pref}(U'_r)$. Clearly, the above conditions are satisfied. Assume we have already defined the computation for some $k < i$ satisfying the above conditions, i.e., for $w_1,\dots,w_{k-1} \in \Sigma_\mathbbmtt{i}^+$ and $y_1,\dots,y_k \in \Sigma_\mathbbmtt{o}^+$ such that $w_0y_1 \dots w_{k-1}y_k \in \mathit{Pref}(U'_r)$, there exists a path $v_0'v_1\dots v_{k-1}'v_k$ in $t_{xx_1x_2}$ with $v_0' = \ro{t_{xx_1x_2}}$, and $y_1 \dots w_{k-1}y_k$ leads from $v_0'$ to $v_k$ in $t_{xx_1x_2}$, and there are $\bar y_1,\dots,\bar y_k \in \Sigma_\mathbbmtt{o}^+$ such that $\bar y_1\dots w_{k-1}\bar y_k$ leads from $v_0'$ to $v_k$ in $t_{xx_1x_2^m}$ and $w_0\bar y_1\dots w_{k-1}\bar y_k \in \mathit{Pref}(U_{s})$. Let $\delta_\mathcal U^*(s,w_0\bar y_1\dots w_{k-1}\bar y_k) = s_k$. To define $w_k$, $y_{k+1}$, $\bar{y}_{k+1}$, $v_k'$ and $v_{k+1}$, we consider the computation of $\mathcal U_{s_k}$: let $w_k \in \Sigma_\mathbbmtt{i}^+$ be the next part of the input sequence such that either $\mathcal U_{s_k}$ produces output after reading $w_k$, or $w_k \in U_{s_k}$, i.e., the input sequence ends. Let $v_{k}'$ be the vertex that is reached from $v_k$ via $w_k$ in $t_{xx_1x_2^m}$; note that then $w_k$ also leads from $v_k$ to $v_k'$ in $t_{xx_1x_2}$.
If after reading $w_k$ output is produced, then let $\bar y_{k+1}$ be the output that is produced by $\mathcal U_{s_k}$; if the input sequence ends, then let $\bar y_{k+1}$ be $\varepsilon$. For $v_{k+1}$ we pick the vertex $v$ such that $\bar y_{k+1}$ leads from $v_k'$ to $v$ in $t_{xx_1x_2^m}$. This vertex has to exist, because $k < \lceil n/2 \rceil$ and by choice of $m$ we have $|\bar y_1\dots \bar y_{k+1}| \leq |xx_1x_2^m|$. Since $t_{xx_1x_2^m} = t_{xx_1x_2}$, there exists some $z \in \Sigma_\mathbbmtt{o}^*$ such that $z$ leads from $v_k'$ to $v$ in $t_{xx_1x_2}$; let $y_{k+1}$ be such a $z$ and let $w_0y_1 \dots w_{k-1}y_kw_ky_{k+1} \in \mathit{Pref}(U'_r)$. With these choices the conditions stated above are satisfied. Note that after at most $\lceil n/2 \rceil$ output blocks, $v_{k+1}$ will be a leaf, because eventually either $|\bar y_1\dots \bar y_{k+1}| = |xx_1x_2^m|$ or $|\bar y_1\dots \bar y_{k+1}| \leq |xx_1x_2^m|$ and the input sequence has ended. Both cases imply that $\bar y_1\dots \bar y_{k+1}$ fully traverses $xx_1x_2^m$ in $\mathcal A_q$; thus, by construction of state transformation trees, a leaf is reached. Note that this also implies that $|y_1\dots y_{k+1}| \leq |xx_1x_2|$. After reaching a leaf, $\mathcal U'$ continues to read the remaining input, say $w_i$, as $\mathcal U$ does. Altogether, from the induction above, it follows that for $uxx_1x_2w_0y_1\dots w_{i-1}y_iw_i \in U'$, there is $uxx_1x_2^mw_0\bar y_1\dots w_{i-1}\bar y_iw_i \in U$ such that both $y_1\dots w_{i-1}y_i$ and $\bar y_1\dots w_{i-1}\bar y_i$ lead through the same path in $t_{xx_1x_2} = t_{xx_1x_2^m}$; thus, we obtain $\delta_\mathcal B^*(p,y_1\dots w_{i-1}y_i)$ $=$ $\delta_\mathcal B^*(p,\bar y_1\dots w_{i-1}\bar y_i)$ and $\delta_\mathcal A^*(q,(xx_1x_2,y_1\dots y_i))$ $=$ $\delta_\mathcal A^*(q,(xx_1x_2^m,\bar y_1\dots \bar y_i))$.
Together with $\tau_{xx_1x_2} = \tau_{xx_1x_2^m}$, it now directly follows that $uxx_1x_2w_0y_1\dots w_{i-1}y_iw_i$ is $L$-controlled and $\llbracket uxx_1x_2w_0y_1\dots w_{i-1}y_iw_i \rrbracket \in \llbracket S \rrbracket$. That means $U'$ is a $T$-controlled uniformization; we argue that it is even a $T_k$-controlled uniformization. Recall that $T_k = T \cap \left( T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^{\leq k})^n \right)$, where $k = r_1+r_2$. As seen above, if the input lag is large enough that it contains an idempotent factor, which is given after at most $r_1$ input symbols, then the remaining output is of length at most $r_1 < k$. If the lag is smaller than $r_1$ input symbols, say $d$, and output is produced, then Lemma \ref{lemma:longeroutput} yields that the produced output block is of length at most $d+r_2 < k$. Hence $U'$ is indeed $T_k$-controlled. \end{proof} The proof of the above lemma yields that we can focus on the construction of $T_k$-controlled uniformizers, where $k$ can be chosen as $r_1+r_2$. \subsection*{Part II} The goal of this section is to show that the problem whether $S$ has a $T_i$-controlled uniformizer reduces to the question whether $T_i(S)$ has a subset uniformizer for some suitable $T_i(S)$ as defined in Lemma~\ref{lemma:transformregular}. Together with Lemma~\ref{lemma:shortregular} this then directly yields Theorem~\ref{thm:regular}. { \renewcommand{\thetheorem}{\ref{lemma:transformregular}} \lemmatransformregular* \addtocounter{theorem}{-1} } \begin{proof}[Proof of Lemma~\ref{lemma:transformregular}] It is possible to give a direct construction from $S$ to $T_i(S)$; however, it is simpler to give a construction from $S_{\mathit{can}}$ to $T_i(S)$. So, we work with $S_{\mathit{can}}$. Let $\mathcal M$ be an NFA recognizing the regular set $T_i$. 
From $\mathcal M$, we construct an NFA $\mathcal C$ that, while reading a $T_i$-controlled word $w$, simultaneously constructs a $(12)^*(1^*+2^*)$-controlled word $w'$ with $\llbracket w \rrbracket = \llbracket w' \rrbracket$, simulates $\mathcal A$ on $w'$, and accepts if $\mathcal A$ accepts $w'$. In other words, $\mathcal C$ resynchronizes $w$ on the fly to be $(12)^*(1^*+2^*)$-controlled in order to simulate $\mathcal A$ on the resynchronization. We only give an idea of the construction. A $T_i$-controlled word $w$ can be factorized as $w_1 \cdot w_2$ such that $w_1 \in T_{\leq \gamma}$ and $w_2 \in (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^{\leq i})^n$. For each word $w$, $\mathcal C$ guesses when the split occurs and uses different resynchronization techniques on $w_1$ and $w_2$. We now describe how $w_1$ and $w_2$ are resynchronized. While reading $w_1$, every position is at most $\gamma$-lagged. To resynchronize $w_1$ to have a $(12)^*$-controlled synchronization (i.e., a synchronization in which every position is at most $1$-lagged), $\mathcal C$ has to store only a window of $\gamma$ symbols to be able to continue the simulation of $\mathcal A$. Concerning $w_2$, we use the following method to obtain a resynchronization of $w_2$ that is $(12)^*(1^*+2^*)$-controlled. The length of $\pi_{\mathbbmtt{o}}(w_2)$ is bounded by $i \cdot n$. Hence, before reading $w_2$, $\mathcal C$ guesses an output word $y \in \Sigma_{\mathbbmtt{o}}^*$ of length at most $i \cdot n$. While reading $w_2$, $\mathcal C$ can easily simulate $\mathcal A$ on the $(12)^*(1^*+2^*)$-controlled synchronization of $(\pi_{\mathbbmtt{i}}(w_2),y)$ and check whether $y = \pi_{\mathbbmtt{o}}(w_2)$. It is easy to see that $\mathcal C$ indeed recognizes the desired language $T_i(S)$. \end{proof} Now we have all ingredients to prove our main result. 
{ \renewcommand{\thetheorem}{\ref{thm:regular}} \thmregular* \addtocounter{theorem}{-1} } \begin{proof}[Proof of Theorem~\ref{thm:regular} continued] It is left to show $S$ has a $T$-controlled uniformization by an sDFA iff $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T_k(S) \rrbracket)$ and $T_k(S)$ has a subset uniformization by an sDFA, which is decidable by Theorem~\ref{thm:sunif}. Assume $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T_k(S) \rrbracket)$ and $T_k(S)$ has a subset uniformization by an sDFA. Since $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T_k(S) \rrbracket)$, every subset uniformization of $T_k(S)$ is also a $T_k$-controlled uniformization of $S$. Such a uniformization is also $T$-controlled, because $T_k \subseteq T$. For the other direction, assume $S$ has a $T$-controlled uniformization by an sDFA. As stated above, then $S$ has a $T_k$-controlled uniformization by an sDFA, say $U \subseteq T_k$. First, we show $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T_k(S) \rrbracket)$. Since $\llbracket U \rrbracket \subseteq_\mathit{u} \llbracket S \rrbracket$, we have $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket U \rrbracket)$. Clearly, by construction, $\mathrm{dom}(\llbracket S \rrbracket) \supseteq \mathrm{dom}(\llbracket T_k(S) \rrbracket)$. Since $U$ is $T_k$-controlled, also $U \subseteq T_k(S)$ and $\mathrm{dom}(\llbracket U \rrbracket) \subseteq \mathrm{dom}(\llbracket T_k(S) \rrbracket)$ by construction. We can conclude $\mathrm{dom}(\llbracket S \rrbracket) = \mathrm{dom}(\llbracket T_k(S) \rrbracket)$. Secondly, we show that $U$ is a subset uniformization of $T_k(S)$. Since $U \subseteq T_k(S)$ and $\mathrm{dom}(\llbracket U \rrbracket) = \mathrm{dom}(\llbracket T_k(S) \rrbracket)$, it is clear that $U$ is a subset uniformization of $T_k(S)$. This concludes the proof of the claim. 
\end{proof} \section{Introduction}\label{sec:intro} A uniformization of a binary relation is a function that selects for each element in the domain of the relation a unique image that is in relation with this element. Of interest to us in this paper are uniformization problems in the setting where the relations and functions on words are defined by finite automata. Relations on words defined by finite automata extend languages defined by finite automata. Unlike for words, different finite automaton models for relations lead to different classes of relations. Relations defined by asynchronous finite automata are referred to as rational relations. An asynchronous finite automaton is a nondeterministic finite automaton with two tapes whose reading heads can move at different speeds. An equivalent computation model is given by asynchronous finite transducers (see, e.g., \cite{berstel2009}), that is, nondeterministic finite automata whose transitions are labeled by pairs of words. A well-known subclass of rational relations are the synchronized rational relations (see \cite{DBLP:journals/tcs/FrougnyS93}), which are defined by synchronous finite automata, that is, finite automata with two tapes whose reading heads move at the same speed. Equivalently, we speak of definability by synchronous finite transducers. The class of synchronized rational relations is also called automatic or regular; here, we use the term automatic. One uniformization problem asks for proving that each relation in a given class has a certain kind of uniformization. For example, each rational relation can be uniformized by an unambiguous rational function (see \cite{sakarovich:2009a}). Here, we are interested in the decision version of the problem: Given a relation from some class, does it have a uniformization in some other class? For the class of uniformizations, we consider sequential transducers. 
A sequential transducer reads the input word in a deterministic manner and produces a unique output word for each input word. The sequential uniformization problem relates to the synthesis problem, which asks, given a specification that relates possible inputs to allowed outputs, whether there is a program implementing the specification, and if so, to construct one. This setting originates from Church's synthesis problem \cite{church1962logic}, where logical specifications over infinite words are considered. B{\"u}chi and Landweber \cite{buechi} showed that for specifications in monadic second order logic, that is, specifications that can be translated into synchronous finite automata, it is decidable whether such a specification can be realized by a synchronous sequential transducer (see, e.g., \cite{Thomas08} for a modern presentation of this result). Later, decidability was extended to asynchronous sequential transducers \cite{hosch1972finite,holtmann2010degrees}. Going from the setting of infinite words to finite words, uniformization by subsequential\footnote{A subsequential transducer can make a final output depending on the last state reached in a run whereas a sequential transducer can only produce output on its transitions.} transducers is considered. The problem whether a relation given by a synchronous finite automaton can be realized by a synchronous subsequential transducer is decidable; this result can be obtained by adapting the proof from the infinite setting. Decidability has been extended to subsequential transducers \cite{CarayolL14}. Furthermore, for classes of asynchronous finite automata, decidability results for the synthesis of subsequential transducers have been obtained in \cite{FJLW16}. A semi-algorithm in this spirit was introduced in \cite{johnson2010}; the algorithm is tasked with synthesizing a subsequential transducer that selects the length-lexicographically minimal output word for each input word from a given rational relation. 
The decision problems that have been studied so far either ask for uniformization by a synchronous subsequential transducer or by an arbitrary subsequential transducer. Our aim is to study the decision problem: Given a rational relation, does it have a uniformization by a subsequential transducer in which the allowed input/output behavior is specified by a given language of synchronizations? The idea is to represent a pair of words by a single word in which each position is annotated with $1$ or $2$, indicating whether it came from the input or the output component. The annotated string provides a synchronization of the pair. It is known that the class of rational relations is synchronized by regular languages \cite{Niv-transductions-lc}. More recently, the main subclasses of rational relations have been characterized by their synchronizations \cite{conf/stacs/FigueiraL14}. We show decidability for a given automatic relation and a given set of synchronizations controlled by a language that synchronizes automatic relations. Thus, our decidability result generalizes the previously known decidability result for the synthesis of synchronous subsequential transducers from automatic relations. The paper is structured as follows. First, in Sec.~\ref{sec:sync}, we fix our notations and recap characterizations of synchronization languages established in \cite{conf/stacs/FigueiraL14}. In Sec.~\ref{sec:unifproblems}, we introduce uniformization problems with respect to synchronization languages and compare our setting with known results. In Sec.~\ref{sec:regular}, we prove decidability of the question whether an automatic relation has a uniformization by a subsequential transducer in which the input/output behavior is specified by a set of synchronizations controlled by a language that synchronizes automatic relations. Omitted proofs can be found in the appendix. 
\section{Synchronizations of relations}\label{sec:sync} Let $\mathbbm{N}$ denote the set of all non-negative integers $\{0,1,\dots\}$, and for every $k \in \mathbbm{N}$, let $\mathbf{k}$ denote the set $\{1,\dots,k\}$. Given a finite set $A$, let $|A|$ denote its cardinality and $2^A$ its powerset. \subparagraph*{Languages and relations of finite words.} An \emph{alphabet} $\Sigma$ is a finite set of letters; a finite \emph{word} is a finite sequence over $\Sigma$. The set of all finite words is denoted by $\Sigma^*$ and the empty word by $\varepsilon$. The length of a word $w \in \Sigma^*$ is denoted by $|w|$, the number of occurrences of a letter $a \in \Sigma$ in $w$ by $\#_a(w)$. Given $w \in \Sigma^*$, $w[i]$ stands for the $i$th letter of $w$, and $w[i,j]$ for the subword $w[i]\dots w[j]$. A \emph{language} $L$ over $\Sigma$ is a subset of $\Sigma^*$, and $\mathit{Pref}(L)$ is the set $\{ u \in \Sigma^* \mid \exists v: uv \in L \}$ of its prefixes. The prefix relation is denoted by $\sqsubseteq$. A \emph{relation} $R$ over $\Sigma$ is a subset of $\Sigma^* \times \Sigma^*$. The \emph{domain} of a relation $R$ is the set $\textrm{dom}(R) = \{ u \mid (u,v) \in R\}$; the \emph{image} of a relation $R$ is the set $\textrm{img}(R) = \{ v \mid (u,v) \in R\}$. For $u \in \Sigma^*$, let $R(u) = \{ v \mid (u,v) \in R \}$ and write $R(u) = v$ if $R(u) = \{v\}$ is a singleton. A \emph{regular expression} $r$ over $\Sigma$ has the form $\emptyset$, $\varepsilon$, $\sigma \in \Sigma$, $r_1 \cdot r_2$, $r_1+r_2$, or $r_1^*$ for regular expressions $r_1$, $r_2$. The term $r^+$ is short for $r\cdot r^*$. The concatenation operator $\cdot$ is often omitted. The language associated with $r$ is defined as usual, denoted $L(r)$, or conveniently, $r$. 
\begin{definition}[synchronization, $L$-controlled \cite{conf/stacs/FigueiraL14}]\label{def:sync} For $c \in \{\mathbbmtt{i},\mathbbmtt{o}\}$, referring to input and output, respectively, we define two morphisms $\pi_{c}\colon (\mathbf{2} \times \Sigma) \rightarrow \Sigma \cup \{\varepsilon\}$ by $\pi_{\mathbbmtt{i}}((i,a)) = a$ if $i=1$, otherwise $\pi_{\mathbbmtt{i}}((i,a)) = \varepsilon$, and likewise for $\pi_{\mathbbmtt{o}}$ with $i=2$. These morphisms are lifted to words over $(\mathbf{2} \times \Sigma)$. A word $w \in (\mathbf{2} \times \Sigma)^*$ is a \emph{synchronization} of a uniquely determined pair $(w_1,w_2)$ of words over $\Sigma$, where $w_1 = \pi_{\mathbbmtt{i}}(w)$ and $w_2 = \pi_{\mathbbmtt{o}}(w)$. We write $\llbracket w \rrbracket$ to denote $(w_1,w_2)$. Naturally, a set $S \subseteq (\mathbf{2} \times \Sigma)^*$ of synchronizations defines the relation $\llbracket S \rrbracket = \{ \llbracket w \rrbracket \mid w \in S\}$. A word $w = (i_1,a_1)\dots (i_n,a_n) \in (\mathbf{2} \times \Sigma)^*$ is the convolution $u \otimes v$ of two words $u = i_1\dots i_n \in \mathbf{2}^*$ and $v = a_1\dots a_n \in \Sigma^*$. Given a language $L \subseteq \mathbf{2}^*$, we say $w$ is \emph{$L$-controlled} if $u \in L$. A language $S \subseteq (\mathbf{2} \times \Sigma)^*$ is \emph{$L$-controlled} if all its words are. A language $L \subseteq \mathbf{2}^*$ is called a \emph{synchronization language}. For a regular language $L \subseteq \mathbf{2}^*$, $\textsc{Rel}(L) = \{ \llbracket S \rrbracket \!\mid\! S \text{ is a regular $L$-controlled }$ $\text{language}\}$ is the set of relations that can be given by $L$-controlled synchronizations. Let $\mathcal C$ be a class of relations, we say $L$ \emph{synchronizes} $\mathcal C$ if $\textsc{Rel}(L) \subseteq \mathcal C$. \end{definition} \begin{definition}[lag, shift, shiftlag \cite{conf/stacs/FigueiraL14}] Given a word $w \in \mathbf{2}^*$, a position $i \leq |w|$, and $\gamma \in \mathbbm{N}$. 
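For illustration only (not part of the formal development), the projection morphisms and the convolution admit a direct rendering in Python, with a pair $(i,a)$ modeled as a tuple and a finite sample of a synchronization language given as an explicit set:

```python
def pi(w, c):
    """Projection morphism: keep the letters of the synchronization w
    (a list of (i, a) pairs with i in {1, 2}) that are annotated with c."""
    return "".join(a for i, a in w if i == c)

def convolution(u, v):
    """Convolution u (x) v of a control word u over {1, 2} and a word v
    of the same length."""
    assert len(u) == len(v)
    return list(zip(u, v))

def is_L_controlled(w, L):
    """w is L-controlled iff its {1,2}-annotation belongs to L
    (L given here as an explicit finite set of tuples)."""
    return tuple(i for i, _ in w) in L

# (1,a)(2,d)(1,b)(2,e) is a synchronization of the pair (ab, de).
w = convolution([1, 2, 1, 2], "adbe")
```

Projecting `w` with `pi(w, 1)` and `pi(w, 2)` recovers the uniquely determined pair $(w_1,w_2)$.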
We say $i$ is \emph{$\gamma$-lagged} if $|\#_1(w[1,i]) - \#_2(w[1,i])| = \gamma$, and likewise, we define \emph{$\glag{\gamma}$-lagged} and \emph{$\llag{\gamma}$-lagged}. A \emph{shift} of $w$ is a position $i \in \{1,\dots,|w|-1\}$ such that $w[i] \neq w[i+1]$. Two shifts $i < j$ are \emph{consecutive} if there is no shift $l$ such that $i < l < j$. Let $\mathit{shift}(w)$ be the number of shifts in $w$, let $\mathit{lag}(w)$ be the maximum lag of a position in $w$, and let $\mathit{shiftlag}(w)$ be the maximum $n \in \mathbbm{N}$ such that $w$ contains $n$ consecutive shifts which are $\glag{n}$-lagged. We lift these notions to languages by taking the supremum in $\mathbbm N \cup \{\infty\}$, e.g., $\mathit{shift}(L) = \mathrm{sup}\{\mathit{shift}(w) \mid w \in L\}$, and likewise for $\mathit{lag}(L)$ and $\mathit{shiftlag}(L)$. \end{definition} The following characterizations of well-known subclasses of rational relations were shown in \cite{conf/stacs/FigueiraL14}. Recall that rational relations are definable by asynchronous finite automata, automatic relations by synchronous finite automata, and recognizable relations are definable as finite unions of products of regular languages. We omit a formal definition of these models since it is not relevant to this paper. \begin{theorem}[\cite{conf/stacs/FigueiraL14}] Let $L \subseteq \mathbf{2}^*$ be a regular language. Then: \begin{enumerate}[noitemsep,topsep=0pt] \item $L$ synchronizes recognizable relations iff $\mathit{shift}(L) < \infty$, \item $L$ synchronizes automatic relations iff $\mathit{shiftlag}(L) < \infty$, \item $L$ synchronizes rational relations. \end{enumerate} \end{theorem} For ease of presentation, let $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$, $\Sigma_{\mathbbmtt{i}}$, and $\Sigma_{\mathbbmtt{o}}$ be short for $\mathbf{2} \times \Sigma$, $\{1\} \times \Sigma$, and $\{2\} \times \Sigma$, respectively. 
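For a single word, the three measures are directly computable; the following sketch (our own illustration, with words over $\mathbf{2}$ encoded as strings of `'1'` and `'2'`) follows the definitions, checking every window of consecutive shifts naively:

```python
def lags(w):
    """Lag of every position of w, a word over {'1','2'}:
    lags(w)[i] = |#_1 - #_2| of the prefix ending at position i."""
    diff, out = 0, []
    for c in w:
        diff += 1 if c == "1" else -1
        out.append(abs(diff))
    return out

def shift_positions(w):
    """0-based positions i with w[i] != w[i+1]."""
    return [i for i in range(len(w) - 1) if w[i] != w[i + 1]]

def lag(w):
    return max(lags(w), default=0)

def shift(w):
    return len(shift_positions(w))

def shiftlag(w):
    """Maximum n such that w contains n consecutive shifts that are all
    at least n-lagged; a window of length L with minimum lag m yields
    the candidate min(L, m), since a sub-window of length m suffices."""
    sl = [lags(w)[i] for i in shift_positions(w)]
    best = 0
    for i in range(len(sl)):
        m = sl[i]
        for j in range(i, len(sl)):
            m = min(m, sl[j])
            best = max(best, min(j - i + 1, m))
    return best
```

For instance, $(12)^*$-words have $\mathit{lag}$ and $\mathit{shiftlag}$ at most $1$ however many shifts they contain.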
If convenient, we use distinct symbols for input and output, instead of symbols annotated with $1$ or $2$. For the results shown in this paper, it is useful to lift some notions introduced in \cite{conf/stacs/FigueiraL14} from words and languages over $\mathbf{2}$ to words and languages over $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$. \begin{definition} We lift the notions of $\mathit{lag}$, $\mathit{shift}$, and $\mathit{shiftlag}$ from words and languages over $\mathbf{2}$ to words and languages over $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$ in the natural way. Furthermore, given a language $T \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$, we say a word $w \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ is $T$-controlled if $w \in T$. A language $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ is $T$-controlled if all its words are, namely, if $S \subseteq T$. \end{definition} \subparagraph*{Automata on finite words.} We fix our notations concerning finite automata on finite words. A \emph{nondeterministic finite automaton} (\emph{NFA}) is a tuple $\mathcal A = (Q,\Sigma,q_0,\Delta,F)$, where $Q$ is a finite set of states, $\Sigma$ is a finite alphabet, $q_0 \in Q$ is the initial state, $\Delta \subseteq Q \times \Sigma \times Q$ is the transition relation, and $F \subseteq Q$ is the set of final states. A \emph{run} $\rho$ of $\mathcal A$ on $w = a_1\dots a_n \in \Sigma^*$ is a sequence of states $p_0p_1\dots p_n$ such that $(p_i,a_{i+1},p_{i+1}) \in \Delta$ for all $i \in \{0,\dots,n-1\}$. As shorthand, we write $\mathcal A: p_0 \xrightarrow{w} p_n$. A run is \emph{accepting} if it starts in $q_0$ and ends in a state from $F$. The language \emph{recognized} by $\mathcal A$, written $L(\mathcal A)$, is the set of words $w \in \Sigma^*$ that admit an accepting run of $\mathcal A$ on $w$. For $q \in Q$, let $\mathcal A_q$ denote the NFA obtained from $\mathcal A$ by setting its initial state to $q$. The class of languages recognized by NFAs is the class of regular languages. 
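Acceptance by an NFA can be checked with the standard on-the-fly subset simulation; a minimal sketch (the example NFA below is our own, not taken from the paper):

```python
def nfa_accepts(delta, q0, final, w):
    """Check whether the NFA with transition triples delta (a set of
    (p, a, q)), initial state q0 and final states `final` accepts w."""
    current = {q0}  # all states reachable on the prefix read so far
    for a in w:
        current = {q for p in current
                   for (p_, a_, q) in delta if p_ == p and a_ == a}
    return bool(current & final)

# Example: an NFA accepting all words over {a, b} that end in "ab".
DELTA = {(0, "a", 0), (0, "b", 0), (0, "a", 1), (1, "b", 2)}
```

Running the same simulation from a state $q$ other than $q_0$ corresponds to the automaton $\mathcal A_q$.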
An NFA is \emph{deterministic} (a \emph{DFA}) if for each state $q \in Q$ and each $a \in \Sigma$ there is at most one transition $(q,a,q') \in \Delta$. In this case, it is more convenient to express $\Delta$ as a (partial) function $\delta: Q \times \Sigma \rightarrow Q$. Furthermore, let $\delta^*$ denote the usual extension of $\delta$ from letters to words. We introduce some notions only applicable if an NFA recognizes a set of synchronizations. Given a regular $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$, let $\mathcal A = (Q,\Sigma_{\mathbbmtt{i}\mathbbmtt{o}},q_0,\Delta,F)$ be an NFA that recognizes $S$. We define $Q^{\inp} = \{ p \in Q \mid \exists a \in \Sigma, q \in Q: (p,(1,a),q) \in \Delta\}$ and $Q^{\outp} = \{ p \in Q \mid \exists a \in \Sigma, q \in Q: (p,(2,a),q) \in \Delta\}$ as the sets of states that have outgoing transitions from which input and output can be consumed, respectively. If $(Q^{\inp},Q^{\outp})$ is a partition of $Q$, we write $Q = Q^{\inp} \charfusion{\cup}{\cdot} Q^{\outp}$. We call $\mathcal A$ \emph{sequential} if $\mathcal A$ is deterministic, $Q = Q^{\inp} \charfusion{\cup}{\cdot} Q^{\outp}$, and each $q \in Q^{\outp}$ has at most one outgoing transition. For short, we refer to a sequential DFA as an \emph{sDFA}. Finally, we define the input automaton $\mathcal A_D$ of $\mathcal A$ as $(Q,\Sigma,q_0,\Delta',F)$, where $\Delta' = \{ (p,a,q) \mid \mathcal A: p \xrightarrow{w} q \text{ and } \pi_{\mathbbmtt{i}}(w) = a \in \Sigma\}$. A comparison to standard transducer models is given in the next section. \section{Uniformization problems}\label{sec:unifproblems} A \emph{uniformization} of a relation $R \subseteq \Sigma^* \times \Sigma^*$ is a complete function $f_R: \mathrm{dom}(R) \to \Sigma^*$ with $(u,f_R(u)) \in R$ for all $u \in \mathrm{dom}(R)$. If such a function is given as a relation $R_f$, we write $R_f \subseteq_{\mathsf{u}} R$ to indicate that $R_f$ is a uniformization of $R$. 
\begin{definition}[Resynchronized uniformization problem] The \emph{resynchronized uniformization problem} asks, given a regular source language $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ and a regular target language $T \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$, whether there exists a regular language $U \subseteq T$ recognized by a sequential DFA such that $\llbracket U \rrbracket \subseteq_{\mathsf{u}} \llbracket S \rrbracket$. \end{definition} \begin{example}\label{ex:intro} Let $\Sigma_\mathbbmtt{i} = \{a,b,c\}$ and $\Sigma_\mathbbmtt{o} = \{d,e\}$, and let $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ be given by $\mathcal A$ depicted in Fig.~\ref{fig:intro}. The recognized relation is $\llbracket S \rrbracket = \{(a^iba^j,d(d+e)^k) \mid i,j,k \geq 0 \} \cup \{(a^ica^j,e(d+e)^k) \mid i,j,k \geq 0 \}$. Furthermore, let $T = \Sigma_\mathbbmtt{i}^*(\Sigma_\mathbbmtt{i}\Sigma_\mathbbmtt{o})^+$. A $T$-controlled uniformization $U$ is given by the sequential DFA $\mathcal U$ depicted in Fig.~\ref{fig:intro}. The recognized relation is $\llbracket U \rrbracket = \{(a^iba^j,dd^j) \mid i,j \geq 0 \} \cup \{(a^ica^j,ed^j) \mid i,j \geq 0 \}$. 
\end{example} \vskip -0.3cm \begin{figure}[ht] \vskip -0.35cm \centering \begin{tikzpicture}[thick] \tikzstyle{every state}+=[inner sep=4pt, minimum size=3pt]; \node[state, initial, initial text=$\mathcal A$] (0) {}; \node[state, above right = 12pt and 25pt of 0] (1) {}; \node[state, below right = 12pt and 25pt of 0] (2) {}; \node[state, accepting, below right = 12pt and 25pt of 1] (3) {}; \node[state, accepting, right of=3] (4) {}; \draw[->] (0) edge node[near end] {$d$} (1); \draw[->] (0) edge node[swap,near end] {$e$} (2); \draw[->] (1) edge[loop above] node {$a$} (); \draw[->] (1) edge node[near start] {$b$} (3); \draw[->] (2) edge[loop above] node {$a$} (); \draw[->] (2) edge node[swap,near start] {$c$} (3); \draw[->] (3) edge[loop above] node {$a$} (); \draw[->] (3) edge node {$d,e$} (4); \draw[->] (4) edge[loop above] node {$d,e$} (); \begin{scope}[xshift=7cm] \node[state, initial, initial text=$\mathcal U$] (0) {}; \node[state, above right = 12pt and 25pt of 0] (1) {}; \node[state, below right = 12pt and 25pt of 0] (2) {}; \node[state, accepting, below right = 12pt and 25pt of 1] (3) {}; \node[state, right of=3] (4) {}; \draw[->] (0) edge[loop above] node {$a$} (); \draw[->] (0) edge node[near end] {$b$} (1); \draw[->] (0) edge node[swap,near end] {$c$} (2); \draw[->] (1) edge node[near start] {$d$} (3); \draw[->] (2) edge node[swap,near start] {$e$} (3); \draw[->] (3) edge[bend left] node {$a$} (4); \draw[->] (4) edge[bend left] node {$d$} (3); \end{scope} \end{tikzpicture} \caption{ Cf.~Ex.\ref{ex:intro}; $S = L(\mathcal A)$ and $U = L(\mathcal U)$, we have $\llbracket U \rrbracket \subseteq_\mathsf{u} \llbracket S \rrbracket$. } \label{fig:intro} \vskip -0.3cm \end{figure} Comparing our definition of sequential DFAs with standard transducer models we notice that sequential transducers directly correspond to sequential DFAs. See, e.g., \cite{berstel2009} for an introduction to transducers. 
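To make the example concrete, the sDFA $\mathcal U$ of the figure can be executed directly; in the sketch below the state numbering is our own labeling of the figure, and an output state fires its unique outgoing transition immediately:

```python
# Hypothetical state numbering for the sDFA U: 0 is initial, 3 is the
# accepting state; 1, 2, 4 are output states with one transition each.
INPUT_TRANS = {(0, "a"): 0, (0, "b"): 1, (0, "c"): 2, (3, "a"): 4}
OUTPUT_TRANS = {1: ("d", 3), 2: ("e", 3), 4: ("d", 3)}
ACCEPTING = {3}

def run_uniformizer(word):
    """Return the output U produces on `word`, or None if U rejects."""
    state, out = 0, []
    for a in word:
        if (state, a) not in INPUT_TRANS:
            return None
        state = INPUT_TRANS[state, a]
        while state in OUTPUT_TRANS:        # output states fire their
            b, state = OUTPUT_TRANS[state]  # unique transition at once
            out.append(b)
    return "".join(out) if state in ACCEPTING else None
```

On input $a^iba^j$ this yields $dd^j$, and the interleaving of reads and emissions is exactly $\Sigma_\mathbbmtt{i}^*(\Sigma_\mathbbmtt{i}\Sigma_\mathbbmtt{o})^+$-controlled.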
Our model can be modified to correspond to subsequential transducers (which can make a final output after the word has ended) by slightly modifying the representation of the relation: a dedicated endmarker is added in the usual way. In the remainder it is implicitly assumed that every given source and target language is represented with endmarkers; thus, our stated results correspond to uniformization by subsequential transducers. Our main result is the decidability of the resynchronized uniformization problem for a given automatic relation and a given set of synchronizations controlled by a language that synchronizes automatic relations. In Sec.~\ref{sec:regular} we see that our decidability result is obtained by a reduction to the following simpler uniformization problem. \begin{definition}[Subset uniformization problem] The \emph{subset uniformization problem} asks, given a regular language $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$, whether there exists a regular language $U \subseteq S$ recognized by a sequential DFA such that $\llbracket U \rrbracket \subseteq_{\mathsf{u}} \llbracket S \rrbracket$. \end{definition} The notion of subset uniformization directly corresponds to the notion of sequential $\mathbbm I$-uniformization introduced in \cite{FJLW16}. It was shown that deciding the sequential $\mathbbm I$-uniformization problem reduces to deciding which player has a winning strategy in a safety game between $\mathsf{In}$ and $\mathsf{Out}$. Hence, we directly obtain the following result. \begin{restatable}[\cite{FJLW16}]{theorem}{thmsunif}\label{thm:sunif} The subset uniformiza\-tion problem is decidable. \end{restatable} Now that we have formulated our uniformization problems, we link these to known uniformization problems. Asking whether a relation has a $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$-controlled subsequential uniformization is equivalent to asking whether it has a uniformization by an arbitrary subsequential transducer. 
Asking whether a relation has a $(\Sigma_\mathbbmtt{i}\Sigma_\mathbbmtt{o})^*(\Sigma_\mathbbmtt{i}^* + \Sigma_\mathbbmtt{o}^*)$- resp.\ $\Sigma_\mathbbmtt{i}^*\Sigma_\mathbbmtt{o}^*$-controlled subsequential uniformization is equivalent to asking whether it has a uniformization by a synchronous subsequential transducer resp.\ by a transducer that reads the complete input before producing output. \begin{table}[t] \begin{center} \begin{tabular}{|l|c|c|c|c|c|} \hline \backslashbox[33mm]{sync.}{relation} & rational & \parbox[c]{2cm}{deterministic\\rational} & finite-valued & automatic & \parbox[c]{1.1cm}{recog-\\nizable} \\ \hline $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ & undec. \cite{CarayolL14} & dec. \cite{FJLW16} & dec. \cite{FJLW16} & dec. \cite{CarayolL14} & dec.\\ \hline $(\Sigma_\mathbbmtt{i}\Sigma_\mathbbmtt{o})^*(\Sigma_\mathbbmtt{i}^* + \Sigma_\mathbbmtt{o}^*)$ & undec. \cite{CarayolL14} & ? & ? & dec. \cite{buechi} & dec.\\ \hline $\Sigma_\mathbbmtt{i}^*\Sigma_\mathbbmtt{o}^*$ & ? & ? & ? & dec. \cite{CarayolL14} & dec.\\ \hline \hline rational & undec. & ? & ? & ? & dec.\\ \hline automatic & undec. & ? & ? & \textbf{dec.} & dec.\\ \hline recognizable & ? & ? & ? & dec. & dec. \\ \hline \end{tabular} \end{center} \caption{Overview of the decidability results. The columns list the type of relation to be uniformized. The rows list the type of synchronization used as uniformization parameter; the upper three rows list fixed languages of synchronizations, the lower three rows list parameter classes, where `rational' means the given set of allowed synchronizations is controlled by an arbitrary synchronization language, `automatic' (resp.\ `recognizable') means the given set of allowed synchronizations is controlled by a synchronization language that synchronizes automatic (resp.\ recognizable) relations. 
} \label{tab:overview} \vskip -0.8cm \end{table} Table~\ref{tab:overview} provides an overview of known and new decidability results of the resynchronized uniformization problem for different types of relations and synchronization parameters. Our main result is the decidability for a given automatic relation and a given set of allowed synchronizations that is controlled by a synchronization language that synchronizes automatic relations. The decidability results in the rightmost column can be shown by a simple reduction to the subset uniformization problem, which is presented in the appendix. The other entries in the lower three rows are simple consequences of the results presented in the upper three rows resp.\ our main result. Regarding the table entry where the relation is automatic and a desired uniformizer is $(\Sigma_\mathbbmtt{i}\Sigma_\mathbbmtt{o})^*(\Sigma_\mathbbmtt{i}^* + \Sigma_\mathbbmtt{o}^*)$-controlled, there is an alternative formulation of the decision problem in the case that the given relation is $(\Sigma_\mathbbmtt{i}\Sigma_\mathbbmtt{o})^*(\Sigma_\mathbbmtt{i}^* + \Sigma_\mathbbmtt{o}^*)$-controlled (the usual presentation for automatic relations, e.g., by a synchronous transducer). In this case the problem can also be stated as the question whether the relation has a subset uniformization. We now generalize this to Parikh-injective synchronization languages. Given some $L \subseteq \mathbf{2}^*$, let $\Pi_L: L \to \mathbbm{N}^2$ be the function that maps a word $w \in L$ to its \emph{Parikh image}, that is, to the vector $(\#_1(w),\#_2(w))$. We say $L$ is \emph{Parikh-injective} if $\Pi_L$ is injective. \begin{restatable}{proposition}{thmparikh}\label{thm:parikh} Let $L \subseteq \mathbf{2}^*$ be a regular Parikh-injective language, let $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ be an $L$-controlled regular language and let $T = \{ w \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^* \mid w \text{ is $L$-controlled}\}$. 
Every $T$-controlled uniformization of $S$ is a subset uniformization of $S$. \end{restatable} Given $L$, $S$ and $T$ as in Proposition \ref{thm:parikh}, it directly follows that the resynchronized uniformization problem is equivalent to the subset uniformization problem, which is decidable by Theorem \ref{thm:sunif}. \section{Automatic uniformizations of automatic relations}\label{sec:regular} Here we present our main result stating that it is decidable whether a given automatic relation has a uniformization by a subsequential transducer whose induced set of synchronizations is controlled by a given regular language that synchronizes automatic relations. \begin{restatable}{theorem}{thmregular}\label{thm:regular} Given a regular source language and a regular target language, both with finite $\mathit{shiftlag}$, the resynchronized uniformization problem is decidable. \end{restatable} In \cite{conf/stacs/FigueiraL14}, it is shown that $(12)^*(1^*+2^*)$ is an effective canonical representative of the class $\mathsf{RL}_{\mathit{FSL}}$ of regular languages with finite $\mathit{shiftlag}$. That is, for every $L \in \mathsf{RL}_{\mathit{FSL}}$ and every $R \in \textsc{Rel}(L)$, there is an effectively constructible $(12)^*(1^*+2^*)$-controlled regular language $S$ so that $\llbracket S \rrbracket = R$. In the remainder of this section, let $S \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ be a regular source language with finite $\mathit{shiftlag}$. Also, let $S_\mathit{can}$ be the equivalent $(12)^*(1^*+2^*)$-controlled language with $\llbracket S_\mathit{can} \rrbracket = \llbracket S \rrbracket$. Furthermore, let $T \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ be a regular target language with finite $\mathit{shiftlag}$. 
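Each pair $(x,y)$ has a unique $(12)^*(1^*+2^*)$-controlled synchronization: alternate one input and one output letter for as long as both words last, then append the surplus of the longer word. As an illustrative sketch:

```python
def canonical_sync(x, y):
    """(12)*(1*+2*)-controlled synchronization of the pair (x, y),
    returned as a list of (i, a) pairs with i = 1 for input letters
    and i = 2 for output letters."""
    k = min(len(x), len(y))
    sync = []
    for a, b in zip(x[:k], y[:k]):      # the (12)^k prefix
        sync.extend([(1, a), (2, b)])
    sync.extend((1, a) for a in x[k:])  # trailing 1^* block, or ...
    sync.extend((2, b) for b in y[k:])  # ... trailing 2^* block
    return sync
```

At most one of the two trailing blocks is non-empty, and projecting the result onto the input resp.\ output letters recovers $(x,y)$.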
\begin{assumption}\label{asm:shiftlag} We assume that $S_\mathit{can}$ is recognized by a DFA $\mathcal A = (Q_\mathcal A,\Sigma_{\mathbbmtt{i}\mathbbmtt{o}},q_0^\mathcal A,\Delta_\mathcal A,F_\mathcal A)$, $T$ is recognized by a DFA $\mathcal B = (Q_\mathcal B,\Sigma_{\mathbbmtt{i}\mathbbmtt{o}},q_0^\mathcal B,\Delta_\mathcal B,F_\mathcal B)$ and $\mathit{shiftlag}(T) < n$. \end{assumption} For notational convenience, given $x \in \Sigma_\mathbbmtt{i}^*$ and $y \in \Sigma_\mathbbmtt{o}^*$, we write $\delta_\mathcal A^*(q,(x,y))$ to mean $\delta_\mathcal A^*(q,w)$, where $w \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ is the canonical synchronization of $x$ and $y$, i.e., $w$ is the $(12)^*(1^*+2^*)$-controlled synchronization of the pair $(x,y)$. \\ The remainder of this section is devoted to the proof of Theorem~\ref{thm:regular}. The proof is split into two main parts; the goal of the first part is to show that if $S$ has a $T$-controlled uniformization by an sDFA, then $S$ has a $T_k$-controlled uniformization by an sDFA for a regular $T_k \subseteq T$ that is less complex than $T$, cf.\ Lemma~\ref{lemma:shortregular}. The goal of the second part is to show that the set $T_k(S)$ defined by $\{ w \mid w \in T_k \text{ and } \llbracket w \rrbracket \in \llbracket S \rrbracket\}$ is regular and computable (due to the form of $T_k$), cf.\ Lemma~\ref{lemma:transformregular}. Then, to conclude the proof, we show that the question whether $S$ has a $T$-controlled uniformization by an sDFA can be reduced to the question whether $T_k(S)$ has a subset uniformization by an sDFA, which is decidable by Theorem~\ref{thm:sunif}. Towards giving an exact description of $T_k$, consider the following auxiliary lemma characterizing the form of regular synchronization languages with finite $\mathit{shiftlag}$. 
Given $\nu \in \mathbbm{N}$, we denote by $L_{\leq \nu}$ the regular set of words over $\mathbf{2}$ with $\leqlag{\nu}$-lagged positions, i.e., $L_{\leq \nu} = \{ u \in \mathbf 2^* \mid \mathit{lag}(u) \leq \nu\}$; we denote by $T_{\leq \nu}$ the regular set of words over $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$ with $\leqlag{\nu}$-lagged positions, i.e., $T_{\leq \nu} = \{ w \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^* \mid \mathit{lag}(w) \leq \nu\}$. \begin{lemma}[\cite{conf/stacs/FigueiraL14}]\label{lemma:form} Let $L \subseteq \mathbf{2}^*$ be a regular language with $\mathit{shiftlag}(L) < m$. Then $L \subseteq L_{\leq \nu} \cdot (1^*+2^*)^m$ with $\nu$ chosen as $2\left(m(|Q|+1)+1\right)$, where $Q$ is the state set of an NFA recognizing $L$. \end{lemma} Clearly, this lemma can be lifted to regular languages over $\Sigma_{\mathbbmtt{i}\mathbbmtt{o}}$. Based on Asm.~\ref{asm:shiftlag} and Lemma~\ref{lemma:form}, we can make the following assumption. \begin{assumption}\label{asm:nshift} Assume that $T \subseteq T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^*)^n$ with $\gamma = 2\left(n(|Q_\mathcal B|+1)+1\right)$. \end{assumption} Now, we can be more specific about $T_k \subseteq T$. \begin{definition}\label{def:M} For $i \geq 0$, let $T_i$ be the set $T \cap \left (T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^{\leq i})^n\right )$, that is, the set of $w \in T$ such that after a position in $w$ is more than $\gamma$-lagged, the number of output symbols per block is at most $i$. \end{definition} Our aim is to show that there is a bound $k$ such that $S$ has either a $T_k$-controlled uniformization by an sDFA or no $T$-controlled uniformization by an sDFA. From now on, we call an sDFA implementing a uniformization simply a uniformizer.
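Assuming, as in \cite{conf/stacs/FigueiraL14}, that the lag of a position is the absolute difference between the numbers of input symbols ($1$s) and output symbols ($2$s) in the prefix up to that position, membership in $L_{\leq \nu}$ can be checked in a single pass. A sketch over the alphabet $\{1,2\}$ (function names are ours):

```python
def lag(u: str) -> int:
    """Maximal lag over all positions of u in {1,2}*: the lag of a
    position is |#1s - #2s| in the prefix up to that position."""
    diff = worst = 0
    for c in u:
        diff += 1 if c == "1" else -1
        worst = max(worst, abs(diff))
    return worst

def in_L_leq(u: str, nu: int) -> bool:
    """Membership test for L_{<=nu} = {u : lag(u) <= nu}."""
    return lag(u) <= nu
```

For instance, every word in $(12)^*$ has lag at most $1$, while $1^\nu 2^\nu$ has lag exactly $\nu$.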
The main difficulty in solving the resynchronized uniformization problem is that in general a uniformizer can have unbounded lag, because the waiting time between shifts can be arbitrarily long. The key insight for the proof is that if such a long waiting time for a shift from input to output is necessary, then, in order to determine the next output block, it is not necessary to store the complete input that is ahead. We show that it suffices to consider an abstraction of the input that is ahead. Therefore, we will introduce input profiles based on state transformation trees, which we define below. Similarly, to deal with the situation where there is a long waiting time for a shift from output to input, we introduce output profiles as an abstraction of the output that is ahead. The bound on the length of output blocks will be chosen based on the profiles. Before defining profiles, we introduce some necessary definitions and notions. \subparagraph*{Trees.} A \emph{finite unordered unranked tree} over an alphabet, a tree for short, is a finite non-empty directed graph with a distinguished root node, such that for any node, there exists exactly one path from the root to this node. Additionally, a mapping from the nodes of the graph to the alphabet is given. More formally, a \emph{tree} $t$ over $\Sigma$ is given by a tuple $(V_t,E_t,v_t,\val{t})$, where $V_t$ is a non-empty set of nodes, $E_t \subseteq V_t \times V_t$ is a set of edges, $v_t$ is the root of $t$, also denoted $\ro{t}$, and $\val{t}$ is a mapping $V_t \to \Sigma$. Furthermore, we require that every node is reached by a unique path from the root. Let $T_\Sigma$ denote the set of all trees over $\Sigma$. We only distinguish trees up to isomorphism. Given a tree $t$ and a node $v$ of $t$, let $t|_v$ denote the \emph{subtree} of $t$ rooted at $v$. An $a \in \Sigma$ can also be seen as a tree $a \in T_\Sigma$ defined by $(\{v\},\emptyset,v,\val{a})$, where $\val{a}(v) = a$.
For two trees $t_1$ and $t_2$ with $\val{t_1}(\ro{t_1}) = \val{t_2}(\ro{t_2})$, i.e., with the same root label, we define $t_1 \circ t_2$ as the tree $t$ given by $(V_t,E_t,\ro{t_1},\val{t})$, where $V_t = V_{t_1} \cup V_{t_2} \setminus \{\ro{t_2}\}$, $E_t = E_{t_1} \cup \{ (\ro{t},v) \mid (\ro{t_2},v) \in E_{t_2}\} \cup (E_{t_2} \setminus \{ (\ro{t_2},v) \in E_{t_2}\})$ and $\val{t}$ as $\val{t_1} \cup \val{t_2}$ over nodes in $V_t$ (assuming $V_{t_1} \cap V_{t_2} = \emptyset$). Given $a \in \Sigma$ and trees $t_1,\dots,t_n$, we define $a(t_1\dots t_n)$ to be the tree $(V_t,E_t,\allowbreak r,\allowbreak \val{t})$, where $V_t = \bigcup_{i=1}^n V_{t_i} \cup \{r\}$ with a new node $r$, $E_t = \bigcup_{i=1}^n E_{t_i} \cup \{(r,\ro{t_i})\! \mid \allowbreak 1 \leq i \leq n\}$ and $\val{t}$ is defined as $\val{t}(r) = a$ and $\bigcup_{i=1}^n \val{t_i}$ (assuming $V_{t_i} \cap V_{t_j} = \emptyset$ for all $i \neq j$). \subparagraph*{State transformation trees.} Now that we have fixed our notations, we explain what kind of information we want to represent using state transformation trees. Basically, for an input segment that is ahead and causes lag, we are interested in how the input segment can be combined with output segments of the same or smaller length and how these outputs can be obtained. In the following we give an intuitive example. \begin{example}\label{ex:intuition} Let $\Sigma_\mathbbmtt{i} = \{a\}$ and $\Sigma_\mathbbmtt{o} = \{b,c\}$. Consider the language $S_1 \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ given by the DFA $\mathcal A_1$ depicted in Fig.~\ref{subfig:dfas}, and the language $T_1 \subseteq \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ given by the DFA $\mathcal B_1$ depicted in Fig.~\ref{subfig:dfas}. As we can see, $S_1$ is $(12)^*(1^* + 2^*)$-controlled, and thus already in its canonical form, and $T_1$ is $1^*2^*1^*2^*$-controlled. Both languages have finite $\mathit{shiftlag}$.
Generally, a $T_1$-controlled uniformizer of $S_1$ can have arbitrarily large lag. We take a look at the runs starting from $q_0$ in $\mathcal A_1$ and starting from $p_0$ in $\mathcal B_1$ that the computation of such a uniformizer can induce. However, $\mathcal A_1$ can only be simulated on the part where the lag is recovered, but arbitrarily large lag can occur; thus our goal is to find an abstraction of the part that causes lag. E.g., assume that such a uniformizer reads $aa$ without producing output. Towards defining an abstraction of $aa$, we are interested in how $aa$ could be combined with outputs of the same or smaller length and how these outputs could be produced by some $T_1$-controlled uniformizer. Such a uniformizer could read some more $a$s and eventually must produce output. Reading $a$s leads from $p_0$ to $p_1$ in $\mathcal B_1$. There are a few possibilities for how output of length at most two can be produced such that it is valid from $p_1$ and the simulation from $q_0$ can be continued. It is possible to output $b$ ($\delta^*_{\mathcal B_1}(p_1,b) = p_2$, $\delta^*_{\mathcal A_1}(q_0,aba)=q_1$), $bb$ ($\delta^*_{\mathcal B_1}(p_1,bb) = p_2$, $\delta^*_{\mathcal A_1}(q_0,abab)=q_0$) or $bc$ ($\delta^*_{\mathcal B_1}(p_1,bc) = p_2$, $\delta^*_{\mathcal A_1}(q_0,abac)=q_2$). Alternatively, it is possible to output $b$ ($\delta^*_{\mathcal B_1}(p_1,b) = p_2$, $\delta^*_{\mathcal A_1}(q_0,ab)=q_0$), read another $a$ ($\delta^*_{\mathcal B_1}(p_2,a) = p_3$) and then produce $b$ ($\delta^*_{\mathcal B_1}(p_3,b) = p_3$, $\delta^*_{\mathcal A_1}(q_0,ab)=q_0$) or $c$ ($\delta^*_{\mathcal B_1}(p_3,c) = p_3$, $\delta^*_{\mathcal A_1}(q_0,ac)=q_2$). We see that the outputs $bb$ and $bc$ can each be obtained in two different ways, namely as one single output block, or as two output blocks with an input block in between (w.r.t.\ $\mathcal B_1$; we do not care about the number of blocks w.r.t.\ $\mathcal A_1$).
The maximal number of considered output blocks (w.r.t.\ the target synchronization) is parameterized in the formal definition. We take a look at the tree in Fig.~\ref{subfig:tree}; this tree contains all the state transformations that can be induced by the described possibilities. The possibilities to produce output in one single block are reflected by the edges $(v_0,v_1)$, $(v_0,v_2)$ and $(v_0,v_3)$, each representing the state transformation induced by the respective output block. The possibilities to produce output in two blocks are reflected by the edges $(v_0,v_4)$, representing the state transformation induced by the first output block, $(v_4,v_5)$, representing the state transformation induced by the intermediate input block, and $(v_5,v_6)$ and $(v_5,v_7)$, representing the state transformation induced by the respective second output block. \end{example} \begin{figure}[t!] \vskip -0.5cm \centering \begingroup \begin{subfigure}{\textwidth} \begin{center} \begin{tikzpicture}[scale=0.8,thick] \tikzstyle{every state}+=[inner sep=3pt, minimum size=3pt]; \node[state, accepting, initial, initial text=$\mathcal A_1$] (0) {$q_0$}; \node[state, right of= 0] (1) {$q_1$}; \node[state, accepting, right of= 1] (2) {$q_2$}; \draw[->] (0) edge[bend left=15] node {$a$} (1); \draw[->] (1) edge[bend left=15] node {$b$} (0); \draw[->] (1) edge node {$c$} (2); \draw[->] (2) edge[loop above] node {$a$} (); \begin{scope}[xshift=7cm] \node[state, accepting, initial, initial text=$\mathcal B_1$] (0) {$p_0$}; \node[state, right of=0] (1) {$p_1$}; \node[state, accepting, right of=1] (2) {$p_2$}; \node[state, accepting, right of=2] (3) {$p_3$}; \draw[->] (0) edge node {$a$} (1); \draw[->] (1) edge node {$b$} (2); \draw[->] (1) edge[loop above] node {$a$} (); \draw[->] (2) edge node {$a$} (3); \draw[->] (2) edge[loop above] node {$b$,$c$} (); \draw[->] (3) edge[loop above] node {$b$,$c$} (); \end{scope} \end{tikzpicture} \end{center} \vskip -1.3em \caption{ $\Sigma_\mathbbmtt{i} = \{a\}$,
$\Sigma_\mathbbmtt{o} = \{b,c\}$. $\mathcal A_1$ recognizes $S_1$, $\mathcal B_1$ recognizes $T_1$. $S_1$ is $(12)^*(1^* + 2^*)$-controlled and $T_1$ is $1^*2^*1^*2^*$-controlled, thus both have finite $\mathit{shiftlag}$. $S_1$ is already in the canonical form. } \label{subfig:dfas} \end{subfigure} \begin{subfigure}{0.49\textwidth} \begin{center} \begin{tikzpicture}[thick] \tikzstyle{textshift}=[xshift=0.7em,yshift=-0.7em] \draw (-5,5.5) rectangle (-2,5); \draw (-5,5) rectangle (-4,4.5); \node at (-5,5.5) (a) {}; \node at (-4.5,5.5) (b) {}; \node at (-4,5.5) (c) {}; \node at (-3.5,5.5) (d) {}; \node at (-3,5.5) (e) {}; \node at (-2.5,5.5) (f) {}; \node at (-5,5) (g) {}; \node at (-4.5,5) (h) {}; \node[textshift] at (a) {$a$}; \node[textshift] at (b) {$a$}; \node[textshift] at (c) {$a$}; \node[textshift] at (d) {$a$}; \node[textshift] at (e) {$a$}; \node[textshift] at (f) {$a$}; \node[textshift] at (g) {$b$}; \node[textshift] at (h) {$c$}; \draw[dashed] ($ (g) - (0,0.5) $) -- ($ (g) + (0,1) $); \draw[dashed] ($ (h) - (0,0.5) $) -- ($ (h) + (0,1) $); \draw[dashed] ($ (-4,5) - (0,0.5) $) -- ($ (-4,5) + (0,1) $); \draw[dashed] ($ (-2,5) - (0,0.5) $) -- ($ (-2,5) + (0,1) $); \node at ($ (g) + (0,1.2) $) {$q_0$}; \node at ($ (h) + (0,1.2) $) {$q_0$}; \node at ($ (-4,5) + (0,1.2) $) {$q_2$}; \node at ($ (-2,5) + (0,1.2) $) {$q_2$}; \node at ($ (a) - (0.5,0.5) $) {$\mathcal A_1\colon$}; \end{tikzpicture} \begin{tikzpicture}[thick] \tikzstyle{textshift}=[xshift=0.7em,yshift=-0.7em] \draw (-5,5.5) rectangle (-2.5,5); \draw (-2.5,5.5) rectangle (-2,5); \draw (-2,5.5) rectangle (-1.5,5); \draw (-1.5,5.5) rectangle (-1,5); \node at (-5,5.5) (a) {}; \node at (-4.5,5.5) (b) {}; \node at (-4,5.5) (c) {}; \node at (-3.5,5.5) (d) {}; \node at (-3,5.5) (e) {}; \node at (-2.5,5.5) (f) {}; \node at (-2,5.5) (g) {}; \node at (-1.5,5.5) (h) {}; \node at (-1,5.5) (i) {}; \node[textshift] at (a) {$a$}; \node[textshift] at (b) {$a$}; \node[textshift] at (c) {$a$}; \node[textshift] at (d) 
{$a$}; \node[textshift] at (e) {$a$}; \node[textshift] at (f) {$b$}; \node[textshift] at (g) {$a$}; \node[textshift] at (h) {$c$}; \draw[dashed] ($ (a) - (0,0.5) $) -- ($ (a) + (0,0.5) $); \draw[dashed] ($ (f) - (0,0.5) $) -- ($ (f) + (0,0.5) $); \draw[dashed] ($ (g) - (0,0.5) $) -- ($ (g) + (0,0.5) $); \draw[dashed] ($ (h) - (0,0.5) $) -- ($ (h) + (0,0.5) $); \draw[dashed] ($ (i) - (0,0.5) $) -- ($ (i) + (0,0.5) $); \node at ($ (a) + (0,0.7) $) {$p_0$}; \node at ($ (f) + (0,0.7) $) {$p_1$}; \node at ($ (g) + (0,0.7) $) {$p_2$}; \node at ($ (h) + (0,0.7) $) {$p_3$}; \node at ($ (i) + (0,0.7) $) {$p_3$}; \node at ($ (a) - (0.5,0.25) $) {$\mathcal B_1\colon$}; \end{tikzpicture} \end{center} \caption{Runs of $\mathcal A_1$ and $\mathcal B_1$ on synchronizations of $(aaaaaa,bc)$. $\mathcal A_1$ runs on the canonical synchronization, i.e., on $abacaaaa$. To illustrate this, input and output are drawn one above the other.} \label{subfig:runs} \end{subfigure} \begin{subfigure}{0.49\textwidth} \begin{center} \begin{tikzpicture}[thick,scale=0.8,baseline=(current bounding box.base)] \tikzstyle{level 1}=[sibling distance=16mm] \path[level distance=12mm] node (root){\small$(p_1,q_0)$} child{ node(0){\small$(p_2,q_1)$} } child{ node(1){\small$(p_2,q_0)$} } child{ node(2){\small$(p_2,q_2)$} } child{ node(3){\small$(p_2,q_0)$} child{ node(4){\small$(p_3,q_0)$} child{ node(5){\small$(p_3,q_0)$} } child{ node(6){\small$(p_3,q_2)$} } } } ; \path (root) -- coordinate[midway] (r0) (0); \path (root) -- coordinate[midway] (r1) (1); \path (root) -- coordinate[midway] (r2) (2); \path (root) -- coordinate[midway] (r3) (3); \path (3) -- coordinate[midway] (r4) (4); \path (4) -- coordinate[midway] (r5) (5); \path (4) -- coordinate[midway] (r6) (6); \node [dark-gray,right] at (r0) {\small$b$}; \node [dark-gray,right] at (r1) {\small$bb$}; \node [dark-gray,right] at (r2) {\small$bc$}; \node [dark-gray,right] at (r3) {\small$b$}; \node [dark-gray,right] at (r4) {\small$a$}; \node 
[dark-gray,right] at (r5) {\small$b$}; \node [dark-gray,right] at (r6) {\small$c$}; \node [dark-gray,above right] at (root) {\small$v_0$}; \node [dark-gray,above right] at (0) {\small$v_1$}; \node [dark-gray,above right] at (1) {\small$v_2$}; \node [dark-gray,above right] at (2) {\small$v_3$}; \node [dark-gray,above right] at (3) {\small$v_4$}; \node [dark-gray,above right] at (4) {\small$v_5$}; \node [dark-gray,above right] at (5) {\small$v_6$}; \node [dark-gray,above right] at (6) {\small$v_7$}; \end{tikzpicture} \end{center} \caption{$\mathrm{STT}^1(aa,p_1,q_0)$. The combination of both runs shown in Fig.~\ref{subfig:runs} is reflected by the rightmost path in the state transformation tree.} \label{subfig:tree} \end{subfigure} \caption{ A source language $S_1$ and a target language $T_1$ are given in Fig.~\ref{subfig:dfas}. A pair and two different synchronizations of said pair as well as runs are given in Fig.~\ref{subfig:runs}. The state transformation tree $\mathrm{STT}^1(aa,p_1,q_0)$ is given in Fig.~\ref{subfig:tree}; its edges are labeled with the respective associated words, and its vertices are named for easier reference in Ex.~\ref{ex:intuition}. For a formal definition of STTs see Def.~\ref{def:inputstt}; for an explanation of this specific tree see Ex.~\ref{ex:intuition}. } \label{fig:inputstt} \endgroup \vskip -0.5cm \end{figure} Now that we have given some intuition, we formally introduce input state transformation trees; a graphical representation of their construction is given in Fig.~\ref{fig:STT}. As seen in the example, the edges of such a tree alternately represent the state transformations induced by output and input blocks.
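The state transformations listed in Ex.~\ref{ex:intuition} can be checked mechanically against the transition tables of $\mathcal A_1$ and $\mathcal B_1$ transcribed from Fig.~\ref{subfig:dfas}. This is an illustrative sketch, not part of the formal development:

```python
# Partial transition functions of A1 and B1, read off Fig. (subfig:dfas).
A1 = {("q0", "a"): "q1", ("q1", "b"): "q0", ("q1", "c"): "q2",
      ("q2", "a"): "q2"}
B1 = {("p0", "a"): "p1", ("p1", "a"): "p1", ("p1", "b"): "p2",
      ("p2", "a"): "p3", ("p2", "b"): "p2", ("p2", "c"): "p2",
      ("p3", "b"): "p3", ("p3", "c"): "p3"}

def run(delta, q, w):
    """Extended transition function delta*(q, w); KeyError if undefined."""
    for a in w:
        q = delta[(q, a)]
    return q
```

For instance, `run(B1, "p1", "bc")` returns `"p2"` and `run(A1, "q0", "abac")` returns `"q2"`, matching the single-block production of $bc$ in the example.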
\begin{figure} \vskip -0.5cm \centering \begin{tikzpicture}[thick,scale=0.95] \node[draw,circle,fill,inner sep=0pt,minimum size=3pt] (v2) at (-0.5,5) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v1) at (-3,4) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v3) at (-2.5,4) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v4) at (-2,4) {}; \draw (v1) -- (v2); \draw (v3) -- (v2); \draw (v4) -- (v2); \draw [dark-gray] (-2.5,4) ellipse (1.5 and 0.25); \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v10) at (-0.5,3.5) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v5) at (0,3.5) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v6) at (1,3.5) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v7) at (2.5,3.5) {}; \draw (v2) -- (-0.5,3.5) node [draw,circle,fill,inner sep=0pt,minimum size=3pt] {}; \draw (v2) -- (v5); \draw (v2) -- (v6); \draw (v2) -- (v7); \draw [dark-gray](v6) ellipse (2 and 0.25); \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v11) at (0,2) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v8) at (1,2) {}; \node [draw,circle,fill,inner sep=0pt,minimum size=3pt] (v9) at (2.5,2) {}; \draw (v8); \draw (1,2) -- (-1,0) -- (3,0) -- (v8); \draw (1,3.5) -- (0,2); \draw (v6) -- (v8); \draw (v6) -- (v9); \draw [dark-gray] (v8) ellipse (2 and 0.25); \draw [dotted](v10) -- (-5.5,0) -- (-4.5,0) -- (v10); \draw [dotted](v5) -- (-4,0) -- (-3,0) -- (v5); \draw [dotted](v7) -- (5,0) -- (6,0) -- (2.5,3.5); \draw [dotted](v11) -- (-2.5,0) -- (-1.5,0) -- (v11); \draw [dotted](v9) -- (3.5,0) -- (4.5,0) -- (v9); \node at (0,5) {\ \ \ \small$(p,q)$}; \node at (1,0.5) {\small$\mathrm{STT}^{i-1}(x'',p'',q')$}; \node at (1.5,3.5) {\ \ \ \small$(p',q')$}; \node at (1.5,2) {\ \ \ \small$(p'',q')$}; \node [dark-gray] at (-4,4.5) {\small$\mathrm{Reach}_0$}; \node [dark-gray] at (3,4) {\small$\mathrm{Reach}_1$}; \node [dark-gray] at (2.5,2.5) 
{\small$\mathrm{Reach}_{(x'',p',q')}$}; \node [dark-gray, left] at (v2) {\small$v_0$}; \node [dark-gray, left] at (v6){\small$v_1$}; \node [dark-gray, left] at (v8) {\small$v_2$}; \end{tikzpicture} \caption{ Schema of the input state transformation tree $\mathrm{STT}^{i}(x,p,q)$ for some $i > 0$. Cf.~Def.~\ref{def:inputstt}. Let $x'x''$ be a factorization of $x$ with $x', x'' \in \Sigma_\mathbbmtt{i}^+$, and let $y \in \Sigma_\mathbbmtt{o}^+$ be such that $|x'|= |y|$ and $\delta_\mathcal A^*(q,(x',y)) = q'$ and $\delta_\mathcal B^*(p,y) = p'$, and let $\delta_\mathcal B^*(p',w) = p''$ for some $w \in \Sigma_\mathbbmtt{i}^+$, then $\mathrm{STT}^{i}(x,p,q)$ contains a path $v_0v_1v_2$ labeled $(p,q)(p',q')(p'',q')$ such that $v_0$ is the root, $v_1$ is the root of $t^{i-1}_{(x'',p',q')}$, and $v_2$ is the root of $\mathrm{STT}^{i-1}(x'',p'',q')$. } \label{fig:STT} \vskip -0.5cm \end{figure} \begin{definition}[Input state transformation tree]\label{def:inputstt} For $i \geq 0$, $p \in Q_\mathcal B$, $q \in Q_\mathcal A$ and $x \in \Sigma_\mathbbmtt{i}^*$, the \emph{state transformation tree} $\mathrm{STT}^i(x,p,q)$ is a tree over $Q_\mathcal B \times Q_\mathcal A$ defined inductively. \begin{itemize}[topsep=1em] \item For $i = 0$, the tree $\mathrm{STT}^0(x,p,q)$ is built up as follows. Let $\mathrm{Reach}_0 \subseteq Q_\mathcal B \times Q_\mathcal A$ be the smallest set such that $(p',q') \in \mathrm{Reach}_0$ if there is some $y \in \Sigma_\mathbbmtt{o}^*$ with $|y| \leq |x|$ such that $\delta_\mathcal A^*(q,(x,y)) = q'$ and $\delta_\mathcal B^*(p,y) = p'$. 
{\quad\small(This set represents state transformations induced by output blocks that fully consume $x$.)} Then the tree $\mathrm{STT}^0(x,p,q)$ is defined as $(p,q)({r_1}\dots{r_n})$ for $\mathrm{Reach}_0 = \{r_1,\dots,r_n\}$, meaning it contains a child for every state transformation that can be induced w.r.t.\ $\mathcal A$ and $\mathcal B$ starting from $q$ and $p$, respectively, by the input segment $x$ together with an output segment that consumes $x$ (w.r.t.\ $\mathcal A$) consisting of a single output block (w.r.t.\ $\mathcal B$). \item For $i > 0$, the tree $\mathrm{STT}^i(x,p,q)$ is built up as follows. Let $\mathrm{Reach}_1 \subseteq \Sigma_\mathbbmtt{i}^* \times Q_\mathcal B \times Q_\mathcal A$ be the smallest set such that $(x'',p',q') \in \mathrm{Reach}_1$ if \begin{itemize} \item $x = x'x''$ with $x',x''\in \Sigma_\mathbbmtt{i}^+$ such that there is a $y \in \Sigma_\mathbbmtt{o}^+$ with $|y| = |x'|$, and \item $\delta_\mathcal A^*(q,(x',y)) = q'$ and $\delta_\mathcal B^*(p,y) = p'$. \end{itemize} \vskip -0.5em {\quad\small(This set represents state transformations induced by output blocks that partially consume $x$.)} For $(x'',p',q') \in \mathrm{Reach}_1$, let $\mathrm{Reach}_{(x'',p',q')} \subseteq \Sigma_\mathbbmtt{i}^* \times Q_\mathcal B \times Q_\mathcal A$ be the smallest set such that $(x'',p'',q') \in \mathrm{Reach}_{(x'',p',q')}$ if $\delta_\mathcal B^*(p',w) = p''$ for some $w \in \Sigma_\mathbbmtt{i}^+$. {\quad\small(These sets represent state transformations induced by intermediate input blocks.)} Furthermore, let the tree $t_{(x'',p',q')}^{i-1}$ be defined as $(p',q')(\mathrm{STT}^{i-1}(r_1)\dots\mathrm{STT}^{i-1}(r_n))$ for $\mathrm{Reach}_{(x'',p',q')}\allowbreak =\allowbreak \{r_1,\dots,r_n\}$.
Then the tree $\mathrm{STT}^i(x,p,q)$ is defined as \begingroup \setlength{\abovedisplayskip}{.5\columnsep } \setlength{\belowdisplayskip}{.5\columnsep } \begin{equation*} \mathrm{STT}^0(x,p,q) \circ (p,q)(t_{s_1}^{i-1}\dots t_{s_n}^{i-1}) \end{equation*} \endgroup for $\mathrm{Reach}_1 = \{s_1,\dots,s_n\}$, meaning it contains a path for every sequence of state transformations that can be induced w.r.t.\ $\mathcal A$ and $\mathcal B$ starting from $q$ and $p$, respectively, by the input segment $x$ together with an output segment that consumes $x$ (w.r.t.\ $\mathcal A$) consisting of at most $i+1$ output blocks (w.r.t.\ $\mathcal B$). Additionally, for output segments that have a common prefix of output blocks, the state transformations induced by the common prefix of blocks are represented by the same nodes in the tree. \end{itemize} Intuitively, edges in such a tree are associated with the words that induced the state transformation, e.g., as shown in Fig.~\ref{subfig:tree}. \end{definition} The maximal degree of a tree as in Def.~\ref{def:inputstt} depends on the input word used as a parameter. Our goal is to have state transformation trees whose maximal degree is independent of this parameter. Therefore, we introduce \emph{reduced trees}. The idea is that if, for some input word, different outputs induce the same state transformations, then only one representation is kept in the input state transformation tree. \begin{definition}[Reduced tree]\label{def:redtree} A tree $t \in T_{\Sigma}$ over some alphabet $\Sigma$ is called \emph{reduced} if for each node $v$ there exist no two children $u,u'$ of $v$ such that the subtrees rooted at $u$ and $u'$ are isomorphic. For a tree $t \in T_{\Sigma}$, let $\mathit{red}(t) \in T_\Sigma$ denote its reduced variant. The reduced variant of a tree can easily be obtained by a bottom-up computation where, for each node, duplicate subtrees rooted at its children are removed.
\end{definition} Note that for each $i$, the set of reduced input state transformation trees with parameter $i$ is a finite set. Hitherto, we have discussed how to capture state transformations induced by an input word together with output words of the same or smaller length. Additionally, we need to capture state transformations induced by an output word together with input words of the same or smaller length. Therefore, we introduce a notion similar to input state transformation trees, namely, \emph{output state transformation trees}. A formal definition can be found in the appendix. Furthermore, we need a notion that captures state transformations that can be induced by an input resp.\ output word alone, see Def.~\ref{def:stf} below. Then, we are ready to define profiles. \begin{definition}[State transformation function]\label{def:stf} For each $w \in \Sigma_\mathbbmtt{i}^* \cup \Sigma_\mathbbmtt{o}^*$, we define the \emph{state transformation function w.r.t.\ $w$} as the function $\tau_w\colon Q_\mathcal B \to Q_\mathcal B$ with $\tau_w(p) = \delta_\mathcal B^*(p,w)$. \end{definition} \subparagraph*{Profiles.} Recall that $T \subseteq T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^*)^n$, and our goal is to show that there is a bound $k$ such that it suffices to focus on constructing $T_k$-controlled uniformizers instead of $T$-controlled uniformizers, meaning that we can focus on uniformizers in which the length of output blocks is bounded by $k$ after the lag has exceeded $\gamma$ at some point. The core of the proof is to show that if the lag between input and output becomes very large ($\gg \gamma$), it is not necessary to consider the complete input that is ahead to determine the next output block; an abstraction (in the form of profiles) suffices. Note that if the lag has exceeded $\gamma$ at some point, the number of remaining output blocks is at most $\lceil n/2 \rceil$.
As a result, given an input word $x \in \Sigma_\mathbbmtt{i}^*$, we are interested in the state transformation that is induced by $(x,\pi_\mathbbmtt{o}(w))$ in $\mathcal A$ (recognizing $S_\mathit{can}$) and by $w$ in $\mathcal B$ (recognizing $T$) for each word $w \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^*$ such that $|\pi_\mathbbmtt{o}(w)| \leq |x|$ and $\mathit{shift}(w) \leq \lceil n/2 \rceil$. In words, we are interested in the state transformations that can be induced by $x$ together with outputs of the same or smaller length that are composed of at most $\lceil n/2 \rceil$ different output blocks. For $x \in \Sigma_\mathbbmtt{i}^*$, this kind of information is accurately represented by the set of all reduced input state transformation trees with parameters $x$ and $\lceil n/2 \rceil$. The same considerations, with the roles of input and output switched, apply for an output word $y \in \Sigma_\mathbbmtt{o}^*$. \begin{definition}[Input profile] For $x \in \Sigma_\mathbbmtt{i}^*$, we define its \emph{profile} $P_x$ as $(\tau_x,\mathrm{STT}_x^{\lceil n/2 \rceil})$, where \begingroup \setlength{\abovedisplayskip}{.5\columnsep } \setlength{\belowdisplayskip}{.5\columnsep } \begin{equation*} \mathrm{STT}_x^{\lceil n/2 \rceil} = \bigcup_{(p,q) \in Q_\mathcal B \times Q_\mathcal A} \{\mathit{red}\bigl(\mathrm{STT}^{\lceil n/2 \rceil}(x,p,q)\bigr)\}. \end{equation*} \endgroup \end{definition} Similarly, we define \emph{output profiles}; a formal definition can be found in the appendix. A remark on the number of different profiles is in order. Profiles are based on reduced STTs with parameter $\lceil n/2 \rceil$, where $n$ bounds $\mathit{shiftlag}(T)$. The size of the set of these STTs is non-elementary in $n$, and hence so is the number of profiles. This implies a non-elementary complexity of our decision procedure.
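The reduction $\mathit{red}$ used in the profile definition can be sketched with a simple nested-pair tree representation (the representation and names are ours, not the paper's): a node is a pair of a label and a list of children, and duplicate subtrees are detected via a canonical form for unordered trees.

```python
def canon(t):
    """Canonical form of an unordered tree t = (label, children):
    isomorphic subtrees receive equal canonical forms, since the
    children's canonical forms are sorted."""
    label, children = t
    return (label, tuple(sorted(canon(c) for c in children)))

def red(t):
    """Bottom-up reduction: for each node, keep only one child per
    isomorphism class of subtrees (cf. Def. of reduced trees)."""
    label, children = t
    seen, kept = set(), []
    for c in children:
        rc = red(c)
        key = canon(rc)
        if key not in seen:
            seen.add(key)
            kept.append(rc)
    return (label, kept)
```

Since `canon` ignores child order, two subtrees compare equal exactly when they are isomorphic as unordered trees.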
Furthermore, let $\mathcal P_\mathbbmtt{i}$ be the set $\bigcup_{x \in \Sigma_\mathbbmtt{i}^*} \{ P_x \}$ of all input profiles and $\mathcal P_\mathbbmtt{o}$ be the set $\bigcup_{y \in \Sigma_\mathbbmtt{o}^*} \{ P_y \}$ of all output profiles. For a $P \in \mathcal P_\mathbbmtt{i} \cup \mathcal P_\mathbbmtt{o}$, we call $z$ a \emph{representative} of $P$ if $z$ is a shortest word such that $P = P_z$. We show that from the profiles of two words $x_1$ and $x_2$ one can compute the profile of the word $x_1x_2$. Hence, the set of profiles can be equipped with a concatenation operation, i.e., for words $x_1$ and $x_2$ we let $P_{x_1}P_{x_2} = P_{x_1x_2}$. We obtain the following. \begin{restatable}{lemma}{lemmamonoid}\label{lemma:monoid} The set of input profiles is a monoid with concatenation; the set of output profiles is a monoid with concatenation. \end{restatable} A word $x \in \Sigma_\mathbbmtt{i}^*$ and its profile $P_x$ are called \emph{idempotent} if $P_x = P_{xx}$. As a consequence of Ramsey's Theorem (see e.g.\ \cite{diestel2000graduate}), we obtain the following lemma. \begin{restatable}[Consequence of Ramsey]{lemma}{lemmaramsey}\label{lemma:ramsey} There is a computable $r \in \mathbbm{N}$ such that each word $x \in \Sigma_\mathbbmtt{i}^*$ with $|x| \geq r$ contains a non-empty idempotent factor for the concatenation of profiles. \end{restatable} Now, we have the right tools to prove that the existence of a $T$-controlled uniformizer implies that there also exists a $T_k$-controlled uniformizer for a computable $k$. For the remainder, we fix two bounds. \begin{assumption}\label{asm:bounds} Assume $r_1$ is chosen as in Lemma \ref{lemma:ramsey} and $r_2$ is chosen as the smallest bound on the length of representatives of output profiles. W.l.o.g., assume $r_1,r_2 > \gamma$.
\end{assumption} Finally, we are ready to prove the key lemma, that is, Lemma~\ref{lemma:shortregular}, which shows that it is sufficient to consider uniformizers in which the length of output blocks is bounded. Recall that a uniformizer works asynchronously, which can lead to large lag. First, we show that if the output is lagged by more than $r_1$ symbols, meaning that the input that is ahead contains an idempotent factor, it suffices to consider output blocks whose length depends on the idempotent factor. Secondly, we show that it suffices to consider uniformizers in which the output is ahead by at most $r_2$ symbols. The combination of both results yields Lemma~\ref{lemma:shortregular}. Recall that, by Asm.~\ref{asm:nshift}, $T \subseteq T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^*)^n$ and, by Def.~\ref{def:M}, $T_i = T \cap \left( T_{\leq \gamma} \cdot (\Sigma_\mathbbmtt{i}^*+\Sigma_\mathbbmtt{o}^{\leq i})^n \right)$ for $i \geq 0$. \begin{restatable}{lemma}{lemmashortregular}\label{lemma:shortregular} If $S$ has a $T$-controlled uniformizer, then $S$ has a $T_k$-controlled uniformizer for a computable $k \geq 0$. \end{restatable} The proof of the above lemma yields that $k$ can be chosen as $r_1 + r_2$. This concludes the first part of the proof of Theorem~\ref{thm:regular}. For the second part, we prove that the question whether $S$ has a $T_i$-controlled uniformizer for some $i$ reduces to the question whether $T_i(S)$ has a subset uniformizer, for a suitable $T_i(S)$ as defined below in Lemma~\ref{lemma:transformregular}. \subparagraph*{Reduction.} The next lemma shows that from $S$ a regular $T_i(S)$ can be obtained such that $T_i(S)$ consists of all $T_i$-controlled synchronizations $w$ with $\llbracket w \rrbracket \in \llbracket S \rrbracket$.
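On the $\tau$-component of profiles, concatenation (Lemma~\ref{lemma:monoid}) is simply composition of state transformation functions, and the idempotents guaranteed by Lemma~\ref{lemma:ramsey} can be found by iterating powers, since in a finite monoid every element has an idempotent power. A sketch with functions represented as dictionaries (the representation is ours):

```python
def compose(tau1, tau2):
    """State transformation of x1 x2: apply tau_{x1}, then tau_{x2}."""
    return {p: tau2[tau1[p]] for p in tau1}

def idempotent_power(tau):
    """Return a power e = tau^k with compose(e, e) == e; such a power
    exists because the powers of tau range over a finite set."""
    power = tau
    while compose(power, power) != power:
        power = compose(power, tau)
    return power
```

For example, for $\tau = \{0 \mapsto 1, 1 \mapsto 2, 2 \mapsto 1\}$ the square $\tau^2$ is already idempotent.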
\begin{restatable}{lemma}{lemmatransformregular}\label{lemma:transformregular} For $i \geq 0$, the language $T_i(S) = \{ w \in \Sigma_{\mathbbmtt{i}\mathbbmtt{o}}^* \mid w \in T_i \text{ and }\allowbreak \llbracket w \rrbracket \in \llbracket S \rrbracket\}$ is a $T_i$-controlled effectively constructible regular language. \end{restatable} We are now ready to prove the main theorem of this paper. \begin{proof}[Proof sketch of Theorem~\ref{thm:regular}] By Lemma \ref{lemma:shortregular} we know that if $S$ has a $T$-controlled uniformizer, then $S$ has a $T_k$-controlled uniformizer for a computable $k \geq 0$. Let $T_k(S)$ be defined as in Lemma~\ref{lemma:transformregular}. We can show that $S$ has a $T$-controlled uniformizer iff $\mathrm{dom}(\llbracket S \rrbracket)\allowbreak =\allowbreak \mathrm{dom}(\llbracket T_k(S) \rrbracket)$ and $T_k(S)$ has a subset uniformizer, which is decidable by Theorem~\ref{thm:sunif}. \end{proof} \section{Conclusion}\label{sec:conclusion} In this paper we considered uniformization by subsequential transducers in which the allowed input/output behavior is specified by a regular set of synchronizations, the so-called resynchronized uniformization problem. An overview of our results can be found in Table~\ref{tab:overview}. For future work, we want to study other problems of this kind, e.g., whether the resynchronized uniformization problem is decidable for a given rational relation as source language and a given `recognizable' target language, in the sense that the target language is controlled by a synchronization language that synchronizes recognizable relations. \subparagraph*{Acknowledgements.} The author would like to thank her supervisor Christof L{\"o}ding for suggesting this topic and for his helpful comments, and to thank the anonymous reviewers of this and an earlier version of the paper for their feedback, which greatly improved the presentation.
\section{Introduction} \input{sec_Intro.tex} \section{Problem Formulation} \label{Prob} \input{sec_Prob.tex} \section{Main Results} \label{Main} \input{sec_Main.tex} \section{Achievability Schemes and DoF Lower Bounds} \label{Achi} \input{sec_Achi.tex} \section{Transfer Function View} \label{Tran} \input{sec_Tran.tex} \input{bib.tex} \ifdefinedTrue \appendices \section{DoF upper bound I} \label{App_A} \input{sec_App_A.tex} \section{DoF upper bound II} \label{App_B} \input{sec_App_B.tex} \section{Coding schemes for $M\le N$ and Proof of Theorems \ref{thm:DoF} and \ref{thm:UBLB}} \label{App_C} \input{sec_App_C.tex} \fi \IEEEtriggeratref{3} \end{document} \ifCLASSINFOpdf \else \fi \subsection{Achievability proof of Theorem \ref{thm:DoF}} To prove Theorem \ref{thm:DoF}, we distinguish three cases: When $N\le \frac{1}{2}M$, it is unnecessary to code across topologies. With a DoF-optimal code for each topology, it is easy to verify that the following sum DoF is achievable (a.s.): \begin{equation} \label{eq:4_1} N(4pq^3+10p^2q^2+8p^3q+2p^4)=2Np(1+q). \end{equation} When $\frac{1}{2}M < N\le \frac{2}{3}M$, we use codes across the $\{z_1, z_2\}$ topologies and the $\{z_3, z_4\}$ topologies, together with per-topology DoF-optimal codes for the remaining topologies. By Lemma \ref{lemma1}, it follows that we can achieve (a.s.) a sum DoF of \begin{align} \begin{split} \label{eq:4_2} &pq^34N+p^2q^2(6N+2M)+p^3q(4N+2M)+p^42N\\ =&2N(p^2+2pq^2)+2Mp^2q. \end{split} \end{align} Lastly, let us consider the case where $\frac{2}{3}M < N\le M$ and $p\le \frac{1}{2}$. Over a long block of $n$ channel uses, the $f$-topology occurs approximately $np^4$ times, while each $z_i$-topology occurs approximately $np^3q$ times. We first code across the $\{z_1, z_2, z_3, z_4, f\}$ topologies and fully consume the $f$-topologies. Since $p^4 \le p^3q$, we then use $\{z_1, z_2\}$- and $\{z_3, z_4\}$-topological codes on the remaining $z_i$-topologies. 
For the other topologies, we simply employ a DoF-optimal code on each topology. Thus, by Lemmas \ref{lemma1} and \ref{lemma2}, we can achieve a sum DoF of $\setlength{\medmuskip}{2mu} pq^34N+p^2q^2(6N+2M)+p^4(6N+2M)+(p^3q-p^4)(4N+2M)$, or equivalently \begin{equation} \label{eq:4_3} 2N(p^2+2pq^2)+2Mp^2q, \end{equation} which, interestingly, coincides with (\ref{eq:4_2}). The achievability of Theorem \ref{thm:DoF} is hence established by (\ref{eq:4_1})--(\ref{eq:4_3}). \subsection{Proof of $\eta^\mathrm{lb}$ of Theorem \ref{thm:UBLB}} When $\frac{2}{3}M < N\le M$ and $p> \frac{1}{2}$, the priority is again to use the $\{z, f\}$-topological code as much as possible. For the remaining topologies, a DoF-optimal code is employed on each of them. Noting that $p^4>p^3q$, we conclude that the following sum DoF is achievable (a.s.): $\setlength{\medmuskip}{2mu} pq^34N+p^2q^2(6N+2M)+p^3q(6N+2M)+(p^4-p^3q)\frac{4}{3}M$, or equivalently \begin{equation} \label{eq:4_4} pq^34N+p^2q(6N+2M)+(p^4-p^3q)\frac{4}{3}M, \end{equation} proving the $\eta^\mathrm{lb}$ of Theorem \ref{thm:UBLB}. 
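The algebraic simplifications behind (\ref{eq:4_1})--(\ref{eq:4_3}) can be sanity-checked with exact rational arithmetic. The sketch below (the values of $N$ and $M$ are arbitrary illustrative choices, not parameters from the paper) verifies that each achievable sum-DoF expression matches its closed form for every $p$, with $q = 1-p$, and that the third-case expression indeed coincides with (\ref{eq:4_2}).

```python
from fractions import Fraction as F

def sum_dof_case1(N, p):
    """Sum DoF for N <= M/2, per-topology coding only: eq. (4.1)."""
    q = 1 - p
    return N * (4*p*q**3 + 10*p**2*q**2 + 8*p**3*q + 2*p**4)

def sum_dof_case2(N, M, p):
    """Sum DoF for M/2 < N <= 2M/3, coding across {z1,z2} and {z3,z4}: eq. (4.2)."""
    q = 1 - p
    return (p*q**3*4*N + p**2*q**2*(6*N + 2*M)
            + p**3*q*(4*N + 2*M) + p**4*2*N)

def sum_dof_case3(N, M, p):
    """Sum DoF for 2M/3 < N <= M, p <= 1/2, consuming f-topologies first: eq. (4.3)."""
    q = 1 - p
    return (p*q**3*4*N + p**2*q**2*(6*N + 2*M)
            + p**4*(6*N + 2*M) + (p**3*q - p**4)*(4*N + 2*M))

N, M = 3, 5  # hypothetical antenna numbers, used only for the check
for num in range(1, 10):
    p = F(num, 10)
    q = 1 - p
    assert sum_dof_case1(N, p) == 2*N*p*(1 + q)
    closed_form = 2*N*(p**2 + 2*p*q**2) + 2*M*p**2*q
    assert sum_dof_case2(N, M, p) == closed_form
    assert sum_dof_case3(N, M, p) == closed_form  # (4.3) coincides with (4.2)
```

Because `Fraction` arithmetic is exact, the check is a true polynomial-identity test at nine rational points, not a floating-point approximation.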
\subsection{Block-level interference alignment: $24$ DoF achievable} A moment of reflection on (\ref{eq:5_3}) and the block structure of the $\overline H_i$ super channel matrix suggests the following simple precoding scheme via \emph{block-level} interference alignment: \begin{equation} \label{eq:5_4} A = \left[ \arraycolsep=3pt\def\arraystretch{1.0} \begin{array}{cccc} 0 & H_1^\dagger & \tilde I_4 & 0 \\ \tilde I_4 & 0 & 0 & H_3^\dagger \\ 0 & 0 & 0 & H_3^\dagger \\ 0 & H_1^\dagger & 0 & 0 \\ 0 & H_1^\dagger & 0 & H_3^\dagger \\ \end{array} \right], \quad B = \left[ \arraycolsep=3pt\def\arraystretch{1.0} \begin{array}{cccc} 0 & H_2^\dagger & 0 & 0 \\ 0 & 0 & 0 & H_4^\dagger \\ \tilde I_4 & 0 & 0 & H_4^\dagger \\ 0 & H_2^\dagger & \tilde I_4 & 0 \\ 0 & H_2^\dagger & 0 & H_4^\dagger \\ \end{array} \right], \end{equation} where $\tilde I_4$ consists of the first $3$ columns of identity matrix $I_4$ and $H_i^\dagger$ is the pseudo inverse of $H_i$. This leads to the following $\mathbf H_\mathrm{eff}$: \begin{equation} \label{eq:5_5} \setlength{\dashlinedash}{.4pt} \setlength{\dashlinegap}{.8pt} \left[ \arraycolsep=2pt\def\arraystretch{0.9} \begin{array}{cccc:cccc} 0 & I_3 & \tilde H_1 & 0 & 0 & I_3 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & H_2H_4^\dagger \\ 0 & 0 & 0 & H_1H_3^\dagger & 0 & 0 & 0 & 0 \\ 0 & I_3 & 0 & 0 & 0 & I_3 & \tilde H_2 & 0 \\ 0 & I_3 & 0 & H_1H_3^\dagger & 0 & I_3 & 0 & H_2H_4^\dagger \vspace{1pt} \\ \hdashline &&&&&&& \\ [-0.9em] 0 & 0 & 0 & 0 & 0 & H_4H_2^\dagger & 0 & 0 \\ \tilde H_3 & 0 & 0 & I_3 & 0 & 0 & 0 & I_3 \\ 0 & 0 & 0 & I_3 & \tilde H_4 & 0 & 0 & I_3 \\ 0 & H_3H_1^\dagger & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & H_3H_1^\dagger & 0 & I_3 & 0 & H_4H_2^\dagger & 0 & I_3 \\ \end{array} \right], \end{equation} where $\tilde H_i$ denotes the first $3$ columns of $H_i$. With this precoding scheme, the 1$^{st}$ and 5$^{th}$ columns are nulled at $\overline Y_1$, while the 2$^{nd}$ and 6$^{th}$ columns are aligned. 
In addition, the non-zero columns are linearly independent, so $12$ variables may be solved at $\overline Y_1$. Similar arguments hold at $\overline Y_2$. So this scheme can achieve a sum DoF of $24$. \subsection{Refined interference alignment: $26$ DoF achievable} A simple refinement of the above scheme leads to even higher DoF. Specifically, zooming into each $H_i$ matrix quickly reveals that it has 1 dimension of null space which we may exploit. For example, replace each $H_1^\dagger$ and $H_2^\dagger$ in (\ref{eq:5_4}) with $[G_1, \phi_1, \phi_3]$ and $[G_2, \phi_2]$, respectively, where $G_i$ consists of the first $2$ columns of $H_i^\dagger$ and $\phi_i$ is a basis vector of the null space of $H_i$. Similarly, substitute $[G_3, \phi_3]$ and $[G_4, \phi_4, \phi_2]$ for each $H_3^\dagger$ and $H_4^\dagger$ in (\ref{eq:5_4}), respectively, and the $\mathbf H_\mathrm{eff}$ now becomes: \begin{equation*} \label{eq:5_6} \small \setlength{\dashlinedash}{.4pt} \setlength{\dashlinegap}{.8pt} \left[ \arraycolsep=0pt\def\arraystretch{1.1} \begin{array}{cccc:cccc} 0 & [\tilde I_3, 0, H_1\phi_3] & \tilde H_1 & 0 & 0 & [\tilde I_3, 0] & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & [H_2[G_4, \phi_4],0] \\ 0 & 0 & 0 & H_1[G_3, \phi_3] & 0 & 0 & 0 & 0 \\ 0 & [\tilde I_3, 0, H_1\phi_3] & 0 & 0 & 0 & [\tilde I_3, 0] & \tilde H_2 & 0 \\ 0 & [\tilde I_3, 0, H_1\phi_3] & 0 & H_1[G_3, \phi_3] & 0 & [\tilde I_3, 0] & 0 & [H_2[G_4, \phi_4],0] \vspace{1pt}\\ \hdashline &&&&&&& \\ [-1.0em] 0 & 0 & 0 & 0 & 0 & H_4[G_2, \phi_2] & 0 & 0 \\ \tilde H_3 & 0 & 0 & [\tilde I_3, 0] & 0 & 0 & 0 & [\tilde I_3, 0, H_4\phi_2] \\ 0 & 0 & 0 & [\tilde I_3, 0] & \tilde H_4 & 0 & 0 & [\tilde I_3, 0, H_4\phi_2] \\ 0 & [H_3[G_1, \phi_1], 0] & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & [H_3[G_1, \phi_1], 0] & 0 & [\tilde I_3, 0] & 0 & H_4[G_2, \phi_2] & 0 & [\tilde I_3, 0, H_4\phi_2] \\ \end{array} \right] \normalsize \end{equation*} where $\tilde I_3$ denotes the first $2$ columns of $I_3$. 
Now consider $[G_1, \phi_1, \phi_3]$ first. The essence is to take away one of the dimensions used by interference alignment and save it for the interference-nulling vectors $\phi_1$ and $\phi_3$. Since $\phi_1$ vanishes at $\overline Y_1$ and so does $\phi_3$ at $\overline Y_2$, these two vectors occupy only $1$ dimension at either receiver, but they enable us to send one more variable through the network. The rationale for $[G_4, \phi_4, \phi_2]$ is the same, and the linear independence of the non-zero columns at each receiver is maintained. Hence the optimal $26$ DoF is achievable with this scheme. Moreover, we obtain the code shown in Figure \ref{fig:zf_code} after a slight optimization (reducing the number of $\phi_i$ filters). It is also clear in this view that this code is a space-time code, obtained by interference alignment over space and time (topologies).
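The interference-nulling idea behind the $\phi_i$ vectors can be illustrated with a toy example (the matrices below are hypothetical stand-ins, not the actual $H_i$ of the scheme): a fat channel matrix always has a non-trivial null space, and a stream precoded along a null vector vanishes at that receiver while remaining visible through a different channel.

```python
def matvec(A, x):
    """Multiply matrix A (list of rows) by vector x."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

# Hypothetical 3x4 channel (4 transmit, 3 receive dimensions); its rows are
# chosen by hand to be orthogonal to the vector phi below.
H = [[1, 1, 0, 0],
     [0, 1, 1, 0],
     [0, 0, 1, 1]]
phi = [1, -1, 1, -1]                 # null vector of H, found by inspection
assert matvec(H, phi) == [0, 0, 0]   # the stream is nulled at this receiver

G = [[1, 0, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 0]]                   # a different (hypothetical) channel
assert matvec(G, phi) != [0, 0, 0]   # ...but is still visible elsewhere
```

This is the same accounting as in the refined scheme: the nulled direction costs no dimension at one receiver, so the extra variable rides for free there.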
\section{Supplemental Material} We show in Fig.~\ref{expoself} how the occupancies of individual orbitals evolve with $U$. Beyond $U = 3\,\mathrm{eV}$ both orbitals become half-filled, whence an {\it emergent} particle-hole symmetry appears. We show the imaginary part of the self-energies, Im$\Sigma_{a}(i\omega_{n})$, for a range of correlation parameters. The low-energy features of the imaginary part of the self-energy are fitted to the form $C + A\omega_{n}^{\gamma}$. As shown in Fig.~\ref{expoself}, the exponent $\gamma$ in the OSMT phase is robust over a range of $U$, more specifically at lower temperatures, where it saturates to a value of $0.5$. This is clear evidence for non-LFL metallicity in the OSMP. \vspace{0.5em} \begin{figure}[h!] \centering \subfigure[]{\label{f:C11}\epsfig{file=occ_new.pdf,trim=0in 0in 0in 0.0in, clip=true,width=0.49\linewidth}}\hspace{-0.0\linewidth} \subfigure[]{\label{f:C11}\epsfig{file=expo_self.pdf,trim=0in 0in 0in 0.0in, clip=true,width=0.49\linewidth}}\hspace{-0.0\linewidth} \subfigure[]{\label{f:C21}\epsfig{file=linear_fit_final.pdf,trim=0in 0in 0in 0.0in, clip=true,width=0.49\linewidth}}\hspace{-0.0\linewidth} \caption{(a) The evolution of the occupancies of each orbital. The local U(1) symmetry emerges as $U$ is increased. $-$Im$\Sigma(\omega_{n})$ is fitted to the form $-$Im$\Sigma(\omega_{n})= C + A \omega_{n}^{\gamma}$. In the Fermi liquid regime $\gamma$ must approach $1.0$; in the critical regime, the exponent is robust against the interaction parameters. The robustness of this critical behavior of the single-particle self-energy is shown in (b). The low-energy power-law fit to the self-energy (c) shows that the exponent evolves with temperature before finally saturating to $\sim 0.5$ at the lowest temperatures.} \label{expoself} \end{figure} We stress the robustness of the critical thermal scaling collapse for Im$\chi_{s}(\omega,T)$. 
Here, we show it for a different $U > U_{c,OSMT}$, which again exhibits an excellent thermal scaling collapse $T^{-\alpha_{s}}$Im$\chi_{s}(\omega,T)\sim F(\omega/T)$ at $U = 8.0$ eV (Fig.~\ref{spinscale}) with a slightly different exponent. This shows that the anomalous scaling persists over a range of $U/t_{a,b}$, attesting to its robustness, and indicates a quantum critical {\it phase}. Moreover, the full-width at half-maximum (FWHM) extracted from Im$\chi_{s}(\omega,T)/\omega$ is linear in $\omega$ at low $T$. \vspace{0.5em} \begin{figure}[h!] \centering \subfigure[]{\label{f:C21}\epsfig{file=new_spin_U8.pdf,trim=0in 0in 0in 0.0in, clip=true,width=0.98\linewidth}}\hspace{-0.0\linewidth} \caption{ Im$\chi_{s}(\omega,T)$ shows a proper thermal scaling collapse Im$\chi_{s}(\omega,T)\sim F(\omega/T)$ in the critical regime at $U = 8.0$ eV. The inset shows the FWHM extracted from Im$\chi_{s}(\omega,T)/\omega$, which is nearly linear over the range of temperature where the scaling collapse is perfect.} \label{spinscale} \end{figure} \vspace{0.5em} Finally, the fact that the one- and two-fermion propagators obey $\omega/T$-scaling in the OSMP with fractional exponents is interesting. First, this implies that the corresponding relaxation rates, evaluated from $\Gamma_{M}(T)=i[\partial$ln$ M(\omega,T)/\partial\omega]|^{-1}_{\omega=0}$ with $M=G_{aa}(\omega),\chi_{s}^{loc},\chi_{c}^{loc}$, are all {\it linear} in $T$. That the exponents of the power-law behavior of the spin- and charge-fluctuation propagators are distinct reflects the importance of vertex corrections. These are absent in both IPT and large-$N$ solvers in the DMFT context, but are encoded in CT-QMC. Second, our finding that relaxation rates are linear in $T$ also reveals the interacting character (noticed earlier by Glossop {\it et al.}~\cite{glossop}) of the ``strange'' metal phase, wherein the non-linear coupling between quantum critical modes is finite. 
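The power-law form $C + A\omega_{n}^{\gamma}$ used in the fits above can be illustrated on synthetic data; the sketch below (with arbitrary illustrative values of $C$, $A$, and $\gamma$, not the CT-QMC output) recovers the exponent as the slope of $\log(\mathrm{Im}\Sigma - C)$ versus $\log\omega_{n}$, the same procedure one would apply once the intercept $C$ is known.

```python
import math

# Synthetic Matsubara self-energy with a known exponent (gamma = 0.5, as in
# the saturated OSMP regime); C and A are arbitrary illustration values.
C, A, gamma_true = 0.2, 1.3, 0.5
w = [0.01 * n for n in range(1, 200)]
im_sigma = [C + A * x**gamma_true for x in w]

# With C subtracted, log(ImSigma - C) = log(A) + gamma * log(omega_n), so the
# exponent is the slope of a least-squares straight-line fit in log-log form.
xs = [math.log(x) for x in w]
ys = [math.log(s - C) for s in im_sigma]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
assert abs(slope - gamma_true) < 1e-9
```

On real data one would fit $C$ simultaneously (it is the $\omega_{n}\to 0$ intercept), but the exponent-extraction step is the same.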
Thus, our finding of distinct exponents in the dynamical spin- and charge fluctuations is {\it not} an artifact. Experimentally, such behaviour has been seen in $f$-electron systems near local quantum-critical {\it points} (see Glossop {\it et al.}~\cite{glossop} and references therein). {\bf Analytic rationalization} \vspace{0.3cm} Here, we describe how our CTQMC results can be understood analytically. As in hidden-FL or FL$^{*}$ theories, we will not need to invoke proximity to a $T=0$ antiferromagnetically ordered state to rationalize our findings. This is because DMFT accesses the dynamical but {\it local} spin and charge fluctuations and the changes in their analytic structure as the OSMP is approached from the LFL side as $U_{aa,bb},U_{ab}$ are increased. In contrast, in the spirit of Anderson and Casey~\cite{casey}, we show that using the Schotte-Schotte approach to bosonization of the impurity model of DMFT provides a clean interpretation of our results in terms of bosonic ``tomonagons''. {\bf Analytic Insight:} Our results bear similarity to the hidden-FL view of Anderson as follows. In the hidden-FL theory, applied deep in the doped Mott insulator phase of a $t-J$ model, the exact eigenstates $|\Psi\rangle$ are related to the unprojected states $|\Phi\rangle$ by a Gutzwiller projection: $|\Psi\rangle =\Pi_{i}(1-n_{i\uparrow}n_{i\downarrow})|\Phi\rangle=P|\Phi\rangle$. Using $Pc_{i\sigma}^{\dag}P=c_{i\sigma}^{\dag}(1-n_{i,-\sigma})$, the one-electron Green function $G_{ij}(t)=-i\langle\Psi|T[c_{i\sigma}(t)c_{j\sigma}^{\dag}(0)]|\Psi\rangle$ is written (in the $U\rightarrow\infty$ limit) as $G_{ij}(t)=G_{ij}^{0}(t)G_{ij}^{*}(t)$, where $G_{ij}^{0}(t)=-i\langle\Phi|c_{i\sigma}(t)c_{j\sigma}^{\dag}(0)|\Phi\rangle$ is the free electron propagator, and $G_{ij}^{*}(t)=\langle\Phi|T[(1-n_{i,-\sigma})(t)(1-n_{j,-\sigma})(0)]|\Phi\rangle$ represents the scattering processes involving opposite-spin fermions due to the Hubbard interaction. 
Arguing that the latter propagator can be computed by analogy with the seminal ``X-ray edge'' problem, Anderson concludes that $G_{ij}(t)\simeq t^{-(1+\eta)}$, with $\eta$ an ``s-wave'' scattering phase shift at the Fermi surface. The fact that $0< \eta <1$ implies that the infra-red pole structure of $G(k,\omega)$ is replaced by a branch-cut singularity, leading to non-LFL metallicity. In our model (see main text), onset of the OSMP gives rise to a similar projective aspect due to Mott localization of the $b$-states: in the OSMP, at low energy below the selective-Mott gap in $G_{bb}(\omega)$, the $b$ states cannot recoil during an $a-b$ fermion scattering process (which transfers an $a$-fermion into a $b$-fermion state), simply because there are now {\it no} lower-Hubbard band states into which they can recoil. $V_{ab}(k)$ can now only create {\it upper}-Hubbard band states in the $b$-fermion sector, as described in the text. The above argument of Anderson can now be applied to the non-local hybridization (or inter-band hopping) term in exactly the same way as above, with precisely the same result: Landau FL metallicity, as signified by an infra-red pole structure of $G_{aa}(\omega)$, is destroyed due to the emergent projective aspect in the $b$-fermion sector, which controls the low-energy physics in the OSMP. \section{Bosonization} In this section, we detail how the projective aspect at the root of the emergence of ``strange'' metallicity permits further analytic insight. Specifically, it allows us to use bosonization for the underlying impurity model in the OSMP regime to analytically rationalize the high-dimensional spin-charge separation, wherein the exponents characterizing the power-law decay of dynamic spin and charge correlations are distinct. We now analyze a suitable impurity limit of the two-band Hubbard Model (2BHM) via bosonization to analytically rationalize this exciting feature observed (main text) in the DMFT calculations using CT-QMC. 
We emphasize that, in the orbital selective Mott phase (OSMP) of the 2BHM, the Mott-localized $b$-orbital states interact with the metallic $a$-orbital states via $U_{ab}$ and $J_{H}$ in the regime where the interband one-electron hybridization, $V_{ab}(k)$, is irrelevant: as detailed in the main text, this is the regime where DMFT for the two-band Hubbard model yields an OSMP. This impurity model is then written as \begin{align*} H_{imp} & = \sum_{k,\sigma}\tilde{\epsilon}_{ka}a^{\dagger}_{k,\sigma}a_{k,\sigma}+Un_{0a\uparrow}n_{0a\downarrow}+ U_{ab}n_{0a}n_{0b} \\ & -J{\bf S}_{0a}\cdot{\bf S}_{0b}-\mu\sum_{\sigma} n_{0a\sigma} \end{align*} where $\tilde{\epsilon}_{ka}$ is the $a$-band dispersion in the OSMP, and the $b$-band states are understood to be Mott localized, i.e., the lower $b$-Hubbard band consists of singly occupied states and double occupancy of $b$-band states is forbidden in the asymptotic low-energy limit. To clarify the roles of $U_{ab},J$ in the emergence of the novel features, we begin with $U_{ab}=0$ and $J=0$, where low-energy correlated Landau FL metallicity obtains, and consider their effects later. The impurity model \begin{equation} H^{0}_{imp}=\sum_{k,\sigma}\tilde{\epsilon}_{ka}a^{\dagger}_{k,\sigma}a_{k,\sigma}+Un_{0a\uparrow}n_{0a\downarrow}-\mu\sum_{\sigma} n_{0a\sigma} \end{equation} can be recast as \begin{align*} H^{0}_{imp}& =\sum_{\sigma}\Big[iv_{F}\int_{-\infty}^{\infty}dx\,\psi_{\sigma}^{\dagger}(x)\partial_{x}\psi_{\sigma}(x)+\frac{U}{2}:\psi_{\sigma}^{\dagger}(0) \\ & \psi_{\sigma}(0)::\psi_{-\sigma}^{\dagger}(0)\psi_{-\sigma}(0):-\mu:\psi_{\sigma}^{\dagger}(0)\psi_{\sigma}(0):\Big] \end{align*} where $\psi_{\sigma}(x)$ are chiral (right-moving) fermion fields describing the radial (outward and inward from the impurity, ``$0$'') band motion of $a$-fermions. 
Here $v_{F}=\partial_{k}\epsilon_{ka}|_{k=k_{F}}$, and $:A:$ denotes normal ordering of $A$, i.e., $:A: = A - \langle 0|A|0\rangle$, with $|0\rangle$ denoting the ground state. Next, use the bosonization identity $\psi_{\sigma}(x)=\frac{1}{\sqrt{2\pi\alpha}}e^{i\Phi_{\sigma}(x)}$, $\Phi_{\sigma}(x)=\sqrt{\pi}[\phi_{\sigma}(x)-\int_{-\infty}^{x}dx^{'}\Pi_{\sigma}(x^{'})]$, where $\phi_{\sigma}(x)$, $\Pi_{\sigma}(x)$ are conjugate bosonic fields satisfying $[\phi_{\sigma}(x),\Pi_{\sigma'}(x')]=i\delta_{\sigma\sigma'}\delta(x-x')$, and $\alpha$ is a short-distance cut-off. Introducing the charge and spin fields $\phi_{c}=\sum_{\sigma}\phi_{\sigma}$, $\phi_{s}=\sum_{\sigma}\sigma\phi_{\sigma}$, we can ``split'' $H^{0}_{imp}$ into charge (c) and spin (s) sectors as $H^{0}_{imp}$ = $H^{0,c}_{imp}$ + $H^{0,s}_{imp}$: \begin{align*} H_{imp}^{0,c} & = \frac{v_{F}}{2}\int_{-\infty}^{\infty}dx\,[\Pi_{c}^{2}(x)+(\partial_{x}\phi_{c}(x))^{2}] \\ & -\frac{Un_{0a\downarrow}-\mu}{\sqrt{2}\pi}\partial_{x}\phi_{c}(0)-\frac{U}{8\pi^{2}}(\partial_{x}\phi_{c}(0))^{2} \end{align*} where we used $:\psi_{\sigma}^{\dagger}(0)\psi_{\sigma}(0):=\frac{1}{2\pi}\partial_{x}\phi_{\sigma}(0)$ and $n_{0a\downarrow}=\langle\psi_{\downarrow}^{\dagger}(0)\psi_{\downarrow}(0)\rangle$. Now we resolve the bosonic field operators into their Fourier components as \begin{equation} \phi_{\nu}(x)= \sum_{k}\frac{1}{\sqrt{2|k|}}(a_{\nu,k}e^{ikx}+a^{\dagger}_{\nu,k}e^{-ikx})e^{-\alpha|k|/2} \end{equation} \begin{equation} \Pi_{\nu}(x)= -i\sum_{k}\sqrt{|k|/2}(a_{\nu,k}e^{ikx}-a^{\dagger}_{\nu,k}e^{-ikx})e^{-\alpha|k|/2} \end{equation} with $\nu=c,s$. Eqns. 
(5),(6) become \begin{align*} H^{0,c}_{imp}& =\sum_{k> 0} \omega_{k} {a_{ck}}^{\dagger}a_{ck}+i{\sqrt{2\rho}}(Un_{0a\downarrow}-\mu)\sum_{k> 0}\sqrt{\omega_{k}}(a_{ck}\\&-{a_{ck}}^{\dagger})-\rho\frac{U}{2}\sum_{k,k'> 0}\sqrt{\omega_{k}\omega_{k'}}(a_{ck}-a_{ck}^{\dagger})(a_{ck'}-a_{ck'}^{\dagger}) \end{align*} \begin{equation} H^{0,s}_{imp}=\sum_{k> 0} \omega_{k}a_{sk}^{\dagger}a_{sk}+\rho\frac{U}{2}\sum_{k,k'> 0}\sqrt{\omega_{k}\omega_{k'}}(a_{sk}-a_{sk}^{\dagger})(a_{sk'}-a_{sk'}^{\dagger}) \end{equation} where $\omega_{k} = kv_{F}$ and $\rho = \frac{1}{2\pi v_{F}}$. At this point, one can show that correlated Landau FL properties follow upon using an equation-of-motion approach. In particular, the local charge susceptibility is $\chi_{c} = \frac{2}{\rho(U+U_{c})} + \frac{i\omega}{2\pi v_{F}^{2}} \frac{2U_{c}^{2}}{(U+U_{c})^{2}}$, and the local spin susceptibility is $\chi_{s} = \frac{1}{(U_{c}-U)} + \frac{i\omega}{2\pi v_{F}^{2}} \frac{U_{c}^{2}}{(U-U_{c})^{2}}$. The LFL behavior is clear and, as expected, $\chi_{c}$ is suppressed while $\chi_{s}$ is enhanced with increasing $U$. \subsubsection{Effects of U$_{ab}$ and J} In the OSMP, the metallic $a$-fermions strongly scatter off the selectively Mott-localized $b$-fermions via both $U_{ab}$ and $J$. We will now show how the different exponents in the power-law fall-off of the charge and spin fluctuation propagators in the infra-red, found in the DMFT(CTQMC) studies in the main text, emerge as a consequence of $(i)$ the mapping of the underlying impurity problem onto the famed X-ray edge problem, and $(ii)$ the different scattering potential experienced by the ``metallic'' $a$-fermions in the charge and spin-fluctuation channels. First, consider the effect of $U_{ab}$. 
Since the $b$-states are Mott localized, $H_{U_{ab}}=U_{ab}n_{0a}n_{0b}$ in Eq.~(1) describes the scattering of metallic $a$-fermions off Mott-localized $b$-fermions: importantly, due to the asymptotically valid projection of double occupancies in the $b$-fermion sector, the $b$-fermions now cannot recoil during the scattering by $U_{ab}$, simply because there are {\it no} empty lower-Hubbard band states in the $b$-fermion sector into which they can recoil (since the lower Hubbard band of the $b$-sector corresponds to singly occupied states). This leads to an {\it exact} mapping of this case to the famed X-ray edge problem (PWA). In bosonized form~\cite{schotte}, $U_{ab}n_{0a}n_{0b}$ becomes $U_{ab}\sum_{\sigma,\sigma^{'}}:\psi^{\dagger}_{\sigma}(0)\psi_{\sigma}(0):n_{0b\sigma^{'}}(0)$. Thus, in the charge sector we get $H^{c}_{imp} = H^{0,c}_{imp} + U_{ab}\sum_{\sigma,\sigma'}:\psi^{\dagger}_{\sigma}(0)\psi_{\sigma}(0):n_{0b\sigma'}(0)$. Using $:\psi^{\dagger}_{\sigma}(0)\psi_{\sigma}(0): = \frac{1}{2\pi}\partial_{x}\phi_{\sigma}$, we thus find that \begin{align*} H_{imp}^{c} & =\frac{v_{F}}{2}\int_{-\infty}^{\infty}dx\,[\Pi_{c}^{2}(x)+(\partial_{x}\phi_{c}(x))^{2}] \\ & -\frac{U/2-\mu+U_{ab}n_{0b}}{\sqrt{2}\pi}\partial_{x}\phi_{c}(0)+\frac{U}{8\pi^{2}}(\partial_{x}\phi_{c}(0))^{2} \end{align*} At this level ($J=0$), the spin sector remains unaffected. In terms of the $a_{ck},a^{\dagger}_{ck}$ oscillator modes, we now have \begin{align*} H^{c}_{imp} & =\sum_{k> 0} \omega_{k}a_{ck}^{\dagger}a_{ck} +i\sqrt{2\rho}(Un_{0a\downarrow} \\ & -\mu+U_{ab}n_{0b})\sum_{k> 0}\sqrt{\omega_{k}}(a_{ck}-a_{ck}^{\dagger}) \\ & -\rho\frac{U}{2}\sum_{k,k'> 0}\sqrt{\omega_{k}\omega_{k'}}(a_{ck}-a_{ck}^{\dagger})(a_{ck'}-a_{ck'}^{\dagger}) \end{align*} Because the one-electron hybridization is irrelevant, $U_{ab}n_{0a}n_{0b}$ incoherently scatters $a$- and $b$-fermions from the impurity into the bath and vice-versa. Thus, 
a propagating $a$-fermion `sees' either a local potential $U_{ab}$ (when $n_{0b}=1$) or $0$ (when $n_{0b}=0$) as a function of time. Thus, the term $U_{ab}n_{0b}\sum\sqrt{\omega_{k}}(a_{ck}-a_{ck}^\dagger)$ in Eq.~(13) above acts to `shift' the charge-bosonic modes (first term in Eq.~(13)) in precisely the same way as in the venerated X-ray edge problem. Following Schotte and Schotte, we can now write two Hamiltonians, corresponding to $n_{0b} = 0$ ($H_{I}$; no scattering) and $n_{0b} = 1$ ($H_{F}$; $U_{ab}$-scattering). Employing a unitary transformation, which is nothing but the boundary-condition-changing operator of Affleck et al., \begin{equation} H_{F}=U^\dagger H_{I}U, \quad U = \exp\Big[i\frac{2\delta}{\pi}\phi_{c}(0)\Big] \end{equation} with $\delta=\frac{U_{ab}}{\sqrt{2}v_{F}}\frac{U_{c}}{U+U_{c}}$, the two-particle correlator is \begin{align*} S(t) & =\langle a_{0\sigma'}^{\dagger}(t)\psi_{\sigma}(t)\psi_{\sigma}^{\dagger}(0)a_{0\sigma'}(0)\rangle \\ & = \langle U_{\sigma'}(t)\psi_{\sigma}(t)\psi^{\dagger}_{\sigma}(0)U_{\sigma'}(0)\rangle \sim t^{-(2\delta/\pi - (\delta/\pi)^{2})} \end{align*} giving \begin{equation} -\mathrm{Im}S(\omega) \sim \frac{\sin[\pi(2\delta/\pi-(\delta/\pi)^{2})]}{|\omega|^{(2\delta/\pi-(\delta/\pi)^{2})}} \end{equation} At finite $T$, this displays explicit $\omega/T$ scaling, qualitatively rationalizing the DMFT results. Here, $\delta = \tan^{-1} (U_{ab}\rho(0))$. Next, consider the term $H_{J}=J{\bf S}_{0a}\cdot{\bf S}_{0b}$. In a partially filled two-orbital Hubbard model in its OSMP state, the effective on-site $J$ is negative, leading to a tendency toward a local high-spin (HS) state: however, this does not imply a tendency toward ferromagnetism, since the inter-site exchange between local moments is usually antiferromagnetic. Here, we focus on the quantum paramagnetic state. 
Now $H'=H_{U_{ab}}+H_{J}$ is expressible as $(U_{ab}+J/4)\sum_{\sigma}n_{0a\sigma}n_{0b\sigma} + (U_{ab}-J/4)\sum_{\sigma}n_{0a\sigma}n_{0b,-\sigma} + J/2\sum_{\sigma}a_{0\sigma}^{\dag}a_{0,-\sigma}b_{0,-\sigma}^{\dag}b_{0\sigma}$. Since we focus on the OSMP with Mott-localized $b$-fermion states, the effect of these terms in $H'$ is {\it exactly} analogous to that in the seminal X-ray edge problem. Specifically, the first two terms represent the distinct scattering potential experienced by a ``metallic'' $a$-fermion whilst scattering off a (Mott) localized $b$-fermion with the same spin (first term in $H'$, $V=(U_{ab}+J/4)$) or with opposite spin (second term in $H'$, $V=(U_{ab}-J/4)$). From the mapping onto the X-ray edge problem, it now follows that $(i)$ the ``equal-spin excitonic'' fluctuation propagator, $\chi_{ab}^{\sigma\sigma}(\omega)=\int d\tau e^{i\omega\tau}\langle T_{\tau}[a_{i\sigma}^{\dag}b_{i\sigma}(\tau);b_{i\sigma}^{\dag}a_{i\sigma}(0)]\rangle \simeq |\omega|^{-\eta_{1}}$, with $\eta_{1}=(2\delta_{1}/\pi -(\delta_{1}/\pi)^{2})$ and $\delta_{1}=$tan$^{-1}[(U_{ab}+J/4)\rho_{0}]$; $(ii)$ the ``opposite-spin excitonic'' fluctuation propagator, $\chi_{ab}^{\sigma,-\sigma}(\omega)=\int d\tau e^{i\omega\tau}\langle T_{\tau}[a_{i\sigma}^{\dag}b_{i,-\sigma}(\tau);b_{i,-\sigma}^{\dag}a_{i\sigma}(0)]\rangle \simeq |\omega|^{-\eta_{2}}$, with $\eta_{2}=(2\delta_{2}/\pi -(\delta_{2}/\pi)^{2})$ and $\delta_{2}=$tan$^{-1}[(U_{ab}-J/4)\rho_{0}]$; and $(iii)$ the ``interband spin-flip excitonic'' fluctuation propagator, $\chi_{ab}^{sf}(\omega)= \int d\tau e^{i\omega\tau}\langle T_{\tau}[a_{i\sigma}^{\dag}b_{i,\sigma}(\tau);b_{i,-\sigma}^{\dag}a_{i,-\sigma}(0)]\rangle \simeq |\omega|^{-\eta_{3}}$, with $\eta_{3}=(2\delta_{3}/\pi -(\delta_{3}/\pi)^{2})$ and $\delta_{3}=$tan$^{-1}(J\rho_{0}/2)$. 
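The exponent formulas in $(i)$--$(iii)$ are straightforward to tabulate. The following sketch (with purely illustrative values of $U_{ab}$, $J$, and $\rho_{0}$, not fitted to the CT-QMC data) shows how the three distinct scattering potentials map, via $\delta=\tan^{-1}(V\rho_{0})$ and $\eta=2\delta/\pi-(\delta/\pi)^{2}$, to three distinct power-law exponents.

```python
import math

def phase_shift(V, rho0):
    """Scattering phase shift delta = arctan(V * rho0)."""
    return math.atan(V * rho0)

def xray_exponent(delta):
    """X-ray-edge exponent eta = 2(delta/pi) - (delta/pi)^2."""
    d = delta / math.pi
    return 2 * d - d * d

# Illustrative parameter values only (not fits to the CT-QMC data).
U_ab, J, rho0 = 1.0, 0.4, 0.3
d1 = phase_shift(U_ab + J / 4, rho0)   # equal-spin channel, V = U_ab + J/4
d2 = phase_shift(U_ab - J / 4, rho0)   # opposite-spin channel, V = U_ab - J/4
d3 = phase_shift(J / 2, rho0)          # interband spin-flip channel, V = J/2
etas = [xray_exponent(d) for d in (d1, d2, d3)]
# Distinct scattering potentials give distinct power-law exponents:
assert etas[0] > etas[1] > etas[2] > 0
```

Since $\arctan$ is monotone and $\eta$ is increasing in $\delta$ on the relevant range, any splitting of the channel potentials by $J$ necessarily splits the exponents.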
The corresponding exponent in the spin fluctuation channel can be readily evaluated by using the above results and repeating the procedure detailed above for the charge fluctuation channel, but now with a different local scattering potential (related to $(ii),(iii)$ above). The different exponents in the power-law fall-off of the charge and spin susceptibilities found in our DMFT(CTQMC) study in the main text are thus rationalizable as arising from the different scattering potentials experienced by the ``metallic'' $a$-fermions in the ``charge'' and ``spin'' fluctuation channels whilst scattering off the Mott-localized $b$-states. Obviously, the selective Mottness is a key factor in this emergent behavior, since it is only in this regime that the underlying impurity problem of DMFT maps onto the venerated X-ray edge problem, facilitating infra-red singular behavior. Since DMFT is a self-consistently embedded single-impurity problem, the above singular behaviors carry over to the lattice problem, as long as one restricts oneself to the selective-metallic states without conventional symmetry breaking. We thus arrive at one of our central results: the high-$D$ spin-charge separation alluded to in the main text arises from $(i)$ suppression of the recoil of the ``heavy'' $b$-fermion during scattering processes (due to $U_{ab},J$) in the OSMP due to selective Mott localization, and $(ii)$ the different local scattering potentials (hence, different scattering phase shifts) in the charge and spin fluctuation sectors in the corresponding X-ray edge problem. Finally, for smaller $U_{ab}<U_{ab}^{OSMP}$, the one-electron hybridization $V_{ab}(k)$ is relevant since, in the absence of selective Mott localization of the $b$-band fermions, the $b$-fermions can dynamically recoil at low energies, leading to recovery of the lattice Kondo scale and to correlated Landau FL metallicity. This is again in full qualitative accord with our DMFT(CTQMC) numerics. \end{document}
\section{Introduction} Three special classes of regular polytope \cite{Ziegler} exist in every dimension: the regular simplex, the hypercube, and the cross-polytope. For each of these, it is usual to construct a lower triangular matrix that enumerates, for each dimension, the number of faces of each lower dimension of the polytope in question. For the regular simplex, we obtain the (infinite) matrix \seqnum{A135278} that begins $$\left( \begin{array}{ccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 3 & 3 & 1 & 0 & 0 & 0 & 0 \\ 4 & 6 & 4 & 1 & 0 & 0 & 0 \\ 5 & 10 & 10 & 5 & 1 & 0 & 0 \\ 6 & 15 & 20 & 15 & 6 & 1 & 0 \\ 7 & 21 & 35 & 35 & 21 & 7 & 1 \\ \end{array} \right),$$ or alternatively its reversal \seqnum{A074909} $$\left( \begin{array}{ccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 & 0 \\ 1 & 3 & 3 & 0 & 0 & 0 & 0 \\ 1 & 4 & 6 & 4 & 0 & 0 & 0 \\ 1 & 5 & 10 & 10 & 5 & 0 & 0 \\ 1 & 6 & 15 & 20 & 15 & 6 & 0 \\ 1 & 7 & 21 & 35 & 35 & 21 & 7 \\ \end{array} \right),$$ (depending on convention). The first matrix is the ordinary Riordan array $\left(\frac{1}{(1-x)^2}, \frac{x}{1-x}\right)$, while the second matrix is its reversal. For the case of the hypercube, the corresponding matrix \seqnum{A038207} and its reversal \seqnum{A013609} begin $$\left( \begin{array}{ccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 0 & 0 \\ 4 & 4 & 1 & 0 & 0 & 0 & 0 \\ 8 & 12 & 6 & 1 & 0 & 0 & 0 \\ 16 & 32 & 24 & 8 & 1 & 0 & 0 \\ 32 & 80 & 80 & 40 & 10 & 1 & 0 \\ 64 & 192 & 240 & 160 & 60 & 12 & 1 \\ \end{array} \right)$$ and $$\left( \begin{array}{ccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 & 0 \\ 1 & 4 & 4 & 0 & 0 & 0 & 0 \\ 1 & 6 & 12 & 8 & 0 & 0 & 0 \\ 1 & 8 & 24 & 32 & 16 & 0 & 0 \\ 1 & 10 & 40 & 80 & 80 & 32 & 0 \\ 1 & 12 & 60 & 160 & 240 & 192 & 64 \\ \end{array} \right).$$ These are the ordinary Riordan array $\left(\frac{1}{1-2x}, \frac{x}{1-2x}\right)$ and its reversal. 
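The rows of both displayed triangles have simple closed forms, $T(n,k)=\binom{n+1}{k+1}$ for the simplex and $T(n,k)=\binom{n}{k}2^{n-k}$ for the hypercube (the closed forms here are read off from the displayed entries). A quick check also confirms an Euler-type identity: every row has alternating sum $1$.

```python
from math import comb

# Closed forms for the face matrices: T(n,k) = C(n+1, k+1) for the simplex
# (A135278) and T(n,k) = C(n,k) * 2^(n-k) for the hypercube (A038207).
simplex = [[comb(n + 1, k + 1) for k in range(n + 1)] for n in range(7)]
cube = [[comb(n, k) * 2 ** (n - k) for k in range(n + 1)] for n in range(7)]

assert simplex[3] == [4, 6, 4, 1]  # tetrahedron: 4 vertices, 6 edges, 4 faces, 1 cell
assert cube[3] == [8, 12, 6, 1]    # 3-cube: 8 vertices, 12 edges, 6 faces, 1 cell

# Euler-type check: each row's alternating sum equals 1 in both families,
# since sum_k (-1)^k C(n+1,k+1) = 1 and sum_k (-1)^k C(n,k) 2^(n-k) = (2-1)^n.
for row in simplex + cube:
    assert sum((-1) ** k * t for k, t in enumerate(row)) == 1
```

The two assertions on row $3$ match the fourth rows of the displayed matrices.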
We have that $$\left(\frac{1}{1-2x}, \frac{x}{1-2x}\right)=\mathbf{B}^2,$$ where $\mathbf{B}=\left(\frac{1}{1-x}, \frac{x}{1-x}\right)$ is the binomial matrix $\left(\binom{n}{k}\right)_{n,k \ge 0}$ (Pascal's triangle \seqnum{A007318}). For our purposes in this note, we can also regard the binomial matrix as the exponential Riordan array $$\mathbf{B}=\left[e^x, x\right],$$ in which case the face matrix for the hypercubes is given by $\mathbf{B}^2=\left[e^{2x}, x\right]$ (or its reversal). We now note that $$ \left(\frac{1}{(1-x)^2}, \frac{x}{1-x}\right)\cdot \mathbf{B}^{-1}=\left(\frac{1}{1-x},x\right)$$ is the matrix that begins $$\left( \begin{array}{ccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 0 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 0 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 0 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 0 \\ 1 & 1 & 1 & 1 & 1 & 1 & 1 \\ \end{array} \right).$$ Similarly, we have $$ \left[e^{2x}, x\right] \mathbf{B}^{-1}=\left[e^x,x\right]=\mathbf{B},$$ which begins $$\left( \begin{array}{ccccccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 1 & 0 & 0 & 0 & 0 \\ 1 & 3 & 3 & 1 & 0 & 0 & 0 \\ 1 & 4 & 6 & 4 & 1 & 0 & 0 \\ 1 & 5 & 10 & 10 & 5 & 1 & 0 \\ 1 & 6 & 15 & 20 & 15 & 6 & 1 \\ \end{array} \right).$$ In both cases, we obtain centrally symmetric, or palindromic, matrices, whose rows are the $h$-vectors of the polytopes in question. We shall call a lower-triangular matrix $(a_{n,k})_{n,k\ge 0}$ \emph{Pascal-like} if $a_{n,0}=a_{n,n}=1$ and $a_{n,n-k}=a_{n,k}$. If $M$ is a Pascal-like matrix, then the matrix given by the matrix product $M \cdot \mathbf{B}$, where $\mathbf{B}$ is the binomial matrix $(\binom{n}{k})$, will be called the $f$-matrix (face matrix) of $M$. We can generalize the two Pascal-like matrices above using Riordan arrays in two ways. 
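The relation between $h$-vector matrices and face matrices can be checked by finite matrix multiplication. The sketch below verifies, on $7\times 7$ truncations, that the all-ones lower-triangular matrix $\left(\frac{1}{1-x},x\right)$ times $\mathbf{B}$ is the simplex face matrix $\binom{n+1}{k+1}$, and that $\mathbf{B}\cdot\mathbf{B}=\mathbf{B}^2$ has entries $\binom{n}{k}2^{n-k}$, the hypercube face matrix.

```python
from math import comb

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    m = len(A)
    return [[sum(A[i][j] * B[j][k] for j in range(m)) for k in range(m)]
            for i in range(m)]

n = 7
ones = [[1 if k <= i else 0 for k in range(n)] for i in range(n)]   # (1/(1-x), x)
binom = [[comb(i, k) for k in range(n)] for i in range(n)]          # B = (C(n,k))

# h-vector matrix times B recovers the face matrix (hockey-stick identity):
simplex_f = matmul(ones, binom)
assert all(simplex_f[i][k] == comb(i + 1, k + 1)
           for i in range(n) for k in range(i + 1))

# B^2 = [e^{2x}, x] is the hypercube face matrix (Vandermonde-type identity):
cube_f = matmul(binom, binom)
assert all(cube_f[i][k] == comb(i, k) * 2 ** (i - k)
           for i in range(n) for k in range(i + 1))
```

The first assertion is the hockey-stick identity $\sum_{j\le n}\binom{j}{k}=\binom{n+1}{k+1}$ in matrix form; the second is $\sum_j\binom{n}{j}\binom{j}{k}=\binom{n}{k}2^{n-k}$.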
The first way is to use ordinary Riordan arrays, in which case we obtain the parameterized family given by \cite{Cons} $$\left(\frac{1}{1-x},\frac{x(1+rx)}{1-x}\right),$$ where for instance $r=0$ corresponds to the binomial matrix $\mathbf{B}$. The second way is to use exponential Riordan arrays \cite{Exp}, where we obtain the parameterized family of palindromic matrices given by $$\left[e^x, x\left(1+\frac{rx}{2}\right)\right].$$ We have investigated the associated $\gamma$-matrices for these two families in a previous paper \cite{gamma}. We shall now turn our attention to the associated $f$-matrices. In the next section, we shall briefly cover some definitions and results that will provide the context of the rest of the paper. \section{Relevant definitions and results} An ordinary Riordan array \cite{Book, Survey, SGWW} is a lower-triangular invertible matrix whose elements $a_{n,k}$ are given by $$a_{n,k}=[x^n] g(x)f(x)^k,$$ where $g(x)=1+g_1 x+ g_2 x^2+ \cdots$ and $f(x)=x+f_2 x^2+ f_3 x^3+\cdots$ are two power series, with coefficients drawn from a suitable ring. In our case this ring will be the ring of integers $\mathbb{Z}$. This array is denoted by $(g(x), f(x))=(g, f)$, where $x$ is a dummy variable, in the sense that $$a_{n,k}=[x^n]g(x)f(x)^k=[u^n]g(u)f(u)^k.$$ The bivariate generating function of the array $(g,f)$ is given by $$\frac{g(x)}{1-y f(x)}.$$ Such arrays form a group (the Riordan group), where the product is given by $$(g(x), f(x))\cdot (u(x), v(x))= (g(x)u(f(x)), v(f(x))),$$ and we have $$(g(x), f(x))^{-1}=\left(\frac{1}{g(\bar{f}(x))}, \bar{f}(x)\right),$$ where $\bar{f}(x)$ is the compositional inverse of $f(x)$. Thus $\bar{f}(x)$ is the solution $u$ to the equation $f(u)=x$ such that $u(0)=0$.
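The group law can be exercised numerically: composing $\mathbf{B}=\left(\frac{1}{1-x}, \frac{x}{1-x}\right)$ with itself via the product rule should return $\left(\frac{1}{1-2x}, \frac{x}{1-2x}\right)=\mathbf{B}^2$, as noted earlier. A small pure-Python sketch (helper names ours):

```python
def series_mul(a, b, n):
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def series_compose(a, f, n):
    # a(f(x)) truncated to order n; requires f(0) = 0
    out = [0] * n
    p = [1] + [0] * (n - 1)        # running power f(x)^m
    for c in a[:n]:
        out = [out[i] + c * p[i] for i in range(n)]
        p = series_mul(p, f, n)
    return out

n = 8
g = [1] * n                  # 1/(1-x)
f = [0] + [1] * (n - 1)      # x/(1-x)
# (g, f) . (g, f) = (g(x) g(f(x)), f(f(x)))
gg = series_mul(g, series_compose(g, f, n), n)
ff = series_compose(f, f, n)
# gg should be 1/(1-2x) and ff should be x/(1-2x)
```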
An exponential Riordan array \cite{Book, Riordan_Exp} is a lower-triangular invertible matrix whose elements $a_{n,k}$ are given by $$a_{n,k}=\frac{n!}{k!}[x^n] g(x)f(x)^k,$$ where $g(x)=1+g_1 \frac{x}{1!}+ g_2 \frac{x^2}{2!}+ \cdots$ and $f(x)=\frac{x}{1!}+f_2 \frac{x^2}{2!}+ f_3 \frac{x^3}{3!}+\cdots$ are two (exponential) power series, with coefficients drawn from a suitable ring. This array is denoted by $[g(x), f(x)]=[g, f]$. The product rule and the inverse of an exponential Riordan array are calculated in a similar fashion to the ordinary case. The bivariate generating function of the matrix $[g(x), f(x)]$ is given by $$g(x)e^{yf(x)}.$$ These two variants are specializations of the case of so-called ``generalized Riordan arrays'' \cite{Wang}, which are defined in terms of two power series $g(x)=1+g_1 \frac{x}{c_1}+ g_2 \frac{x^2}{c_2}+ \cdots$ and $f(x)=\frac{x}{c_1}+f_2 \frac{x^2}{c_2}+ f_3 \frac{x^3}{c_3}+\cdots$ where $c_n$ is a suitable sequence of non-zero coefficients. In this case, we have $$a_{n,k}= \frac{c_n}{c_k} [x^n] g(x)f(x)^k.$$ We denote this array by $[g(x), f(x)]_{c_n}$. \begin{example} The triangle of Narayana numbers $N_{n,k}=\frac{1}{k+1}\binom{n}{k}\binom{n+1}{k}$ \seqnum{A001263} is the matrix of $h$-vectors for the associahedron. This matrix is given by the generalized Riordan array $$\left[\frac{I_1(2 \sqrt{x})}{\sqrt{x}}, x\right]_{n!(n+1)!}.$$ (Observation by Peter Bala, \seqnum{A001263}). \end{example} A Jacobi continued fraction is a continued fraction \cite{Wall} of the form $$\cfrac{1}{1-\alpha x - \cfrac{\beta x^2}{1-\gamma x - \cfrac{\delta x^2}{1-\cdots}}}.$$ We use the notation $$\mathcal{J}(\alpha, \gamma, \ldots; \beta, \delta, \ldots)$$ for such a fraction. 
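Both the exponential and the generalized definitions can be checked with exact rational arithmetic. The sketch below (pure Python, names ours) recovers the binomial matrix from $[e^x, x]$ with $c_n = n!$, and the Narayana triangle from Bala's observation with $c_n = n!\,(n+1)!$.

```python
from fractions import Fraction as Q
from math import factorial

def series_mul(a, b, n):
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def gen_riordan(g, f, c, n):
    # entries a_{n,k} = (c_n / c_k) [x^n] g(x) f(x)^k
    A = [[Q(0)] * n for _ in range(n)]
    col = list(g[:n])
    for k in range(n):
        for m in range(n):
            A[m][k] = Q(c[m], c[k]) * col[m]
        col = series_mul(col, f, n)
    return A

n = 6
x = [Q(0), Q(1)] + [Q(0)] * (n - 2)

# exponential case [e^x, x] with c_n = n!: the binomial matrix
c_exp = [factorial(m) for m in range(n)]
B = gen_riordan([Q(1, factorial(m)) for m in range(n)], x, c_exp, n)

# c_n = n!(n+1)!, g = I_1(2 sqrt(x))/sqrt(x): the Narayana triangle A001263
c_nar = [factorial(m) * factorial(m + 1) for m in range(n)]
Nar = gen_riordan([Q(1, c) for c in c_nar], x, c_nar, n)
```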
The $k$-th binomial transform of such a continued fraction is then given by \cite{CFT} $$\mathcal{J}(\alpha+k, \gamma+k, \ldots; \beta, \delta, \ldots).$$ If $a_n$ is a sequence, then its $k$-th binomial transform is the sequence $b_n=\sum_{i=0}^n \binom{n}{i}k^{n-i} a_i$. Sequences in this note, where known, will be referenced by their $Annnnnn$ number from the On-Line Encyclopedia of Integer Sequences \cite{SL1, SL2}. All the lower-triangular matrices that we shall encounter are infinite in extent. We display a suitable truncation. \section{The $f$-matrix of $\left(\frac{1}{1-x}, \frac{x(1+rx)}{1-x}\right)$} We have that the $f$-matrix of the Pascal-like array $\left(\frac{1}{1-x}, \frac{x(1+rx)}{1-x}\right)$ is given by $$F_r=\left(\frac{1}{1-x}, \frac{x(1+rx)}{1-x}\right)\cdot \mathbf{B}=\left(\frac{1}{1-x}, \frac{x(1+rx)}{1-x}\right)\cdot \left(\frac{1}{1-x}, \frac{x}{1-x}\right).$$ This is equal to $$F_r=\left(\frac{1}{1-2x-rx^2},\frac{x(1+rx)}{1-2x-rx^2}\right).$$ This matrix begins $$\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 & 0 \\ r+4 & r+4 & 1 & 0 & 0 & 0 \\ 4 r+8 & 6 r+12 & 2 r+6 & 1 & 0 & 0 \\ r^2+12 r+16 & 2 r^2+24 r+32 & r^2+15 r+24 & 3 r+8 & 1 & 0 \\ 6 r^2+32 r+32 & 15 r^2+80 r+80 & 12 r^2+72 r+80 & 3 r^2+28 r+40 & 4 r+10 & 1 \\ \end{array} \right),$$ or in reversed form, $$\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 \\ 1 & r+4 & r+4 & 0 & 0 & 0 \\ 1 & 2 r+6 & 6 r+12 & 4 r+8 & 0 & 0 \\ 1 & 3 r+8 & r^2+15 r+24 & 2 r^2+24 r+32 & r^2+12 r+16 & 0 \\ 1 & 4 r+10 & 3 r^2+28 r+40 & 12 r^2+72 r+80 & 15 r^2+80 r+80 & 6 r^2+32 r+32 \\ \end{array} \right).$$ For $r=0,1,2$ we get, respectively, $$\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 \\ 1 & 4 & 4 & 0 & 0 & 0 \\ 1 & 6 & 12 & 8 & 0 & 0 \\ 1 & 8 & 24 & 32 & 16 & 0 \\ 1 & 10 & 40 & 80 & 80 & 32 \\ \end{array} \right), \left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 \\ 1 & 5 & 4 & 0 & 0 & 0 
\\ 1 & 8 & 17 & 8 & 0 & 0 \\ 1 & 11 & 39 & 51 & 16 & 0 \\ 1 & 14 & 70 & 154 & 143 & 32 \\ \end{array} \right),$$ and $$\left( \begin{array}{cccccc} 1 & 0 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 & 0 \\ 1 & 6 & 4 & 0 & 0 & 0 \\ 1 & 10 & 22 & 8 & 0 & 0 \\ 1 & 14 & 56 & 72 & 16 & 0 \\ 1 & 18 & 106 & 248 & 220 & 32 \\ \end{array} \right).$$ The case $r=0$ is that of the hypercube. Using the form of the bivariate generating function for a Riordan array, we have the following result. \begin{proposition} The bivariate generating function for the $f$-matrix of the Pascal-like array $\left(\frac{1}{1-x},\frac{x(1+rx)}{1-x}\right)$ is given by $$\frac{1}{1-(y+2)x-r(y+1)x^2},$$ or in reversed form, $$\frac{1}{1-(2y+1)x-ry(y+1)x^2}.$$ \end{proposition} Using the reversed form, we obtain the sequence of generating functions $$\frac{1}{1-x-ryx^2} \to \frac{1}{1-(y+1)x-ryx^2} \to \frac{1}{1-(2y+1)x-ry(y+1)x^2}$$ for, respectively, the $\gamma$-matrix \cite{gamma}, the $h$-matrix, and the $f$-matrix for the Pascal-like matrix $\left(\frac{1}{1-x}, \frac{x(1+rx)}{1-x}\right)$ (where this matrix is the $h$-matrix). We have, for the matrix family $\left(\frac{1}{1-x}, \frac{x(1+rx)}{1-x}\right)$, $$ \gamma_{n,k}=\binom{n-k}{n-2k}r^k,$$ $$ h_{n,k}=\sum_{j=0}^k \binom{k}{j}\binom{n-j}{n-k-j}r^j,$$ and $$ f_{n,k}=\sum_{i=0}^n \sum_{j=0}^i \binom{i}{j}\binom{n-j}{n-i-j}r^j \binom{i}{k}.$$ This follows from previous work \cite{gamma, Cons} and the definition of the $f$-matrix. 
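The closed form for $f_{n,k}$ can be cross-checked against the Riordan-array construction of $F_r$ for a sample parameter value (we take $r=3$, a value of our own choosing); the pure-Python sketch below (helper names ours) builds both and compares them entry by entry.

```python
from math import comb

def series_mul(a, b, n):
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def riordan(g, f, n):
    # entries a_{n,k} = [x^n] g(x) f(x)^k
    A = [[0] * n for _ in range(n)]
    col = list(g[:n])
    for k in range(n):
        for m in range(n):
            A[m][k] = col[m]
        col = series_mul(col, f, n)
    return A

def f_closed(n, k, r):
    # f_{n,k} = sum_i sum_j C(i,j) C(n-j, n-i-j) r^j C(i,k)
    return sum(comb(i, j) * comb(n - j, n - i - j) * r ** j * comb(i, k)
               for i in range(n + 1) for j in range(i + 1) if i + j <= n)

N, r = 6, 3
g = [1, 2] + [0] * (N - 2)                # 1/(1 - 2x - r x^2) by recurrence
for m in range(2, N):
    g[m] = 2 * g[m - 1] + r * g[m - 2]
f = series_mul([0, 1, r] + [0] * (N - 3), g, N)   # x(1 + rx)/(1 - 2x - r x^2)
F = riordan(g, f, N)
```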
\section{The $f$-matrix of the Pascal-like matrix $\left[e^x, x(1+rx/2)\right]$} The $f$-matrix of the Pascal-like matrix $\left[e^x, x(1+rx/2)\right]$ is given by $$\left[e^x, x(1+rx/2)\right]\cdot \mathbf{B}=\left[e^x, x(1+rx/2)\right]\cdot \left[e^x, x\right],$$ which is $$eF_r=\left[e^x e^{x(1+rx/2)}, x(1+rx/2)\right]=\left[e^{2x+rx^2/2}, x(1+rx/2)\right].$$ The bivariate generating function for $eF_r$ is then given by $$e^x e^{x(1+rx/2)} e^{yx(1+rx/2)}=e^{2x+rx^2/2} e^{yx(1+rx/2)}.$$ This matrix begins $$\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 2 & 1 & 0 & 0 & 0 \\ r+4 & r+4 & 1 & 0 & 0 \\ 6r+8 & 9r+12 & 3r+6 & 1 & 0 \\ 3 r^2+24 r+16 & 2 \left(3 r^2+24 r+16\right) & 3 \left(r^2+10 r+8\right) & 6r+8 & 1 \\ \end{array} \right)$$ or in reversed form $$\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 \\ 1 & r+4 & r+4 & 0 & 0 \\ 1 & 2r+6 & 9 r+12 & 6 r+8 & 0 \\ 1 & 6 r+8 & 3(r^2+10r+8) & 2(3r^2+24r+16) & 3 r (r+8)+16 \\ \end{array} \right).$$ For $r=0,1,2$ we obtain the triangles $$\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 \\ 1 & 4 & 4 & 0 & 0 \\ 1 & 6 & 12 & 8 & 0 \\ 1 & 8 & 24 & 32 & 16 \\ \end{array} \right), \left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 \\ 1 & 5 & 5 & 0 & 0 \\ 1 & 9 & 21 & 14 & 0 \\ 1 & 14 & 57 & 86 & 43 \\ \end{array} \right),$$ and $$\left( \begin{array}{ccccc} 1 & 0 & 0 & 0 & 0 \\ 1 & 2 & 0 & 0 & 0 \\ 1 & 6 & 6 & 0 & 0 \\ 1 & 12 & 30 & 20 & 0 \\ 1 & 20 & 96 & 152 & 76 \\ \end{array} \right).$$ The case $r=0$ corresponds to \seqnum{A013609}. 
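The entries of $eF_r$ can be recomputed from the definition $a_{n,k}=\frac{n!}{k!}[x^n]\,e^{2x+rx^2/2}\,\bigl(x(1+rx/2)\bigr)^k$. The pure-Python sketch below (exact rational arithmetic, helper names ours) reproduces the displayed triangle for $r=1$.

```python
from fractions import Fraction as Q
from math import factorial

def series_mul(a, b, n):
    return [sum(a[i] * b[k - i] for i in range(k + 1)) for k in range(n)]

def series_exp(s, n):
    # exp of a power series with s(0) = 0, truncated to order n
    out = [Q(0)] * n
    out[0] = Q(1)
    term = [Q(1)] + [Q(0)] * (n - 1)
    for m in range(1, n):
        term = [t / m for t in series_mul(term, s, n)]   # s^m / m!
        out = [out[i] + term[i] for i in range(n)]
    return out

n, r = 5, 1
g = series_exp([Q(0), Q(2), Q(r, 2)] + [Q(0)] * (n - 3), n)   # e^{2x + r x^2/2}
f = [Q(0), Q(1), Q(r, 2)] + [Q(0)] * (n - 3)                  # x (1 + r x/2)

eF = [[Q(0)] * n for _ in range(n)]
col = list(g)
for k in range(n):
    for m in range(n):
        eF[m][k] = Q(factorial(m), factorial(k)) * col[m]
    col = series_mul(col, f, n)
```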
The reversed form of $eF_r$ will then have bivariate generating function $$e^{2xy+rx^2y^2/2}e^{x(1+rxy/2)}.$$ We have that $$e^{2xy+rx^2y^2/2}e^{x(1+rxy/2)}=e^{(2y+1)x}e^{ry(y+1)x^2/2}.$$ Now the exponential generating function $e^{\frac{x^2}{2}}$ expands to give the sequence of aerated double factorials \seqnum{A001147} $$1, 0, 1, 0, 3, 0, 15, 0, 105, 0, 945,\ldots$$ which has the ordinary generating function $$\cfrac{1}{1- \cfrac{x^2}{1- \cfrac{2x^2}{1- \cfrac{3x^2}{1-\cdots}}}}.$$ This leads to the following proposition \cite{CFT}. \begin{proposition} The ordinary generating function of the reversal of $eF_r$ is given by the continued fraction $$\cfrac{1}{1-(2y+1)x- \cfrac{ry(y+1)x^2}{1-(2y+1)x- \cfrac{2ry(y+1)x^2}{1-(2y+1)x- \cfrac{3ry(y+1)x^2}{1-\cdots}}}}.$$ \end{proposition} This is a Jacobi continued fraction, which we can write as $$\mathcal{J}(2y+1,2y+1,2y+1,\ldots; ry(y+1), 2ry(y+1),3ry(y+1),\ldots).$$ We then have the following result. \begin{proposition} The $\gamma$-matrix, the $h$-matrix, and the $f$-matrix of the Pascal-like matrix $$\left[e^x, x(1+rx/2)\right]$$ have their ordinary generating functions given by, respectively, $$\mathcal{J}(1,1,1,\ldots; ry, 2ry, 3ry,\ldots),$$ $$\mathcal{J}(y+1, y+1,y+1,\ldots; ry, 2ry, 3ry,\ldots),$$ and $$\mathcal{J}(2y+1,2y+1,2y+1,\ldots; ry(y+1), 2ry(y+1),3ry(y+1),\ldots).$$ \end{proposition} \begin{proof} The $h$-matrix in question is the Pascal-like matrix $\left[e^x, x(1+rx/2)\right]$ itself. This has bivariate generating function $$e^x e^{yx(1+rx/2)}=e^{(y+1)x}e^{ryx^2/2}.$$ This is the $(y+1)$-st binomial transform of the sequence with generating function $e^{ryx^2/2}$, whence the assertion concerning the $h$-matrix. The statement regarding the $\gamma$-matrix is proven in \cite{gamma}.
\end{proof} \section{Remarks on the associahedron and the permutahedron} It can be shown that the $\gamma$-matrix, the $h$-matrix and the $f$-matrix for the associahedron of type A (which are \seqnum{A055151}, \seqnum{A001263} and \seqnum{A033282}, respectively) have the following ordinary generating functions: $$\mathcal{J}(1,1,1,\ldots; y,y,y,\ldots),$$ $$\mathcal{J}(y+1,y+1,y+1,\ldots; y,y,y,\ldots),$$ and $$\mathcal{J}(2y+1,2y+1,2y+1,\ldots; y(y+1), y(y+1),y(y+1),\ldots).$$ In like manner, we can show that the $\gamma$-matrix, the $h$-matrix and the $f$-matrix for the permutahedron (which are \seqnum{A101280}, \seqnum{A008292}, \seqnum{A019538}, respectively \cite{Fomin, Petersen}) have the following ordinary generating functions: $$\mathcal{J}(1,2,3,\ldots; 2y,6y,12y,\ldots),$$ $$\mathcal{J}(y+1,2(y+1),3(y+1),\ldots; 2y,6y,12y,\ldots),$$ and $$\mathcal{J}(2y+1,2(2y+1),3(2y+1),\ldots; 2y(y+1), 6y(y+1),12y(y+1),\ldots).$$ We see that the assignment $$\mathcal{J}(\alpha, \beta, \gamma,\ldots;a,b,c,\ldots) \mapsto \mathcal{J}(\alpha, 2\beta, 3\gamma,\ldots;2a,6b,12c,\ldots)$$ provides us with a transfer mechanism between the associahedron and related objects and the permutahedron and associated objects. \section{Conclusion} In this note we have shown how the face matrices of the hypercube and the $n$-simplex can be generalized to an ``$f$-matrix'' for Pascal-like matrices defined by ordinary and exponential generating functions, respectively. To each such Pascal-like matrix there is an associated $\gamma$-matrix and an $f$-matrix. The bivariate generating functions are related in a specific and simple pattern. This pattern carries over to the associahedron and the permutahedron, and indeed, to other polytopes. It would appear useful to consider generalized Riordan arrays \cite{Wang} as a context for these cases.
\section{Introduction} Multiuser transmit beamforming is an effective way of improving the reliability and throughput of individual users in a multiuser wireless system. This advantage comes from employing multiple antennas at the transmitter/receiver side, which makes it possible to manipulate the multiuser (channel-induced) interference by exploiting the spatial domain. In a multiuser downlink scenario, precoding schemes can be applied to mitigate the multiuser interference (MUI), and thus to enhance the performance, by spatially pre-processing the users' data streams, while at the same time attempting to guarantee certain system-centric or user-specific requirements. Multiuser precoding techniques typically exploit the channel knowledge in order to suppress the MUI \cite{tb_opt}. On the other hand, the notion of constructive interference (CI) has been introduced as a promising alternative where the underlying idea is to turn the MUI, which is often treated as an unwanted distortion, into a useful source of signal power \cite{slp_chr}. Following the CI-based design concept, the precoder's output is obtained on a symbol-by-symbol basis \cite{slp_chr, slp_con}, which is referred to as symbol-level precoding (SLP). Generally speaking, objective-oriented design of a multiuser precoder involves a constrained optimization problem aimed at finding the optimal transmit signal subject to provisioning a measure of the users' quality-of-service (QoS), e.g., the required signal-to-interference-plus-noise ratio (SINR) \cite{tb_convex}. Among a variety of multiuser precoding design criteria, a commonly addressed problem is QoS-constrained power minimization \cite{tb_vis,tb_sinr}, which is the primary focus of this paper. The performance improvement promised by multiuser precoding may not be realized if accurate channel state information (CSI) is not available at the transmitter.
This is mainly because precoding schemes are quite sensitive to channel uncertainties \cite{unc_sens}. An even more adverse effect of imperfect channel knowledge may be expected when a symbol-level precoder is employed; this is due to the fact that the efficiency of the SLP design depends heavily on the satisfaction of the CI constraints, which must properly accommodate the (noise-free) received signals in constructive interference regions (CIR). In reality, assuming perfect CSI, either statistical or instantaneous, is not practical due to various inevitable channel disturbances such as channel estimation errors, quantization errors, and latency-related errors \cite{unc_tract}. Several robust approaches have been proposed in the literature on conventional multiuser precoding, mostly assuming a perturbation-based channel uncertainty. The uncertainty regions are usually considered to be either spherical (see, e.g., \cite{unc_wc, unc_conic}) or stochastic (see, e.g., \cite{unc_sta, unc_imp}). In this context, robustness generally means designing the precoder such that certain constraints are satisfied for all possible errors within the uncertainty region. Under the spherical uncertainty model, the disturbance is assumed to lie within a known norm-bounded uncertainty set, without any assumption on its distribution. This model, which ultimately leads to worst-case analysis, is known to appropriately capture the bounded disturbances resulting from quantization error \cite{tb_qos}. Stochastic robustness, on the other hand, assumes known statistical properties for the channel uncertainty. In scenarios with channel estimation at the receiver, such an assumption may adequately characterize the perturbing component since the error in the estimation process can often be approximated as a Gaussian random variable \cite{tb_qos}. This model also enables handling the outage probability by replacing the worst-case constraints with probabilistic constraints \cite{outage_spec, outage_cri}.
In the literature on the SLP design, a worst-case robust analysis is presented in \cite{slp_chr} to design the symbol-level precoder with norm-bounded CSI errors, addressing the power minimization and SINR balancing problems. A symbol-level SINR balancing optimization approach with outage probability constraints is also reported in \cite{unc_chr} to achieve robustness against stochastic channel uncertainties. Both of the aforementioned methods are restricted to PSK constellations in designing the robust precoder. It is important to notice that as far as the power minimization problem is concerned, the spherical uncertainty model might not yield an efficient solution. This model considers the worst-case errors, which inherently increases the transmit power, though it improves the users' symbol error performance. In order to have a complete analysis of power minimizing precoders with imperfect channel knowledge, the study of stochastic models may be beneficial; this has not yet been addressed in the SLP literature. In this paper, we study the SLP power minimization problem with SINR constraints based on a general family of CIRs, namely, distance preserving CIRs (DPCIR) \cite{slp_gen}, in the presence of channel uncertainty. We consider both uncertainty models, i.e., norm-bounded spherical and stochastic, where the latter model is expected to better fit the nature of the power minimization problem. Under norm-bounded CSI uncertainties, we obtain a robust precoder taking the worst-case error into consideration. In the case where the statistical properties of the uncertainty are available, we design a stochastically robust precoder by defining a probabilistic (convex) optimization problem. We show that our proposed approach outperforms the existing results in the literature in terms of power efficiency of the precoding scheme. The rest of this paper is organized as follows. In Section \ref{sec:sys}, we describe the system and uncertainty model.
This is followed by Section \ref{sec:def}, where we define the DPCIR-based SLP power optimization problem. We then propose two robust formulations for the norm-bounded and stochastic uncertainty models in Section \ref{sec:slp}. Simulation results are provided in Section \ref{sec:sim}. Finally, we conclude the paper in Section \ref{sec:con}. \noindent{\bf{Notation:}} We use uppercase and lowercase bold-faced letters to denote matrices and vectors, respectively. For complex scalars, $(\cdot)^*$ denotes the conjugate operator. For matrices and vectors, $[\,\cdot\,]^\mathrm{H}$ and $[\,\cdot\,]^\mathrm{T}$ denote conjugate transpose and transpose operator, respectively. For a square matrix $\mathrm{\mathbf{A}}$, $|\mathrm{\mathbf{A}}|$ denotes the determinant of $\mathrm{\mathbf{A}}$. For vectors, $\succeq$ denotes componentwise inequality. The operator $\mathrm{vec}(\cdot)$ denotes vectorization, and $\mathrm{blkdiag}(\cdot)$ represents a square block matrix having main-diagonal block matrices and zero off-diagonal blocks. $\mathbf{I}$ stands for an identity matrix of appropriate dimension. The expectation operator is denoted by $\mathrm{E}\{\cdot\}$, and $\otimes$ denotes the Kronecker product. \section{System and Uncertainty Model}\label{sec:sys} We consider a downlink multiuser MISO (unicast) scenario in which a common transmitter, e.g., a base station (BS), sends independent data streams to $K$ single-antenna users. The BS is equipped with $N$ transmit antennas, and a frequency-flat fading channel is assumed between the BS's transmit antennas and any user $k$. The channel vectors are denoted by $\mathrm{\mathbf{h}}_k\in\mathbb{C}^{1\times N}, k=1,...,K$, containing the complex channel coefficients. Independent data symbols $\{s_k\}_{k=1}^K$ are to be conveyed to $K$ users every symbol time, where $s_k$ denotes the intended symbol for the $k$-th user. To simplify the notation, the symbol's time index is dropped throughout the paper.
The users' symbols $\{s_k\}_{k=1}^K$ are drawn from finite equiprobable two-dimensional constellation sets. Without loss of generality, we assume an identical $M$-ary constellation set with unit average power for all $K$ users. Collecting the users' symbols in a vector $\mathrm{\mathbf{s}}=[s_1,\ldots,s_K]^\mathrm{T}\in\mathbb{C}^{K\times1}$, the symbol vector $\mathrm{\mathbf{s}}$ is mapped onto $N$ transmit antennas through a symbol-level precoder \cite{slp_chr, slp_con}. This yields the output vector $\mathrm{\mathbf{u}}=[u_1,\ldots,u_N]^\mathrm{T}\in\mathbb{C}^{N\times1}$ to be transmitted by the BS. The received signal by the $k$-th user is then equal to \begin{equation}\label{eq:sys} r_k = \mathrm{\mathbf{h}}_k\mathrm{\mathbf{u}}+z_k, \; k=1,...,K, \end{equation} where $z_k$ represents the additive complex Gaussian noise at the receiver of user $k$ with distribution $z_k\sim\mathcal{CN}(0,\sigma_k^2)$. We assume uncorrelated noise components across the receivers, i.e., $\mathrm{E}\{z_k z_j^*\} = 0, \forall k,j=1,...,K, k\neq j$. Having $r_k$, the $k$-th user may optimally detect its desired symbol $s_k$ based on the single-user maximum-likelihood (ML) decision rule. We further consider a more realistic scenario in which the available channel knowledge at the BS is not accurate. A perturbation-based uncertainty is assumed according to which the $k$-th user's channel is equal to \begin{equation}\label{eq:H} \mathrm{\mathbf{h}}_k = \hat{\mathrm{\mathbf{h}}}_k + \Deee_k, \; k=1,...,K, \end{equation} where $\hat{\mathrm{\mathbf{h}}}_k$ is the known erroneous channel associated with user $k$, and the perturbing component $\Deee_k\in\mathbb{C}^{1\times N}$ is characterized based on the adopted model for the uncertainty region. 
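As a minimal numerical illustration of the received-signal model \eqref{eq:sys} (pure Python; the channel, precoder, and noise values below are arbitrary choices of ours, with the noise switched off so the output is deterministic):

```python
K, N = 2, 3
h = [[1 + 1j, 0.5 - 2j, -1 + 0.3j],          # channel rows h_k (arbitrary)
     [0.2 + 0.1j, -1 - 1j, 2 + 0.5j]]
u = [0.4 - 0.2j, 1.0 + 0.3j, -0.6 + 0.8j]    # precoder output vector (arbitrary)
z = [0j, 0j]                                 # z_k = 0 for a deterministic check

# r_k = h_k u + z_k
r = [sum(hk[i] * u[i] for i in range(N)) + z[k] for k, hk in enumerate(h)]
# with z_k = 0, each received sample is exactly the noise-free signal h_k u
```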
In the case of a spherical uncertainty region, $\Deee_k$ is assumed to be a norm-bounded error vector, i.e., \begin{equation}\label{eq:delc} \|\Deee_k\|_2 \leq \frac{1}{\sqrt{2}}\delta_k, \; k=1,...,K, \end{equation} where $\delta_k$ specifies the radius of the uncertainty region related to the $k$-th user. On the other hand, considering a stochastic uncertainty, $\Deee_k$ represents a zero-mean Gaussian CSI error distributed as $\Deee_k\sim\mathcal{CN}(\mathrm{\mathbf{0}},2\,\xi_k^2\,\mathrm{\mathbf{I}})$. In both models, the random channel vectors $\{\mathrm{\mathbf{h}}_k\}_{k=1}^K$ and the disturbances $\{\Deee_k\}_{k=1}^K$ are assumed to be mutually uncorrelated. Hereafter, instead of complex-valued notations, we use the equivalent real-valued ones $\tilde{\mathrm{\mathbf{u}}}=[\Re\{\mathrm{\mathbf{u}}^\mathrm{T}\},\Im\{\mathrm{\mathbf{u}}^\mathrm{T}\}]^\mathrm{T}\in\mathbb{R}^{2N\times1}$, $\HHH_k=\mathrm{T}(\mathrm{\mathbf{h}}_k)\in\mathbb{R}^{2\times2N}$, $\hat{\HHH}_k=\mathrm{T}(\hat{\mathrm{\mathbf{h}}}_k)\in\mathbb{R}^{2\times2N}$, and $\mathrm{\mathbf{\Delta}}_k=\mathrm{T}(\Deee_k)\in\mathbb{R}^{2\times2N},k=1,...,K$, where for any given complex-valued vector $\mathrm{\mathbf{x}}$, the operator $\mathrm{T}(\mathrm{\mathbf{x}})$ is defined as \begin{equation} \nonumber \mathrm{T}(\mathrm{\mathbf{x}})= \begin{bmatrix} \Re\{\mathrm{\mathbf{x}}\} & -\Im\{\mathrm{\mathbf{x}}\}\\ \Im\{\mathrm{\mathbf{x}}\} & \Re\{\mathrm{\mathbf{x}}\} \end{bmatrix}. \end{equation} It is immediately apparent that $\|\mathrm{\mathbf{\Delta}}_k\|_\mathrm{F} = \sqrt{2}\,\|\Deee_k\|_2$ and further \begin{equation}\label{eq:Hr} \HHH_k = \hat{\HHH}_k + \mathrm{\mathbf{\Delta}}_k, \; k=1,...,K. \end{equation} In what follows, we simplify the norm notations such that $\|\cdot\|$ denotes either the Frobenius norm of a matrix or the Euclidean norm of a vector.
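The defining property of the map $\mathrm{T}(\cdot)$, namely that $\mathrm{T}(\mathrm{\mathbf{h}}_k)\tilde{\mathrm{\mathbf{u}}}$ stacks the real and imaginary parts of $\mathrm{\mathbf{h}}_k\mathrm{\mathbf{u}}$, can be confirmed with a few lines of code (pure Python; the sample vectors are arbitrary choices of ours):

```python
def T(x):
    # real 2 x 2N representation of a length-N complex row vector x
    re = [v.real for v in x]
    im = [v.imag for v in x]
    return [re + [-v for v in im],
            im + re]

def matvec(M, v):
    return [sum(row[i] * v[i] for i in range(len(v))) for row in M]

h = [1 + 2j, -0.5 + 1j, 2 - 1j]              # an arbitrary channel row vector
u = [0.3 - 0.7j, 1.1 + 0.2j, -0.4 + 0.9j]    # an arbitrary transmit vector
hu = sum(a * b for a, b in zip(h, u))        # complex noise-free sample h u

u_tilde = [v.real for v in u] + [v.imag for v in u]
r = matvec(T(h), u_tilde)
# r stacks the real and imaginary parts of h u
```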
\section{Problem Definition}\label{sec:def} We consider a design criterion based on which the symbol-level precoder is aimed at minimizing the total transmit power while guaranteeing certain SINR thresholds for the users. As mentioned before, the SLP design depends on the defined CIRs for any given constellation. In this paper, we adopt the DPCIRs introduced in \cite{slp_gen}, where the regions are defined such that the distances between the noise-free received signals are at least as large as the original distances of the constellation. Assuming DPCIRs and perfect CSI, the SLP design boils down to solving an SINR-constrained power minimization problem which has been expressed in \cite{slp_gen} as \begin{equation}\label{eq:pm} \begin{aligned} \underset{\tilde{\mathrm{\mathbf{u}}}}{\mathrm{min}} & \quad \tilde{\mathrm{\mathbf{u}}}^\mathrm{T}\tilde{\mathrm{\mathbf{u}}}\\ \mathrm{s.t.} & \quad \mathrm{\mathbf{A}}_k \HHH_k \tilde{\mathrm{\mathbf{u}}} \succeq \sigma_k\sqrt{\gamma_k}(\mathrm{\mathbf{b}}_k+\mathrm{\mathbf{c}}_k), \; k=1,...,K, \end{aligned} \end{equation} where $\gamma_k$ is the given SINR threshold for user $k$, and $\mathrm{\mathbf{A}}_k\in\mathbb{R}^{2\times2}$, $\mathrm{\mathbf{b}}_k\in\mathbb{R}^2$ and $\mathrm{\mathbf{c}}_k\in\mathbb{R}^2$ describe the hyperplane representation of the DPCIR associated with user $k$. In the rest, we define $\mathrm{\mathbf{\Psi}}_k\triangleq\sigma_k\sqrt{\gamma_k}(\mathrm{\mathbf{b}}_k+\mathrm{\mathbf{c}}_k)$. It should be noted that the optimization constraints in \eqref{eq:pm} accommodate each noise-free received signal $\HHH_k \tilde{\mathrm{\mathbf{u}}}$ in its corresponding DPCIR, enhancing the detection accuracy and enlarging the feasibility region of the precoding design problem. 
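Since the DPCIR constraints in \eqref{eq:pm} are linear in $\tilde{\mathrm{\mathbf{u}}}$, the problem is a least-norm quadratic program. As a toy illustration only (not the solver one would use at scale, and all names are ours), the sketch below finds the minimum-norm point satisfying $\mathrm{\mathbf{G}}\mathrm{\mathbf{u}}\succeq\mathrm{\mathbf{b}}$ by enumerating active sets of the constraints.

```python
from itertools import combinations

def solve_lin(M, y):
    # solve M z = y by Gauss-Jordan elimination; None if (near-)singular
    n = len(M)
    A = [row[:] + [y[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        if abs(A[p][c]) < 1e-12:
            return None
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                m = A[r][c] / A[c][c]
                A[r] = [A[r][j] - m * A[c][j] for j in range(n + 1)]
    return [A[i][n] / A[i][i] for i in range(n)]

def min_norm_qp(G, b):
    # min ||u||^2 s.t. G u >= b, by enumerating active sets; fine for the
    # handful of constraints of a small SLP instance
    m, d = len(G), len(G[0])
    best = None
    for size in range(0, min(m, d) + 1):
        for S in combinations(range(m), size):
            if size == 0:
                u = [0.0] * d
            else:
                GS = [G[i] for i in S]
                # equality-constrained minimizer: u = GS^T (GS GS^T)^{-1} b_S
                K = [[sum(GS[p][t] * GS[q][t] for t in range(d))
                      for q in range(size)] for p in range(size)]
                lam = solve_lin(K, [b[i] for i in S])
                if lam is None:
                    continue
                u = [sum(lam[p] * GS[p][t] for p in range(size)) for t in range(d)]
            if all(sum(G[i][t] * u[t] for t in range(d)) >= b[i] - 1e-9
                   for i in range(m)):
                nrm = sum(v * v for v in u)
                if best is None or nrm < best[0]:
                    best = (nrm, u)
    return None if best is None else best[1]
```

For the DPCIR problem, $\mathrm{\mathbf{G}}$ would stack the rows $\mathrm{\mathbf{A}}_k\HHH_k$ and $\mathrm{\mathbf{b}}$ the entries of $\mathrm{\mathbf{\Psi}}_k$.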
In the presence of channel uncertainty, the non-robust precoder solves the following optimization problem based on imperfect knowledge of the channel at the BS: \begin{equation}\label{eq:pmnr} \begin{aligned} \underset{\tilde{\mathrm{\mathbf{u}}}}{\mathrm{min}} & \quad \tilde{\mathrm{\mathbf{u}}}^\mathrm{T}\tilde{\mathrm{\mathbf{u}}}\\ \mathrm{s.t.} & \quad \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}} \succeq \mathrm{\mathbf{\Psi}}_k, \; k=1,...,K. \end{aligned} \end{equation} Nevertheless, for any user $k$, optimizing the transmit vector through \eqref{eq:pmnr} may cause imprecision in the location of the noise-free received signal due to the inaccurate channel $\hat{\HHH}_k$. More precisely, $\HHH_k \tilde{\mathrm{\mathbf{u}}}$ may not be received in the intended DPCIR. It is assumed that, in addition to the erroneous channel $\hat{\HHH}_k$, the BS is aware of either the error sphere radius $\delta_k$ or the statistics of $\mathrm{\mathbf{\Delta}}_k$, for all the users $k=1,...,K$, depending on the adopted uncertainty model. In order to achieve robustness to CSI errors, we need to take their specifications into account when designing the symbol-level precoder. Accordingly, by exploiting our knowledge of $\{\mathrm{\mathbf{\Delta}}_k\}_{k=1}^K$, in the next section our goal is to design the power minimizing symbol-level precoder being robust to partially known channel uncertainties. \section{Robust Power Minimizing SLP Design}\label{sec:slp} Having the perturbation-based uncertainty model in \eqref{eq:Hr}, the CI constraint for the $k$-th user can be written as \begin{equation*} \mathrm{\mathbf{A}}_k (\hat{\HHH}_k + \mathrm{\mathbf{\Delta}}_k) \tilde{\mathrm{\mathbf{u}}} \succeq \mathrm{\mathbf{\Psi}}_k, \end{equation*} or equivalently \begin{equation}\label{eq:ci} \mathrm{\mathbf{A}}_k \mathrm{\mathbf{\Delta}}_k \tilde{\mathrm{\mathbf{u}}} \succeq \mathrm{\mathbf{\Psi}}_k - \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}}. 
\end{equation} In the sequel, we separately consider each uncertainty region and obtain the design formulation of the corresponding robust symbol-level precoder. \subsection{Spherical Uncertainty Model} The robust design of the precoder in this case aims at optimizing the transmit vector $\tilde{\mathrm{\mathbf{u}}}$ while satisfying the constraints for any possible $\mathrm{\mathbf{\Delta}}_k$ belonging to the region \begin{equation}\label{eq:del} \|\mathrm{\mathbf{\Delta}}_k\| \leq \delta_k, \; k=1,...,K. \end{equation} This norm-bounded region can be interpreted as having all the errors inside a $2N$-dimensional sphere. Consequently, using \eqref{eq:ci} the power minimization problem is reformulated as \begin{equation}\label{eq:pmall} \begin{aligned} \underset{\tilde{\mathrm{\mathbf{u}}}}{\mathrm{min}} & \quad \tilde{\mathrm{\mathbf{u}}}^\mathrm{T}\tilde{\mathrm{\mathbf{u}}}\\ \mathrm{s.t.} & \quad \mathrm{\mathbf{A}}_k \mathrm{\mathbf{\Delta}}_k \tilde{\mathrm{\mathbf{u}}} \succeq \mathrm{\mathbf{\Psi}}_k - \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}}, \; k=1,...,K,\\ & \quad \forall \|\mathrm{\mathbf{\Delta}}_k\| \leq \delta_k, \; k=1,...,K. \end{aligned} \end{equation} In order to deal with norm-bounded CSI errors, a common approach is to design the precoder based on the worst-case uncertainty, which can be regarded as a conservative worst-case robustness \cite{unc_wc}. Accordingly, denoting $\mathrm{\mathbf{A}}_k = [\mathrm{\mathbf{a}}_{k,1},\mathrm{\mathbf{a}}_{k,2}]^\mathrm{T}$ and $\mathrm{\mathbf{\Psi}}_k = [\psi_{k,1},\psi_{k,2}]^\mathrm{T}$, the optimization problem \eqref{eq:pmall} can be expressed as \begin{equation}\label{eq:pmwc} \begin{aligned} \underset{\tilde{\mathrm{\mathbf{u}}}}{\mathrm{min}} & \enspace \tilde{\mathrm{\mathbf{u}}}^\mathrm{T}\tilde{\mathrm{\mathbf{u}}}\\ \mathrm{s.t.} & \enspace \begin{bmatrix} \underset{\|\mathrm{\mathbf{\Delta}}_k\| \leq \delta_k}{\mathrm{inf}} \! 
\{\mathrm{\mathbf{a}}_{k,1}^\mathrm{T} \mathrm{\mathbf{\Delta}}_k \tilde{\mathrm{\mathbf{u}}}\} \\ \underset{\|\mathrm{\mathbf{\Delta}}_k\| \leq \delta_k}{\mathrm{inf}} \! \{\mathrm{\mathbf{a}}_{k,2}^\mathrm{T} \mathrm{\mathbf{\Delta}}_k \tilde{\mathrm{\mathbf{u}}}\} \end{bmatrix} \succeq \mathrm{\mathbf{\Psi}}_k - \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}}, k=1,...,K.\\ \end{aligned} \end{equation} First, let us focus on the first row of constraints in \eqref{eq:pmwc}. Using the property that for any given matrices $\mathrm{\mathbf{X}}$, $\mathrm{\mathbf{Y}}$ and $\mathrm{\mathbf{Z}}$, we have $\mathrm{vec}(\mathrm{\mathbf{X}}\mathrm{\mathbf{Y}}\mathrm{\mathbf{Z}})=(\mathrm{\mathbf{Z}}^\mathrm{T} \otimes \mathrm{\mathbf{X}}) \mathrm{vec}(\mathrm{\mathbf{Y}})$ \cite{cookbook}, and that $\mathrm{\mathbf{A}}_k \mathrm{\mathbf{\Delta}}_k \tilde{\mathrm{\mathbf{u}}}=\mathrm{vec}(\mathrm{\mathbf{A}}_k \mathrm{\mathbf{\Delta}}_k \tilde{\mathrm{\mathbf{u}}})$, the worst-case CI constraint for the $k$-th user is equivalent to \begin{equation}\label{eq:wc} \begin{aligned} \underset{\|\mathrm{\mathbf{\Delta}}_k\| \leq \delta_k}{\mathrm{inf}} \left\{(\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{a}}_{k,1}^\mathrm{T}) \mathrm{vec}(\mathrm{\mathbf{\Delta}}_k) \right\} \geq \psi_{k,1} - \mathrm{\mathbf{a}}_{k,1}^\mathrm{T} \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}}.
\end{aligned} \end{equation} It can be shown that \begin{equation}\label{eq:sup} \begin{aligned} \underset{\|\mathrm{\mathbf{\Delta}}_k\| \leq \delta_k}{\mathrm{inf}} \left\{(\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{a}}_{k,1}^\mathrm{T}) \mathrm{vec}(\mathrm{\mathbf{\Delta}}_k) \right\} = -\|\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{a}}_{k,1}^\mathrm{T}\|\,\|\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\| = -\delta_k \, \|\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{a}}_{k,1}^\mathrm{T}\|, \end{aligned} \end{equation} where the last equality is obtained having $\|\mathrm{\mathbf{\Delta}}_k\| \! = \! \|\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\|$. In fact, \eqref{eq:sup} accounts for the worst possible case of the error $\mathrm{\mathbf{\Delta}}_k$ by considering the most adverse value of the inner product. Furthermore, \begin{equation}\label{eq:norm} \|\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{a}}_{k,1}^\mathrm{T}\| = \|\tilde{\mathrm{\mathbf{u}}}\| \, \|\mathrm{\mathbf{a}}_{k,1}\|, \end{equation} which holds provided that both $\tilde{\mathrm{\mathbf{u}}}$ and $\mathrm{\mathbf{a}}_{k,1}$ are vectors. A similar manipulation can be done for the second row of constraints in \eqref{eq:pmwc}. Putting \eqref{eq:sup} and \eqref{eq:norm} together, problem \eqref{eq:pmwc} can be recast as \begin{equation}\label{eq:pmwc2} \begin{aligned} \underset{\tilde{\mathrm{\mathbf{u}}}}{\mathrm{min}} & \enspace \tilde{\mathrm{\mathbf{u}}}^\mathrm{T}\tilde{\mathrm{\mathbf{u}}}\\ \mathrm{s.t.} & \enspace \delta_k \, \|\tilde{\mathrm{\mathbf{u}}}\| \, \left[\,\|\mathrm{\mathbf{a}}_{k,1}\|,\|\mathrm{\mathbf{a}}_{k,2}\|\,\right]^\mathrm{T} \! \preceq \! \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}} \! - \! \mathrm{\mathbf{\Psi}}_k, k=1,...,K.\\ \end{aligned} \end{equation} This formulation ensures that the CI constraint for the $k$-th user will be met in the presence of any random, but norm-bounded, CSI error.
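The chain \eqref{eq:sup}--\eqref{eq:norm} can be sanity-checked numerically: the infimum is attained by the error matrix $\mathrm{\mathbf{\Delta}}_k=-\delta_k\,\mathrm{\mathbf{a}}_{k,1}\tilde{\mathrm{\mathbf{u}}}^\mathrm{T}/(\|\mathrm{\mathbf{a}}_{k,1}\|\|\tilde{\mathrm{\mathbf{u}}}\|)$, which reaches the bound $-\delta_k\|\tilde{\mathrm{\mathbf{u}}}\|\|\mathrm{\mathbf{a}}_{k,1}\|$. A pure-Python check with arbitrary sample numbers of ours:

```python
from math import sqrt

def norm(v):
    return sqrt(sum(x * x for x in v))

a = [0.6, -0.8]              # one row a_{k,1}^T of A_k (arbitrary sample numbers)
u = [1.0, -2.0, 0.5, 3.0]    # a real-valued transmit vector u-tilde (arbitrary)
delta = 0.7

# worst-case error: Delta = -delta * a u^T / (||a|| ||u||); it has Frobenius
# norm delta and drives a^T Delta u down to the bound -delta ||u|| ||a||
s = -delta / (norm(a) * norm(u))
Delta = [[s * ai * uj for uj in u] for ai in a]
val = sum(a[i] * Delta[i][j] * u[j] for i in range(2) for j in range(4))
bound = -delta * norm(a) * norm(u)

# Kronecker-product norm identity: ||u^T (x) a^T|| = ||u|| ||a||
kron = [uj * ai for uj in u for ai in a]

# any other admissible error does no better, e.g. delta in one entry:
Delta2 = [[delta, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]
val2 = sum(a[i] * Delta2[i][j] * u[j] for i in range(2) for j in range(4))
```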
The robust formulation \eqref{eq:pmwc2} is a convex optimization problem and can be solved efficiently via off-the-shelf algorithms \cite{convex_boyd}. A similar approach has also been studied in \cite{slp_chr}, where the CIRs coincide with the DPCIRs for $M$-PSK constellations, but the characterizations of the CIRs are not identical. The resulting convex optimization problems are therefore presented slightly differently. Nevertheless, it should be noted that the final optimization problems are based on the same idea and are essentially equivalent. \subsection{Stochastic Uncertainty Model} If the BS knows the statistics of the channel perturbing components $\{\mathrm{\mathbf{\Delta}}_k\}_{k=1}^K$, a reasonable approach is to design the precoder based on probabilistic constraints \cite{unc_sto}. In the context of SLP, this can be interpreted as considering probabilistic CI constraints \cite{unc_chr}. Accordingly, by modifying the deterministic constraints of the non-robust problem \eqref{eq:pmnr}, we define the stochastically robust power minimization as \begin{equation}\label{eq:pmr} \begin{aligned} \underset{\tilde{\mathrm{\mathbf{u}}}}{\mathrm{min}} & \quad \tilde{\mathrm{\mathbf{u}}}^\mathrm{T}\tilde{\mathrm{\mathbf{u}}} \\ \mathrm{s.t.} & \quad 1 \!-\! \mathrm{Pr}\left\{\mathrm{\mathbf{A}}_k \mathrm{\mathbf{\Delta}}_k \tilde{\mathrm{\mathbf{u}}} \succeq \mathrm{\mathbf{\Psi}}_k - \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}}\right\} \leq \epsilon, k=1,...,K, \end{aligned} \end{equation} where $\epsilon$ denotes the maximum probability with which the noise-free received signal is allowed to lie outside the intended DPCIR. In other words, the precoder is designed to keep the probability of violating each CI constraint below a given threshold.
The probabilistic constraint in \eqref{eq:pmr}, for any $k=1,...,K$, can be written as \begin{equation}\label{eq:prob} \mathrm{Pr}\left\{(\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{A}}_k) \mathrm{vec}(\mathrm{\mathbf{\Delta}}_k) \succeq \mathrm{\mathbf{\Psi}}_k - \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}}\right\} \geq 1-\epsilon. \end{equation} The vector of Gaussian CSI errors, $\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)$, is characterized by its mean and covariance matrix given by \begin{equation*} \mathrm{E}\{\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\} = \; \mathrm{\mathbf{0}}, \end{equation*} and \begin{equation}\label{Edel} \mathrm{E}\{\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)^\mathrm{T}\} = \xi_k^2\begin{bmatrix} \mathrm{\mathbf{I}} & \mathrm{\mathbf{J}} \\ \mathrm{\mathbf{J}}^\mathrm{T} & \mathrm{\mathbf{I}} \end{bmatrix}, \end{equation} respectively, where $$\mathrm{\mathbf{J}}=\mathrm{blkdiag}(\mathrm{\mathbf{J}}_1,...,\mathrm{\mathbf{J}}_N),\mathrm{\mathbf{J}}_n = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix},\forall n\in\{1,...,N\}.$$ Next, let $\ups_k \triangleq (\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{A}}_k) \; \mathrm{vec}(\mathrm{\mathbf{\Delta}}_k) = [\upsilon_{k,1},\upsilon_{k,2}]^\mathrm{T}$ and $\omg_k \triangleq \mathrm{\mathbf{\Psi}}_k - \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}} = [\omega_{k,1},\omega_{k,2}]^\mathrm{T}$. By definition, $\omg_k$ is a deterministic function of $\tilde{\mathrm{\mathbf{u}}}$, and $\ups_k$ is a bivariate Gaussian random variable (r.v.)
which is characterized by \begin{equation*} \begin{aligned} \mathrm{E}\{\ups_k\} & = \mathrm{E}\left\{(\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{A}}_k) \; \mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\right\} \\ & = (\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{A}}_k) \; \mathrm{E}\left\{\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\right\} = \mathrm{\mathbf{0}}, \end{aligned} \end{equation*} and \begin{equation}\label{sigk} \begin{aligned} \mathrm{\mathbf{\Sigma}}_k &= \mathrm{E}\{\ups_k \ups_k^\mathrm{T}\}\\ &= \mathrm{E}\left\{\left((\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \! \otimes \! \mathrm{\mathbf{A}}_k) \mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\right)\!\left((\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \! \otimes \! \mathrm{\mathbf{A}}_k) \mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\right)^\mathrm{T}\right\}\\ & \overset{\mathrm{(a)}}{=} (\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{A}}_k)\mathrm{E}\left\{\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)\mathrm{vec}(\mathrm{\mathbf{\Delta}}_k)^\mathrm{T}\right\}(\tilde{\mathrm{\mathbf{u}}} \otimes \mathrm{\mathbf{A}}_k^\mathrm{T}) \;\\ & \overset{\mathrm{(b)}}{=} \xi_k^2 \, (\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \otimes \mathrm{\mathbf{A}}_k)(\tilde{\mathrm{\mathbf{u}}} \otimes \mathrm{\mathbf{A}}_k^\mathrm{T}) \;\\ & \overset{\mathrm{(c)}}{=} \xi_k^2 \, (\tilde{\mathrm{\mathbf{u}}}^\mathrm{T} \tilde{\mathrm{\mathbf{u}}} \otimes \mathrm{\mathbf{A}}_k \mathrm{\mathbf{A}}_k^\mathrm{T})\\ & = \xi_k^2 \, \|\tilde{\mathrm{\mathbf{u}}}\|^2 \, \mathrm{\mathbf{A}}_k \mathrm{\mathbf{A}}_k^\mathrm{T}, \end{aligned} \end{equation} where the equalities (a) and (c) are respectively derived by applying the properties $(\mathrm{\mathbf{X}} \otimes \mathrm{\mathbf{Y}})^\mathrm{T}=(\mathrm{\mathbf{X}}^\mathrm{T} \otimes \mathrm{\mathbf{Y}}^\mathrm{T})$ and $(\mathrm{\mathbf{X}} \otimes \mathrm{\mathbf{Y}})(\mathrm{\mathbf{W}} \otimes \mathrm{\mathbf{Z}})=(\mathrm{\mathbf{X}} \mathrm{\mathbf{W}} 
\otimes \mathrm{\mathbf{Y}} \mathrm{\mathbf{Z}})$, for any given matrices $\mathrm{\mathbf{X}},\mathrm{\mathbf{Y}},\mathrm{\mathbf{W}},\mathrm{\mathbf{Z}}$ \cite{cookbook}. Furthermore, the equality (b) in \eqref{sigk} can be easily verified using \eqref{Edel}; the intermediate steps are omitted for brevity. Given the statistics of $\ups_k$, the probability on the left-hand side of \eqref{eq:prob} is computed by integrating the joint Gaussian density function of $\upsilon_{k,1}$ and $\upsilon_{k,2}$, i.e., \begin{equation}\label{eq:int} \begin{aligned} \mathrm{Pr} \{\ups_k \succeq \omg_k\} &= \mathrm{Pr}\left\{\upsilon_{k,1} \geq \omega_{k,1},\upsilon_{k,2} \geq \omega_{k,2}\right\}\\ &= \int\limits_{\omega_{k,2}}^{\infty}\int\limits_{\omega_{k,1}}^{\infty}\frac{1}{2\pi\sqrt{|\mathrm{\mathbf{\Sigma}}_k|}} \exp\left\{-\frac{1}{2}\ups_k^\mathrm{T}\mathrm{\mathbf{\Sigma}}_k^{-1}\ups_k\right\} \mathrm{d}\upsilon_{k,1} \mathrm{d}\upsilon_{k,2}. \end{aligned} \end{equation} This integration, however, has no closed-form expression, and it becomes even more difficult to handle when included as a constraint in the problem \eqref{eq:pmr}, since it is a function of the optimization variable $\tilde{\mathrm{\mathbf{u}}}$. To tackle this challenge, we apply a decorrelation transform \cite{papoulis} to the Gaussian random vector $\ups_k$ in order to find a tractable expression for \eqref{eq:int}.
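The two Kronecker-product identities used in steps (a) and (c) of \eqref{sigk} hold for arbitrary conformable matrices; a minimal numerical check (with placeholder dimensions) is:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((1, 6))   # plays the role of u-tilde^T
Y = rng.standard_normal((2, 6))   # plays the role of A_k

# Transpose property used in (a): (X (x) Y)^T = X^T (x) Y^T.
assert np.allclose(np.kron(X, Y).T, np.kron(X.T, Y.T))

# Mixed-product property used in (c): (X (x) Y)(W (x) Z) = (XW) (x) (YZ).
W = rng.standard_normal((6, 1))
Z = rng.standard_normal((6, 2))
assert np.allclose(np.kron(X, Y) @ np.kron(W, Z), np.kron(X @ W, Y @ Z))

# With W = X^T and Z = Y^T the product collapses, as in the last line of
# (sigk), to (X X^T) (x) (Y Y^T) = ||u||^2 * A_k A_k^T (up to the factor xi^2).
lhs = np.kron(X, Y) @ np.kron(X.T, Y.T)
rhs = (X @ X.T).item() * (Y @ Y.T)
assert np.allclose(lhs, rhs)
```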
Denoting $\bar{\omg}_k=(\mathrm{\mathbf{A}}_k \mathrm{\mathbf{A}}_k^\mathrm{T})^{-1/2}\omg_k=[\bar{\omega}_{k,1},\bar{\omega}_{k,2}]^\mathrm{T}$, we obtain \begin{equation}\label{eq:dec} \begin{aligned} \mathrm{Pr} \left\{\ups_k \succeq \omg_k\right\} &= \mathrm{Pr} \left\{\mathrm{\mathbf{\Sigma}}_k^{1/2}\mathrm{\mathbf{\Sigma}}_k^{-1/2}\ups_k \succeq \omg_k\right\}\\ &= \mathrm{Pr} \left\{\mathrm{\mathbf{\Sigma}}_k^{1/2}\bar{\ups}_k \succeq \omg_k\right\}\\ &= \mathrm{Pr} \left\{\bar{\ups}_k \succeq \mathrm{\mathbf{\Sigma}}_k^{-1/2}\omg_k\right\}\\ &= \mathrm{Pr} \left\{\bar{\ups}_k \succeq \frac{\bar{\omg}_k}{\xi_k \, \|\tilde{\mathrm{\mathbf{u}}}\|}\right\}, \end{aligned} \end{equation} where the decorrelating matrix $\mathrm{\mathbf{\Sigma}}_k^{-1/2}$ is the inverse square root of $\mathrm{\mathbf{\Sigma}}_k$, and $\bar{\ups}_k=\mathrm{\mathbf{\Sigma}}_k^{-1/2}\ups_k=[\bar{\upsilon}_{k,1},\bar{\upsilon}_{k,2}]^\mathrm{T}$ is an uncorrelated bivariate Gaussian r.v. with zero mean and unit diagonal covariance, i.e., \begin{equation}\label{eq:white} \begin{aligned} \bar{\mathrm{\mathbf{\Sigma}}}_k &= \mathrm{E}\left\{\bar{\ups}_k \bar{\ups}_k^\mathrm{T}\right\}\\ &= \mathrm{E}\left\{\mathrm{\mathbf{\Sigma}}_k^{-1/2}\ups_k \ups_k^\mathrm{T} \mathrm{\mathbf{\Sigma}}_k^{-1/2}\right\}\\ &= \mathrm{\mathbf{\Sigma}}_k^{-1/2}\mathrm{E}\left\{\ups_k \ups_k^\mathrm{T}\right\}\mathrm{\mathbf{\Sigma}}_k^{-1/2}\\ &= \mathrm{\mathbf{\Sigma}}_k^{-1/2} \mathrm{\mathbf{\Sigma}}_k \mathrm{\mathbf{\Sigma}}_k^{-1/2} = \mathrm{\mathbf{I}}. \end{aligned} \end{equation} Notice that since $\mathrm{\mathbf{\Sigma}}_k$ is positive semidefinite, it has a unique square root. Furthermore, according to \cite{slp_tsp}, $\mathrm{\mathbf{A}}_k \mathrm{\mathbf{A}}_k^\mathrm{T}$ is always invertible. This implies the non-singularity of $(\mathrm{\mathbf{A}}_k \mathrm{\mathbf{A}}_k^\mathrm{T})^{1/2}$, and hence the existence and uniqueness of $\mathrm{\mathbf{\Sigma}}_k^{-1/2}$. 
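The decorrelation step can be sketched numerically: with placeholder values for $\mathrm{\mathbf{A}}_k$, $\xi_k$ and $\|\tilde{\mathrm{\mathbf{u}}}\|$, forming $\mathrm{\mathbf{\Sigma}}_k$ as in \eqref{sigk}, its symmetric inverse square root whitens the samples exactly as in \eqref{eq:white}:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((2, 8))   # placeholder for A_k
xi, u_norm = 0.05, 3.0            # placeholder error std and ||u-tilde||

# Covariance of ups_k from (sigk); positive definite since A A^T is invertible.
Sigma = xi**2 * u_norm**2 * (A @ A.T)

# Unique symmetric inverse square root via the eigendecomposition of Sigma.
w, V = np.linalg.eigh(Sigma)
Sigma_inv_sqrt = V @ np.diag(w ** -0.5) @ V.T

# Whitening identity (eq:white): Sigma^{-1/2} Sigma Sigma^{-1/2} = I.
assert np.allclose(Sigma_inv_sqrt @ Sigma @ Sigma_inv_sqrt, np.eye(2))

# Empirical check: whitened Gaussian samples have (close to) unit covariance.
samples = np.linalg.cholesky(Sigma) @ rng.standard_normal((2, 200_000))
white = Sigma_inv_sqrt @ samples
assert np.allclose(white @ white.T / samples.shape[1], np.eye(2), atol=0.02)
```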
It then follows from \eqref{eq:dec} and \eqref{eq:white} that \begin{equation}\label{eq:erf} \begin{aligned} \mathrm{Pr} \left\{\ups_k \succeq \omg_k\right\} &= \mathrm{Pr} \left\{\bar{\upsilon}_{k,1} \geq \frac{\bar{\omega}_{k,1}}{\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|}\right\} \, \mathrm{Pr} \left\{\bar{\upsilon}_{k,2} \geq \frac{\bar{\omega}_{k,2}}{\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|}\right\}\\ &= \left(\frac{1}{2} - \frac{1}{2} \mathrm{erf}\left(\frac{\bar{\omega}_{k,1}}{\sqrt{2}\,\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|}\right)\right) \left(\frac{1}{2} - \frac{1}{2} \mathrm{erf}\left(\frac{\bar{\omega}_{k,2}}{\sqrt{2}\,\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|}\right)\right), \end{aligned} \end{equation} where $\mathrm{erf}(\cdot)$ is the Gauss error function. Since the error function is monotonically increasing, the probability \eqref{eq:erf} is lower bounded by \begin{equation}\label{eq:erflb} \begin{aligned} \mathrm{Pr} \{\ups_k \succeq \omg_k\} &\geq \left(\frac{1}{2} - \frac{1}{2} \mathrm{erf}\left(\frac{\max\{\bar{\omega}_{k,1},\bar{\omega}_{k,2}\}}{\sqrt{2}\,\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|}\right)\right)^2. \end{aligned} \end{equation} Using the lower bound \eqref{eq:erflb}, the probabilistic constraint \eqref{eq:prob} can be replaced by the sufficient condition \begin{equation}\label{eq:prob2} \begin{aligned} \left(\frac{1}{2} - \frac{1}{2} \mathrm{erf}\left(\frac{\max\{\bar{\omega}_{k,1},\bar{\omega}_{k,2}\}}{\sqrt{2}\,\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|}\right)\right)^2 \geq 1-\epsilon, \end{aligned} \end{equation} from which the following inequality is obtained through a few straightforward manipulations: \begin{equation}\label{eq:lmi} \sqrt{2}\rho(\epsilon)\xi_k\|\tilde{\mathrm{\mathbf{u}}}\| \geq \max\{\bar{\omega}_{k,1},\bar{\omega}_{k,2}\}, \end{equation} with $\rho(\epsilon) \triangleq \mathrm{erf}^{-1}\left(1 - 2 \sqrt{1 - \epsilon}\right)$.
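The step from \eqref{eq:prob2} to \eqref{eq:lmi} can be checked numerically: at the boundary $\max\{\bar{\omega}_{k,1},\bar{\omega}_{k,2}\} = \sqrt{2}\rho(\epsilon)\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|$, the bound \eqref{eq:erflb} evaluates exactly to $1-\epsilon$. A standard-library-only sketch (the bisection inverse of $\mathrm{erf}$ is our own helper, since Python's \texttt{math} module lacks one):

```python
import math

def erfinv(y, lo=-6.0, hi=6.0):
    # Bisection inverse of the monotonically increasing error function.
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if math.erf(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def rho(eps):
    # rho(eps) = erf^{-1}(1 - 2*sqrt(1 - eps)); negative for small eps,
    # which pushes the noise-free signal strictly inside the DPCIR.
    return erfinv(1.0 - 2.0 * math.sqrt(1.0 - eps))

eps = 0.01
r = rho(eps)
assert r < 0

# At the boundary of (eq:lmi), the erf argument equals rho(eps) and the
# lower bound (eq:erflb) evaluates exactly to 1 - eps.
p = (0.5 - 0.5 * math.erf(r)) ** 2
assert abs(p - (1.0 - eps)) < 1e-9
```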
\noindent Using \eqref{eq:lmi}, the robust power minimization can be formulated as a convex optimization problem expressed by \begin{equation}\label{eq:pmrlmi} \begin{aligned} \underset{\tilde{\mathrm{\mathbf{u}}}}{\mathrm{min}} & \quad \tilde{\mathrm{\mathbf{u}}}^\mathrm{T}\tilde{\mathrm{\mathbf{u}}}\\ \mathrm{s.t.} &\quad\!\max\left\{(\mathrm{\mathbf{A}}_k \mathrm{\mathbf{A}}_k^\mathrm{T})^{-1/2}(\mathrm{\mathbf{\Psi}}_k - \mathrm{\mathbf{A}}_k \hat{\HHH}_k \tilde{\mathrm{\mathbf{u}}})\right\}\!\leq \!\sqrt{2}\rho(\epsilon)\xi_k\|\tilde{\mathrm{\mathbf{u}}}\|,\\ & \quad k=1,...,K, \end{aligned} \end{equation} which can be solved via several efficient methods from the convex optimization literature \cite{convex_boyd}. It is worth noting that the inequality \eqref{eq:lmi} is a stricter constraint than \eqref{eq:prob}, which is a consequence of using the probability lower bound \eqref{eq:erflb}. Therefore, the optimal solution of \eqref{eq:pmrlmi} is an upper bound on the optimum of \eqref{eq:pmr}, i.e., on the lowest possible transmit power for the stochastically robust precoder. \section{Simulation Results}\label{sec:sim} In this section, we provide some simulation results to evaluate the performance of different robust SLP approaches. The simulations were carried out in MATLAB using the CVX convex optimization package (with the SDPT3 solver). In all the simulations, we consider a downlink multiuser MISO channel with $N=K=4$, in which the intended symbols of all the users are taken from an 8-PSK constellation. A Rayleigh block-fading channel is assumed between each user $k$ and the BS antennas, where the channel coefficients are generated following an i.i.d. complex Gaussian distribution, i.e., $\mathrm{\mathbf{h}}_k\sim\mathcal{CN}(\mathrm{\mathbf{0}},\mathrm{\mathbf{I}})$. It is further assumed that $\mathrm{E}\{\mathrm{\mathbf{h}}_k^\mathrm{H}\mathrm{\mathbf{h}}_j\}=\mathrm{\mathbf{0}}, \forall k,j=1,...,K,k\neq j$.
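The channel and CSI-error generation used in the simulations can be sketched as follows. This is a minimal illustration: $\xi$ is a placeholder value, the ball radius follows the matching rule $\delta=\sqrt{2N}\xi$ described below, and drawing errors uniformly from a ball uses the standard direction-plus-scaled-radius method:

```python
import numpy as np

rng = np.random.default_rng(3)
N, K = 4, 4                   # BS antennas and users, as in the simulations
xi = 0.1                      # placeholder CSI-error standard deviation
delta = np.sqrt(2 * N) * xi   # matched radius for the norm-bounded model

# Rayleigh fading: h_k ~ CN(0, I), unit variance per complex entry.
H = (rng.standard_normal((K, N)) + 1j * rng.standard_normal((K, N))) / np.sqrt(2)

def uniform_in_ball(dim, radius, rng):
    """Uniform draw from a dim-dimensional ball: uniform direction, with the
    radius scaled as U^(1/dim) so that the density is uniform in volume."""
    d = rng.standard_normal(dim)
    d /= np.linalg.norm(d)
    return radius * rng.random() ** (1.0 / dim) * d

# One real 2N-dimensional norm-bounded error vector per user.
errors = np.array([uniform_in_ball(2 * N, delta, rng) for _ in range(K)])
assert np.all(np.linalg.norm(errors, axis=1) <= delta)
```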
We consider unit noise variance at the receivers of all the users, and equal SINR thresholds, i.e., $\gamma_k=\gamma,k=1,...,K$. The threshold probability $\epsilon$ is set to $0.01$, unless otherwise stated. In the presence of stochastic uncertainties, zero-mean Gaussian CSI errors of equal variances $\xi_k^2=\xi^2$ are assumed for all the users. In order to have a fair comparison between the two uncertainty models, the norm-bounded CSI errors are generated to lie within balls of equal radii $\delta_k=\delta=\sqrt{2N}\xi,k=1,...,K$, where the errors are drawn uniformly from the uncertainty sets. For the spherical uncertainty model, we only present the results obtained from the equivalent worst-case robust symbol-level precoder in \cite{slp_chr}. All the plots are obtained by averaging the results over $200$ fading blocks, each of $50$ symbol slots. In order for the results to be interpretable, the same set of channel realizations is considered for all SINR thresholds. In a more extensive study, one would need to generate the channel matrix anew for each SINR threshold and solve the optimization problems for all possible combinations of the users' symbols. However, doing so would require extensive simulations over thousands of channel realizations, with symbol slots on the order of $M^K$, to obtain reliable statistics. \begin{figure} \centering \includegraphics[trim={0 0 0 .25in},clip,width=.5\columnwidth]{fig1.eps} \caption{Optimized total transmit power versus SINR threshold.} \label{fig:1} \end{figure} \begin{figure} \centering \includegraphics[width=.5\columnwidth]{fig2.eps} \caption{Average users' symbol error probability versus SINR threshold.} \label{fig:2} \end{figure} Figures \ref{fig:1} and \ref{fig:2} show the optimized transmit power and the average users' symbol error rate (SER), respectively, versus the SINR threshold. It can be observed from Fig.
\ref{fig:1} that the proposed stochastically robust precoder reduces the transmit power by around $9$ dB compared to the worst-case scheme. Moreover, both robust precoding schemes lead to higher transmit powers than the case with perfect channel knowledge, which is an expected cost of achieving robustness. On the other hand, the worst-case robust scheme results in lower average SERs, with an approximate gain of $7$ dB over the stochastically robust method, as can be seen in Fig. \ref{fig:2}; this is of course due to the higher power consumption. This, however, means that the users are provided with higher SINRs than the required QoS levels, which may not be efficient in general, especially when the goal is to minimize the transmit power under a given SER requirement. Therefore, in order to make a more meaningful comparison between the overall performance of the two categories of robust precoding schemes, i.e., the worst-case and the stochastic, we define the power efficiency $\eta$ as the ratio between the average per-user throughput and the total transmit power, i.e., \begin{equation}\label{eq:pe} \nonumber \eta = \frac{\frac{1}{K} \sum_{k=1}^K (1-\mathrm{SER}_k) \log_2(1+\|\HHH_k\tilde{\mathrm{\mathbf{u}}}\|^2)}{\|\tilde{\mathrm{\mathbf{u}}}\|^2}. \end{equation} In Fig. \ref{fig:3}, the power efficiency of different robust approaches is plotted versus the SINR threshold. The ratio $\eta$ can be interpreted as a trade-off factor between the achievable throughput (as a function of the SER performance) and the required transmit power. It can be seen that the stochastic-based power minimization scheme provides more power-efficient robustness to known channel uncertainties.
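The power efficiency \eqref{eq:pe} is straightforward to evaluate from the simulation outputs; the sketch below uses placeholder channels, precoder and SER values for illustration only:

```python
import numpy as np

def power_efficiency(H_list, u, ser):
    """eta of Eq. (eq:pe): average per-user throughput (1 - SER_k) *
    log2(1 + ||H_k u||^2), divided by the transmit power ||u||^2."""
    thr = np.mean([(1.0 - s) * np.log2(1.0 + np.linalg.norm(H @ u) ** 2)
                   for H, s in zip(H_list, ser)])
    return thr / float(u @ u)

rng = np.random.default_rng(4)
K, dim = 4, 8  # placeholder sizes for the stacked real representation
H_list = [rng.standard_normal((2, dim)) for _ in range(K)]
u = rng.standard_normal(dim)

eta = power_efficiency(H_list, u, ser=[1e-2] * K)
assert eta > 0
# For a fixed precoder, lowering the SER can only increase eta.
assert power_efficiency(H_list, u, ser=[1e-3] * K) >= eta
```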
\begin{figure} \centering \includegraphics[width=.5\columnwidth]{fig3.eps} \caption{Power efficiency as a function of SINR threshold.} \label{fig:3} \end{figure} \section{Conclusion and Future Research}\label{sec:con} In this paper, we study the robust design of SLP under CSI uncertainty. First, we formulate the power minimization problem considering two different models for the uncertainty, namely, spherical and stochastic. For the minimum power SLP design with spherical (norm-bounded) CSI errors, which has been previously addressed for $M$-PSK constellations, we provide a generic formulation based on DPCIRs. Moreover, we also formulate the problem for the stochastic uncertainty model, where the noise-free received signals are allowed, with a given probability, to fall outside the DPCIRs. The main challenge in this case is that the probabilistic constraints in the optimization problem are not easy to handle. We use a rather efficient simplification that allows us to obtain convex constraints after decorrelating the error components. Our simulations show that there is an essential trade-off between the two robust approaches. While the worst-case method may always provide the requested target SER, the transmit power is usually increased considerably. On the other hand, in the stochastic approach, the increase in transmit power with respect to the scenario with perfect CSI is considerably smaller ($\approx 4$ dB); however, this comes at the cost of higher SERs compared to the worst-case scheme. We further discuss systems in which a target SER needs to be satisfied, and introduce the power efficiency of the system as a comparison ratio. The power efficiency incorporates both the total transmit power and the average per-user throughput. The simulation results indicate that the power efficiency of the stochastic model is much higher than that of the worst-case analysis.
An interesting open problem is to understand this trade-off for other types of modulation schemes. In addition, addressing the SLP design for SINR balancing and sum-rate maximization problems under given uncertainty models can be a topic of future work. \section*{Acknowledgment} The authors are supported by the Luxembourg National Research Fund (FNR) under CORE Junior project: C16/IS/11332341 Enhanced Signal Space opTImization for satellite comMunication Systems (ESSTIMS).
\section{Introduction} The relation between two-body and many-body physics is often an important point for the comprehension and the description of strongly-correlated quantum systems. A celebrated example is provided by homogeneous one-dimensional (1D) interacting systems solvable by the Bethe Ansatz, such as bosons and fermions with contact interactions~\cite{LiebLin,Yang67,Sutherland68}. In that case, the $N$-body solution can be exactly expressed as a function of a product of two-body scattering contributions. Generally, such a system is no longer integrable when subjected to an external potential but a notable exception is the limit of infinitely strong repulsive interactions, known as the Tonks-Girardeau limit, where fermionization occurs. In that case, the system remains exactly solvable, for any number of bosons and fermions \cite{Vignolo00,Deuretzbacher,Fang2011,vignolo2013,Volosniev2014,Deuretzbacher2014,Decamp2016,Decamp2016b,Decamp2017}. At finite interactions, the harmonically-trapped system can be exactly solved for 2 particles~\cite{Busch98} and is approximately solved in the large-$N$ limit by a local density approximation (LDA) on the Lieb-Liniger solution \cite{Olshanii03}. For finite-$N$ systems, several approaches have been proposed: a pair-correlated wavefunction approach \cite{Brouzos2012,Koscik2018}; a $T$-matrix approach for the Fermi polaron at zero and finite temperature \cite{Doggen2013}; a geometric wavefunction description, that is very accurate for 2 and 3 bosons \cite{Wilson2014}; and, more recently, an interpolatory Ansatz combining the non-interacting and unitary wavefunctions \cite{Andersen2016}. This last approach provides very accurate results for the energy in impurity systems \cite{Andersen2016}, but is less accurate when increasing the number of particle components \cite{Pecak2017}. 
A crucial observable for a 1D system of $N$ particles with contact interactions is Tan's contact parameter, characterizing the asymptotic behavior of the momentum distribution of the particles $C_{N}= \lim_{k\to \infty} k^4 n(k)$~\cite{Minguzzi02}. The contact embeds information on the interaction energy and the density-density correlation function \cite{Tan2008a,Tan2008b,Tan2008c}. It is a univocal measure of the wavefunction symmetry of fermionic and/or bosonic mixtures~\cite{Decamp2016b,Decamp2017}. The contact parameter is also determined by the probability density of finding 2 particles at a vanishing distance \cite{Olshanii03}. For trapped quantum gases, this probability density has a nontrivial dependence on the number of particles and on the interaction strength~\cite{xu2015,Yao2018}. In this Letter, we propose a change of paradigm by showing that the contact parameter plays in fact a more fundamental role than the energy in analyzing Lieb-Liniger bosons. Inspired by the scaling properties of this model, we show that if the starting point of the scaling analysis is the contact parameter instead of the energy, the two-body result provides a very good description of the system for {\it any} number of particles and interaction strengths. The quantitative difference between our predictions and numerically-exact DMRG results is always very small (i.e. less than a few percent) and is the largest at intermediate interaction strengths where the interaction energy is also the largest. For particle numbers $N>2$ we show that the many-body corrections to the rescaled two-body result can be accounted for by a simple interpolation connecting the two-body solution and the LDA one. With this, we obtain an analytical and very accurate expression for the contact parameter at all particle numbers $N$ that we use to derive an accurate formulation for the total energy of the system. 
\section{Model and scaling analysis} We start with the case of $N \geq 2$ identical and harmonically-trapped 1D bosons of mass $m$ at zero temperature, interacting via repulsive contact interactions. Such a system is described by the many-body Hamiltonian \begin{equation} H=\sum_{j=1}^N \left[ \frac{-\hbar^2}{2m}\frac{\partial^2}{\partial x_j^2} + \frac{1}{2} m \, \omega^2 \, x_j^2 +g \sum_{\ell>j} \delta(x_j-x_\ell)\right] \label{eq:Ham} \end{equation} with $g = 2\hbar^2/(m |a_{\textrm{1D}}|) \geq 0$~\cite{Olsh98}. As shown by Tan in~\cite{Tan2008a,Tan2008b,Tan2008c}, the contact parameter associated with the eigenenergy $E_{N}$ reads \begin{equation} C_{N}(g)= \frac{m^2}{\pi\hbar^4} \left( - \frac{\partial E_{N}}{\partial g^{-1}}\right) = \frac{m^2 g^2}{\pi\hbar^4} \frac{\partial E_{N}}{\partial g} \equiv \frac{m^2 g}{\pi\hbar^4} E_\textrm{int} \, \label{eq:contact} \end{equation} where $E_\textrm{int} $ is the interaction energy. Tan's contact Eq.\eqref{eq:contact} is thus a direct by-product of the dependence of the system energy on the interaction strength $g$. In the following, we restrict our analysis to the ground state energy. By rescaling Hamiltonian (\ref{eq:Ham}) by the ground state energy in the fermionized regime $E^\infty_{N} = N^2\hbar\omega/2$, and by expressing the particle coordinates in units of $a_{\textrm{ho}}/\sqrt{N}$, where $a_{\textrm{ho}}=\sqrt{\hbar/(m\omega)}$ is the harmonic oscillator length, it is easy to see that the ground state energy can be written as~\cite{Decamp2016b} \begin{equation} E_{N}(g)=E^\infty_{N} \, \mathcal{E}(N,g_N) \label{eq:einf} \end{equation} where \begin{equation} g_N = \dfrac{mga_\textrm{ho}}{2 \hbar^2 \sqrt{N}} = \dfrac{a_{\textrm{ho}}}{|a_{\textrm{1D}}|\sqrt{N}} \equiv \dfrac{\alpha}{\sqrt{N}} \label{eq:AlphaN} \end{equation} is the dimensionless interaction strength and $\alpha\!=\!a_{\textrm{ho}}/|a_{\textrm{1D}}|$.
The dimensionless energy function $\mathcal{E}$ interpolates between the non-interacting regime where $\mathcal{E}(N,0)=1/N$ and the fermionized regime where $\mathcal{E}(N,\infty)=1$. Obviously, Tan's contact depends on the same parameters $N$ and $g_N$ and reads: \begin{equation} C_{N}(g)= \dfrac{N^{5/2}}{\pi a_{\textrm{ho}}^3} \, \mathcal{C}(N,g_N), \label{eq:reducedTan0} \end{equation} where the rescaled dimensionless Tan's contact \begin{equation} \mathcal{C}(N,z) = z^2 \, \partial_z \mathcal{E}(N,z) \label{eq:reducedTan} \end{equation} is evaluated at $z=g_N$. By the same token, $E_\textrm{int} = E^\infty_{N} \, \mathcal{E}_\textrm{int}(N, z)$ and we find \begin{equation} \mathcal{E}_\textrm{int}(N, z) = \dfrac{\mathcal{C}(N, z)}{z} = z \, \partial_z \mathcal{E}(N,z). \label{eq:InterEn} \end{equation} In the thermodynamic limit ($N, a_\textrm{ho} \to \infty$ at constant $a_\textrm{ho}/\sqrt{N}$), the only scaling parameter is $g_N$, both for the dimensionless energies $\mathcal{E}$, $\mathcal{E}_\textrm{int}$ and contact parameter $\mathcal{C}$. This can be easily shown in a Local Density Approximation (LDA) on the Lieb-Liniger homogeneous solution \cite{LiebLin,Olshanii03}, and generalized to a generic trapping potential (see App.~\ref{app-scaling}). One gets: \begin{equation} \begin{split} & E^{\textrm{\tiny LDA}}_{N}(g)=E^\infty_{N} \, \mathcal{E}_{\textrm{\tiny LDA}}(g_N) \\ & C^{\textrm{\tiny LDA}}_{N}(g)=\dfrac{N^{5/2}}{\pi a_{\textrm{ho}}^3} \, \mathcal{C}_{\textrm{\tiny LDA}}(g_N) \label{eq:scalingC} \end{split} \end{equation} with $\mathcal{E}_{\textrm{\tiny LDA}}(0)\! = \! \mathcal{C}_{\textrm{\tiny LDA}}(0)\! = \!0$, $\mathcal{E}_{\textrm{\tiny LDA}}(\infty)\! = \!1$ and $\mathcal{C}_{\textrm{\tiny LDA}}(\infty)\! = \!128\sqrt{2}/(45\pi^2)$~\cite{Olshanii03}. 
Although the derivation has been detailed for single-component bosons, it is possible to show that the scaling analysis applies also to multi-component bosons and fermions \cite{Massignan2015,Matveeva2016,Decamp2016b,Lewenstein-Massignan,Laird2017}, the Hamiltonian being the same as Eq. (\ref{eq:Ham}). \section{The reduced contact parameter and scaling Ansatz} Strictly speaking, the LDA scaling behavior with respect to the sole variable $z$ should only hold in the large-$N$ limit. Indeed, this is what we observe when we plot $\mathcal{C}(N,z)$, obtained by a 2-tensor DMRG optimization of a Matrix Product States (MPS) Ansatz~\cite{Schollwock2011} (see App.~\ref{app-dmrg}), in comparison with $\mathcal{C}_{\rm LDA}(z)$, as shown in the left panel of Fig. \ref{fig1}. \begin{figure*} \begin{center} \includegraphics[width=0.45\linewidth]{fig0.pdf} \includegraphics[width=0.45\linewidth]{fig1.pdf} \end{center} \caption{\label{fig1} (Color online) Rescaled dimensionless contact $\mathcal{C}(N,z)$, Eq.\eqref{eq:reducedTan}, (left panel) and reduced contact parameter $f_{N}(z)$, Eq.\eqref{eq:scalingHyp}, (right panel) as a function of the dimensionless scaling parameter $z=a_{\textrm{ho}}/(|a_{\textrm{1D}}|\sqrt{N})=a_{\textrm{ho}}/(|a_{\textrm{1D}}'|\sqrt{2})$. The different symbols correspond to DMRG calculations: $N=2$ (black squares), $N=3$ (brown circles), $N=4$ (purple triangles up), $N=5$ (light-blue triangles down) and $N=8$ (green diamonds). The black continuous line corresponds to Eq.~\eqref{eq:c2}, and the orange dashed line corresponds to the LDA solution, Eq.~\eqref{eq:scalingC}~\cite{Olshanii03}. Top inset in the right panel: Reduced contact parameter $f_{N}(z)$ for $SU(\kappa)$ fermions \cite{Decamp2016b} (red points), with $\kappa$ ranging from 2 to 6, superposed on all the data and curves of the main panel.
Bottom inset in the right panel: Convergence rate $R_{N}(z)$ as a function of $N$, in a log-log scale, for $z=$ 0.14 (squares), 0.35 (stars), 0.70 (crosses), and $z\rightarrow\infty$ (plus).} \end{figure*} However, all the curves seem to have the same shape, but with different asymptotic values. Here we put forward a different {\it scaling hypothesis} by assuming that the reduced contact parameter \begin{equation} f_{N}(z)=\dfrac{C_N(g(z))}{C_N(\infty)}=\dfrac{\mathcal{C}(N,z)}{\mathcal{C}(N,\infty)}, \end{equation} with $g(z)=2\hbar^2\sqrt{N}z/(ma_{\textrm{ho}})$, is a universal function for any $N\ge 2$. In particular, if this scaling hypothesis holds, \begin{equation} f_{N}(z) = f_2(z). \label{eq:scalingHyp} \end{equation} This would correspond to the assumption that an $N$-boson system at contact interaction strength $g$ can be mapped onto an effective 2-boson system at a rescaled {\it weaker} contact interaction strength $g'=\sqrt{2/N} g$. Stated equivalently, the scattering length is renormalized through $a_{{\textrm 1D}} \to a_{{\textrm 1D}}'=\sqrt{N/2} \, a_{{\textrm 1D}}$. In the case of $N=2$ bosons, Tan's contact is given by \begin{equation} C_2(g)=\dfrac{m^2g^2}{\pi\hbar^4} |\psi_\nu(0)|^2 \label{eq:c2} \end{equation} where $\psi_\nu(0)$ is the wavefunction solving the Schr\"odinger equation for the relative motion~\cite{Busch98} evaluated at $x_1-x_2=0$. It is straightforward (see App. \ref{app-two}) to show that \begin{equation} f_2(z)=\dfrac{\mathcal{C}(2,z)}{\mathcal{C}(2,\infty)} = \dfrac{\pi \nu^2\, 2^{\nu-1}}{\mathcal{N}(\nu) \, [\Gamma(1-\nu/2)]^2} \label{scaling1} \end{equation} where $\mathcal{C}(2,\infty)= 1/(2\sqrt{\pi})$, $\mathcal{N}(\nu)$ is a normalization factor (see App. \ref{app-two}), and the $\nu$'s solve \begin{equation} \dfrac{\Gamma(-\nu/2)}{\Gamma(-\nu/2+1/2)} =-\dfrac{1}{z}. \label{gammaeq} \end{equation} In the right panel of Fig. \ref{fig1}, we compare the exact result for $f_2(z)$, Eq.~\eqref{scaling1}, to the numerical data.
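For a given $z$, the transcendental equation \eqref{gammaeq} can be solved for the ground-state branch $\nu\in(0,1)$ by bisection, using only the standard-library gamma function; the bracket endpoints and iteration count below are implementation choices:

```python
import math

def nu_of_z(z):
    """Solve Gamma(-nu/2)/Gamma(-nu/2 + 1/2) = -1/z (eq. gammaeq)
    for the ground-state branch nu in (0, 1) by bisection."""
    def F(nu):
        return math.gamma(-nu / 2) / math.gamma(-nu / 2 + 0.5) + 1.0 / z
    lo, hi = 1e-9, 1.0 - 1e-9   # F < 0 at lo (nu -> 0+), F > 0 at hi (nu -> 1-)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if F(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Weak coupling: nu ~ 2 z / sqrt(pi) -> 0 as z -> 0.
assert nu_of_z(1e-3) < 0.01
# Fermionized (Tonks-Girardeau) limit: nu -> 1 as z -> infinity.
assert nu_of_z(1e6) > 0.99
# Intermediate point: at nu = 1/2 the ratio is Gamma(-1/4)/Gamma(1/4),
# i.e. -1/z for z ~ 0.73967.
assert abs(nu_of_z(0.73967) - 0.5) < 1e-3
```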
The fact that all the curves (almost) collapse shows that $z=a_{\textrm{ho}}/(|a_{\textrm{1D}}|\sqrt{N})=a_{\textrm{ho}}/(|a_{\textrm{1D}}'|\sqrt{2})$ is indeed the dimensionless scaling parameter of the reduced contact parameter, and that the contact for any interaction strength and any number $N$ of particles can be deduced from a simple 2-body calculation, $f_2(z)$, {\it and} from the knowledge of the contact for $N$ particles in the Tonks-Girardeau limit, $\mathcal{C}(N,\infty)$, which, for bosons, can be calculated exactly \cite{vignolo2013}. This also means that the function $C_N(\infty)$ embeds almost the full $N$-dependence of the problem for any value of $z$, even for few-body systems where the $N^{5/2}$ factor, deduced in the thermodynamic limit from the energy scaling analysis, fails. This result appears to be general and independent of the particle statistics \cite{Massignan2015,Matveeva2016,Decamp2016b,Lewenstein-Massignan,Laird2017}. Indeed, the data for the reduced contact parameter of harmonically-trapped one-dimensional SU($\kappa$) interacting fermions \cite{Decamp2016b} collapse onto the same curve, as shown in the top inset in the right panel of Fig. \ref{fig1}. \subsection{Are two enough?} At first sight, our DMRG data match the simple prediction of Eq.~\eqref{eq:scalingHyp} very well. However, we observe small deviations at intermediate interaction strengths where the data lie between $f_2$ (black continuous line) and the LDA solution $f_{\textrm{\tiny LDA}}=\mathcal{C}_{\textrm{\tiny LDA}}(z)/\mathcal{C}_{\textrm{\tiny LDA}}(\infty)$ (orange dashed line) that is known to be a very good approximation for the contact in the large-$N$ limit. This point is illustrated in the bottom inset of the right panel of Fig.
\ref{fig1}, where we show, by plotting the convergence rate $R_{N}(z) = 1- {\mathcal{C}(N,z)}/{\mathcal{C}_{\textrm{\tiny LDA}}(z)}$, how fast the exact contact converges to its LDA value with increasing $N$, for various values of $z$. A numerical fit in the fermionized regime \cite{vignolo2013} gives \begin{equation} R_{N}(\infty) = 1- \dfrac{\mathcal{C}(N,\infty)}{\mathcal{C}_{\textrm{\tiny LDA}}(\infty)} \simeq 1.04 \, N^{-7/4}. \label{eq:conv} \end{equation} The weak dependence on $z$ of the slope of the convergence rate $R_{N}(z)$ confirms that the dependence on $N$ of $C_N(z)$ is almost independent of $z$. \subsection{Beyond two} To further quantify the corrections to the scaling prediction Eq.\eqref{eq:scalingHyp}, we plot in Fig. \ref{fig2} the difference $\mathcal{D}_{N}(z) \!=\! f_{N}(z) \!-\! f_2(z)$. \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{fig2.pdf} \end{center} \caption{\label{fig2} Top panel: (Color online) Difference $\mathcal{D}_{N}(z) \!=\! f_{N}(z) \!-\! f_2(z)$ for different values of $N$ as a function of the dimensionless scaling parameter $z$. Middle panel: Dimensionless interaction energy $\mathcal{E}_{\textrm{int}}(N,z)$ for different values of $N$, Eq.\eqref{eq:InterEn}, as a function of $z$. All curves display a clear maximum at intermediate dimensionless interaction strengths $z \simeq 0.5$. Bottom panel: Scaled difference $\mathcal{D}_{N}(z)/\beta_N$ with $\beta_N=1-2/N$ for different values of $N$ as a function of $z$. All curves collapse quite well onto the LDA prediction $\mathcal{D}_{\textrm{\tiny LDA}}(z)$ (dashed orange curve) even if further corrections would be needed around the maximum. Symbols are the same as in Fig.~\ref{fig1}.} \end{figure} We observe that $\mathcal{D}_{N}(z)$ reaches its largest value where the interaction energy $\mathcal{E}_{\textrm{int}}(N,z)$ is maximum.
By comparing $\mathcal{D}_{N}(z)$ to the LDA prediction $\mathcal{D}_{\textrm{\tiny LDA}}(z) = f_{\textrm{\tiny LDA}}(z) - f_2(z)$ (orange dashed line), we infer the approximate, but quite accurate, proportionality relation $\mathcal{D}_{N}(z) \simeq \beta_N \, \mathcal{D}_{\textrm{\tiny LDA}}(z)$ with $\beta_N = 1-2/N$, see bottom panel of Fig. \ref{fig2}. As a consequence, the simple interpolation \begin{equation} f_N(z) \simeq \left(1-\beta_N\right) \, f_2(z) +\beta_N\, f_{\textrm{\tiny LDA}}(z) \label{2corr} \end{equation} connects quite accurately the exact two-body solution for the contact parameter to the LDA one. We validate this interpolation in Fig. \ref{fig3} by comparing Eq.~\eqref{2corr} with DMRG data obtained for $N=3, 4, 5$ and $8$ bosons. We find perfect agreement. This means that, within our approach, we can calculate with the same degree of precision all the non-trivial, experimentally relevant quantities that are directly connected to the contact parameter, such as the interaction energy \cite{Tan2008a,Zwe11}, the two-body correlation function \cite{Olshanii03,Zwe11}, the magnetization \cite{Decamp2016b}, the loss rate in boson-fermion mixtures \cite{Sebastien2017}, or the heating rate due to measurement back-action of an atomic system in an optical cavity \cite{Uchino2018}. \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{fig3.pdf} \end{center} \caption{\label{fig3} (Color online) Reduced contact parameter $f_{N}(\alpha/\sqrt{N})$, Eq.~\eqref{eq:scalingHyp}, for different $N$, plotted as a function of $\alpha\!=\!a_{\textrm{ho}}/|a_{\textrm{1D}}|$ for better visibility. Solid lines: theoretical prediction Eq.~\eqref{2corr}; symbols: DMRG results. Symbols are the same as in Fig.~\ref{fig1}. 
} \end{figure} \section{From the contact to the energy} The most crucial test of the quality of our Ansatz for the contact parameter is the ground-state energy, since it is obtained by integrating the contact, thereby accumulating any deviations: \begin{equation} \mathcal{E}(N, z) = 1 - \int_{z}^\infty dz' \, \dfrac{\mathcal{C}(N,z')}{z'^2}. \label{eq-en} \end{equation} Using Eq.~\eqref{2corr}, we arrive at \begin{equation} \begin{split} \mathcal{E}(N,z) &\simeq 1 - \dfrac{2}{N} \, \dfrac{\mathcal{C}(N,\infty)}{\mathcal{C}(2,\infty)}\left[1- \mathcal{E}(2, z)\right]\\ &-\left(1-\dfrac{2}{N}\right) \, \dfrac{\mathcal{C}(N,\infty)}{\mathcal{C}_{\textrm{\tiny LDA}}(\infty)} \left[1- \mathcal{E}_{\textrm{\tiny LDA}}(z)\right]. \end{split} \label{mia} \end{equation} In Fig. \ref{fig4}, we plot the rescaled energy difference \begin{equation} \Delta_N(z) =\dfrac{E_N(g(z))-E_N(0)}{E_N^\infty-E_N(0)}= \dfrac{N \mathcal{E}(N, z) - 1}{N-1}, \label{miaDiff} \end{equation} whose limits $\Delta_N(\infty) \!=\! 1$ and $\Delta_N(0) \!=\! 0$ do not depend on $N$. We compare the exact numerical results with the prediction obtained by using Eq.~\eqref{mia} for different values of $N$. \begin{figure} \begin{center} \includegraphics[width=1\linewidth]{fig4.pdf} \end{center} \caption{\label{fig4} (Color online) Rescaled ground-state energy $\Delta_N(z)$ relative to its non-interacting value, Eq.~(\ref{miaDiff}), as a function of the dimensionless scaling parameter $z$ for different values of $N$. Solid lines: theoretical prediction Eq.~\eqref{mia}; symbols: DMRG results, same $N$ values and symbols as in Fig.~\ref{fig1}. } \end{figure} The agreement with the DMRG data is very good from moderately weak to strong interaction strengths ($z \ge 0.02$). Discrepancies only occur in the weak-interaction regime ($z \le 0.02$), where the LDA is less accurate. 
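As an illustration of the integration in Eq.~\eqref{eq-en}, the short Python sketch below evaluates $\mathcal{E}(z)$ numerically for a toy contact profile. The profile $\mathcal{C}(z)=\mathcal{C}_\infty\, z^2/(1+z^2)$ is purely hypothetical, chosen only because the resulting integral has a closed form to check against; it is not the contact of the physical system studied here.

```python
import math

C_INF = 1.0  # hypothetical value of the contact at infinite interactions

def contact(z):
    # toy contact profile, purely illustrative
    return C_INF * z * z / (1.0 + z * z)

def integrand(u):
    # C(1/u) after the substitution u = 1/z', whose Jacobian cancels the
    # z'^(-2) factor in Eq. (eq-en); the u -> 0 limit is C(infinity) = C_INF
    return C_INF if u == 0.0 else contact(1.0 / u)

def energy(z, steps=10000):
    """E(z) = 1 - int_z^inf C(z')/z'^2 dz', computed with the composite
    Simpson rule on the finite interval [0, 1/z] (steps must be even)."""
    a, b = 0.0, 1.0 / z
    h = (b - a) / steps
    total = integrand(a) + integrand(b)
    for i in range(1, steps):
        total += (4 if i % 2 else 2) * integrand(a + i * h)
    return 1.0 - total * h / 3.0

# for this toy profile, E(z) = 1 - C_INF * (pi/2 - atan z) exactly
for z in (0.5, 1.0, 5.0):
    print(z, energy(z), 1.0 - C_INF * (math.pi / 2 - math.atan(z)))
```

The substitution $u=1/z'$ maps the semi-infinite integration domain onto a finite interval, so that a standard quadrature rule applies; the correct limits $\mathcal{E}(\infty)=1$ follow directly from the vanishing of the remaining integral.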
\section{Conclusion} We have shown that the contact parameter for $N$ harmonically-trapped interacting 1D bosons at zero temperature can be simply and accurately obtained from an appropriate rescaling of the two-body contact parameter followed by a smooth interpolation to the $N$-body LDA one. The key point is a change of paradigm: identifying the contact, instead of the energy, as the starting point for the scaling analysis. Indeed, almost all of the dependence of the contact on the number of particles can be embedded in the contact at infinite interactions, for {\it any} number of particles. This result seems to be general and independent of the particle statistics. It highlights the fundamental role of the contact, which is likely due to its local two-body correlation nature. We have further shown that our approach leads to a ground-state energy for any number of bosons that matches the exact result very well down to moderately weak interaction strengths, where no analytical solution is known. Our results improve on previous studies~\cite{Brouzos2012,Wilson2014,Andersen2016,Pecak2017} with a simpler and more accurate Ansatz, and further confirm that the ground-state properties of an interacting 1D Bose gas can be accurately described by an effective two-body contact interaction dressed by the other particles in the fluid \cite{Braaten2008,Zwe11}. Our work constitutes an important step forward in understanding the effects of correlations and interactions in harmonically-trapped one-dimensional interacting boson and fermion mixtures. It opens the way to further studies of similar scaling properties in higher-dimensional systems \cite{Levinsen2017}, confined in various trapping potentials, at zero and finite temperature \cite{Yao2018}. \section*{Acknowledgements} P.V. acknowledges UMI 3654 MajuLab hospitality and D. Goupy for enlightening discussions. A.M. acknowledges ANR SuperRing project (ANR-15-CE30-0012-02), and discussions with G. Lang. M.R. 
acknowledges computational time from the Mogon cluster of the JGU (made available by the CSM and AHRP), S. Montangero for a long-standing collaboration on the flexible Abelian Symmetric Tensor Networks Library employed here, as well as J. J\"unemann for his participation in early stages of this work. C.M. is a Fellow of the Institute of Advanced Studies at Nanyang Technological University (Singapore). The Centre for Quantum Technologies is a Research Centre of Excellence funded by the Ministry of Education and National Research Foundation of Singapore.
\section{Introduction} \label{sec:intro} A deterministic automaton is called \emph{synchronizing} when there exists a word that brings every state to the same state. Such a word is called a \emph{reset} or \emph{synchronizing} word. Synchronizing automata serve as natural models of error-resistant systems, because a reset word allows one to drive a system into a known state, thus reestablishing control over it. For instance, prefix code decoders can be represented by automata. If the automaton corresponding to a decoder is synchronizing, then reading a reset word after an error has occurred restores the correct decoding process. There has been a lot of research on synchronizing automata since the pioneering work of \v{C}ern\'{y}~\cite{Ce64}. Two questions attract major interest here: whether an automaton is synchronizing and, if so, what the length of its shortest reset words is. These questions are also studied from different perspectives (algorithmic complexity, general bounds, etc.) and in a variety of settings, e.g. for particular classes of automata or in random settings. The reader is referred to the survey of Volkov~\cite{Vo08} for a brief introduction to the theory of synchronizing automata. One of the most studied directions of research in this field is the long-standing conjecture of \v{C}ern\'{y}, which states that if an automaton is synchronizing, then it admits a reset word of length at most $(n-1)^2$, where $n$ is the number of states of the automaton. This bound is best possible, as shown by \v{C}ern\'{y}. However, despite many efforts, only cubic upper bounds have been obtained so far~\cite{Pin1983,Szykula2017}. \bigskip It is the probabilistic setting that interests us in this article. 
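Both basic questions are also easy to explore computationally. The Python sketch below is a minimal illustration (it is not taken from the works cited above): breadth-first search in the pair automaton finds words merging pairs of states, and a greedy loop assembles them into a reset word. The demonstration uses the classical \v{C}ern\'{y} automaton with four states, whose shortest reset words have length $(4-1)^2=9$; the greedy word found here is a valid reset word but may be longer.

```python
from collections import deque

def merge_word(delta, p, q):
    """Shortest word u with p.u == q.u, by BFS in the pair automaton;
    returns None when {p, q} is a deadlock (can never be merged)."""
    seen = {(p, q): []}
    queue = deque([(p, q)])
    while queue:
        x, y = queue.popleft()
        for a in range(len(delta)):
            nx, ny = delta[a][x], delta[a][y]
            if nx == ny:
                return seen[(x, y)] + [a]
            key = (min(nx, ny), max(nx, ny))
            if key not in seen:
                seen[key] = seen[(x, y)] + [a]
                queue.append(key)
    return None

def run(delta, q, u):
    # image of state q by the word u
    for a in u:
        q = delta[a][q]
    return q

def greedy_reset_word(delta, n):
    """Repeatedly merge two states of the current image set; returns a reset
    word, or None when the automaton is not synchronizing."""
    current, word = set(range(n)), []
    while len(current) > 1:
        p, q = sorted(current)[:2]
        u = merge_word(delta, p, q)
        if u is None:
            return None
        word += u
        current = {run(delta, s, u) for s in current}
    return word

# Cerny automaton with 4 states: letter 0 cycles the states,
# letter 1 fixes every state except 3, which it sends to 0.
delta = [[1, 2, 3, 0], [0, 1, 2, 0]]
w = greedy_reset_word(delta, 4)
print(w, {run(delta, q, w) for q in range(4)})
```

The pair-automaton criterion used here (an automaton is synchronizing if and only if every pair of states can be merged) is the standard starting point for both algorithmic and probabilistic studies of synchronization.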
During the attempts to tackle the conjecture of \v{C}ern\'{y}, many experiments have been carried out, showing that random automata seem to be synchronizing with high probability, and that their reset words seem to be quite small in expectation. This was proved quite recently in a series of articles: \begin{itemize} \item Skvortsov and Zaks~\cite{Zaks10} obtained some results for large alphabets (where the number of letters grows with $n$); \item Berlinkov~\cite{Berl2013RandomAut} proved that the probability that a random automaton is not synchronizing is in $\O(n^{-k/2})$, where $k$ is the number of letters, for any $k\geq 2$ (this bound is tight for $k=2$); \item Nicaud~\cite{FastSyn} proved that with high probability a random automaton admits a reset word of length $\O(n\log^3 n)$, for $k\geq 2$ (but with less precise error terms than in~\cite{Berl2013RandomAut}). \end{itemize} All these results hold for the \emph{uniform distribution} on the set of deterministic and complete automata with $n$ states on an alphabet of size $k$, where all automata have the same probability. It is, indeed, the most natural probability distribution to study first. The reader is referred to the survey~\cite{RandomAutSurvey} for more information about random deterministic automata. \bigskip In this article we study a distribution on a restricted set of deterministic automata, the \emph{almost-group automata}, which will be defined later in this introduction. In order to motivate our choice, we first need to outline the main features of the uniform distribution on deterministic automata and how they were used in the proofs of the articles cited above. In a deterministic and complete automaton, one can consider each letter as a map from the set of states $Q$ to itself, which is called its \emph{action}. The action of a given letter in a uniform random automaton is a uniform random mapping from $Q$ to $Q$. 
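The statistics of uniform random mappings can be explored directly. The Python sketch below (an illustration, not part of any proof) enumerates all $n^n$ mappings of a small set, computes the average number of cyclic points, i.e. elements $x$ with $f^\ell(x)=x$ for some $\ell>0$, and compares it with the exact expectation $\sum_{k=1}^{n} n!/((n-k)!\,n^k)$, a classical formula from random-mapping statistics that grows like $\sqrt{\pi n/2}$.

```python
from itertools import product
from math import factorial

def num_cyclic(f):
    """Number of cyclic points of the mapping f on {0, ..., n-1}."""
    n, count = len(f), 0
    for x in range(n):
        y = x
        for _ in range(n):  # if x is cyclic, we return to it within n steps
            y = f[y]
            if y == x:
                count += 1
                break
    return count

def expected_cyclic_exact(n):
    # E[#cyclic points] = sum_{k=1}^{n} n! / ((n-k)! n^k); a classical
    # result, asymptotically equivalent to sqrt(pi * n / 2)
    return sum(factorial(n) / (factorial(n - k) * n**k) for k in range(1, n + 1))

for n in range(2, 6):
    avg = sum(num_cyclic(f) for f in product(range(n), repeat=n)) / n**n
    print(n, avg, expected_cyclic_exact(n))
```

The exhaustive average matches the exact formula for every small $n$, which makes the $\Theta(\sqrt{n})$ behaviour invoked below concrete.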
Properties of uniform random mappings have long been studied and most of their typical\footnote{In all the informal statements of this article, \emph{typical} means \emph{with high probability} as the size of the object (cardinality of the set, number of states of the automaton, ...) tends to infinity.} statistics are well known. The \emph{functional graph} proved to be a useful tool to describe a mapping; it is the directed graph with vertex set $Q$, built from a mapping $f:Q\rightarrow Q$ by adding an edge $i\rightarrow j$ whenever $j=f(i)$. Such a graph can be decomposed as a set of cycles of trees. The vertices that lie on a cycle are exactly the elements $q\in Q$ such that $f^\ell(q)=q$ for some positive $\ell$. They are called \emph{cyclic vertices}. The expected number of cyclic vertices in a uniform random mapping on a set of size $n$ is in $\Theta(\sqrt{n})$. This is used in~\cite{FastSyn} and~\cite{Berl2013RandomAut} to obtain the synchronization of most automata. The intuitive idea is that after reading $a^n$, the set of states already shrinks to a much smaller set, in a uniform random automaton; this gives enough leverage, combined with the action of the other letters, to fully synchronize a typical automaton. \medskip In a nutshell, uniform random automata are made of uniform random mappings, and each uniform random mapping is already likely to synchronize most of the states, due to its inherent typical properties. At this point, it seems natural to look for ``harder'' random instances with regard to synchronization, and this question was indeed commonly asked when the authors presented their previous works. In this article, to prevent easy synchronization from the separate action of the letters, we propose to study what we call \emph{almost-group automata}, where the action of each letter is a permutation, except for one of them which has only one non-cyclic vertex. An example of such an automaton is depicted in Fig.~\ref{fig:intro}. 
\begin{figure}[h] \begin{center} \begin{tikzpicture} \node[draw,circle] (p0) at (0,0) {0}; \node[draw,circle] (p2) at (1,1) {2}; \node[draw,circle] (p3) at (2,0) {3}; \node[draw,circle] (p6) at (1,-1) {6}; \node[draw,circle,fill=black!15] (p1) at (4,0) {1}; \node[draw,circle] (p5) at (5.5,0) {5}; \node[draw,circle] (p4) at (7.5,0) {4}; \draw[->,thick] (p0) edge[bend left] node[left]{$a,b$} (p2); \draw[->,thick] (p2) edge[bend left] node[right]{$a$} (p3); \draw[->,thick] (p3) edge[bend left] node[below]{$a$} (p6); \draw[->,thick] (p6) edge[bend left] node[left]{$a,b$} (p0); \draw[->,thick] (p2) edge node[left]{$b$} (p6); \draw[->,thick] (p3) edge[bend left] node[above]{$b$} (p1); \draw[->,thick] (p1) edge[bend left] node[below]{$b$} (p3); \draw[->,thick] (p1) edge node[above]{$a$} (p5); \draw[->,thick] (p5) edge[bend left] node[above]{$a,b$} (p4); \draw[->,thick] (p4) edge[bend left] node[below]{$a,b$} (p5); \end{tikzpicture} \end{center} \caption{An almost-group automaton with $7$ states. The action of $b$ is a permutation. The action of $a$ is not, as $1$ has no preimage by $a$; but if state $1$ is removed, $a$ acts as a permutation on the remaining states.\label{fig:intro}} \end{figure} Since a group automaton with more than one state cannot be synchronizing, almost-group automata can be seen as the automata with the maximum number of cyclic states (considering all its letters) that can be synchronizing. The question we investigate in this article is the following. \smallskip \noindent\textbf{Question: }For the uniform distribution, what is the probability that a strongly connected almost-group automaton is synchronizing? \smallskip For this question, we consider automata with $n$ states on a $k$-letter alphabet, with $k\geq 2$, and try to answer asymptotically as $n$ tends to infinity. We prove that such an automaton is synchronizing with probability that tends to $1$. 
We also provide a precise asymptotic estimation of the probability that it is not synchronizing. In other words, one can state our result as follows: group automata are always non-synchronizing when there are at least two states, but if just one letter is allowed to act non-bijectively, and on just one state, then the automaton is synchronizing with high probability. This suggests that, from a probabilistic point of view, it is very difficult to achieve non-synchronization. This article starts by recalling some basic definitions and notations in Section~\ref{sec:basicdefs}. Then some interesting properties of this set of automata regarding synchronization are described in Section~\ref{sec:almostgroup}. Finally, we rely on these properties and some elementary counting techniques to establish our result in Section~\ref{sec:counting}. \section{Basic Definitions and Notations} \label{sec:basicdefs} \noindent\textbf{Automata and synchronization.} Throughout the article, we consider automata on a fixed $k$-letter alphabet $\Sigma=\{a_0,\ldots,a_{k-1}\}$. Since we are only interested in synchronization properties, we only focus on the transition structure of automata: we specify neither initial nor final states, and will never actually consider recognized languages in the sequel. From now on, a \emph{deterministic and complete automaton} (DFA) $\mathcal{A}$ on the alphabet $\Sigma$ is just a pair $(Q,\cdot)$, where $Q$ is a non-empty finite set of \emph{states} and $\cdot$, the \emph{transition mapping}, is a mapping from $Q\times \Sigma$ to $Q$, where the image of $(q,a)\in Q\times \Sigma$ is denoted $q\cdot a$. It is inductively extended to a mapping from $Q\times \Sigma^*$ to $Q$ by setting $q\cdot \varepsilon = q$ and $q\cdot ua=(q\cdot u)\cdot a$, for any word $u\in \Sigma^*$ and any letter $a\in \Sigma$, where $\varepsilon$ denotes the empty word. Let $\mathcal{A}=(Q,\cdot)$ be a DFA. A word $u\in \Sigma^*$ is a \emph{synchronizing word} or a \emph{reset word} if for every $q,q'\in Q$, $q\cdot u=q'\cdot u$. 
An automaton is \emph{synchronizing} if it admits a synchronizing word. A subset of states $S\subseteq Q$ is \emph{synchronized} by a word $u\in \Sigma^*$ if $|S\cdot u|=1$. Observe that if an automaton contains two or more terminal strongly connected components\footnote{A strongly connected component $S$ is terminal when $S\cdot u\subseteq S$ for every $u\in \Sigma^*$.}, then it is not synchronizing. Moreover, if it has only one terminal strongly connected component $S$, then it is synchronizing if and only if $S$ is synchronized by some word $u$. For this reason, most works on synchronization focus on strongly connected automata, and this paper is no exception. \smallskip \noindent\textbf{Almost-group automata.} Let $\S_n$ be the set of all permutations of $E_n=\{0,\ldots,n-1\}$. A \emph{cyclic point} of a mapping $f$ is an element $x$ such that $f^\ell(x)=x$ for some positive $\ell$. An \emph{almost-permutation} of $E_n$ is a mapping from $E_n$ to itself with exactly $n-1$ cyclic points; its unique non-cyclic point is called its \emph{dangling point} (or \emph{dangling state} later on, when we use this notion for automata). Equivalently, an almost-permutation is a mapping that acts as a permutation on a subset of size $n-1$ of $E_n$ and that is not a permutation. Let $\S'_n$ denote the set of almost-permutations of $E_n$. An \emph{almost-group automaton} is a DFA in which one letter acts as an almost-permutation and all the others act as permutations. An example of such an automaton is given in Fig.~\ref{fig:intro}. For counting reasons, we need to normalize the automata, and define $\mathcal{G}_{n,k}$ as the set of all almost-group automata on the alphabet $\{a_0,\ldots,a_{k-1}\}$ whose state set is $E_n$ and such that $a_0$ is the almost-permutation letter. \smallskip \noindent\textbf{Probabilities.} In this article, we equip non-empty finite sets with the uniform distribution, where all elements have the same probability. 
The sets under consideration are often sequences of sets, such as $\S_n$; by abuse of notation, we say that a property \emph{holds with high probability} for $\S_n$ when the probability that it holds, which is defined for every $n$, tends to $1$ as $n$ tends to infinity. \section{Synchronization of Almost-Group Automata} \label{sec:almostgroup} In this section we introduce the main tools that we use to describe the structure of synchronizing and of non-synchronizing almost-group automata. The notion of a \emph{stable pair}, introduced by Kari~\cite{KariStable02}, has proved to be fruitful, most notably in the hands of Trahtman, who used it to solve the famous \emph{Road Coloring Problem}~\cite{TRRCP08}. We make use of this notion in our proof as well, along with some ideas coming from~\cite{TRRCP08}. A pair of states $\{p,q\}$ is called \emph{stable} if for every word $u$ there is a word $v$ such that $p\cdot uv=q\cdot uv$. The \emph{stability} relation, given by the set of stable pairs joined with the diagonal set $\{(p,p) \mid p \in Q\}$, is invariant under the actions of the letters, and it is complete whenever $\mathcal{A}$ is synchronizing. The definition on unordered pairs is sound since stability is a symmetric binary relation. It is also transitive, hence an equivalence relation on $Q$, and in fact a congruence, i.e. invariant under the actions of the letters. Notice also that an automaton is synchronizing if and only if its stability relation is complete, that is, all pairs are stable. Because of that, if an automaton is not synchronizing but admits a stable pair, then one can consider a non-trivial factorization of the automaton by the stability relation. So, we aim at characterizing stable pairs in a strongly-connected non-synchronizing almost-group automaton, in order to show that there is only a slim chance for such a factorization to appear when switching to probabilities. 
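Which pairs can or cannot be merged is easy to compute on small examples. The following Python sketch (illustrative only, not part of the proofs) encodes the almost-group automaton of Fig.~\ref{fig:intro} and determines, by a fixpoint iteration over unordered pairs, the pairs that can be mapped to a singleton; the remaining pairs are exactly the deadlocks introduced below.

```python
from itertools import combinations

# Transition tables of the almost-group automaton of Fig. 1 (states 0..6):
# letter a is the almost-permutation (state 1 has no preimage by a),
# letter b is a permutation.
A = [2, 5, 3, 6, 5, 4, 0]   # a: images of states 0..6
B = [2, 3, 6, 1, 5, 4, 0]   # b: images of states 0..6

def mergeable_pairs(delta, n):
    """Fixpoint over unordered pairs: a pair is mergeable when some letter
    sends it to a singleton or to an already-mergeable pair; the pairs left
    unmarked can never be merged by any word."""
    merge = {pq: any(d[pq[0]] == d[pq[1]] for d in delta)
             for pq in combinations(range(n), 2)}
    changed = True
    while changed:
        changed = False
        for (p, q), ok in merge.items():
            if not ok:
                for d in delta:
                    img = tuple(sorted((d[p], d[q])))
                    if merge[img]:
                        merge[(p, q)] = changed = True
                        break
    return merge

merge = mergeable_pairs([A, B], 7)
deadlocks = [pq for pq, ok in merge.items() if not ok]
print(deadlocks)  # states 4 and 5 swap under both letters, so {4,5} never merges
```

On this example the pair $\{4,5\}$ is the only one that can never be merged, so the automaton of Fig.~\ref{fig:intro} is not synchronizing; every other pair can eventually reach the pair $\{1,4\}$, which the letter $a$ merges into state $5$.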
For this purpose, we need the definition of a \emph{deadlock}, which is a pair that cannot be merged into one state by any word (in a sense, the opposite of a stable pair). A subset $S \subseteq Q$ is called an $F$-clique of $\mathcal{A}$ if it is a set of maximum size such that each pair of states from $S$ is a deadlock. It follows from the definition that all $F$-cliques have the same size and that the image of an $F$-clique by a letter or a word is also an $F$-clique. Let us reformulate~\cite[Lemma~2]{TRRCP08} for our purposes and present a proof for the sake of completeness. \begin{lemma} \label{lem:f-clique-diff} If $S$ and $T$ are two $F$-cliques such that $S \setminus T = \{p\}$ and $T \setminus S= \{q\}$, for some states $p$ and $q$, then $\{p,q\}$ is a stable pair. \end{lemma} \begin{proof} By contradiction, suppose there is a word $u$ such that $\{p\cdot u,q\cdot u\}$ is a deadlock. Then $(S \cup T)\cdot u$ is an $F$-clique because all its pairs are deadlocks. Since $p\cdot u \neq q\cdot u$, we have $|(S\cup T)\cdot u| = |S| + 1 > |S|$, contradicting the maximality of $S$.\qed \end{proof} \begin{lemma} \label{lem:stable_pair} Each strongly-connected almost-group automaton $\mathcal{A} \in \mathcal{G}_{n,k}$ with at least two states admits a stable pair that contains the dangling state and is synchronized by $a_0$. \end{lemma} \begin{proof} If $\mathcal{A}$ is synchronizing, then we are done because all pairs are stable. Otherwise, there must be an $F$-clique $F_1$ of size at least two. Let $p_0$ be the dangling state (which is not permuted by $a_0$) and let $d$ be the product of all cycle lengths of $a_0$. Since $\mathcal{A}$ is strongly-connected, there is a word $u$ such that $p_0 \in F_1\cdot u$. By the properties of $F$-cliques, $F_2 = F_1\cdot u$ and $F_3 = F_2\cdot a_0^{d}$ are $F$-cliques too. Notice that $p_0$ is the only state which does not belong to the cycles of $a_0$, and that all the cycle states remain fixed under the action of $a_0^d$, by construction of $d$. 
Hence $F_2 \setminus F_3 = \{p_0\}$ and $F_3 \setminus F_2 = \{p_0\cdot a_0^d\}$. Therefore, by Lemma~\ref{lem:f-clique-diff}, $\{p_0,p_0\cdot a_0^d\}$ is a stable pair. This concludes the proof since $p_0\cdot a_0 = p_0\cdot a_0^{d+1}$.\qed \end{proof} To characterize the elements of $\mathcal{G}_{n,k}$ that are not synchronizing, we build their \emph{factor automata}, which are defined as follows. Let $\mathcal{A}$ be a DFA with stability relation $\rho$. Let $\mathcal{C}=\{C_1,\ldots,C_\ell\}$ denote its equivalence classes for $\rho$. The \emph{factor automaton} of $\mathcal{A}$, denoted by $\mathcal{A} / \rho$, is the automaton with set of states $\mathcal{C}$ and transition function defined by $C_i\cdot a = C_j$ in $\mathcal{A} / \rho$ if and only if $C_i\cdot a \subseteq C_j$ in $\mathcal{A}$, or, equivalently, if and only if there exists $q\in C_i$ such that $q\cdot a\in C_j$ in $\mathcal{A}$. \begin{lemma} \label{lem:factor_automaton} If $\mathcal{A}\in\mathcal{G}_{n,k}$ is strongly-connected, then its factor automaton $\mathcal{A} / \rho$ is a strongly-connected permutation automaton. \end{lemma} \begin{proof} Strong-connectivity follows directly from the definition. If one of the letters were not a permutation on the factor automaton, then there would be a stable class $S$ with no incoming transition by this letter, and it would follow that no state of $S$ in $\mathcal{A}$ has an incoming transition by this letter either. However, this may happen only for the letter $a_0$, whose unique state with no incoming transition is the dangling state $p_0$. By Lemma~\ref{lem:stable_pair}, the dangling state $p_0$ belongs to a stable pair, hence there is another state in $S$; this contradicts the fact that $p_0$ is the only state with no incoming transition by $a_0$.\qed \end{proof} \begin{lemma} \label{lem:size_of_components} Let $\mathcal{A}\in\mathcal{G}_{n,k}$ and let $D$ be the stable class of $\mathcal{A}$ that contains the dangling state $p_0$. 
Then the set of stable classes can be divided into two disjoint, but possibly empty, subsets $\mathcal{B}$ and $\mathcal{S}$ such that \begin{itemize} \item[$\bullet$] $D \in \mathcal{B}$ and $|B|=|D|$ for every $B \in \mathcal{B}$; \item[$\bullet$] $|S|=|D|-1$ for every $S \in \mathcal{S}$; \item[$\bullet$] The $a_0$-cycle of $\mathcal{A} / \rho$ that contains $D$ only contains elements of $\mathcal{S}$ besides $D$; \item[$\bullet$] Every other cycle in $\mathcal{A} / \rho$ lies entirely in either $\mathcal{B}$ or $\mathcal{S}$. \end{itemize} \end{lemma} \begin{proof} Since stable pairs are mapped to stable pairs, the image of a stable class by any letter must be included in a stable class. Recall that by Lemma~\ref{lem:factor_automaton} all letters in $\mathcal{A} / \rho$ act as permutations on the stable classes. Our proof consists in examining the different cycles of the group automaton $\mathcal{A}/\rho$. Let us consider any cycle of a letter $a$ in $\mathcal{A} / \rho$, made of the stable classes $C_0, C_1, \dots, C_{r-1}$ with $C_j\cdot a \subseteq C_{j+1 \pmod{r}}$, for any $j\in\{0,\ldots,r-1\}$. If $a\neq a_0$, then the letter $a$ acts as a permutation in $\mathcal{A}$, and for each $j$ we have $|C_j| \leq |C_{j+1 \pmod{r}}|$, since $a$ does not merge pairs of states. Therefore, \[ |C_0| \leq |C_1| \leq \dots \leq |C_{r-1}| \leq |C_0|. \] As a direct consequence, all the $C_j$ have the same cardinality. If $a = a_0$, then observe that the same argument can be used once one removes the dangling state $p_0$ and its outgoing transition by $a_0$: the action of $a_0$ on $Q\setminus\{p_0\}$ becomes a well-defined permutation. Hence, if this cycle does not degenerate into a simple loop on $D$ alone, then all the other elements of the cycle are stable classes of size $|D|-1$. This is the only place in $\mathcal{A}/\rho$ where a change of size may happen. The lemma follows from the strong-connectivity of $\mathcal{A} / \rho$. 
\qed \end{proof} Notice that an almost-group automaton is non-synchronizing if and only if it has at least two stable classes. The following theorem is a consequence of this fact and of Lemma~\ref{lem:size_of_components}. \begin{theorem} \label{thm:non-synch-criterion} A strongly-connected almost-group automaton $\mathcal{A}$ is non-synchro\-nizing if and only if its partitioning described in Lemma~\ref{lem:size_of_components} is such that $|\mathcal{B} \cup \mathcal{S}|>1$. \end{theorem} \section{Counting Non-synchronizing Almost-Group Automata} \label{sec:counting} In this section, we use counting arguments to establish our main result: a precise estimation of the asymptotic number of strongly connected almost-group automata that are not synchronizing. Recall that our working alphabet is $\Sigma=\{a_0,\ldots,a_{k-1}\}$, that $E_n=\{0,\ldots,n-1\}$ and that $\mathcal{G}_{n,k}$ is the set of almost-group automata on $\Sigma$ with set of states $E_n$. Our first counting lemma is immediate. \begin{lemma}\label{lem:ag automata} For any $n\geq 1$, there are exactly $(n-1) n!$ almost-permutations of $E_n$. The number of elements of $\mathcal{G}_{n,k}$ is therefore equal to $(n-1)n!^k$. \end{lemma} \begin{proof} An almost-permutation of $E_n$ is characterized by its unique element $x_0$ with no preimage, the way it permutes $E_n\setminus\{x_0\}$, and the image of $x_0$ in $E_n\setminus\{x_0\}$. Since there are $n$ choices for $x_0$, $(n-1)!$ ways to permute the other elements and $n-1$ choices for the image of $x_0$, the result follows.\qed \end{proof} \subsection{Strong-Connectivity} Our computations below focus on strong-connectivity. We shall need an estimation of the number of strongly connected group automata and almost-group automata. These results are given in Lemmas~\ref{lem:sc_group_automata} and~\ref{lem:sc_automata}. 
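The count in Lemma~\ref{lem:ag automata} is easy to confirm by exhaustive enumeration for small $n$. The short Python sketch below (illustrative only) counts, among all $n^n$ mappings of $E_n$, those with exactly $n-1$ cyclic points, and checks the total against $(n-1)\,n!$.

```python
from itertools import product
from math import factorial

def num_cyclic(f):
    """Number of cyclic points of f, i.e. x with f^l(x) = x for some l > 0."""
    count = 0
    for x in range(len(f)):
        y = x
        for _ in range(len(f)):
            y = f[y]
            if y == x:
                count += 1
                break
    return count

# Almost-permutations of E_n are the mappings with exactly n-1 cyclic points;
# Lemma "lem:ag automata" predicts (n-1) * n! of them.
for n in range(2, 6):
    brute = sum(1 for f in product(range(n), repeat=n)
                if num_cyclic(f) == n - 1)
    assert brute == (n - 1) * factorial(n)
    print(n, brute)
```

The enumeration mirrors the bijective argument of the proof: each almost-permutation is determined by its unique element without a preimage, a permutation of the remaining $n-1$ elements, and the image of that element.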
The proofs of these lemmas are essentially folklore, so we have moved them to Appendix~\ref{sec:appendix} due to space constraints. \begin{restatable}{lemma}{primelemma} \label{lem:sc_group_automata} There are at most $n(n-1)!^k(1+o(1))$ group automata with set of states $E_n$ that are not strongly-connected. Hence, there are $n!^k(1+o(n^{1-k}))$ strongly-connected group automata. \end{restatable} \begin{restatable}{lemma}{secprimelemma} \label{lem:sc_automata} The number of not strongly-connected almost-group automata is at most $2(n-1)n(n-1)!^{k}(1+o(1))$. Hence, almost-group automata are strongly connected with high probability: there are $(n-1)n!^k(1+o(n^{1-k}))$ strongly connected elements in $\mathcal{G}_{n,k}$. \end{restatable} \subsection{Non-synchronizing Almost-Group Automata: a Lower Bound} In this section we give a lower bound on the number of strongly connected elements of $\mathcal{G}_{n,k}$ that are not synchronizing. In order to do so, we build a sufficiently large family of automata of that kind. The construction of this family is intuitively driven by the structure given in Lemma~\ref{lem:size_of_components}, but its formal definition can be given without mentioning this structure. For $n\geq 3$, let $\mathcal{F}_{n,k}$ be the subset of $\mathcal{G}_{n,k}$ made of the almost-group automata on $\Sigma$ with set of states $E_n$ such that: \begin{enumerate} \item there exists a state $p$ that is not the dangling state $p_0$ such that for every letter $a\neq a_0$, either $p\cdot a = p_0$ and $p_0\cdot a=p$, or $p\cdot a = p$ and $p_0\cdot a=p_0$; \item for at least one letter $a\neq a_0$, we have $p\cdot a = p_0$ and $p_0\cdot a=p$; \item there exists a state $q\in Q'=E_n\setminus\{p,p_0\}$ such that the action of $a_0$ on $E_n\setminus\{p_0\}$ is a permutation with $q$ being the image of $p$; \item the image of the dangling state by $a_0$ is $p_0\cdot a_0 = q$; 
\item let $q'$ be the preimage of $p$ by $a_0$; if one removes the states $p$ and $p_0$ and sets $q'\cdot a_0=q$, then the resulting automaton is a strongly connected group automaton. \end{enumerate} The structure of such an automaton is depicted in Fig.~\ref{fig:Fnk}. Clearly from the definition, an element of $\mathcal{F}_{n,k}$ is a strongly connected almost-group automaton with dangling state $p_0$. \begin{figure}[h] \begin{center} \begin{tikzpicture} \node[draw,circle,fill=black!15] (p0) at (-2,1) {$p_0$}; \node[draw,circle] (p) at (-2,-1) {$p$}; \node[draw,circle] (q) at (0,0) {$q$}; \node[draw,circle] (q1) at (-.1,-2.0) {$q'$}; \node (q2) at (1.5,-2) {}; \node (q3) at (2,-1) {}; \draw[->,thick] (p0) -- node[above]{$a_0$} (q); \draw[->,thick] (p) -- node[below]{$a_0$} (q); \draw[->,thick,dotted] (q1) edge[bend left] node[left]{$a_0$} (p); \draw[->,thick,dotted] (q2) edge[bend left] node[below]{$a_0$} (q1); \draw[->,thick,dotted] (q) edge[bend left] node[below]{$a_0$} (q3); \draw[->,thick,dotted] (q3) edge[bend left] (q2); \draw[->] (p) edge[bend left] node[left]{$a_1$}(p0); \draw[->] (p0) edge[bend left] node[right]{$a_1$}(p); \draw[->] (p0) edge[loop left] node[left]{$a_2,a_4$}(p0); \draw[->] (p) edge[loop left] node[left]{$a_2,a_4$}(p); \node [cloud, draw,cloud puffs=20,cloud puff arc=100, aspect=.9,minimum width=5.2cm, minimum height=4.8cm] (cloud) at (1.7,-1.1) {}; \node (qp) at (3,.5) {$\mathcal{Q'}$}; \end{tikzpicture} \end{center} \caption{The shape of an element of $\mathcal{F}_{n,k}$, with the dangling state $p_0$. \label{fig:Fnk}} \end{figure} \begin{lemma}\label{lem:Fnk not synchronizing} For every $n\geq 3$, no automaton of $\mathcal{F}_{n,k}$ is synchronizing. \end{lemma} \begin{proof} First observe that $\{p_0,p\}$ is the only pair that can be synchronized by reading a single letter, which has to be $a_0$. The preimage of $\{p_0,p\}$ by a letter $a \neq a_0$ is $\{p_0,p\}$ itself, while its preimage by $a_0$ is the singleton $\{q'\}$. 
Hence the only pair that can ever be mapped to $\{p_0,p\}$ is $\{p_0,p\}$ itself, so no other pair can be synchronized by any word; since $n\geq 3$, such pairs exist, and the automaton is not synchronizing. \qed \end{proof} \begin{lemma} \label{lem:lower_bound} There are $(2^{k-1}-1)n(n-1)(n-2)(n-2)!^{k}(1+o(n^{1-k}))$ elements in $\mathcal{F}_{n,k}$. Thus there are at least that many strongly connected non-synchronizing almost-group automata. \end{lemma} \begin{proof} From the definition of $\mathcal{F}_{n,k}$, we observe that there are $n(n-1)(n-2)$ ways to choose $p_0$, $p$ and $q$. Once this is done, we choose any strongly connected group automaton $\mathcal{A}'$ with $n-2$ states on $E_n\setminus\{p_0,p\}$; there are $(n-2)!^k(1+o(n^{1-k}))$ ways to do so according to Lemma~\ref{lem:sc_group_automata}. We then redirect the transition of the preimage $q'$ of $q$ by $a_0$, setting $q'\cdot a_0 = p$. We set $p\cdot a_0=p_0\cdot a_0 =q$. Finally, we choose the actions of the letters $a\in\Sigma\setminus\{a_0\}$ on $\{p_0,p\}$ in one of the $2^{k-1}-1$ possible ways, as at least one of them must not be the identity. This concludes the proof, since all the elements of $\mathcal{F}_{n,k}$ are built exactly once this way.\qed \end{proof} Observe that, with the definitions of Lemma~\ref{lem:size_of_components}, an element of $\mathcal{F}_{n,k}$ consists of exactly one stable class $\{p_0,p\}$ in $\mathcal{B}$ and $n-2$ stable classes of size $1$ in $\mathcal{S}$. \subsection{Non-synchronizing Almost-Group Automata: an Upper Bound} In this section, we upper bound the number of non-synchronizing strongly-connected elements of $\mathcal{G}_{n,k}$ using the characterization of Lemma~\ref{lem:size_of_components}. In the sequel, we freely use the notations of this lemma (the sets $D$, $\mathcal{B}$, $\mathcal{S}$, \ldots). Let $b\geq 1$, $s\geq 0$ and $\ell\geq 1$ be three integers such that $(\ell+1)b + \ell s = n$. Let $\mathcal{G}_{n,k}(b,s,\ell)$ denote the subset of $\mathcal{G}_{n,k}$ made of the automata such that $|\mathcal{B}|=b$, $|\mathcal{S}|=s$ and $|D|=\ell+1$. 
\begin{lemma} \label{lem:main_bound} The number of non-synchronizing strongly connected elements of $\mathcal{G}_{n,k}(b,s,\ell)$ is at most \[ \begin{cases} n! (n-2)!^{k-1}(n-2)(2^{k-1}-1) & \text{if }b=1,\,s=n-2,\text{ and}\ \ell=1,\\ n! \max(1,s) \ell\big(b!s!(\ell+1)!^{b}\ell!^{s}\big)^{k-1}&\text{otherwise}. \end{cases} \] \end{lemma} \begin{proof} Our proof consists in counting the number of ways to build, step by step, an element of $\mathcal{G}_{n,k}(b,s,\ell)$. Firstly, by elementary computations, one easily verifies that the number of ways to split $E_n$ into $b$ subsets of size $\ell+1$ and $s$ subsets of size $\ell$ is exactly \begin{equation}\label{eq:count_partitioning} \frac{n!}{(\ell+1)!^{b}\ell!^{s}b!s!}. \end{equation} Secondly, let us count the number of ways to define the transitions at the level of the factor automaton, i.e., between stable classes, as follows: \begin{itemize} \item Choose a permutation on $\mathcal{B}$ in $b!$ ways and on $\mathcal{S}$ in $s!$ ways for each of the $k-1$ letters $a \neq a_0$. \item Choose which stable class of $\mathcal{B}$ is the class $D$, i.e., the one containing the dangling state $p_0$, amongst the $b$ possibilities. \item Choose a permutation for $a_0$ on the $b-1$ classes $\mathcal{B} \setminus\{D\}$ in $(b-1)!$ ways. \item If $s\neq0$, choose one of the $s!$ permutations of $\mathcal{S}$ for the action of $a_0$ on these classes, then alter the action of $a_0$ in the following way: choose the image $D'$ of $D$ by $a_0$ in $\mathcal{S}$ in $s$ ways, then insert it into the $a_0$-cycle: if $D''$ is the former preimage of $D'$, then now $D\cdot a_0 = D'$ and $D''\cdot a_0 = D$ in $\mathcal{A}/\rho$. \item If $s=0$, then set $D\cdot a_0 = D$ in $\mathcal{A}/\rho$. \end{itemize} In total, the number of ways to define the transitions of the factor automaton $\mathcal{A} / \rho$, once the stable classes are chosen, is \begin{equation}\label{eq:factor_count} (b!s!)^{k-1}b(b-1)!\max(1,s)s! = b!^ks!^{k}\max(1,s).
\end{equation} Now we need to define the state-level transitions realizing these maps between stable classes, for all letters. For all letters but $a_0$, there are $b$ injective transitions between stable classes of size $\ell+1$ and $s$ injective transitions between stable classes of size $\ell$; that is, there are at most $(\ell+1)!^b \ell!^s$ ways to define them for each of the $k-1$ letters. This is an upper bound, as some choices may result in an automaton that is, for instance, not strongly connected. We refine this bound for the case $\ell=1$, $b=1$, $s=n-2$: for strong connectivity, one of the letters must swap the two states of the single $2$-element class in $\mathcal{B}$, so for that letter we count just one choice instead of $2$ (that is, instead of $(\ell+1)!$) on this component. In other words, in this case we consider only $2^{k-1}-1$ ways to define all the permutations on $\mathcal{B}$, instead of the $((\ell+1)!^b)^{k-1}$ upper bound of the general case (this refinement is used to match our lower bound). For the action of $a_0$, we additionally choose the dangling state $p_0 \in D$ in $\ell+1$ ways and its image in $D\cdot a_0$ in $\ell$ ways: there are $\ell$ choices in the case where $D\cdot a_0=D$, since $p_0\cdot a_0\neq p_0$, and also when $D\cdot a_0\neq D$, since $D\cdot a_0\in\S$ in this case, according to Lemma~\ref{lem:size_of_components}. Then it remains to define the injective transitions between the $\mathcal{B} \setminus\{D\}$ blocks in $(\ell+1)!^{b-1}$ ways, and the $s+1$ injective transitions between the $\mathcal{S} \cup \{D'\}$ blocks in $\ell!^{s+1}$ ways, where $D'=D\setminus\{p_0\}$. Thus, the number of ways to define the transitions between stable classes is at most $((\ell+1)!^b \ell!^s)^{k-1}\ell(\ell+1)(\ell+1)!^{b-1}\ell!^{s+1} = \ell(\ell+1)!^{bk}\ell!^{sk}$ in the general case, and $2(2^{k-1}-1)$ in the case $\ell=1$, $b=1$, $s=n-2$.
Putting together \eqref{eq:count_partitioning}, \eqref{eq:factor_count} and this last count yields the lemma.\qed \end{proof} \begin{lemma} \label{lem:upper_bound} The number of non-synchronizing strongly connected almost-group automata in $\mathcal{G}_{n,k}$ is at most $n(2^{k-1}-1)n!(n-2)!^{k-1}(1+o(1/n))$. \end{lemma} \begin{proof} By Lemma~\ref{lem:main_bound} and Theorem~\ref{thm:non-synch-criterion}, the number of non-synchronizing strongly connected almost-group automata in $\mathcal{G}_{n,k}$ is at most \begin{equation} \label{eq:sum_bs} n!\sum_{\ell=1}^{\lfloor n/2 \rfloor}\sum_{\{b,s \mid b(\ell+1) + s\ell = n\} } N_{\ell,b,s}, \end{equation} where $b \geq 1$, $s \geq 0$, and $b+s\geq 2$, and where $N_{\ell,b,s}$ is defined by \begin{equation} N_{\ell,b,s} = \begin{cases} \max(1,s) \ell (b!s!(\ell+1)!^{b}\ell!^{s})^{k-1}, & \text{for } (\ell,b,s) \neq (1,1,n-2) \\ (n-2)!^{k-1}(n-2)(2^{k-1}-1), & \text{for } (\ell,b,s) = (1,1,n-2). \end{cases} \end{equation} To finish the proof, it suffices to show that the sum in~(\ref{eq:sum_bs}) is asymptotically equivalent to the term $N_{1,1,n-2}$, since $n!N_{1,1,n-2}$ is asymptotically equivalent to the expression stated in Lemma~\ref{lem:upper_bound}. To prove this, consider the following ratio for $(\ell,b,s) \neq (1,1,n-2)$: \begin{equation} \label{leq:fraction1} \frac{N_{1,1,n-2}}{N_{\ell,b,s}} = \frac{n-2}{\max(1,s) \ell}\frac{(n-2)!^{k-1}(2^{k-1}-1)}{ (b!s!(\ell+1)!^{b}\ell!^{s})^{k-1}} \geq \left(\frac{(n-2)!}{b!s!(\ell+1)!^{b}\ell!^{s}}\right)^{k-1}, \end{equation} where we used that $n-2 = s\ell + b(\ell+1)-2 \geq s \ell$, as $b$ and $\ell$ are positive; thus $n-2\geq \max(1,s)\ell$ if $s>0$, and this also holds if $s=0$ since $b+s\geq2$.
Observe that, for positive integers $b$ and $m$, we have \begin{align*} \frac{(bm)!}{m!^b} &= \left(\frac{1\cdot 2\cdots m}{1\cdot 2\cdots m}\right) \left(\frac{(m+1)(m+2)\cdots 2m}{1\cdot 2\cdots m}\right)\cdots \left(\frac{((b-1)m+1)\cdots bm}{1\cdot2\cdots m}\right)\\ & \geq 1^m\cdot 2^m\cdots b^m = b!^m. \end{align*} Hence, for $m=\ell+1$, we have $\frac{(b(\ell+1))!}{(\ell+1)!^b}\geq b!^{\ell+1}$. Similarly, one gets \begin{equation} {\frac{n!}{(b(\ell+1))!}}\frac{1}{\ell!^s} \geq \left(\frac{(b+s)!}{b!}\right)^{\ell}. \end{equation} Let $M_{\ell,b,s}=\frac{(n-2)!}{b!s!(\ell+1)!^{b}\ell!^{s}}$ be the bracketed expression in (\ref{leq:fraction1}). This quantity can be bounded from below as follows. \begin{align} \label{eq:fraction2} M_{\ell,b,s} &= \frac{1}{n(n-1)b!s!}\frac{(b(\ell+1))!}{(\ell+1)!^{b}} \frac{n!}{(b(\ell+1))!\ell!^s} \\ &\geq \frac{b!^{\ell+1}}{n(n-1)b!s!} \left(\frac{(b+s)!}{b!}\right)^{\ell} \geq \frac{(b+s)!^\ell}{n^2 s!}. \end{align} Recall that we want to show that $M_{\ell,b,s}$ is large enough for $N_{1,1,n-2}$ to dominate $N_{\ell,b,s}$. Notice that the number of triples $(\ell, b, s)$ satisfying $b(\ell+1) + s\ell = n$ is at most quadratic in $n$, since for any values $1 \leq b, \ell < n$ there is at most one suitable value of $s$. Therefore, a cubic lower bound on $M_{\ell,b,s}$ is enough in general. We distinguish two cases: \noindent$\triangleright$ If $\ell\geq2$, then $M_{\ell,b,s} \geq {n^{-2}(b+s)!^{\ell-1}}.$ If $b+s \geq \ln{n}$, this expression is $\Omega(n^3)$ by Stirling's formula. Otherwise, since $b(\ell+1)+s\ell=n$, we have $\ell \geq \frac{n}{\ln{n}}-1$, and as $b+s\geq 2$, the same $\Omega(n^3)$ lower bound holds.
\noindent$\triangleright$ If $\ell=1$, then $s = n-2b$ and $M_{\ell,b,s} \geq \frac{(n-b)!}{n^2 (n-2b)!}.$ Clearly, this expression increases as $b$ increases; for $b=3$ it is $\Theta(n)$ (and there is only one such term), for $b=4$ it is $\Theta(n^2)$, and for $b>4$ it is $\Omega(n^3)$. If $b=1$, then $s=n-2$ and this is the term $N_{1,1,n-2}$. The only remaining case is when $b=2$, $\ell=1$, and $s=n-4$. For this case, by (\ref{leq:fraction1}), we get \begin{equation} \frac{N_{1,1,n-2}}{N_{\ell,b,s}} \geq \left(\frac{(n-2)!}{b!s!(\ell+1)!^{b}\ell!^{s}}\right)^{k-1} = \left(\frac{(n-2)!}{8(n-4)!}\right)^{k-1} = \Theta(n^{2(k-1)}). \end{equation} Thus we have proved that the sum in (\ref{eq:sum_bs}) is indeed asymptotically equal to the term $N_{1,1,n-2}$ multiplied by $n!$.\qed \end{proof} \subsection{Main Result and Conclusions} We are now ready to prove our main result on the asymptotic number of strongly connected elements of $\mathcal{G}_{n,k}$ that are not synchronizing. \begin{theorem} \label{thm:main} The probability that a random strongly connected almost-group automaton with $n$ states and $k\geq 2$ letters is not synchronizing is equal to \begin{equation} ({2^{k-1}-1}){n^{-2(k-1)}}\left(1+o(1)\right). \end{equation} In particular, random strongly connected almost-group automata are synchronizing with high probability as $n$ tends to infinity. \end{theorem} \begin{proof} Lemma~\ref{lem:lower_bound} and Lemma~\ref{lem:upper_bound} give lower and upper bounds on the number of strongly connected non-synchronizing almost-group automata, both equal to $(2^{k-1}-1)n^3(n-2)!^{k}(1+o(1/n))$. We conclude the proof using the estimate of the number of strongly connected almost-group automata given in Lemma~\ref{lem:sc_automata}.\qed \end{proof} We have thus obtained precise asymptotics, for any alphabet size, for the probability that a strongly connected almost-group automaton is synchronizing.
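The witnesses of Lemma~\ref{lem:Fnk not synchronizing} are easy to test mechanically: a subset-construction search decides synchronization for small automata. Below is a sketch (the function name and encodings are ours) checking a concrete member of $\mathcal{F}_{4,2}$ built as in Fig.~\ref{fig:Fnk}, together with the synchronizing \v{C}ern\'y automaton on four states for contrast:

```python
def is_synchronizing(states, letters):
    """Decide synchronization by exploring images of the full state set:
    the automaton is synchronizing iff some word collapses it to a singleton."""
    start = frozenset(states)
    seen, stack = {start}, [start]
    while stack:
        subset = stack.pop()
        if len(subset) == 1:
            return True
        for delta in letters.values():
            image = frozenset(delta[q] for q in subset)
            if image not in seen:
                seen.add(image)
                stack.append(image)
    return False

# A member of F_{4,2}: a0 merges the dangling state p0 with p (both go to q)
# and cycles q -> q' -> p on the rest, while a1 swaps p0 and p.
a0 = {'p0': 'q', 'p': 'q', 'q': "q'", "q'": 'p'}
a1 = {'p0': 'p', 'p': 'p0', 'q': 'q', "q'": "q'"}
print(is_synchronizing(a0.keys(), {'a0': a0, 'a1': a1}))  # False

# Contrast: the Cerny automaton C_4 (a cyclic shift plus a letter merging one
# state) is synchronizing, so the checker reports True on it.
ca = {0: 1, 1: 2, 2: 3, 3: 0}
cb = {0: 0, 1: 1, 2: 2, 3: 0}
print(is_synchronizing(ca.keys(), {'a': ca, 'b': cb}))  # True
```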
As in~\cite{Berl2013RandomAut}, it would be natural to design an algorithm that verifies, in optimal average time, whether a given random strongly connected almost-group automaton is synchronizing. Another, much more challenging, problem concerns estimating the expected length of a shortest reset word for random automata in this setting. We are thankful to the anonymous referees whose comments helped to improve the presentation of the results. \bibliographystyle{splncs04}
\section*{Introduction} Given a function $f$ holomorphic on an open subset $\mathscr U$ of $\mathbb{C}^n$ and an $(n-1,n-1)$-differential form $\varphi$ with compact support in $\mathscr {U}$, the Mellin transform of the function $$ \varepsilon \in\, ]0,+\infty[\, \longmapsto I_f(\varphi)(\varepsilon) = \frac{1}{2i\pi} \int_{\{|f|^2 = \varepsilon\}} \frac{df\wedge \varphi}{f} $$ is formally the meromorphic function \begin{equation*} \lambda \in \mathbb{C}\longmapsto \lambda \int_0^\infty \varepsilon^{\lambda-1} \, I_f(\varphi)(\varepsilon)\, d\varepsilon = \frac{1}{2i\pi} \, \Big\langle [\lambda |f|^{2(\lambda-1)} \, \overline{df}]\,,\, df\wedge \varphi \Big\rangle = \Big\langle dd^c \Big[\frac{|f|^{2\lambda}}{\lambda}\Big]\,,\, \varphi \Big\rangle, \end{equation*} ($dd^c=\frac{-\partial\overline{\partial}}{2i\pi}$), and its value at $\lambda =0$ (which is not a pole) is exactly $\langle [{\rm div}(f)]\,,\, \varphi\rangle$; this provides an alternative way of stating, within complex analytic geometry, the Lelong-Poincar\'e formula. First introduced in such a context in \cite[chapter 1, sections 1 and 2]{BGVY}, this approach has proved particularly useful for treating intersection and division problems in complex analytic geometry, by taking advantage of the splitting of the integration current $[{\rm div}(f)]$ as $[{\rm div}(f)] = \bar\partial [1/f] \wedge df$, where $[1/f]$ is the Principal Value $(0,0)$-current, the standard extension from the open set $\mathscr U_f= \mathscr U \setminus {\rm Supp}([{\rm div}(f)])$ of the invertible holomorphic function $1/f$.
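In one variable this mechanism reduces to a classical computation; as a sanity check (an illustration of ours, not from the text), take $f(z)=z^k$ on $\mathscr U = \mathbb{C}$:

```latex
% One-variable sanity check (our illustration): f(z) = z^k on C.
\[
  \frac{|f(z)|^{2\lambda}}{\lambda}
  = \frac{e^{2k\lambda \log |z|}}{\lambda}
  = \frac{1}{\lambda} + 2k \log |z| + O(\lambda),
\]
% Since dd^c annihilates constants and dd^c log|z|^2 = delta_0 for the
% normalization dd^c = -\partial\bar\partial/(2i\pi):
\[
  \lim_{\lambda \to 0}\, dd^c \Big[\frac{|f|^{2\lambda}}{\lambda}\Big]
  = k\, dd^c \big[\log |z|^2\big]
  = k\,\delta_0
  = [{\rm div}(z^k)],
\]
```

in accordance with the Lelong-Poincar\'e formula.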
Even though such a splitting of the current $[{\rm div}(f)]$ is no longer possible in the context of analytic geometry over a field $\mathbb{K}$ equipped with a non-archimedean absolute value, one of the goals of this work is to show that an approach of the same type can nevertheless be carried out in this setting, following in particular the approach to improper intersection theory proposed in \cite{ASWY14} (again in the framework of complex analytic geometry). \vskip 1mm \noindent Let $\mathbb K$ be a field equipped with an ultrametric absolute value (trivial or not) $|\ |$ with respect to which $\mathbb K$ is complete, and let $\Gamma$ be its value group, that is, the subgroup of $\mathbb{R}$ obtained as the image of $\mathbb{K}^*$ under the valuation $-\log |\ |$. Let $X$ be an algebraic variety of dimension $n$ defined over $\mathbb K$. We denote by $X^{\rm an}$ the analytification of $X$ in the sense of Berkovich \cite[Remark 3.4.2]{Berk90}, described set-theoretically as follows (see for instance \cite[section 4]{Gub14}): if $U ={\rm Spec}(A)$ is an affine open subset of $X$, one considers the set $U^{\rm an}$ of all ultrametric multiplicative seminorms on the algebra $A$ extending the absolute value $|\ |$ on $\mathbb K$, and one equips this set $U^{\rm an}$ with the coarsest topology making all the evaluation maps $x\in U^{\rm an} \mapsto x(a)=|a|_x$ ($a\in A$) continuous; gluing the various $U^{\rm an}$ over all the affine open subsets $U$ of $X$, one obtains a connected, locally compact, Hausdorff topological space, on which it remains to construct a structure sheaf $\mathcal O_{X^{\rm an}}$ of so-called regular functions. This sheaf is defined as follows (see for instance \cite[chapter 1, section 1.2]{BPS} for a quick presentation).
Each point $x$ of $U^{\rm an}$ induces a norm on the integral domain $B_x = A/{\rm ker}\, x$, where ${\rm ker}\, x = \{a\in A\,;\, x(a) =0\}$; this norm extends to a norm $|\ |_x$ on the fraction field of $B_x$, which one completes with respect to this norm into a field $\mathscr H_x$. If $\mathscr U$ is an open subset of $U^{\rm an}$, a regular function on $\mathscr U$ (that is, an element of $\mathcal O_{X^{\rm an}}(\mathscr{U})$) is by definition a function $$ x\in \mathscr{U} \longmapsto f(x) \in \bigsqcup_{x\in \mathscr{U}} \mathscr{H}_x $$ satisfying the following approximation condition: for every $x\in \mathscr{U}$, there exists a neighborhood $\mathscr{U}_x$ of $x$ in $\mathscr{U}$ such that \begin{equation}\label{faisceau} \begin{split} & \forall\, \epsilon>0,\ \exists\, a_{x,\epsilon} \in A,\, \exists\, b_{x,\epsilon} \in A\setminus {\rm ker}\, x,\ {\rm such\ that} \\ & \forall\, \xi\in \mathscr{U}_x\,,\ |f(\xi) - a_{x,\epsilon}(\xi)/b_{x,\epsilon}(\xi)|_{\xi } < \epsilon, \end{split} \end{equation} where $a_{x,\epsilon}(\xi)$ and $b_{x,\epsilon}(\xi)$ are to be understood as the classes in $B_\xi$ of the elements $a_{x,\epsilon}(\xi)=\xi(a_{x,\epsilon})$ and $b_{x,\epsilon}(\xi)=\xi(b_{x,\epsilon})$, the quotient $a_{x,\epsilon}(\xi)/b_{x,\epsilon}(\xi)$ then being an element of the fraction field of the integral domain $B_\xi$, hence of its completion $\mathscr H_\xi$ for the norm $|\ |_\xi$.
\vskip 1mm \noindent When $X$ is the torus $T_r := {\rm Spec}\, \mathbb{K}[X_1^{\pm 1},...,X_r^{\pm 1}]$ (more generally $\mathbb{K}[M_r]$, where $M_r$ is a free abelian group of rank $r$), the analytification $T^{\rm an}_r$ can be visualized set-theoretically as follows: giving a multiplicative seminorm on $\mathbb{K}[X_1^{\pm 1},...,X_r^{\pm 1}]$ extending the absolute value $|\ |$ on $\mathbb K$ amounts to giving a field extension $\mathbb K\subset \mathbb{L}$ equipped with an absolute value $|\ |_{\mathbb L}$ extending the one on $\mathbb{K}$, together with a point $\ell \in \mathbb L^r$, and setting $x_{\mathbb L,\ell}(a)=|a(\ell)|_{\mathbb L}$. One has a continuous and proper tropicalization map: $$ x \in T_r^{\rm an} \stackrel{\rm trop}{\longmapsto} \big(- \log (x(X_1)),...,-\log (x(X_r))\big) \in \mathbb{R}^r\; ({\rm or}\ N_{r,\mathbb{R}},\ N_r = {\rm Hom}(M_r,\mathbb{Z})). $$ This tropicalization map admits the following section: to $(\omega_1,...,\omega_r) \in \mathbb{R}^r$ (respectively in $N_{r,\mathbb{R}}$) one associates the multiplicative norm on $\mathbb{K}[X_1^{\pm 1},...,X_r^{\pm 1}]$ (respectively on $\mathbb{K}[M_r]$) defined by $$ x_\omega (a) := \sup_{\alpha \in {\rm Supp}(a)} |a_\alpha|\, e^{-\langle \alpha,\omega\rangle}. $$ The image of $\mathbb{R}^r$ (or more generally of $N_{r,\mathbb{R}}$) under this section $\omega \mapsto x_\omega$ is called the skeleton of $T^{\rm an}_r$; it is a closed subset of $T^{\rm an}_r$ onto which $T_r^{\rm an}$ retracts continuously in the strong sense. \vskip 2mm \noindent The role of local charts on $X^{\rm an}$ is played by the analytifications of the moment maps of the algebraic variety $X$ (see for instance \cite{Gub14}).
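As a concrete illustration of the section $\omega \mapsto x_\omega$ (our example, in rank $r=1$): for a binomial $a = 1 + c\,X$ with $c \in \mathbb{K}^*$,

```latex
% Rank-one illustration (ours): the norm x_omega evaluated on a = 1 + cX.
\[
  x_\omega(1 + c\,X) \;=\; \max\big(1,\; |c|\,e^{-\omega}\big),
  \qquad\text{so}\qquad
  -\log x_\omega(1 + c\,X) \;=\; \min\big(0,\ \omega - \log |c|\big),
\]
```

which, as a function of $\omega$, is exactly the tropicalization of the binomial $1+cX$, with a break point at $\omega = \log |c|$ (at $\omega = 0$ when the absolute value is trivial).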
\vskip 2mm \noindent On $X^{\rm an}$ one also has objects that are this time ``soft'' rather than ``rigid'' (as the local sections of the structure sheaf are), namely the sections of the sheaf $\mathscr{A}^{0,0}$ of smooth functions. If $U$ is an open subset of $X^{\rm an}$, a section of $\mathscr{A}^{0,0}$ over $U$ is by definition a function from $U$ to $\mathbb{R}$ that can be written locally (in a neighborhood of every point $x$ of $U$) in the form \begin{equation}\label{reguliere} \xi \longmapsto \phi( \log |f_{x,1}|_\xi,...,\log |f_{x,m_x}|_\xi), \end{equation} where the functions $f_{x,j}$ are regular functions invertible in a neighborhood of $x$, and $\phi$ is a function from $\mathbb{R}^{m_x}$ to $\mathbb{R}$ of class $C^\infty$ in a neighborhood of the image in $\mathbb{R}^{m_x}$ of a neighborhood of $x$ under the map $$ \xi \longmapsto \big(\log |f_{x,1}|_\xi,...,\log |f_{x,m_x}|_\xi\big) \in \mathbb{R}^{m_x}. $$ This is the case, for instance, for functions that can be written in a neighborhood $U_x$ of every point $x$ of $U$ in the form \begin{equation*} \xi \longmapsto \prod\limits_{j=1}^{m_x} \Big(\frac{|f_{x,j}(\xi)|_\xi}{e^{\rho_{x,j}}}\Big) ^{\lambda_j} = \prod\limits_{j=1}^{m_x} \exp \big(\lambda_j\, (\log |f_{x,j}(\xi)|_\xi -\rho_{x,j})\big) \end{equation*} where $\lambda_1,...,\lambda_{m_x}$ are real parameters, $f_{x,1},...,f_{x,m_x}$ are regular functions invertible in a neighborhood of $x$, and $\rho_{x,1},...,\rho_{x,m_x}$ are elements of $\mathscr{A}^{0,0}(U_x)$. \vskip 2mm \noindent On $X^{\rm an}$ one also has, for every pair of integers $p,q$ between $0$ and $n$, the sheaf $\mathscr{A}^{p,q}$ of $(p,q)$-forms and, by duality, the sheaf $\mathscr{D}_{p,q}$ of $(p,q)$-currents, both introduced by A. Chambert-Loir and A. Ducros (\cite{ChLD,ChL}) and by W. Gubler \cite{Gub14}. Let us briefly recall how these sheaves are defined.
In order to take advantage, while working in a real setting, of the positivity inherent to the complex setting, one first introduces on $\mathbb{R}^r$ (or $N_{r,\mathbb{R}}$) the sheaf of $(p,q)$ super-forms ($0\leq p,q\leq r$): a section of this sheaf over an open set $\Omega\subset \mathbb{R}^r$ (or $N_{r,\mathbb{R}}$) is a $(p,q)$-form (in the usual sense) on the open set ${\rm Log}^{-1}(\Omega)\subset (\mathbb{C}^*)^r$ (or $\mathbb{C}[N_r](\mathbb{C})$) that can be written as $$ \omega = \sum\limits_{\stackrel{1\leq i_1 < \dots < i_p\leq r} {1\leq j_1 < \dots < j_q\leq r}} \omega_{I,J} ({\rm Log}(z))\, \frac{dz_I}{z_I} \wedge \frac{d\bar z_J}{\bar z_J}, $$ where ${\rm Log}: z \mapsto (\log |z_1|,...,\log |z_r|)$ and the $\omega_{I,J}$ are real-valued functions of class $C^\infty$ on $\Omega$. By duality, one has the sheaf of super-currents of bidimension $(p,q)$, whose sections over an open set $\Omega\subset \mathbb{R}^r$ (or $N_{r,\mathbb{R}}$) are the currents of bidimension $(p,q)$ on ${\rm Log}^{-1}(\Omega)$ that can be written in the form $$ T = \sum\limits_{\stackrel{1\leq i_1' < \dots < i_{n-p}'\leq r} {1\leq j_1' < \dots < j_{n-q}'\leq r}} T_{I',J'} ({\rm Log}(z))\, \frac{dz_{I'}}{z_{I'}} \wedge \frac{d\bar z_{J'}}{\bar z_{J'}}, $$ where the $T_{I',J'}$ are real distributions on $\Omega$ and $$ \langle T_{I',J'}({\rm Log}(z)),\varphi \rangle := \Big\langle T_{I',J'}(x), \frac{1}{(2\pi)^r}\int_{[0,2\pi]^r} \varphi(e^{x+i\theta})\, d\theta_1 \dots d\theta_r\Big\rangle $$ for every test function $\varphi\in \mathcal D({\rm Log}^{-1}(\Omega),\mathbb{R})$. The tropical current of bidimension $(n,n)$ attached (see \cite{Bab,BabH}) to a tropical cycle of dimension $n<r$ in $\mathbb{R}^r$ or $N_{r,\mathbb{R}}$ (the weights being taken into account) is an important example of a super-current of bidimension $(n,n)$ on $\mathbb{R}^r$ or $N_{r,\mathbb{R}}$.
For more details on the way real differential forms on Berkovich spaces are introduced, see \cite{Gub14}. The sheaf $\mathscr D_{p,q}(U)$ of currents of bidimension $(p,q)$ on $X^{\rm an}$ is defined as the dual of the sheaf $\mathscr A^{p,q}(U)$ of $(p,q)$-differential forms (in what follows, for every open subset $U$ of $X^{\rm an}$, we denote by $\mathscr{A}^{p,q}_c(U)$ the $\mathbb{R}$-vector space of $(p,q)$-differential forms with compact support contained in $U$) \cite[section 6]{Gub14}. In particular, one has the current of bidegree $(p,p)$ of integration over a subvariety $Y$ of $X$ of codimension $0\leq p\leq n$ and, when $f$ is a regular function on an open subset $U$ of $X^{\rm an}$, the current of bidegree $(1,1)$ of integration (with multiplicities taken into account) $[{\rm div}(f)]$. If $u: U \rightarrow \mathbb{R}$ is a continuous function on an open subset $U$ of $X^{\rm an}$, then $u$ naturally defines a current of bidegree $(0,0)$ on $U$ (see for instance \cite[section 6]{GuK}), denoted $[u]$. \vskip 2mm \noindent The aim of this work is to develop an approach based on analytic continuation for the purpose of solving the Lelong-Poincar\'e equation, constructing Green currents (see section \ref{sectionGreen}), and making explicit the Vogel currents (section \ref{sectionVogel}) and Segre currents (section \ref{sectionsegre}); an interpretation of King's formula, in connection with the Siu decomposition of closed positive currents and the pairing ``stable components/mobile components'' in improper intersection theory \cite{ASWY14}, is also discussed in this work (section \ref{sectionKing}).
The objective here is to sketch a transposition to the non-archimedean setting of the approach to intersection theory in the improper case as developed in \cite{ASWY14} (in the complex analytic context). In the longer term, we hope to exploit this approach by combining it with Igusa's calculus \cite{Igu} (in the ultrametric context) in order to unify the contributions at the finite and infinite places in, for example, the expression of the logarithmic height (relative to a choice of metric, smooth or not) of an arithmetic cycle \cite{BPS}. \section{A Mellin-type approach to Lelong-Poincar\'e equations}\label{section1} Let $X$ be a good Berkovich $\mathbb{K}$-space, that is, a Berkovich $\mathbb{K}$-space every point of which admits an affinoid neighborhood, hence a basis of affinoid neighborhoods. We assume that $X$ is of pure dimension $n$ and boundaryless. One may think of $X$ as the analytification of an algebraic variety of dimension $n$ defined over $\mathbb{K}$. \vskip 1mm \noindent Let $U\subset X$ be an open subset of $X$ and $f$ a regular meromorphic function on $U$ that is not a zero divisor, that is, a function that can be written in a neighborhood of every point $x\in U$ as a quotient of two local sections of $\mathcal O_{X}$. We denote by $U_f$ the largest open subset of $U$ on which $f$ can be written locally as an invertible local section of the sheaf $\mathcal O_{X}$. \vskip 1mm \noindent If $\omega$ is a compactly supported section in $\mathscr{A}^{n-1,n-1}_c(U)$, the (compact) support of the $(n-1,n)$-form $d''\omega$ avoids every Zariski closed subset with empty interior \cite[Lemma 3.2.5]{ChLD}.
For every $\lambda\in \mathbb{R}^*$, the form $$ x\in U_f \mapsto \Big(d'\Big(\frac{|f|^\lambda}{\lambda}\Big) \wedge d''\omega\Big)(x) = \Big(d'\Big(\frac{e^{\lambda \log|f|}}{\lambda}\Big)\wedge d''\omega\Big)(x) $$ belongs to $\mathscr A^{n,n}_c(U_f)$, and, by Stokes' formula ($X$ is assumed boundaryless), if $\varphi$ denotes a smooth function identically equal to $1$ in a neighborhood of the support of $d''\omega$ and with compact support in $U_f$ (such a function can again be constructed thanks to the partition-of-unity theorem, \cite[Proposition 3.3.6]{ChLD}), one has \begin{multline*} \int_{U_f} d'\Big(\frac{|f|^{\lambda}}{\lambda}\Big) \wedge d'' \omega = \int_{X} d'\Big(\varphi \frac{|f|^{\lambda}}{\lambda}\Big) \wedge d'' \omega \\ = -\int_{X} \Big(\varphi \, \frac{|f|^{\lambda}}{\lambda}\Big)\, d'd''\omega = - \Big\langle \Big[ \frac{|f|^\lambda}{\lambda}\Big]\,,\, d'd''\omega \Big\rangle = \Big\langle d'\, \Big[ \frac{|f|^\lambda}{\lambda}\Big]\,,\, d''\omega \Big\rangle, \end{multline*} where the current $[|f|^\lambda/\lambda]$ is the $(0,0)$-current defined from the smooth function $$ |f|^{\lambda}/\lambda: U_f \rightarrow \mathbb{R} $$ following Lemma 4.6.1 of \cite{ChLD}. \vskip 1mm \noindent For every $\lambda \in \mathbb{R}^*$, one defines a current $T_\lambda^f \in \mathscr{D}_{1,1}(U)$ by setting \begin{multline*} \forall\, \omega \in \mathscr{A}^{n-1,n-1}_c(U),\quad \langle T_\lambda^f,\omega\rangle := - \Big\langle d'\, \Big[ \frac{|f|^\lambda}{\lambda}\Big]\,,\, d''\omega \Big\rangle = - \int_{U_f} d'\Big(\frac{|f|^{\lambda}}{\lambda}\Big) \wedge d'' \omega \\ = -\int_{U_f} |f|^\lambda d'\Big(\log |f|\Big)\wedge d''\omega.
\end{multline*} By Lebesgue's dominated convergence theorem, the function $\lambda \mapsto T_\lambda^f$ (with values in $\mathscr D_{1,1}(U)$) admits as its limit, as $\lambda$ tends to $0$, the $(1,1)$-current \begin{multline*} \omega \in \mathscr A^{n-1,n-1}_c(U) \longmapsto - \int_{U_f} d'\Big(\varphi\, \log |f|\Big) \wedge d''\omega = \int_{X^{\rm an}} \varphi\, \log |f|\, d'd''\omega \\ = \big\langle \big[\varphi \, \log |f|\big]\,,\, d'd''\omega \big \rangle = \big\langle d'd''\, \big[\varphi \, \log |f|\big]\,,\,\omega\big\rangle = \big \langle [{\rm div}(f)]\,,\,\omega\big\rangle \end{multline*} by Stokes' formula, once again Lemma 4.6.1 of \cite{ChLD}, and finally the Lelong-Poincar\'e formula \cite[Theorem 4.6.5]{ChLD}. \vskip 2mm \noindent One can now consider the case where $f_1$ and $f_2$ are two regular meromorphic functions on an open subset $U$ of $X$ such that ${\rm codim}_{U} ({\rm Supp}\,([{\rm div}( f_1)]) \cap {\rm Supp}\,([{\rm div}( f_2)]))\geq 2$. Following the description of the current $[{\rm div}(f_1)]$ given in section 4.6 of \cite[comment after the proof of Lemma 4.6.4]{ChLD}, one writes this current as a sum of integration currents $\pm [{\rm div}(f_{1,\kappa})]$, where each $f_{1,\kappa}$ is a regular function on $U$ that is not a zero divisor. We denote by $Z_{1,\kappa}$ the closed analytic $\mathbb{K}$-subspace (of dimension $n-1$) $f_{1,\kappa}^{-1}(\{0\})$, which we regard as a $\mathbb{K}$-analytic space of dimension $n-1$; we denote by $\iota_{1,\kappa}$ the morphism of $\mathbb{K}$-analytic spaces corresponding to the inclusion $Z_{1,\kappa}\subset U$.
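In the simplest transverse model case, the construction below reduces to the expected point mass. The following display (our illustration, with a single index $\kappa$ suppressed) takes $f_1 = z_1$ and $f_2 = z_2$ to be two coordinates, so that $Z_1 = \{z_1 = 0\}$ with inclusion $\iota_1$ and $\iota_1^* f_2$ is a coordinate on $Z_1$:

```latex
% Model case (illustration): two transverse coordinate divisors. The
% Lelong-Poincare formula applied on Z_1 to the restricted function
% iota_1^* z_2 collapses the iterated limit to the integration current
% on the common zero locus.
\[
  [{\rm div}(z_1)] \wedge [{\rm div}(z_2)]
  \;=\; (\iota_{1})_*\Big(d'd''\big[\log |\iota_{1}^* z_2|\big]\Big)
  \;=\; (\iota_{1})_*\big[{\rm div}\big(z_2|_{Z_1}\big)\big]
  \;=\; \delta_{\{z_1 = z_2 = 0\}},
\]
```

the integration current (with multiplicity one) on the codimension-two intersection, as expected for a proper intersection of two transverse divisors.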
The action of the current $[{\rm div}(f_1)]$ is given by $$ \omega \in \mathscr A^{n-1,n-1}_c(U) \longmapsto \big\langle [{\rm div} (f_1)]\,,\omega\big\rangle := \sum_\kappa \int_{Z_{1,\kappa}\cap U} \omega = \sum_\kappa \int_{\iota_{1,\kappa}^{-1}(U)} \iota_{1,\kappa}^* (\omega). $$ For each index $\kappa$ and every $\lambda \in \mathbb{R}^*$, one defines, following Lemma 4.6.1 of \cite{ChLD}, a $(1,1)$-current $\big[|f_2|^\lambda/\lambda\big]\, [{\rm div}(f_{1,\kappa})]$ by: $$ \Big\langle \Big[\frac{|f_2|^\lambda}{\lambda}\Big] \, [{\rm div}(f_{1,\kappa})]\,,\omega \Big\rangle := \int_{\iota_{1,\kappa}^{-1}(U)} \Big(\frac{\iota_{1,\kappa}^* (\theta)\, |\iota_{1,\kappa}^* (f_2)|^\lambda}{\lambda}\Big)\, \iota_{1,\kappa}^* (\omega) $$ for every $\omega \in \mathscr A^{n-1,n-1}_c(U)$ (here $\theta$ denotes a smooth function with support contained in $U_{f_2}$, identically equal to $1$ in a neighborhood of $Z_{1,\kappa}\cap {\rm Supp}\, \omega$, which can again be constructed thanks to the partition-of-unity theorem, \cite[Proposition 3.3.6]{ChLD}). Using Stokes' formula on $Z_{1,\kappa}$ and Lemma 4.6.1 of \cite{ChLD}, one observes that \begin{multline*} \Big\langle d'd''\, \Big(\Big[\frac{|f_2|^\lambda}{\lambda}\Big]\Big) \, [{\rm div}(f_{1,\kappa})]\,,\omega \Big\rangle = - \int_{\iota_{1,\kappa}^{-1}(U_{f_2})} \Big( d' \Big(\frac{|\iota_{1,\kappa}^* f_2|^\lambda}{\lambda}\Big)\Big) \wedge d''\big(\iota_{1,\kappa}^* (\omega)\big) \\ = - \int_{\iota_{1,\kappa}^{-1}(U_{f_2})} |\iota_{1,\kappa}^*(f_2)|^{\lambda} \, d' \log| \iota_{1,\kappa}^*(f_2)|\wedge d''\big(\iota_{1,\kappa}^* (\omega)\big) \end{multline*} for every $\omega \in \mathscr A^{n-2,n-2}_c(U)$.
The limit, as $\lambda$ tends to $0$ in $\mathbb{R}^*$ (in the sense of weak convergence of $(2,2)$-currents on $U$), of $d'd''\big[|f_2|^\lambda/\lambda\big]\, \big[{\rm div}(f_{1,\kappa})\big]$ exists and defines a $(2,2)$-current with support contained in $Z_1\cap Z_2$, which we agree to denote by $[{\rm div}(f_{1,\kappa})] \wedge [{\rm div}(f_2)]$ (respecting this order for the time being). One observes, moreover, that \begin{equation}\label{prod2diviseurs} [{\rm div}(f_{1,\kappa})]\wedge [{\rm div}(f_{2})] = (\iota_{1,\kappa})_* \Big(d'd'' \big[\log |\iota_{1,\kappa}^* f_2|\big]\Big). \end{equation} One then sets, still respecting the order for the time being, $$ [{\rm div}(f_{1})] \wedge [{\rm div}(f_2)] := \sum_\kappa \pm [{\rm div}(f_{1,\kappa})] \wedge [{\rm div}(f_2)], $$ since $[{\rm div}(f_1)]:= \sum_\kappa \pm [{\rm div}(f_{1,\kappa})] $ (see \cite[Lemma 4.6.4]{ChLD} and the comment following the proof of that lemma). \begin{prop}\label{prop2fonctions} Let $f_1$ and $f_2$ be two meromorphic functions on an open subset $U$ of $X$ such that ${\rm codim}_U \big({\rm Supp}\, \big([{\rm div}(f_1)]\big) \cap {\rm Supp}\, \big([{\rm div}(f_2)]\big)\big)\geq 2$. For every $(\lambda_1,\lambda_2) \in (\mathbb{R}^*)^2$, one defines a current $T_{\lambda_1,\lambda_2}^{f_1,f_2}$ belonging to $\mathscr D_{2,2}(U)$ by setting $$ \forall\, \omega \in \mathscr{A}^{n-2,n-2}_c(U),\quad \big\langle T_{\lambda_1,\lambda_2}^{f_1,f_2},\omega \big\rangle = - \int_{U} d'\Big(\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge d''\omega $$ after splitting this integral according to a partition of unity $1=\sum_\iota \varphi_\iota$ subordinate to the support of $d''\omega$, in order to ensure its convergence.
Then one has, in the sense of currents, \begin{equation}\label{produit} \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1 \not=0,\ \lambda_2 \not=0}} T_{\lambda_1,\lambda_2}^{f_1,f_2} = [{\rm div}(f_1)] \wedge [{\rm div}(f_2)], \end{equation} where the current on the right-hand side was defined above in terms of the non-zero-divisors $f_{1,\kappa}$ occurring in $f_j$ ($j=1,2$). \end{prop} \begin{proof} We denote by $Z_1$ and $Z_2$ the closed analytic subspaces (in the Zariski sense) of $U$ defined as the supports of the currents $[{\rm div}(f_1)]$ and $[{\rm div}(f_2)]$. Let $\omega \in \mathscr A^{n-2,n-2}_c(U)$. Because of the hypothesis ${\rm codim}_U \big({\rm Supp}\, \big([{\rm div}(f_1)]\big) \cap {\rm Supp}\, \big([{\rm div}(f_2)]\big)\big)\geq 2$, it follows from Lemma 3.2.5 of \cite{ChLD} and from the definition of the local dimension $d_{\mathbb{K}}(x)$ ($x\in U$) as the minimum of the $\mathbb{K}$-analytic dimensions of the $\mathbb{K}$-affinoid domains containing $x$ (see for instance \cite[Definition 1.16]{Duc07}) that the support of the $(n-2,n-1)$-differential form $d''\omega$ does not meet the Zariski closed subset $Z_1\cap Z_2$. By the partition-of-unity lemma \cite[Proposition 3.3.6]{ChLD}, one may introduce on $U$ a partition of unity $1= \sum_\iota \varphi_\iota$ (by smooth compactly supported functions) subordinate to the cover of the compact subset ${\rm Supp}\, (d''\omega)$ of $U$ by the two open sets $U_{f_1}$ and $U_{f_2}$. We give the following meaning to the expression \begin{equation}\label{expressionProp11} - \int_{U} d'\Big(\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge \varphi_\iota\, d''\omega \end{equation} in the two (in fact symmetric) cases ${\rm Supp}\, \varphi_\iota \subset U_{f_1}$ and ${\rm Supp}\, \varphi_\iota \subset U_{f_2}$.
\begin{itemize} \item In the first case, the meaning given to the expression \eqref{expressionProp11} is the following: one introduces, following Lemma 4.6.1 of \cite{ChLD}, the current $\big[|f_2|^{\lambda_2}/\lambda_2\big]$, and \eqref{expressionProp11} is understood as \begin{multline}\label{expressioncas1} - \Big\langle d'\Big[ \frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, d'd'' \Big(\varphi\, \frac{|f_1|^{\lambda_1}}{\lambda_1}\Big) \wedge \varphi_\iota\, d''\omega \Big\rangle \\ = - \Big\langle d'\Big[ \frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, d'\big(\varphi\, |f_1|^{\lambda_1}\, d''(\varphi\, \log |f_1|)\big) \wedge \varphi_\iota\, d''\omega \Big\rangle \\ = - \Big\langle d'\Big[ \frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, |f_1|^{\lambda_1} \Big(\varphi\, d'd''(\log |f_1|) + \lambda_1 d'(\varphi \log |f_1|) \wedge d''(\varphi \log |f_1|)\Big) \wedge \varphi_\iota \, d''\omega \Big\rangle \\ = - \lambda_1\, \Big\langle d'\Big[ \frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, \varphi\, |f_1|^{\lambda_1}\, d'(\varphi \log |f_1|) \wedge d''(\varphi \log |f_1|) \wedge \varphi_\iota \, d''\omega \Big\rangle \end{multline} where $\varphi$ is a smooth function with support contained in $U_{f_1}$, identically equal to $1$ in a neighborhood of the support of $\varphi_\iota\, d''\omega$. We used here the fact that $d'd''\log |f_1| =0$ in $U_{f_1}$, a consequence of the Lelong--Poincar\'e formula \cite[th\'eor\`eme 4.6.5]{ChLD}.
\item In the second case, the meaning given to the expression \eqref{expressionProp11} is the following: \begin{equation}\label{expressioncas2} - \Big\langle T_{\lambda_1}^{f_1}\,,\, d'\Big(\psi\, \frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge \varphi_\iota\, d''\omega \Big\rangle = -\Big\langle T_{\lambda_1}^{f_1}\,,\, \varphi_\iota\, |f_2|^{\lambda_2}\, d'(\psi \log |f_2|)\wedge d''\omega\Big\rangle, \end{equation} where $\psi$ is a smooth function with support contained in $U_{f_2}$, identically equal to $1$ in a neighborhood of the support of $\varphi_\iota\, d''\omega$. \end{itemize} On the other hand, in the open set $U_{f_j}$ ($j=1,2$) one has the following identity between smooth functions: $$ \forall\, \lambda_j\in \mathbb{R}^*,\quad \frac{|f_j|^{\lambda_j}}{\lambda_j} = \sum\limits_{k=0}^\infty \lambda_j^{k-1} \frac{(\log |f_j|)^k}{k!}, $$ the convergence being uniform on every compact subset of $U_{f_j}$. If $[(\log |f_j| )^k]$ denotes the $(0,0)$-current on $U$ associated with the function $(\log |f_j|)^k$ following Lemma 4.6.1 of \cite{ChLD}, one therefore has the current-theoretic identities $$ \Big[\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big] = \frac{[1]}{\lambda_j} + \sum\limits_{k=1}^\infty \frac{\lambda_j^{k-1}}{k!} \, \big[(\log |f_j|)^k\big], $$ the convergence of the series on the right-hand side being understood in the weak sense in $\mathscr D_{0,0}(U)$. One deduces \begin{equation}\label{relationscourants} \begin{split} & T_{\lambda_1}^{f_1} = d'd'' \Big[ \frac{|f_1|^{\lambda_1}}{\lambda_1}\Big] = \sum\limits_{k=1}^\infty \frac{\lambda_1^{k-1}}{k!} \, d'd''\, \big[(\log |f_1|)^k\big]\\ & d'\, \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big] = \sum\limits_{k=1}^\infty \frac{\lambda_2^{k-1}}{k!}\, d'\big[(\log |f_2|)^k\big]. \end{split} \end{equation} One observes that each of the contributions $$ (\lambda_1,\lambda_2) \longmapsto - \int_{U} d'\Big(\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge \varphi_\iota\, d''\omega $$ admits a limit as $(\lambda_1,\lambda_2)$ tends to $0$. In the first case, one has \begin{multline}\label{limite-cas1} \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1,\lambda_2 \in \mathbb{R}^*}} \Big(- \int_{U} d'\Big(\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge \varphi_\iota\, d''\omega\Big) = \\ \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1,\lambda_2 \in \mathbb{R}^*}} \Big( -\lambda_1\, \sum\limits_{k=1}^\infty \frac{\lambda_2^{k-1}}{k!} \Big\langle d'\, \big[(\log |f_2|)^k\big]\,,\, \varphi\, |f_1|^{\lambda_1}\, d'(\varphi \log |f_1|) \wedge d''(\varphi \log |f_1|) \wedge \varphi_\iota \, d''\omega\Big\rangle\Big) = 0 \end{multline} by the dominated convergence theorem.
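The role played by the series expansion of $|f_j|^{\lambda_j}/\lambda_j$ in this dominated-convergence argument can be checked by a short formal computation (a sketch; $u$ is shorthand introduced here for $\log|f_j|$, which is bounded on compact subsets of $U_{f_j}$):

```latex
% Formal check: with u := \log |f_j|, bounded on compact subsets of U_{f_j},
\frac{|f_j|^{\lambda_j}}{\lambda_j} \;=\; \frac{e^{\lambda_j u}}{\lambda_j}
\;=\; \frac{1}{\lambda_j} \;+\; \sum_{k\ge 1} \frac{\lambda_j^{k-1}}{k!}\, u^{k},
\qquad
d'd''\Big[\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big]
\;=\; \sum_{k\ge 1} \frac{\lambda_j^{k-1}}{k!}\, d'd''\big[u^{k}\big],
% since the constant term [1]/\lambda_j is annihilated by d'd''. Only the
% k = 1 term survives as \lambda_j -> 0, recovering the Lelong-Poincare
% formula: \lim_{\lambda_j -> 0} d'd''[|f_j|^{\lambda_j}/\lambda_j]
%        = d'd''[\log |f_j|] = [div(f_j)] on U.
```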
In the second case, one has for the same reasons \begin{equation}\label{limite-cas2} \begin{split} &\qquad\lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1,\lambda_2 \in \mathbb{R}^*}} \Big(- \int_{U} d'\Big(\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge \varphi_\iota\, d''\omega\Big) \\ &\qquad= \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1,\lambda_2 \in \mathbb{R}^*}} \Big( \sum\limits_{k=1}^\infty \frac{\lambda_1^{k-1}}{k!} \Big\langle d'd''\, \big[(\log |f_1|)^k\big]\,,\, \varphi_\iota\, |f_2|^{\lambda_2}\, d'(\psi \log |f_2|)\wedge d''\omega\Big\rangle\Big) \\ &\qquad= \lim\limits_{\stackrel{\lambda_2 \rightarrow 0}{\lambda_2 \in \mathbb{R}^*}} \Big\langle [{\rm div}(f_1)]\,,\, \varphi_\iota\, |f_2|^{\lambda_2}\, d'(\psi \log |f_2|)\wedge d''\omega\Big\rangle\\ &\qquad= \lim\limits_{\stackrel{\lambda_2 \rightarrow 0}{\lambda_2 \in \mathbb{R}^*}} \Big\langle [{\rm div}(f_1)]\,,\, \varphi_\iota\, d'\, \Big(\frac{\psi |f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d''\omega\Big\rangle \\ &\qquad= \lim\limits_{\stackrel{\lambda_2 \rightarrow 0}{\lambda_2 \in \mathbb{R}^*}} \Big( \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\, [{\rm div}(f_1)]\,,\, \varphi_\iota \, d'd''\omega \Big\rangle + \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\, [{\rm div}(f_1)]\,,\, d'(\varphi_\iota)\wedge d''\omega \Big\rangle\Big). \end{split} \end{equation} Now, in the first case, since the support of $\varphi_\iota$ does not meet $Z_1$, one has for every $\lambda_2$ in $\mathbb{R}^*$ $$ \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\, [{\rm div}(f_1)]\,,\, \varphi_\iota \, d'd''\omega \Big\rangle + \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\, [{\rm div}(f_1)]\,,\, d'(\varphi_\iota)\wedge d''\omega \Big\rangle = 0. $$ Taking \eqref{limite-cas1} and \eqref{limite-cas2} into account, one therefore sees that the limit, as $(\lambda_1,\lambda_2)$ tends to $(0,0)$ in $(\mathbb{R}^*)^2$, of the expression $$ - \int_{U} d'\Big(\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge d''\omega:= - \sum\limits_{\iota \in I} \int_{U} d'\Big(\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big) \wedge d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge \varphi_\iota\, d''\omega $$ equals the limit, as $\lambda$ tends to $0$, of $$ \sum\limits_{\iota \in I} \Big( \Big\langle \Big[\frac{|f_2|^{\lambda}}{\lambda}\Big]\, [{\rm div}(f_1)]\,,\, \varphi_\iota \, d'd''\omega \Big\rangle + \Big\langle \Big[\frac{|f_2|^{\lambda}}{\lambda}\Big]\, [{\rm div}(f_1)]\,,\, d'\varphi_\iota\wedge d''\omega \Big\rangle\Big), $$ that is, since $\sum_{\iota \in I} \varphi_\iota =1$ and hence $\sum_{\iota \in I} d'\varphi_\iota =0$, the action on $\omega$ of the current $[{\rm div}(f_1)]\wedge [{\rm div}(f_2)]$ as defined before the statement of Proposition \ref{prop2fonctions}. \end{proof} \begin{remark}\label{rem2fonctions} {\rm One checks that $T_{\lambda_1,\lambda_2}^{f_1,f_2} = T_{\lambda_2,\lambda_1}^{f_2,f_1}$ for every pair $(\lambda_1,\lambda_2)$ in $(\mathbb{R}^*)^2$ and every pair of meromorphic functions $(f_1,f_2)$. One first defines in $U_{f_1}\cup U_{f_2}$ the two currents $[|f_2|^{\lambda_2}/\lambda_2] \, T_{\lambda_1}^{f_1}$ and $[|f_1|^{\lambda_1}/\lambda_1]\, T_{\lambda_2}^{f_2}$ in the following (symmetric) manner.
For instance \begin{multline*} \forall\, \omega \in \mathscr A^{n-1,n-1}_c(U), \quad \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\, T_{\lambda_1}^{f_1}\,,\, \omega\Big\rangle := \\ \begin{cases} \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, d'd''\Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge \omega \Big\rangle\ {\rm if}\ {\rm Supp}\, \omega \subset U_{f_1} \\ \\ \Big\langle T_{\lambda_1}^{f_1}\,,\, \frac{|f_2|^{\lambda_2}}{\lambda_2}\, \omega\Big\rangle \ {\rm if}\ {\rm Supp}(\omega) \subset U_{f_2} \end{cases} \end{multline*} (in the first case one uses Lemma 4.6.1 of \cite{ChLD}). The two alternative definitions proposed here glue together in $U_{f_1}\cap U_{f_2}$. One then defines the following two currents in $U_{f_1}\cup U_{f_2}$: $$ d'\Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big] \wedge T_{\lambda_1}^{f_1} := d'\Big( \frac{|f_2|^{\lambda_2}}{\lambda_2}\, T_{\lambda_1}^{f_1}\Big)\ ,\ d'\Big[\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big] \wedge T_{\lambda_2}^{f_2} := d'\Big( \frac{|f_1|^{\lambda_1}}{\lambda_1}\, T_{\lambda_2}^{f_2}\Big). $$ One observes that the current $\mu_{\lambda_1,\lambda_2}$, defined as the difference of these two currents, is a $d''$-closed current in $U_{f_1}\cup U_{f_2}$: indeed, one has for instance, if $\alpha \in \mathscr A^{n-2,n-2}_c(U_{f_1})$, \begin{multline}\label{calculsremarque} \Big\langle d'\Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big] \wedge T_{\lambda_1}^{f_1}\,,\, d''\alpha \Big\rangle = - \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, d'd'' \Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge d'd''\alpha \Big\rangle \\ = - \Big\langle \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, d'\Big( d''\Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge d'd''\alpha\Big)\Big\rangle = \Big\langle d'\Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, d''\Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big)\wedge d'd''\alpha\Big\rangle \\ = \Big\langle d'\Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, d''\Big(\frac{|f_1|^{\lambda_1}}{\lambda_1}\, d'd''\alpha\Big) \Big\rangle = \Big\langle d'd'' \Big[\frac{|f_2|^{\lambda_2}}{\lambda_2}\Big]\,,\, \frac{|f_1|^{\lambda_1}}{\lambda_1}\, d'd''\alpha \Big\rangle \\ = \Big\langle d'\Big[\frac{|f_1|^{\lambda_1}}{\lambda_1}\Big] \wedge T_{\lambda_2}^{f_2}\,,\, d''\alpha \Big\rangle~; \end{multline} the computation is symmetric in $U_{f_2}$. Recall that by definition \begin{multline*} \forall\, \lambda_1,\lambda_2 \in \mathbb{R}^*,\quad \forall\, \omega \in \mathscr A_c^{n-2,n-2}(U), \quad \Big\langle T_{\lambda_1,\lambda_2}^{f_1,f_2} - T_{\lambda_2,\lambda_1}^{f_2,f_1}\,,\,\omega \Big\rangle = \sum\limits_{\iota \in I} \big\langle \mu_{\lambda_1,\lambda_2}, \varphi_\iota \, d''\omega \big\rangle \\ = - \sum\limits_{\iota \in I} \big\langle \mu_{\lambda_1,\lambda_2}\,,\, d'' \varphi_\iota \wedge \omega \big\rangle = \Big\langle \mu_{\lambda_1,\lambda_2}\,,\, \Big(\sum_{\iota \in I} d''\varphi_\iota\Big) \wedge \omega\Big\rangle = 0. \end{multline*} We shall agree to write from now on, for all $\lambda_1,\lambda_2\in \mathbb{R}^*$: \begin{equation}\label{notation} T_{\lambda_1,\lambda_2}^{f_1,f_2} = d'd''\Big[ \frac{|f_2|^{\lambda_2}}{\lambda_2}\Big] \wedge d'd''\Big[ \frac{|f_1|^{\lambda_1}}{\lambda_1}\Big], \end{equation} the order of the two factors being immaterial here, which is consistent with the fact that this is (formally) a product of $2$-currents. Moreover, $$ [{\rm div}(f_1)]\wedge [{\rm div}(f_2)]= [{\rm div}(f_2)] \wedge [{\rm div}(f_1)]. $$ } \end{remark} \noindent One may now consider the case of three regular meromorphic functions without zero divisors on an open subset $U$ of $X$. Recall that the action of the current $[{\rm div}(f_1)]\wedge [{\rm div}(f_2)]$ on $\omega \in \mathscr A_c^{n-2,n-2}(U)$ is defined by \begin{multline*} \sum_{\kappa}\Big( - \int_{\iota_{1,\kappa}^{-1}(U_{f_2})\cap Z_{1,\kappa}} d'(\chi_\kappa\, \log| \iota_{1,\kappa}^*(f_2)|)\wedge d''\big(\iota_{1,\kappa}^* (\omega)\big) \Big) \\ = \sum_\kappa \Big(-\int_{\iota_{1,\kappa}^{-1}(U)\cap Z_{1,\kappa}} d'(\chi_\kappa\, \log| \iota_{1,\kappa}^*(f_2)|)\wedge d''\big(\iota_{1,\kappa}^* (\omega)\big) \Big), \end{multline*} where $\chi_\kappa$ is a smooth function on the $\mathbb{K}$-analytic space $Z_1$, identically equal to $1$ in a neighborhood of the support of the smooth $(n-2,n-1)$-form $d''\big(\iota_{1,\kappa}^* (\omega)\big)$, with support contained in the largest open subset of $Z_{1,\kappa}\cap \iota_{1,\kappa}^{-1}(U)$ on which the regular meromorphic function $\iota_{1,\kappa}^*(f_2)$ does not vanish. For $\lambda \in \mathbb{R}^*$, one uses Lemma 4.6.1 of \cite{ChLD} in each open set $\iota_{1,\kappa}^{-1}(U)$ to justify the definition of the $(3,3)$-current $$ \big([{\rm div}(f_1)]\wedge [{\rm div}(f_2)]\big) \wedge [{\rm div}(f_3)] $$ in the following manner.
Recall first (see \eqref{prod2diviseurs}) that $$ [{\rm div}(f_1)] \wedge [{\rm div}(f_2)] := \sum_\kappa \pm (\iota_{1,\kappa})_* \Big( d'd'' \big[\log |\iota_{1,\kappa}^* (f_2)|\big]\Big) = \sum_\kappa \pm (\iota_{1,\kappa})_* \Big( \big[{\rm div}(\iota_{1,\kappa}^*(f_2))\big]\Big) $$ by the Lelong--Poincar\'e formula applied to the regular meromorphic function $\iota_{1,\kappa}^*(f_2)$ on the open subset $\iota_{1,\kappa}^{-1}(U)$ of the $\mathbb{K}$-analytic space $Z_{1,\kappa}$ (of dimension $n-1$). One then defines (respecting the order for the time being) $$ [{\rm div}(f_1)]\wedge [{\rm div}(f_2)] \wedge [{\rm div}(f_3)] := \sum_\kappa \pm\, (\iota_{1,\kappa})_*\Big( [{\rm div}(\iota_{1,\kappa}^* (f_2))] \wedge [{\rm div}(\iota_{1,\kappa}^* (f_3))]\Big). $$ More generally, one arrives at the following definition: \begin{definition}{\rm If $f_1,...,f_p$ are $p$ regular meromorphic functions on $U$ such that for every list of indices $1\leq j_1 < \dots < j_k \leq p$ (with $k=1,...,p$) one has $$ {\rm codim}_{U} \Big(\bigcap_{\ell =1}^k {\rm Supp}\, \big([{\rm div}(f_{j_\ell})]\big)\Big) \geq k, $$ one defines inductively, for every $k$ between $2$ and $p$, \begin{equation}\label{defpproduit} [{\rm div}(f_1)]\wedge [{\rm div}(f_2)] \wedge\cdots \wedge [{\rm div}(f_{k})] := \sum_\kappa \pm\, (\iota_{1,\kappa})_*\Big( [{\rm div}(\iota_{1,\kappa}^* (f_2))] \wedge \cdots \wedge [{\rm div}(\iota_{1,\kappa}^* (f_k))]\Big). \end{equation} } \end{definition} For the time being, this construction requires respecting the order in which the regular meromorphic functions $f_j$ are taken. \begin{theorem}\label{theorempfonctions} Let $f_1,...,f_p$ ($p\geq 1$) be regular meromorphic functions on an open subset $U$ of a good boundaryless Berkovich $\mathbb{K}$-analytic space of dimension $n$.
Assume that for every list of indices $1\leq j_1 < \dots < j_k \leq p$ (with $k=1,...,p$) one has $$ {\rm codim}_{U} \Big(\bigcap_{\ell =1}^k {\rm Supp}\, \big([{\rm div}(f_{j_\ell})]\big)\Big) \geq k. $$ One defines a current $T_{\lambda_1,...,\lambda_p}^{f_1,...,f_p} \in \mathscr{D}_{p,p}(U,\mathbb{R})$ by setting $$ \forall\, \omega \in \mathscr{A}^{n-p,n-p}_c(U),\quad \langle T_{\lambda_1,...,\lambda_p}^{f_1,...,f_p},\omega \rangle = - \int_{U} d'\Big(\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge d''\omega $$ after splitting this integral according to a partition of unity $1=\sum_\iota \varphi_\iota$ subordinate to the support of $d''\omega$, so as to give it a meaning and ensure its convergence. This current does not depend on the order in which the functions $f_1,...,f_p$ are taken, and for every permutation $\sigma$ of $\{1,...,p\}$ one therefore has \begin{equation}\label{produitgen} \lim\limits_{\stackrel{(\lambda_1,...,\lambda_p) \rightarrow (0,...,0)}{\lambda_1 \not=0,..., \lambda_p \not=0}} T_{\lambda_1,...,\lambda_p}^{f_1,...,f_p} = [{\rm div}(f_1)] \wedge\dots \wedge [{\rm div}(f_p)] = [{\rm div}(f_{\sigma(1)})] \wedge \dots \wedge [{\rm div}(f_{\sigma(p)})], \end{equation} where the multiplicative operation between the currents $[{\rm div}(f_j)]$ (respecting an a priori imposed order) was introduced above. \end{theorem} \begin{proof} The result holds for $p=2$ by Proposition \ref{prop2fonctions} and Remark \ref{rem2fonctions}. We therefore assume it (induction hypothesis) for $p-1$ meromorphic functions ($p\geq 3$). For $j=1,...,p$, denote by $Z_j$ the closed analytic subspaces (in the Zariski sense) of $U$, of codimension $1$, defined as the supports of the currents $[{\rm div}(f_j)]$.
For each $j=1,...,p$, denote by $\widehat{Z}_j$ the intersection of the $\mathbb{K}$-analytic subsets $Z_\ell$ for $\ell=1,...,j-1,j+1,...,p$. If $\widehat{Z}_j$ is nonempty (${\rm codim}_{U} \widehat{Z}_j < +\infty$), then necessarily ${\rm codim}_{U} \widehat{Z}_j = p-1$, since this codimension is bounded below by $p-1$ by hypothesis and $\widehat{Z}_j$ is defined as the common zero locus of exactly $p-1$ equations. One may therefore regard $\widehat{Z}_j$ as a $\mathbb{K}$-analytic space of dimension $n-p+1$. Let $\omega \in \mathscr A^{n-p,n-p}_c(U)$. Owing to the hypothesis that for every list of indices $1\leq j_1 < \dots < j_k \leq p$ (with $k=1,...,p$) $$ {\rm codim}_{U} \Big(\bigcap_{\ell =1}^k {\rm Supp}\, \big([{\rm div}(f_{j_\ell})]\big)\Big) \geq k, $$ it follows from Lemma 3.2.5 of \cite{ChLD} and from the definition of the local dimension $d_{\mathbb{K}}(x)$ ($x\in U$) as the minimum of the $\mathbb{K}$-analytic dimensions of the $\mathbb{K}$-affinoid domains containing $x$ (see for instance \cite[d\'efinition 1.16]{Duc07}) that the support of the differential form $d''\omega$, of bidegree $(n-p,n-p+1)$, does not meet the Zariski subset $Z_1\cap Z_2\cap\cdots\cap Z_p$. By the partition-of-unity lemma \cite[proposition 3.3.6]{ChLD}, one may introduce in $U$ a partition of unity $1= \sum_\iota \varphi_\iota$ (by smooth compactly supported functions) subordinate to the cover of the compact set ${\rm Supp}\, (d''\omega)$ of $U$ by the $p$ open sets $U_{f_1}$, $U_{f_2}$,..., $U_{f_p}$.
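As an illustration of the codimension hypothesis (a model example chosen here, not taken from the text), one may think of coordinate functions on the analytification of affine space:

```latex
% Model example: take f_j = z_j (j = 1,...,p, with p <= n) on
% U = (\mathbb{A}^n_{\mathbb{K}})^{\rm an}. For 1 <= j_1 < ... < j_k <= p,
\bigcap_{\ell=1}^{k} {\rm Supp}\,\big[{\rm div}(z_{j_\ell})\big]
 \;=\; \{\, z_{j_1} = \cdots = z_{j_k} = 0 \,\},
\qquad
{\rm codim}_U \,\{\, z_{j_1} = \cdots = z_{j_k} = 0 \,\} \;=\; k,
% so every partial intersection has exactly the required codimension, and each
% \widehat{Z}_j = \{ z_\ell = 0 : \ell \neq j \} is a K-analytic space of
% dimension n - p + 1, as in the discussion above.
```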
\noindent As a first step, given $p-1$ regular meromorphic functions $g_1,...,g_{p-1}$ satisfying the condition $$ {\rm codim}_{U} \Big(\bigcap_{\ell =1}^k {\rm Supp}\, \big([{\rm div}(g_{j_\ell})]\big)\Big) \geq k $$ for every $k$ between $1$ and $p-1$ and all $1\leq j_1 < \cdots <j_k\leq p-1$, we must give a meaning (via Lemma 4.6.1 of \cite{ChLD}) to the current $$ \Big[\frac{|g_1|^{\lambda_1}}{\lambda_1}\Big] \, T_{\lambda_2,...,\lambda_{p-1}}^{g_2,...,g_{p-1}} \in \mathscr D_{p-2,p-2}(U). $$ We proceed as follows: given $\eta \in \mathscr A_c^{n-p+2,n-p+2}(U)$, since the support of the form $\eta$ does not meet $Z_{g_1} \cap \dots \cap Z_{g_{p-1}}$ by Lemma 3.2.5 of \cite{ChLD}, we introduce a partition of unity $\sum_\iota \tau_\iota$ of the support of $\eta$ subordinate to the cover of this support by the open sets $U_{g_\ell}$, $\ell=1,...,p-1$. One then defines \begin{multline}\label{produitaux} \Big\langle \Big[\frac{|g_1|^{\lambda_1}}{\lambda_1}\Big] \, T_{\lambda_2,...,\lambda_{p-1}}^{g_2,...,g_{p-1}}\,,\, \tau_\iota\, \eta \Big\rangle \\ := \begin{cases} \Big\langle T_{\lambda_2,...,\lambda_{p-1}}^{g_2,...,g_{p-1}}\,,\, \Big(\displaystyle{\frac{|g_1|^{\lambda_1}}{\lambda_1}}\Big)\, \tau_\iota \, \eta\Big\rangle \ {\rm if}\ {\rm Supp}(\tau_\iota)\subset U_{g_1} \\ \\ \Big\langle \Big[\displaystyle{\frac{|g_1|^{\lambda_1}}{\lambda_1}}\Big] \, T_{\lambda_2,...,\widehat{\lambda_{j_0}},...,\lambda_{p-1}}^{g_2,..., \widehat{g_{j_0}},...,g_{p-1}}\,,\, d'd''\Big(\frac{|g_{j_0}|^{\lambda_{j_0}}}{\lambda_{j_0}}\Big)\wedge \tau_\iota\, \eta \Big\rangle\ {\rm if}\ {\rm Supp}(\tau_\iota) \subset U_{g_{j_0}}. \end{cases} \end{multline} The second alternative is handled inductively, and the construction eventually terminates, when $p=2$, with an application of Lemma 4.6.1 of \cite{ChLD}.
The construction also shows (by induction on the number of functions $g_j$ involved) that the limit \begin{equation}\label{limitepfonctions} \lim\limits_{\stackrel{(\lambda_1,...,\lambda_{p-1}) \rightarrow (0,...,0)} {\lambda_1,...,\lambda_{p-1}\in \mathbb{R}^*}} \Big(\Big[\frac{|g_1|^{\lambda_1}}{\lambda_1}\Big] \, T_{\lambda_2,...,\lambda_{p-1}}^{g_2,...,g_{p-1}}\Big) \end{equation} exists unconditionally in $\mathscr D_{p-2,p-2}(U)$ (in the weak sense of convergence of currents). \vskip 1mm \noindent We are now in a position to give the following meaning to the expression \begin{equation}\label{expressionpfonctions} - \int_{U} d'\Big(\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge \varphi_\iota d''\omega \end{equation} according to whether the support of $\varphi_\iota$ is contained in one of the $U_{f_j}$ for $j=1,...,p-1$ or in $U_{f_p}$. \begin{itemize} \item If ${\rm Supp}\, \varphi_\iota \subset U_{f_{j_0}}$ for some index $j_0$ between $1$ and $p-1$, the expression \eqref{expressionpfonctions} is defined by \begin{multline}\label{expressionpfonctionscas1} - \int_{U} d'\Big(\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge \varphi_\iota d''\omega\\ := - \Big\langle d' \Big( \Big[\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big]\, T_{\lambda_1,...,\widehat{\lambda_{j_0}},...,\lambda_{p-1}}^{f_1,...,\widehat{f_{j_0}},...,f_{p-1}}\Big)\,,\, d'd''\Big( \frac{\varphi |f_{j_0}|^{\lambda_{j_0}}}{\lambda_{j_0}}\Big) \wedge \varphi_\iota d''\omega\Big\rangle \\ = - \lambda_{j_0} \, \Big\langle d' \Big( \Big[\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big]\, T_{\lambda_1,...,\widehat{\lambda_{j_0}},...,\lambda_{p-1}}^{f_1,...,\widehat{f_{j_0}},...,f_{p-1}}\Big)\,,\, \varphi\, |f_{j_0}|^{\lambda_{j_0}}\, d'(\varphi \log |f_{j_0}|) \wedge d''(\varphi \log |f_{j_0}|) \wedge \varphi_\iota \, d''\omega \Big\rangle, \end{multline} where $\varphi$ denotes a smooth function identically equal to $1$ in a neighborhood of the support of $\varphi_\iota \, d''\omega$, with support contained in $U_{f_{j_0}}$. \item If ${\rm Supp}\, \varphi_\iota \subset U_{f_p}$, the expression \eqref{expressionpfonctions} is defined by \begin{multline}\label{expressionpfonctionscas2} - \int_{U} d'\Big(\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge \varphi_\iota d''\omega\\ := - \Big\langle T_{\lambda_1,...,\lambda_{p-1}}^{f_1,...,f_{p-1}}\,,\, d'\Big( \varphi \frac{|f_p|^{\lambda_p}}{\lambda_p}\Big)\wedge \varphi_\iota\, d''\omega\Big\rangle, \end{multline} where $\varphi$ now denotes a smooth function identically equal to $1$ in a neighborhood of the support of $\varphi_\iota\, d''\omega$, with support contained in $U_{f_p}$. \end{itemize} We now study the behavior of each function \begin{equation}\label{fonctionpfonctions} (\lambda_1,...,\lambda_p) \longmapsto -\int_{U} d'\Big(\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge \varphi_\iota d''\omega \end{equation} as $(\lambda_1,...,\lambda_p)$ tends to $(0,...,0)$ unconditionally in $(\mathbb{R}^*)^p$, according to which of the two cases distinguished above occurs. \begin{itemize} \item If ${\rm Supp}(\varphi_\iota)\subset U_{f_{j_0}}$ for some $j_0\in \{1,...,p-1\}$, one may replace, in the pairing on the right-hand side of \eqref{expressionpfonctionscas1}, the expression $|f_{j_0}|^{\lambda_{j_0}}$ by $$ |f_{j_0}|^{\lambda_{j_0}} = \sum\limits_{k=0}^\infty \frac{\lambda_{j_0}^k}{k!} \, (\log |f_{j_0}|)^k. $$ Since the unconditional limit \eqref{limitepfonctions} exists in the weak sense, it then follows that the function \eqref{fonctionpfonctions} tends to $0$ as $(\lambda_1,...,\lambda_p)$ tends unconditionally to $(0,...,0)$ in $(\mathbb{R}^*)^p$.
\item If ${\rm Supp}(\varphi_\iota) \subset U_{f_p}$, one observes that the expression \eqref{expressionpfonctionscas2} can also be written \begin{multline*} - \int_{U} d'\Big(\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge \varphi_\iota d''\omega\\ = - \sum\limits_{k=0}^\infty \frac{\lambda_p^{k}}{k!} \Big\langle T_{\lambda_1,...,\lambda_{p-1}}^{f_1,...,f_{p-1}}\,,\, (\varphi \log |f_p|)^{k}\, d'(\varphi \log|f_p|) \wedge \varphi_\iota\, d''\omega\Big\rangle, \end{multline*} and, by the induction hypothesis, the function \eqref{fonctionpfonctions} admits, as unconditional limit when $(\lambda_1,...,\lambda_p)$ tends to $(0,...,0)$ in $(\mathbb{R}^*)^p$, the expression \begin{multline}\label{limitepfonctionscas2} \Big\langle [{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_{p-1})]\,,\, \log |f_p| \, \varphi_\iota\, d'd''\omega\Big\rangle \\ + \Big\langle [{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_{p-1})]\,,\, \log|f_p|\, d'\varphi_\iota \wedge d''\omega\Big\rangle \\ = \Big\langle \big[\log |f_p|\big]\, T^{f_1,...,f_{p-1}}_{0,...,0}\,,\, \varphi_\iota\, d'd''\omega\Big\rangle \\ + \Big\langle \big[\log |f_p|\big]\, T^{f_1,...,f_{p-1}}_{0,...,0}\,,\, d'\varphi_\iota \wedge d''\omega\Big\rangle, \end{multline} where the current $$ \big[\log |f_p|\big]\, T^{f_1,...,f_{p-1}}_{0,...,0} = [\log |f_p|]\, \Big([{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_{p-1})]\Big) $$ is well defined thanks to Lemma 4.6.1 of \cite{ChLD}, given the inductive expression of the current $[{\rm div}(f_1)] \wedge \cdots \wedge [{\rm div}(f_{p-1})]$ (whose support is, by construction, contained in $Z_1 \cap \dots \cap Z_{p-1}$).
\end{itemize} One also remarks that if ${\rm Supp}\, (\varphi_\iota)\subset U_{f_{j_0}}$ with $1\leq j_0\leq p-1$, then \begin{equation}\label{limitepfonctionscas2bis} \Big\langle \big[\log |f_p|\big]\, T^{f_1,...,f_{p-1}}_{0,...,0}\,,\, \varphi_\iota\, d'd''\omega\Big\rangle + \Big\langle \big[\log |f_p|\big]\, T^{f_1,...,f_{p-1}}_{0,...,0}\,,\, d'\varphi_\iota \wedge d''\omega\Big\rangle = 0, \end{equation} since the support of the current $$ \big[\log |f_p|\big]\, T^{f_1,...,f_{p-1}}_{0,...,0} = [\log |f_p|]\, \Big([{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_{p-1})]\Big) $$ is contained in the Zariski subset $Z_1\cap \cdots \cap Z_{p-1}$ of codimension $p-1$. One therefore deduces that the limit, as $(\lambda_1,...,\lambda_p)$ tends to $(0,...,0)$ unconditionally in $(\mathbb{R}^{*})^p$, of $$ \Big\langle T^{f_1,...,f_p}_{\lambda_1,...,\lambda_p}\,,\, \omega \Big\rangle := - \sum\limits_{\iota \in I} \int_{U} d'\Big(\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{|f_j|^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge \varphi_\iota d''\omega $$ exists and equals \begin{multline*} \sum\limits_{\iota \in I} \Big(\Big\langle \big[\log |f_p|\big]\, [{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_{p-1})]\,,\, \varphi_\iota\, d'd''\omega\Big\rangle \\ + \Big\langle \big[\log |f_p|\big]\, [{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_{p-1})]\,,\, d'\varphi_\iota \wedge d''\omega\Big\rangle\Big) \\ =\Big\langle \big[ \log |f_p|\big]\, [{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_{p-1})]\,,\, d'd''\omega\Big\rangle = \Big\langle [{\rm div}(f_1)]\wedge \dots \wedge [{\rm div}(f_p)]\,,\, \omega\Big\rangle \end{multline*} given the inductive definition of the currents $[{\rm div}(f_1)]\wedge \cdots \wedge [{\rm div}(f_k)]$ for $k=2,...,p$ and the Lelong--Poincar\'e formula on the Zariski closed set $Z_1\cap \cdots \cap Z_{p-1}$ (of codimension $p-1$), regarded as a $\mathbb{K}$-analytic space of dimension $n-(p-1)$. \hfil\break It remains to justify the equality $$ T_{\lambda_{\tau(1)},...,\lambda_{\tau(p)}}^{f_{\tau(1)},...,f_{\tau(p)}} = T_{\lambda_1,...,\lambda_p}^{f_1,...,f_p} $$ for every transposition $\tau$ of $\mathscr S_{\{1,...,p\}}$ (the permutation group). The result already follows from the induction hypothesis when $\tau(p)=p$, so one is reduced to proving the current-theoretic equality \begin{equation}\label{egalitecourantielle} T_{\lambda_1,...,\lambda_{p-2},\lambda_{p-1},\lambda_p}^{f_1,...,f_{p-2},f_{p-1},f_p} = T_{\lambda_1,...,\lambda_{p-2},\lambda_{p},\lambda_{p-1}}^{f_1,...,f_{p-2},f_{p},f_{p-1}}. \end{equation} To this end, one defines, as in Remark \ref{rem2fonctions}, the current $$ \mu_{\lambda_1,...,\lambda_p} = d'\Big( \Big[\frac{|f_p|^{\lambda_p}}{\lambda_p}\Big] \, T_{\lambda_1,...,\lambda_{p-2},\lambda_{p-1}}^{f_1,...,f_{p-2},f_{p-1}}\Big) - d'\Big( \Big[\frac{|f_{p-1}|^{\lambda_{p-1}}}{\lambda_{p-1}}\Big]\, T_{\lambda_1,...,\lambda_{p-2},\lambda_p}^{f_1,...,f_{p-2},f_p}\Big), $$ where the multiplication of currents is carried out following the inductive procedure \eqref{produitaux}. Remarking that \begin{eqnarray*} T_{\lambda_1,...,\lambda_{p-2},\lambda_{p-1}}^{f_1,...,f_{p-2},f_{p-1}} = d'd''\Big( \Big[\frac{|f_{p-1}|^{\lambda_{p-1}}}{\lambda_{p-1}}\Big] \, T_{\lambda_1,...,\lambda_{p-2}}^{f_1,...,f_{p-2}}\Big) \ ,\ T_{\lambda_1,...,\lambda_{p-2},\lambda_{p}}^{f_1,...,f_{p-2},f_{p}} = d'd''\Big( \Big[\frac{|f_{p}|^{\lambda_{p}}}{\lambda_p}\Big] \, T_{\lambda_1,...,\lambda_{p-2}}^{f_1,...,f_{p-2}}\Big), \end{eqnarray*} one is in a position to repeat the computations \eqref{calculsremarque}.
One checks that the current $\mu_{\lambda_1,...,\lambda_p}$ is $d''$-closed in the union of the open sets $U_{f_j}$ for $j=1,...,p$: \begin{itemize} \item if $\alpha\in \mathscr A_c^{n-p,n-p}(U)$ has support in $U_{f_{p-1}}\cup U_{f_p}$, the computations are identical to those carried out in \eqref{calculsremarque}, the $d'$- and $d''$-closed current $T_{\lambda_1,...,\lambda_{p-2}}^{f_1,...,f_{p-2}}$ that appears there playing a neutral role; \item if on the other hand $\alpha\in \mathscr A_{c}^{n-p,n-p}(U)$ has support in $U_{f_{j_0}}$ for some $j_0$ between $1$ and $p-2$, one is led to replace the form $\alpha$ by the smooth form $\alpha \wedge d'd''(|f_{j_0}|^{\lambda_{j_0}}/\lambda_{j_0})$, thereby removing the function $f_{j_0}$ from the list $[f_1,...,f_{p-2}]$ and lowering the number of functions $f_1,...,f_{p-2}$ involved. \end{itemize} As in Remark \ref{rem2fonctions}, one concludes that the current-theoretic equality \eqref{egalitecourantielle} holds. This completes the proof of Theorem \ref{theorempfonctions}. \end{proof} \section{Mellin-type realization of normalized Green currents}\label{sectionGreen} In this section, as in the previous one, $X$ denotes a good boundaryless Berkovich space of pure dimension $n$. \vskip 1mm \noindent Let $\mathscr{L} \rightarrow U$ be a line bundle over an open subset $U$ of $X$, equipped with a continuous metric $\|\ \|=\exp(-\rho)$, where $\rho$ is a continuous real-valued function. We refer to \cite[section 6.2]{ChLD} for the notion of a metrized line bundle and to \cite[section 6.4.1]{ChLD} for the definition of the Chern form or of the Chern current, according to whether the metric $\|\ \|$ is smooth or not.
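For orientation, here is a sketch (with notation chosen here, not taken from the text) of why the Chern current admits the local description $d'd''[\rho_\iota]$ in a trivialization of $\mathscr L$ with frame $\sigma_\iota$ and $\|\sigma_\iota\| = e^{-\rho_\iota}$, independently of the chosen frame:

```latex
% Sketch: if \sigma_{\iota'} = g \, \sigma_\iota with g regular invertible on
% V_\iota \cap V_{\iota'}, then \rho_{\iota'} = \rho_\iota - \log |g|, hence
d'd''[\rho_{\iota'}] \;=\; d'd''[\rho_\iota] \;-\; d'd''\big[\log|g|\big]
\;=\; d'd''[\rho_\iota]
\quad \text{on } V_\iota \cap V_{\iota'},
% since d'd'' \log |g| = 0 there (Lelong-Poincare for the invertible
% function g). The local currents d'd''[\rho_\iota] therefore glue to a
% global (1,1)-current, the Chern current c_1(\mathscr L, \|\ \|).
```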
\'Etant donn\'ee une section m\'eromorphe $s$ du fibr\'e $\mathscr{L}$ au-dessus de l'ouvert $U$, on convient d'appeler courant de Green normalis\'e subordonn\'e au courant $[{\rm div}(s)]$ dans $U$ un courant $G\in\mathscr D_{0,0}(U)$ tel que $$ d'd'' G + [{\rm div}(s)] = c_1(\mathscr{L},\|\ \|), $$ o\`u $\|\ \|$ d\'esigne une m\'etrique continue sur le fibr\'e en droites $\mathscr{L}$ et $c_1(\mathscr{L},\|\ \|)$ d\'esigne le $(1,1)$-courant de Chern associ\'e \`a la m\'etrique $\|\ \|$. Lorsque cette m\'etrique est lisse, il en est de m\^eme de la premi\`ere forme de Chern que l'on convient de noter pour simplifier $c_1(\mathscr {L},\|\ \|)$ et le $(0,0)$-courant $G$ est alors un courant de Green pour $[{\rm div}(s)]$, au sens o\`u $d'd''G + [{\rm div}(s)]$ est un $(1,1)$-courant de la forme $\varphi \mapsto \int_U \omega \wedge \varphi$, o\`u $\omega = c_1(\mathscr {L},\|\ \|)$ est une forme lisse. \vskip 1mm \noindent Soit $\omega\in \mathscr{A}_c^{n,n}(U)$. Le support (compact) de $\omega$ \'evite tout sous-ensemble ferm\'e de Zariski d'int\'erieur vide \cite[lemme 3.2.5]{ChLD} et l'on peut donc affirmer qu'il existe, pour tout $x\in {\rm Supp}(\omega)$, un voisinage $V_x$ de $x$ dans $U$ au-dessus duquel le fibr\'e $\mathscr{L}$ admet un rep\`ere $\sigma_{V_x}$ dans lequel la section $s$ s'exprime sous la forme $f_{V_x} \sigma_{V_x}$, o\`u $f_{V_x}$ est une fonction r\'eguli\`ere inversible dans $V_x$. \begin{definition}{\rm Soit $s~: U\to \mathscr L$ une section m\'eromorphe du fibr\'e $\mathscr L$ au-dessus de $U$, \'equip\'e d'une m\'etrique lisse $\|\ \|$. On d\'efinit donc, pour tout $\lambda \in \mathbb{R}^*$, un \'el\'ement de $\mathscr{D}_{0,0}(U)$ par~: $$ G_\lambda^{s} = -\Big[\frac{\|s\|^{\lambda}}{\lambda}\Big]~: \omega \in \mathscr{A}^{n,n}_c(U) \longmapsto -\int_{U} \frac{\|s\|^{\lambda}}{\lambda}\, \omega. 
$$ } \end{definition} \noindent Il r\'esulte de la formule de Lelong-Poincar\'e que l'on a, au sens des courants dans $U$, \begin{equation}\label{poinclel2} \lim\limits_{\stackrel{\lambda \rightarrow 0}{\lambda \not=0}} (d'd'' G_{\lambda}^{s}) + [{\rm div}(s)] = c_1(\mathscr{L},\|\ \|). \end{equation} En effet, l'on a d'apr\`es la formule de Stokes ($X$ est suppos\'e sans bord), si $\varphi$ d\'esigne une fonction lisse identiquement \'egale \`a $1$ au voisinage du support de $d''\omega$ et de support compact dans $U$ (que l'on peut encore construire gr\^ace au th\'eor\`eme de partitionnement de l'unit\'e, \cite[proposition 3.3.6]{ChLD}), \begin{multline*} \forall\,\omega\in\mathscr A_c^{n-1,n-1}(U),\quad \big\langle d'd'' G_\lambda^s\,,\,\omega\big\rangle=- \big\langle d' G_\lambda^s\,,\,d''\omega\big\rangle=-\int_{X^{\rm an}}d'\Big(\varphi\frac{\|s\|^\lambda}{\lambda}\Big)\wedge d''\omega\\ =-\int_{U_s}\|s\|^\lambda\,d'\big(\log\|s\|\big)\wedge d''\omega, \end{multline*} avec $U_s~:= U\setminus Z$, o\`u $Z$ est le sous-espace analytique ferm\'e (au sens de Zariski) de $U$ d\'efini comme le support du courant $[{\rm div}(s)]$. \vskip 1mm \noindent D'apr\`es le th\'eor\`eme de convergence domin\'ee de Lebesgue, la fonction $\lambda \mapsto d'd''G_\lambda^s$ (\`a valeurs dans $\mathscr D_{1,1}(U)$) admet comme limite, lorsque $\lambda$ tend vers $0$, le courant \begin{multline}\label{eqgreen1} \omega\in\mathscr A_c^{n-1,n-1}(U) \longmapsto -\int_{U_s}d'\big(\log\|s\|\big)\wedge d''\omega =-\int_{U_s}d'd''\big(\log\|s\|\big)\wedge\omega\\=-\big\langle[{\rm div}(s)]- d'd''[\rho]\,,\, \omega\big\rangle. \end{multline} Ainsi, puisque $c_1(\mathscr{L},\|\ \|)=d'd''[\rho]$, l'\'egalit\'e \eqref{eqgreen1} ach\`eve la justification de l'\'egalit\'e \eqref{poinclel2}.
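\vskip 1mm \noindent Le passage \`a la limite ci-dessus repose sur le calcul ponctuel suivant (esquisse \'el\'ementaire, men\'ee hors du support de $[{\rm div}(s)]$, l\`a o\`u $\log\|s\|$ est lisse)~: en \'ecrivant $\|s\|^{\lambda}=\exp(\lambda \log\|s\|)$, on obtient
\begin{equation*}
d'd''\Big(\frac{\|s\|^{\lambda}}{\lambda}\Big)
=\|s\|^{\lambda}\,\Big(d'd''\big(\log\|s\|\big)+\lambda\, d'\big(\log\|s\|\big)\wedge d''\big(\log\|s\|\big)\Big)
\underset{\lambda \rightarrow 0}{\longrightarrow}\; d'd''\big(\log\|s\|\big),
\end{equation*}
tandis qu'au sens des courants dans $U$ tout entier $d'd''\big[\log\|s\|\big]=[{\rm div}(s)]-d'd''[\rho]$, ce qui redonne \eqref{poinclel2}.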
\vskip 1mm \noindent Supposons maintenant que $\mathscr{L}_1 \rightarrow U$ et $\mathscr{L}_2\rightarrow U$ sont deux fibr\'es en droites au-dessus de $U$, chacun \'equip\'e d'une m\'etrique lisse $(e^{-\rho_{j,\iota}})_\iota$ (subordonn\'ee \`a un recouvrement $(V_\iota)_\iota$ de $U$ suffisamment fin pour que les deux fibr\'es se trivialisent au-dessus de chaque $V_\iota$), ce qui signifie que, pour chaque $\iota$, les deux fonctions $\rho_{j,\iota}$ s'expriment localement au voisinage de chaque point $\xi$ de $V_\iota$ comme des fonctions $C^\infty$ \`a valeurs r\'eelles de fonctions du type $\log|f_{\iota,\xi}|$ o\`u $f_{\iota,\xi}$ est une fonction r\'eguli\`ere inversible. Pour $j=1,2$, les premiers courants de Chern $c_1(\mathscr{L}_j,\|\ \|_j)$ sont dans ce cas associ\'es \`a des \'el\'ements de $\mathscr{A}^{1,1}(U)$ (que l'on notera de la m\^eme mani\`ere, mais ce sont cette fois des $(1,1)$-formes diff\'erentielles dans $U$, que l'on traitera comme telles), dites premi\`eres formes de Chern des fibr\'es $\mathscr{L}_j$ (chacun \'equip\'e de la m\'etrique lisse $\|\ \|_j$). \vskip 1mm \noindent La proposition suivante s'inscrit dans la droite ligne de la proposition \ref{prop2fonctions}. \begin{prop}\label{prop2fonctions2} Soient $\mathscr{L}_1 \rightarrow U$ et $\mathscr{L}_2 \rightarrow U$ deux fibr\'es en droites au-dessus d'un ouvert $U$ d'un bon $\mathbb{K}$-espace de Berkovich $X$ sans bord, chacun \'equip\'e d'une m\'etrique lisse $\|\ \|_j$.
Soient $s_1$ et $s_2$ deux sections m\'eromorphes respectivement de $\mathscr{L}_1$ et $\mathscr{L}_2$ telles que $${\rm codim}_{U}\big({\rm Supp}([{\rm div}(s_1)]) \cap {\rm Supp}([{\rm div}(s_2)])\big)\geq 2.$$ Pour tout $(\lambda_1,\lambda_2)\in (\mathbb{R}^*)^2$, on d\'efinit un \'el\'ement $G_{\lambda_1,\lambda_2}^{s_1,s_2}$ de $\mathscr{D}_{1,1}(U)$ par $$ G_{\lambda_1,\lambda_2}^{s_1,s_2}~: \omega \in \mathscr{A}^{n-1,n-1}_c(U) \longmapsto - \int_{U} \frac{\|s_2\|_2^{\lambda_2}}{\lambda_2}\, d'd'' \Big(\frac{\|s_1\|^{\lambda_1}_1}{\lambda_1}\Big) \wedge \omega $$ apr\`es avoir d\'ecoup\'e cette int\'egrale suivant un partitionnement de l'unit\'e $1=\sum_\iota \varphi_\iota$ subordonn\'e au support de $d''\omega$ afin d'en assurer la convergence. De plus on a, au sens de la convergence faible des courants sur $U$, \begin{equation}\label{green1} \begin{split} & \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1\not=0,\lambda_2 \not=0}} \Big(d'd'' \Big( G_{\lambda_1,\lambda_2}^{s_1,s_2} + c_1(\mathscr{L}_1,\|\ \|_1) \wedge G_{\lambda_2}^{s_2} + c_1(\mathscr{L}_2,\|\ \|_2) \wedge G_{\lambda_1}^{s_1}\Big)\Big) \\ & \qquad + [{\rm div}(s_1)] \wedge [{\rm div}(s_2)] = \big[c_1(\mathscr{L}_1,\|\ \|_1) \wedge c_1(\mathscr{L}_2,\|\ \|_2)\big], \end{split} \end{equation} o\`u le produit de courants $[{\rm div}(s_1)] \wedge [{\rm div}(s_2)]$ est d\'efini localement comme l'est le courant $[{\rm div}(f_1)]\wedge [{\rm div}(f_2)]$ dans la proposition \ref{prop2fonctions} \`a partir des fonctions m\'eromorphes coordonn\'ees $f_1$ et $f_2$ respectivement de $s_1$ et $s_2$ dans les rep\`eres locaux pour les fibr\'es $\mathscr{L}_1$ et $\mathscr {L}_2$. \end{prop} \begin{proof} La preuve est similaire \`a celle de la proposition \ref{prop2fonctions}.
On note encore $Z_1$ et $Z_2$ les sous-espaces analytiques ferm\'es (au sens de Zariski) de $U$ d\'efinis comme les supports des courants $[{\rm div}(s_1)]$ et $[{\rm div}(s_2)]$ et $U_{s_j} :=U \setminus Z_j$ ($j=1,2$). Notons $\iota_j~: Z_j \rightarrow U$ les morphismes de $\mathbb{K}$-espaces analytiques correspondant aux inclusions $Z_j\subset U$ (o\`u $j=1,2$). Soit $\omega\in \mathscr{A}^{n-1,n-1}_c(U)$. Du fait de l'hypoth\`ese ${\rm codim}_U \big({\rm Supp}\, \big([{\rm div}(s_1)]\big) \cap {\rm Supp}\, \big([{\rm div}(s_2)]\big)\big)\geq 2$, il r\'esulte du lemme 3.2.5 de \cite{ChLD} et de la d\'efinition de la dimension locale $d_{\mathbb{K}}(x)$ ($x\in U$) comme le minimum des dimensions $\mathbb{K}$-analytiques des domaines $\mathbb{K}$-affino\"ides qui contiennent $x$ (voir par exemple \cite[d\'efinition 1.16]{Duc07}), que le support de la $(n-1,n)$-forme diff\'erentielle $d''\omega$ ne rencontre pas le sous-ensemble de Zariski $Z_1\cap Z_2$. D'apr\`es le lemme de partitionnement de l'unit\'e \cite[proposition 3.3.6]{ChLD}, on peut introduire dans $U$ une partition de l'unit\'e $1= \sum_\iota \varphi_\iota$ (par des fonctions lisses \`a support compact), subordonn\'ee au recouvrement du compact ${\rm Supp}\, (d''\omega)$ de $U$ par les deux ouverts $U_{s_1}$ et $U_{s_2}$. Pour chaque indice $\iota$, l'int\'egrale $$ - \int_{U} \frac{\|s_2\|_2^{\lambda_2}}{\lambda_2} \, d'd'' \Big(\frac{\|s_1\|_1^{\lambda_1}}{\lambda_1}\Big)\wedge \varphi_\iota\, \omega $$ est bien d\'efinie. Il est donc clair que l'on d\'efinit l'action d'un courant de bidimension $(n-1,n-1)$ en posant \begin{equation}\label{green1aux} \big\langle G_{\lambda_1,\lambda_2}^{s_1,s_2},\omega \big\rangle := - \sum\limits_\iota \int_{U} \frac{\|s_2\|_2^{\lambda_2}}{\lambda_2} \, d'd'' \Big(\frac{\|s_1\|_1^{\lambda_1}}{\lambda_1}\Big)\wedge \varphi_\iota\, \omega.
\end{equation} Il r\'esulte de \eqref{poinclel2} et \eqref{eqgreen1} que l'on a respectivement dans $U_{s_2}$ et $U_{s_1}$ les \'egalit\'es suivantes : \begin{multline}\label{eqgreen2} \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1\not=0,\lambda_2 \not=0}} d'd''\Big(c_1(\mathscr L_1,\|\ \|_1)\wedge G_{\lambda_2}^{s_2}\Big)= c_1(\mathscr L_1,\|\ \|_1)\wedge\big( -[{\rm div}(s_2)] + c_1(\mathscr{L}_2,\|\ \|_2)\big)\\ \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1\not=0,\lambda_2 \not=0}} d'd''\Big(c_1(\mathscr L_2,\|\ \|_2)\wedge G_{\lambda_1}^{s_1}\Big)= c_1(\mathscr L_2,\|\ \|_2)\wedge\big( -[{\rm div}(s_1)] + c_1(\mathscr{L}_1,\|\ \|_1)\big). \end{multline} Il r\'esulte aussi de la proposition \ref{prop2fonctions} que dans chacun des deux ouverts $U_{s_j}$, $j=1,2$, on a \begin{equation}\label{eqgreen3} \begin{split} & \lim\limits_{\stackrel{(\lambda_1,\lambda_2) \rightarrow (0,0)}{\lambda_1\not=0,\lambda_2 \not=0}} \big(d'd'' \big(G_{\lambda_1,\lambda_2}^{s_1,s_2}\big)\big) = \big(c_1(\mathscr{L}_2,\|\ \|_2) - [{\rm div}(s_2)]\big)\wedge \big([{\rm div}(s_1)] - c_1(\mathscr{L}_1,\|\ \|_1)\big) \\ & \qquad = - [{\rm div}(s_1)] \wedge [{\rm div}(s_2)] - \big[c_1(\mathscr{L}_1,\|\ \|_1) \wedge c_1(\mathscr{L}_2,\|\ \|_2)\big] \\ & \qquad \qquad \qquad + c_1(\mathscr{L}_1,\|\ \|_1) \wedge [{\rm div}(s_2)] + c_1(\mathscr{L}_2,\|\ \|_2) \wedge [{\rm div}(s_1)]. \end{split} \end{equation} Du fait de la possibilit\'e de d\'ecomposer $\langle G_{\lambda_1,\lambda_2}^{s_1,s_2},\omega \rangle$ sous la forme \eqref{green1aux} suivant une partition de l'unit\'e subordonn\'ee \`a un recouvrement de l'adh\'erence d'un voisinage ouvert de ${\rm Supp}(\omega)$ par des ouverts dans lesquels une des sections $s_j$ au moins est r\'eguli\`ere et inversible, cette relation asymptotique entre courants est valide dans $U$ tout entier.
En combinant \eqref{eqgreen2} et \eqref{eqgreen3} et en tenant compte de \eqref{prod2diviseurs}, on obtient bien la relation asymptotique \eqref{green1} voulue. \end{proof} \vskip 1mm \noindent Par r\'ecurrence sur l'entier $p=2,...,n$, nous sommes en mesure de d\'emontrer le r\'esultat suivant, pendant naturel du th\'eor\`eme \ref{theorempfonctions}. \begin{theorem}\label{theorempfonctions2} Soient $\mathscr{L}_j\rightarrow U$, $j=1,...,p$ ($p\geq 2$), des fibr\'es en droites au-dessus d'un ouvert $U$ d'un bon $\mathbb{K}$-espace analytique $X$ au sens de Berkovich sans bord, chacun \'equip\'e d'une m\'etrique lisse $\|\ \|_j$. Pour chaque $j=1,...,p$, soit $s_j$ une section m\'eromorphe du fibr\'e $\mathscr{L}_j$ dans $U$. On suppose que pour tout $1\leq j_1 < \dots < j_k \leq p$ (avec $k=1,...,p$) on a ${\rm codim}_{U} \Big(\bigcap_{\ell=1}^k {\rm Supp}\, \big([{\rm div}(s_{j_\ell})]\big)\Big)\geq k$ comme au th\'eor\`eme {\rm \ref{theorempfonctions}}. Pour tout $(\lambda_1,...,\lambda_p) \in (\mathbb{R}^*)^p$, on peut d\'efinir l'action d'un courant $G^{s_1,...,s_p}_{\lambda_1,...,\lambda_p}$ de $\mathscr{D}_{n-p+1,n-p+1}(U)$ par $$ G_{\lambda_1,...,\lambda_p}^{s_1,...,s_p}~: \omega \in \mathscr{A}^{n-p+1,n-p+1}_c(U) \longmapsto - \int_{U} \frac{\|s_p\|_p^{\lambda_p}}{\lambda_p}\, \bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{\|s_j\|^{\lambda_j}_j}{\lambda_j}\Big)\wedge \omega $$ apr\`es avoir d\'ecoup\'e cette int\'egrale suivant un partitionnement de l'unit\'e $1=\sum_\iota \varphi_\iota$ subordonn\'e au support de $d''\omega$ afin d'en assurer la convergence.
De plus on a, au sens de la convergence faible des courants sur $U$, \begin{equation}\label{greentheo} \begin{split} & \lim\limits_{\stackrel{(\lambda_1,...,\lambda_p) \rightarrow (0,...,0)}{\lambda_1\not=0,...,\lambda_p \not=0}} \Big(d'd'' \Big(G^{s_1,...,s_p}_{\lambda_1,...,\lambda_p} + \sum\limits_{k=1}^{p-1}\sum\limits_{1\leq j_1<\dots < j_k\leq p} \Big(\bigwedge_{j \not= j_1,...,j_k} c_1(\mathscr{L}_{j},\|\ \|_{j})\Big) \wedge G^{s_{j_1},...,s_{j_k}}_{\lambda_{j_1},...,\lambda_{j_k}}\Big)\Big) \\ & \qquad + \bigwedge\limits_{j=1}^p [{\rm div}(s_j)] = \Big[\bigwedge\limits_{j=1}^p c_1(\mathscr{L}_j,\|\ \|_j)\Big], \end{split} \end{equation} o\`u le produit de courants $[{\rm div}(s_1)] \wedge \dots \wedge [{\rm div}(s_p)]$ est d\'efini localement comme l'est le courant $[{\rm div}(f_1)]\wedge \dots \wedge [{\rm div}(f_p)]$ dans le th\'eor\`eme \ref{theorempfonctions} \`a partir des fonctions m\'eromorphes coordonn\'ees $f_1,...,f_p$ des $s_j$ dans les rep\`eres locaux pour les fibr\'es $\mathscr {L}_j$, $j=1,...,p$. \end{theorem} \begin{proof} La preuve est calqu\'ee sur celle du th\'eor\`eme \ref{theorempfonctions}. Le r\'esultat est acquis pour $p=2$ d'apr\`es la proposition \ref{prop2fonctions2}. On suppose donc le r\'esultat acquis pour $p-1$ fibr\'es en droites ($p\geq 3$). On note, pour $j=1,...,p$, $Z_j$ les sous-espaces analytiques ferm\'es (au sens de Zariski) de $U$ de codimension $1$ d\'efinis comme les supports des courants $[{\rm div}(s_j)]$. Pour chaque $j=1,...,p$, on note $\widehat{Z}_j$ l'intersection des sous-ensembles $\mathbb{K}$-analytiques $Z_\ell$ pour $\ell=1,...,j-1,j+1,...,p$. On note $U_{s_j}$ le plus grand ouvert de $U$ dans lequel la section $s_j$ est localement r\'eguli\`ere et inversible. Soit $\omega \in \mathscr{A}_c^{n-p+1,n-p+1}(U)$. 
On est maintenant en mesure de donner le sens suivant \`a l'expression (en tenant compte de la d\'emarche conduisant \`a \eqref{limitepfonctions}) \begin{equation}\label{expressionpsecctions} - \int_{U} d'\Big(\frac{\|s_p\|_p^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{\|s_j\|_j^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge d''(\varphi_\iota \omega) \end{equation} suivant que le support de $\varphi_\iota$ est inclus dans l'un des $U_{s_j}$ pour $j=1,...,p-1$ ou que le support de $\varphi_\iota$ est inclus dans $U_{s_p}$. \begin{itemize} \item Si ${\rm Supp}\, \varphi_\iota \subset U_{s_{j_0}}$ pour un indice $j_0$ entre $1$ et $p-1$ et si ${\rm codim}_{U}\, \widehat{Z}_{j_0} = p-1$, on peut consid\'erer $\widehat{Z}_{j_0}$ comme un $\mathbb{K}$-espace analytique de dimension $n-p+1$, et l'on d\'efinit alors l'expression \eqref{expressionpsecctions} par \begin{multline}\label{expressionpsectionscas1} - \int_{U} d'\Big(\frac{\|s_p\|_p^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{\|s_j\|_j^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge d''(\varphi_\iota \omega)\\ := - \Big\langle d' \Big( \Big[\frac{\|s_p\|_p^{\lambda_p}}{\lambda_p}\Big]\, d'd''\, G_{\lambda_1,...,\widehat{\lambda_{j_0}},...,\lambda_{p-1}}^{s_1,...,\widehat{s_{j_0}},...,s_{p-1}}\Big)\,,\, d'd''\Big( \frac{\|s_{j_0}\|_{j_0}^{\lambda_{j_0}}}{\lambda_{j_0}}\Big) \wedge d''(\varphi_\iota \omega)\Big\rangle \\ = - \lambda_{j_0} \, \Big\langle d' \Big( \Big[\frac{\|s_p\|_p^{\lambda_p}}{\lambda_p}\Big]\, d'd''\, G_{\lambda_1,...,\widehat{\lambda_{j_0}},...,\lambda_{p-1}}^{s_1,...,\widehat{s_{j_0}},...,s_{p-1}}\Big)\,,\, \|s_{j_0}\|_{j_0}^{\lambda_{j_0}}\, d'(\log \|s_{j_0}\|_{j_0}) \wedge d''(\log \|s_{j_0}\|_{j_0}) \wedge d''(\varphi_\iota\omega) \Big\rangle, \end{multline} et (d'apr\`es l'hypoth\`ese de r\'ecurrence) on a aussi $$ \big\langle G_{\lambda_1,...,\lambda_p}^{s_1,...,s_p}\,,\, \varphi_\iota \omega \big\rangle := \Big\langle
G_{\lambda_1,...,\widehat{\lambda_{j_0}},...,\lambda_p}^{s_1,...,\widehat{s_{j_0}},...,s_p}, \varphi_\iota \omega \wedge d'd'' \Big(\frac{\|s_{j_0}\|_{j_0}^{\lambda_{j_0}}}{\lambda_{j_0}}\Big) \Big\rangle. $$ \item Si ${\rm Supp}\, \varphi_\iota \subset U_{s_p}$, on d\'efinit l'expression \eqref{expressionpsecctions} par \begin{multline}\label{expressionpsectionscas2} - \int_{U} d'\Big(\frac{\|s_p\|_p^{\lambda_p}}{\lambda_p}\Big) \wedge \Big(\bigwedge\limits_{j=1}^{p-1} d'd'' \Big(\frac{\|s_j\|_j^{\lambda_j}}{\lambda_j}\Big)\Big)\wedge d''(\varphi_\iota \omega)\\ := - \Big\langle d'd''\, G_{\lambda_1,...,\lambda_{p-1}}^{s_1,...,s_{p-1}}\,,\, d'\Big(\frac{\|s_p\|_p^{\lambda_p}}{\lambda_p}\Big)\wedge d''(\varphi_\iota\omega)\Big\rangle, \end{multline} et toujours suivant l'hypoth\`ese de r\'ecurrence $$ \big\langle G_{\lambda_1,...,\lambda_p}^{s_1,...,s_p}\,,\, \varphi_\iota \omega \big\rangle := \Big\langle d'd'' G_{\lambda_1,...,\lambda_{p-1}}^{s_1,...,s_{p-1}}\,,\, \varphi_\iota \omega \, \frac{\|s_{p}\|_p^{\lambda_{p}}}{\lambda_{p}} \Big\rangle. $$ \end{itemize} On d\'efinit l'action du courant $G_{\lambda_1,...,\lambda_p}^{s_1,...,s_p}$ en exploitant un partitionnement de l'unit\'e, par des ouverts tous inclus dans au moins un $U_{s_j}$, de l'adh\'erence d'un voisinage ouvert du support de $\omega$~: $$ \big\langle G_{\lambda_1,...,\lambda_p}^{s_1,...,s_p}\,,\, \omega \big\rangle = \sum_\iota \big\langle G_{\lambda_1,...,\lambda_p}^{s_1,...,s_p}\,,\, \varphi_\iota\, \omega \big\rangle.
$$ Il r\'esulte du th\'eor\`eme \ref{theorempfonctions} et des \'egalit\'es \eqref{eqgreen3}, \eqref{expressionpsectionscas1} et \eqref{expressionpsectionscas2} que dans chaque ouvert $U_{s_j}$ ($j=1,...,p$), on a, pour la convergence au sens de la limite faible des courants dans $U$, \begin{equation}\label{green2} \begin{split} & \lim\limits_{\stackrel{(\lambda_1,...,\lambda_p) \rightarrow (0,...,0)}{\lambda_1\not=0,...,\lambda_p \not=0}} \Big(d'd'' (G^{s_1,...,s_p}_{\lambda_1,...,\lambda_p})\Big) = -\bigwedge\limits_{j=1}^p \Big( [{\rm div}(s_j)] - c_1(\mathscr{L}_j,\|\ \|_j)\Big) \\ & = - [{\rm div}(s_1)] \wedge \dots \wedge [{\rm div}(s_p)] + (-1)^{p-1}\Big[\bigwedge\limits_{j=1}^p c_1(\mathscr{L}_j,\|\ \|_j)\Big] + \\ & + \sum\limits_{k=1}^{p-1} \sum\limits_{1\leq j_1 < \dots < j_k\leq p} (-1)^{p-1-k} \Big(\bigwedge\limits_{\ell =1}^k [{\rm div}(s_{j_\ell})]\Big) \wedge \Big(\bigwedge\limits_{j\not = j_1,...,j_k} c_1(\mathscr{L}_j,\|\ \|_j)\Big), \end{split} \end{equation} avec \begin{multline*} -\bigwedge\limits_{j=1}^p \Big( [{\rm div}(s_j)] - c_1(\mathscr{L}_j,\|\ \|_j)\Big)~:=\\ \begin{cases} \Big(c_1(\mathscr L_p,\|\ \|_p)-[{\rm div}(s_p)]\Big)\bigwedge\limits_{j=1}^{p-1} \Big( [{\rm div}(s_j)] - c_1(\mathscr{L}_j,\|\ \|_j)\Big) \mbox{ dans } U_{s_p}\\ \Big(c_1(\mathscr L_{j_0},\|\ \|_{j_0})-[{\rm div}(s_{j_0})]\Big)\bigwedge\limits_{\stackrel{ j=1}{j\neq j_0}}^p \Big( [{\rm div}(s_j)] - c_1(\mathscr{L}_j,\|\ \|_j)\Big) \mbox{ dans } U_{s_{j_0}}. \end{cases} \end{multline*} La formule asymptotique \eqref{green2} est donc valide au sens des courants dans $U$ puisque l'on peut utiliser un partitionnement de l'unit\'e subordonn\'e au recouvrement du support ${\rm Supp}\, \omega$ d'une forme test par les $U_{s_j}$.
Pour chaque valeur de $k$ entre $1$ et $p-1$, pour chaque suite de $k$ indices distincts $1\leq j_1< \dots < j_k \leq p$, on substitue au second membre de la relation \eqref{green2} les relations asymptotiques \begin{equation*} \begin{split} & \bigwedge\limits_{\ell = 1}^k [{\rm div}(s_{j_\ell})] = \Big[\bigwedge_{\ell =1}^k c_1(\mathscr{L}_{j_\ell},\|\ \|_{j_\ell})\Big] \\ & - \lim\limits_{\stackrel{(\lambda_{j_1},...,\lambda_{j_k}) \rightarrow (0,...,0)}{\lambda_{j_1}\not=0,...,\lambda_{j_k} \not=0}} \Big(d'd'' \Big(G^{s_{j_1},...,s_{j_k}}_{\lambda_{j_1},...,\lambda_{j_k}} \\ & \qquad \qquad \qquad \qquad \qquad \quad + \sum\limits_{\kappa=1}^{k-1}\sum\limits_{1\leq \iota_1<\dots < \iota_\kappa\leq k} \Big(\bigwedge_{\iota \not= \iota_1,...,\iota_{\kappa}} c_1(\mathscr{L}_{j_\iota},\|\ \|_{j_\iota})\Big) \wedge G^{s_{j_{\iota_1}},...,s_{j_{\iota_\kappa}}}_{\lambda_{j_{\iota_1}},...,\lambda_{j_{\iota_{\kappa}}}}\Big)\Big) \end{split} \end{equation*} avant de regrouper dans le membre de gauche de \eqref{green2} ainsi transform\'e tous les termes s'exprimant comme des limites (et devant lesquels figure l'op\'erateur $d'd''$). \end{proof} \noindent Le th\'eor\`eme \ref{theorempfonctions2} est \`a rapprocher de la construction de courants de Green normalis\'es inspir\'ee par la m\'ethode de prolongement analytique, telle qu'elle est par exemple d\'ecrite dans \cite[section 3]{BY98}. On note que dans ce nouveau cadre on dispose de $p$ param\`etres $\lambda_1,...,\lambda_p$ (au lieu d'un seul, comme dans \cite[proposition 4]{BY98}) pour construire une solution $G^{s_1,...,s_p}_{\lambda_1,...,\lambda_p}$ \`a une approximation de l'\'equation de Green normalis\'ee \begin{equation}\label{greennorm} d'd'' G + \bigwedge\limits_{j=1}^p [{\rm div}(s_j)] = \Big[\bigwedge\limits_{j=1}^p c_1(\mathscr{L}_j,\|\ \|_j)\Big].
\end{equation} Il est en revanche possible (dans ce cadre des espaces $\mathbb{K}$-analytiques au sens de Berkovich) de supposer les sections $s_j$ m\'eromorphes et non seulement holomorphes comme c'\'etait le cas dans le cadre analytique complexe~; le fait que toute $(\ell,k)$-forme lisse \`a support compact sur un bon espace analytique $Y$ de dimension $k$ soit telle que son support \'evite tout ferm\'e de Zariski d'int\'erieur vide de $Y$ (voir \cite[lemme 3.2.5]{ChLD}) joue dans ce cadre non archim\'edien un r\^ole majeur. En revanche, il convient de faire, lorsque l'on travaille dans un tel cadre, une hypoth\`ese plus forte concernant les supports des diviseurs que celle consistant \`a juste supposer que ces supports s'intersectent proprement~; il est n\'ecessaire en effet de supposer que c'est aussi le cas pour toute sous-famille extraite de la famille des supports des $s_j$, $j=1,...,p$. \vskip 1mm \noindent Pour construire une solution $G$ \`a l'\'equation de Green normalis\'ee \eqref{greennorm} (et non seulement une solution \`a une approximation de cette \'equation suivant \eqref{greentheo}), il convient par exemple de complexifier le $\mathbb{R}$-espace vectoriel $\mathscr{D}_{n-p+1,n-p+1}(U)$ et de former, dans ce complexifi\'e $\mathscr{D}_{n-p+1,n-p+1}(U)\otimes_\mathbb{R} \mathbb{C}$, le courant \begin{equation}\label{greenexplicit} \begin{split} & G^{s_1,...,s_p} := \frac{1}{(2i\pi)^p} \times \\ & \int_{\Gamma_{r_1,...,r_p}} \Big(G^{s_1,...,s_p}_{\lambda_1,...,\lambda_p} + \sum\limits_{k=1}^{p-1}\sum\limits_{1\leq j_1<\dots < j_k\leq p} \Big(\bigwedge_{j \not= j_1,...,j_k} c_1(\mathscr{L}_{j},\|\ \|_{j})\Big) \wedge G^{s_{j_1},...,s_{j_k}}_{\lambda_{j_1},...,\lambda_{j_k}}\Big)\, \bigwedge_{j=1}^p\frac{d\lambda_j}{\lambda_j} \end{split} \end{equation} o\`u $r_1,...,r_p>0$, $$ \Gamma_{r_1,...,r_p}~: (t_1,...,t_p)\in [0,1]^p \mapsto(r_1 e^{2i\pi t_1},...,r_p e^{2i\pi t_p}) = (\lambda_1,...,\lambda_p).
$$ Il est en effet possible de supposer dans les th\'eor\`emes \ref{theorempfonctions} et \ref{theorempfonctions2} que les param\`etres $\lambda_1,...,\lambda_p$ sont dans $\mathbb{C}^*$ et non plus dans $\mathbb{R}^*$. Le courant \og moyen\fg\ $G^{s_1,...,s_p}$ ainsi construit est un courant r\'eel puisque $\overline{G^{s}_\lambda + \cdots} = G^s_{\bar \lambda} + \cdots$ et que la forme $\Gamma_{r_1,...,r_p}^* \big(\bigwedge d\lambda_j/(2i\pi \lambda_j)\big)$ est la forme r\'eelle $\bigwedge_j \big(d\theta_j/(2\pi)\big)$. Ce courant d\'epend naturellement de l'ordre dans lequel sont consid\'er\'es les fibr\'es $\mathscr{L}_1,...,\mathscr{L}_p$ et les sections m\'eromorphes qui y sont attach\'ees. Il r\'esulte des th\'eor\`emes \ref{theorempfonctions} et \ref{theorempfonctions2} (repris en supposant cette fois les $\lambda_j$ dans $\mathbb{C}^*$) que le courant $G^{s_1,...,s_p}$ est solution de l'\'equation de Green normalis\'ee \eqref{greennorm}. \section{Approche de type Mellin des courants de Vogel dans le cadre alg\'ebrique}\label{sectionVogel} Dans cette section, nous nous pla\c cons dans le cadre alg\'ebrique et consid\'erons une vari\'et\'e alg\'ebrique projective $X$ de dimension $n$ d\'efinie au-dessus du corps valu\'e $\mathbb{K}$, un entier $m\in \mathbb{N}^*$, et la vari\'et\'e alg\'ebrique projective produit $\mathbb{P}^m_\mathbb{K} \times X$ de dimension $n+m$. On se donne un fibr\'e en droites $L_X \rightarrow X$ au-dessus de $X$ et des sections globales $s_0,...,s_m$ du fibr\'e $L_X$ au-dessus de $X$. Comme le foncteur d'analytification est compatible avec le produit fibr\'e, on a $\big(\mathbb{P}^m_\mathbb{K}\times X\big)^{\rm an} = (\mathbb{P}^m_\mathbb{K})^{\rm an} \times X^{\rm an}$. \vskip 1mm \noindent Soit $\|\ \|$ une m\'etrique semi-positive sur le fibr\'e en droites $\mathcal O_{\mathbb{P}^m_\mathbb{K}} (1) \rightarrow \mathbb{P}^m_\mathbb{K}$.
On sait (voir \cite{ChL06, ChL11, Gub08, BFJ}, \cite[section 6.9]{ChLD} ou aussi le survey \cite[section 3.3]{Yuan}) lui associer une mesure de Monge-Amp\`ere que l'on note $\big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}$ sur l'analytification $(\mathbb{P}^m_\mathbb{K})^{\rm an}$ telle que $$ \int_{(\mathbb{P}^m_\mathbb{K})^{\rm an}} \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}(\kappa) = \deg_{\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)} (\mathbb{P}^m_\mathbb{K}) = 1. $$ \vskip 1mm \noindent Lorsque le fibr\'e ainsi m\'etris\'e $\overline{\big(\mathscr O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}}$ est un fibr\'e vectoriel PL (voir \cite[d\'efinition 6.2.9]{ChLD}, ceci signifiant essentiellement que l'on puisse disposer localement de rep\`eres orthonorm\'es), la mesure de Monge-Amp\`ere $\big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}$ est une mesure atomique support\'ee par un sous-ensemble discret $S_{\|\ \|}$, i.e.\ il existe des r\'eels positifs $\gamma_\eta$ tels que pour toute fonction $\varphi$ continue de $(\mathbb{P}^m_\mathbb{K})^{\rm an}$ dans $\mathbb{R}$ (\cite{ChLD}, proposition 6.9.2 et d\'efinition 6.7.2 pour la d\'efinition de $S_{\|\ \|}$) \begin{equation}\label{dirac} \int_{(\mathbb{P}^m_\mathbb{K})^{\rm an}} \varphi(\kappa) \, \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}(\kappa) = \sum\limits_{\eta\in S_{\|\ \|}} \gamma_\eta\, \varphi(\eta). \end{equation} On supposera par la suite que l'on est toujours dans cette situation (m\'etrique $\|\ \|$ semi-positive et fibr\'e m\'etris\'e $\overline{\big(\mathscr O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}}$ PL)~; si la m\'etrique n'est plus semi-positive mais que le fibr\'e m\'etris\'e $\overline{\big(\mathscr O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}}$ est toujours PL, les masses $\gamma_\eta$ dans \eqref{dirac} sont des nombres r\'eels non n\'ecessairement positifs ou nuls.
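\vskip 1mm \noindent En prenant $\varphi \equiv 1$ dans \eqref{dirac} (choix licite puisque $(\mathbb{P}^m_\mathbb{K})^{\rm an}$ est compact), on retrouve la normalisation des masses~:
\begin{equation*}
\sum\limits_{\eta\in S_{\|\ \|}}\gamma_\eta
=\int_{(\mathbb{P}^m_\mathbb{K})^{\rm an}} \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}(\kappa)
=1.
\end{equation*}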
\begin{example} {\rm Dans le cas particulier o\`u $\|\ \|$ d\'esigne la m\'etrique standard $$ \|\langle \kappa,z\rangle\|_{\rm std} = \frac{|\langle \kappa,z\rangle|}{\max (|z_0|,...,|z_m|)},\quad \kappa \in \mathbb{K}^{m+1}\setminus \{(0,...,0)\},\quad z =[z_0:\dots : z_m] $$ (qui est bien semi-positive, se r\'ef\'erer par exemple \`a la section 1.3 de \cite{ChL11}), la mesure de Monge-Amp\`ere $\big(c_1(\mathscr O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}$ sur $(\mathbb{P}^m_\mathbb{K})^{\rm an}$ qui lui est attach\'ee est la mesure de Dirac $\delta_\xi$ au point de Gau\ss. } \end{example} \begin{remark}{\rm Dans le cadre archim\'edien ($\mathbb{K}=\mathbb{C}$), la mesure sur $\mathbb{P}^m_\mathbb{C}$ construite sur le m\^eme principe que celui sur lequel est construite $\big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}$ s'obtient comme image directe de la mesure de Haar normalis\'ee sur le tore $$ \{[z_0:\dots : z_m] \in \mathbb{P}^m_\mathbb{C}\,;\, |z_0| = \dots = |z_m|\}. $$ Notons que la m\'etrique $\|\ \|_{\rm std}$ est continue mais non lisse.
Toujours dans ce cadre archim\'edien, mais lorsque la m\'etrique $\|\ \|$ est la m\'etrique de Fubini-Study (qui, elle, est lisse) $$ \|\langle \kappa,z\rangle\|_{\rm fs} = \frac{|\langle \kappa,z\rangle|} {\sqrt{|z_0|^2+\dots + |z_m|^2}},\quad \kappa \in \mathbb{C}^{m+1}\setminus \{(0,...,0)\},\quad z =[z_0:\dots : z_m], $$ on obtient naturellement $\big(c_1(\mathscr O_{\mathbb{P}^m_\mathbb{C}}(1),\|\ \|_{\rm fs})\big)^{\wedge^m} = (dd^c \log \|z\|^2)^{\wedge^m}$, m\'etrique pour laquelle on rappelle que l'on dispose de la formule de Crofton~: si $f_0,...,f_m$ sont $m+1$ \'el\'ements de $\mathcal O_{X}(U)$ (o\`u $U$ d\'esigne un ouvert d'un espace analytique complexe $X$), on a, lorsque les $f_j$ n'ont aucun z\'ero commun dans $U$~: \begin{equation}\label{crofton} dd^c (\log \|f(x)\|^2_{\rm eucl}) = \int_{[\kappa_0:\cdots : \kappa_m] \in \mathbb{P}^m_\mathbb{C}} \big[{\rm div}(\langle \kappa,f(x)\rangle)\big] \wedge \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{C}}(1),\|\ \|_{\rm fs})\big)^{\wedge^m}(\kappa), \end{equation} $\|\ \|_{\rm eucl}$ d\'esignant la norme euclidienne sur $\mathbb{C}^{m+1}$ (voir par exemple \cite{ASWY14}, lemme 6.3) et $f=(f_0,...,f_m)$.} \end{remark} \vskip 2mm \noindent Dans le cadre non archim\'edien (alg\'ebrique), nous pouvons \'enoncer ce qui peut \^etre consid\'er\'e comme le pendant de la formule de Crofton.
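\vskip 1mm \noindent Avant cela, illustrons \eqref{crofton} sur un cas test classique (pour fixer les id\'ees, $m=1$, $U=\mathbb{C}$ et $f=(1,z)$, sans z\'ero commun)~:
\begin{equation*}
dd^c\big(\log (1+|z|^2)\big)
=\int_{[\kappa_0:\kappa_1]\in \mathbb{P}^1_\mathbb{C}} \big[{\rm div}(\kappa_0+\kappa_1 z)\big]\; c_1(\mathcal O_{\mathbb{P}^1_\mathbb{C}}(1),\|\ \|_{\rm fs})(\kappa)~;
\end{equation*}
chaque courant $[{\rm div}(\kappa_0+\kappa_1 z)]$ (pour $\kappa_1\not=0$) est la masse de Dirac au point $z=-\kappa_0/\kappa_1$, et l'identit\'e exprime simplement que la mesure de Fubini-Study sur $\mathbb{C}\subset \mathbb{P}^1_\mathbb{C}$ est l'image directe de celle du $\mathbb{P}^1$ dual par $\kappa \mapsto -\kappa_0/\kappa_1$.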
On consid\`ere les analytifications $\big(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}$ et $\mathscr{L}_X^{\rm an}$ respectivement des fibr\'es en droites $\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)$ et $L_X$ (consid\'er\'es tous deux comme des fibr\'es en droites au-dessus de la vari\'et\'e alg\'ebrique projective produit $\mathbb{P}^m_\mathbb{K}\times X$) et la section du fibr\'e produit $\big(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}\otimes \mathscr{L}_X^{\rm an}$ obtenue en analytifiant la section $(\kappa,z) \mapsto \langle \kappa,s(z)\rangle$ (o\`u $s:=(s_0,...,s_m)$) du fibr\'e en droites produit $\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1) \otimes L_X$. On notera $[{\rm div}(\langle \kappa,s\rangle)]$ le courant d'int\'egration correspondant \`a ce diviseur effectif sur $\big(\mathbb{P}^{m}_\mathbb{K} \times X\big)^{\rm an} = (\mathbb{P}^m_\mathbb{K})^{\rm an} \times X^{\rm an}$. On suppose ici que le fibr\'e m\'etris\'e $\overline{\big(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}}$ est PL.
La formule de Crofton \eqref{crofton} dans ce cadre non archim\'edien s'\'enonce alors ainsi : \'etant donn\'ees des sections holomorphes $s_0,...,s_m$ de $L_X$ telles que $\bigcap_{j=0}^m {\rm Supp}([{\rm div}(s_j)]) = \emptyset$ et $s^{\rm an}_j$ ($j=0,...,m$) leurs analytifications, on a\footnote{Il faut comprendre ici $\langle \kappa,s^{\rm an}\rangle$ comme l'analytification de la section $\langle \kappa,s\rangle$ du fibr\'e $\mathcal O_{\mathbb{P}^m_\mathbb{K}} (1) \otimes L_X\rightarrow \mathbb{P}^m_\mathbb{K}\times X$ en une section du fibr\'e $\big(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}\otimes \mathscr{L}_X^{\rm an} \rightarrow (\mathbb{P}^m_\mathbb{K})^{\rm an} \times X^{\rm an}$.}~: \begin{equation}\label{croftonarch} \begin{split} & d'd''\Big(\sum\limits_{\eta \in S_{\|\ \|}} \gamma_\eta\, \big[\log \|\langle \eta,s^{\rm an}(x)\rangle\|\big]\Big) \\ & = \int_{\kappa \in (\mathbb{P}^m_\mathbb{K})^{\rm an}} \big[{\rm div} (\langle \kappa,s^{\rm an}(x)\rangle)\big]\wedge \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}(\kappa). \end{split} \end{equation} Elle se r\'eduit pour la m\'etrique standard \`a l'\'equation de Lelong-Poincar\'e $$ d'd'' \big(\big[\log \|\langle \xi,s^{\rm an}(x)\rangle\|_{\rm std}\big]\big) = \big[{\rm div}\langle \xi,s^{\rm an}(x)\rangle\big], $$ o\`u $\xi\in (\mathbb{P}^m_\mathbb{K})^{\rm an}$ d\'esigne le point de Gau\ss. \vskip 2mm \noindent Soit $\pi~:\widehat{X}^{\rm an}\to X^{\rm an}$ un \'eclatement normalis\'e de $X^{\rm an}$ (\cite{Con99}, commentaire avant le lemme 2.2.1). Les composantes irr\'eductibles de $X^{\rm an}$ sont des sous-ensembles analytiques de la forme $U_i=\pi(\widehat{U_i})$, o\`u les $\widehat{U_i}$ sont les composantes connexes de $\widehat{X}^{\rm an}$. On dit que $X^{\rm an}$ est irr\'eductible s'il est non vide et admet une unique composante irr\'eductible (\cite[lemme 2.2.1 et d\'efinition 2.2.2]{Con99}).
\vskip 1mm \noindent Pour d\'efinir une approche de type Mellin du cycle de Vogel attach\'e \`a une famille $(s_0,...,s_m)$ de sections d'un fibr\'e en droites $L_X \rightarrow X$, une fois choisie une m\'etrique lisse $\|\ \|_{L_X}$ sur le fibr\'e $\mathscr{L}_X^{\rm an} \rightarrow X^{\rm an}$, il suffit d'exploiter de mani\`ere it\'erative le lemme suivant, directement inspir\'e de \cite[lemme 3.1]{ASWY14}. \begin{lemma}\label{lemmeintersection} Soit $U$ un ouvert d'un bon $\mathbb{K}$-espace analytique au sens de Berkovich $X^{\rm an}$ de dimension $n$, $$ Z = \sum\limits_{\iota} \mu_\iota\, Z_\iota $$ une combinaison formelle localement finie de sous-ensembles analytiques de $U$ de dimension pure $n-p$ ($1\leq p \leq n-1$) et $s$ une section holomorphe d'un fibr\'e en droites $\mathscr{L}^{\rm an}\rightarrow U$ \'equip\'e d'une m\'etrique lisse $\|\ \|$ de premi\`ere forme de Chern $c_1(\mathscr{L}^{\rm an},\|\ \|)$. On note \begin{equation*} \begin{split} & Z^{{\rm div}(s)} := \sum\limits_{\big\{\iota\,;\, {\rm Supp} (Z_\iota)\, \subset\, {\rm Supp} ({\rm div}(s))\big\}} \mu_\iota Z_\iota \\ & Z^{U \setminus {\rm div}(s)} := \sum\limits_{\big\{\iota\,;\, {\rm Supp} ( Z_\iota)\, \not\subset\, {\rm Supp} ({\rm div}(s))\big\}} \mu_\iota Z_\iota. \end{split} \end{equation*} Soit $\lambda\in\mathbb{C}$ tel que ${\rm Re}\,\lambda >0$. On d\'efinit $\tilde T^s_\lambda \in \big(\mathscr{D}_{n-p,n-p}(U) \oplus \mathscr{D}_{n-p-1,n-p-1}(U)\big)\otimes_\mathbb{R} \mathbb{C}$ comme \begin{equation} \tilde T^s_\lambda := \sum\limits_{\iota} \mu_\iota\, \Big([1 - \|s\|^{\lambda}] + \big[\|s\|^\lambda \ c_1(\mathscr{L}^{\rm an},\|\ \|)\big] + d'd'' \Big[\frac{\|s\|^\lambda}{\lambda}\Big] \Big) \wedge [Z_\iota], \end{equation} o\`u le courant $[\|s\|^\lambda]\, [Z_\iota]$ est d\'efini \`a partir du lemme 4.6.1 de \cite{ChLD} comme l'image directe par $i_{Z_\iota}~: Z_\iota \rightarrow U$ du courant $[\|s\circ i_{Z_\iota}\|^\lambda]$.
One has $$ \tilde T^s_\lambda = \big[Z^{{\rm div}(s)}\big] +\big[\|s\|^\lambda\, c_1(\mathscr{L}^{\rm an},\|\ \|)\big] \wedge \big[Z^{U \setminus {\rm div}(s)}\big] + d'd'' \big(\Big[\frac{\|s\|^\lambda}{\lambda}\Big]\big)\wedge [Z^{U\setminus {\rm div}(s)}] $$ and, consequently: \begin{equation}\label{vogel1} \lim\limits_{\stackrel{\lambda \rightarrow 0}{\rm Re\, \lambda >0}} \tilde T^s_\lambda = \big[Z^{{\rm div}(s)}\big] + [{\rm div}(s)] \wedge \big[Z^{U \setminus {\rm div}(s)}\big]. \end{equation} \end{lemma} \begin{proof} This lemma follows immediately from the Lelong-Poincar\'e equation. \end{proof} \noindent Following the approach proposed in \cite{ASWY14} (see in particular Theorem 6.2 of that reference) and the transcription \eqref{croftonarch} we have proposed for the Crofton formula in the non-archimedean setting, it is natural to define as follows the Vogel current (and its Mellin-type approach) attached to $m+1$ global sections $s_0,...,s_m$ of a bundle $L_X \rightarrow X$ over a projective variety $X$ defined over a valued field $\mathbb{K}$, once a smooth metric $\|\ \|_{L_X}$ has been chosen on the bundle $\mathscr{L}^{\rm an}$. We denote by $\|\ \|_{L_X,{\rm moy}}$ the metric induced on the bundle $\big(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}\otimes \mathscr{L}_X^{\rm an}$ by the choice of the metrics $\|\ \|$ on $\big(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1)\big)^{\rm an}$ and $\|\ \|_{L_X}$ on $\mathscr{L}^{\rm an}$. 
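Before proceeding, the one-line proof of Lemma \ref{lemmeintersection} can be made explicit; the following case analysis is our own expansion (not in the original text), using only the Lelong-Poincar\'e equation and the elementary limit $(\|s\|^\lambda-1)/\lambda \to \log\|s\|$:

```latex
% Our expansion of the proof of the lemma (not in the original text).
% On a component $Z_\iota$ with ${\rm Supp}(Z_\iota)\subset{\rm Supp}({\rm div}(s))$,
% one has $\|s\circ i_{Z_\iota}\|\equiv 0$, so $[\|s\|^{\lambda}]\wedge[Z_\iota]=0$
% and only $[1-\|s\|^{\lambda}]\wedge[Z_\iota]=[Z_\iota]$ survives.
% On the remaining components, as ${\rm Re}\,\lambda\to 0^{+}$,
\[
\big[1-\|s\|^{\lambda}\big]\wedge[Z_\iota]\;\longrightarrow\;0,
\qquad
\big[\|s\|^{\lambda}\,c_1(\mathscr{L}^{\rm an},\|\ \|)\big]\wedge[Z_\iota]
\;\longrightarrow\;c_1(\mathscr{L}^{\rm an},\|\ \|)\wedge[Z_\iota],
\]
\[
d'd''\Big[\frac{\|s\|^{\lambda}}{\lambda}\Big]\wedge[Z_\iota]
=d'd''\Big[\frac{\|s\|^{\lambda}-1}{\lambda}\Big]\wedge[Z_\iota]
\;\longrightarrow\;
d'd''\big[\log\|s\|\big]\wedge[Z_\iota]
=\big([{\rm div}(s)]-c_1(\mathscr{L}^{\rm an},\|\ \|)\big)\wedge[Z_\iota],
\]
% so the two $c_1$ contributions cancel and the sum of the three limits is
% $[{\rm div}(s)]\wedge[Z_\iota]$, which is exactly \eqref{vogel1}.
```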
\begin{definition} {\rm The Vogel current attached to $s_0,...,s_m$ is defined as the following limit in the (weak) sense of currents on $X^{\rm an}$: \begin{equation} \begin{split} & \lim\limits_{\lambda_\nu \rightarrow 0} \Big( \lim\limits_{\lambda_{\nu-1} \rightarrow 0} \Big( \cdots \Big(\lim\limits_{\lambda_1 \rightarrow 0} \, \int\limits_{\big((\mathbb{P}^m_\mathbb{K})^{\rm an}\big)^\nu} \Big(\bigwedge\limits_{j=1}^\nu \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|)\big)^{\wedge^m}(\kappa_j)\Big) \wedge \\ & \bigwedge\limits_{j=1}^\nu \Big([1 - \|\langle \kappa_j,s^{\rm an}\rangle\|_{L_X,{\rm moy}}^{\lambda_j}] + \big[\|\langle \kappa_j,s^{\rm an}\rangle\|_{L_X,{\rm moy}}^{\lambda_j}\, c_1(L_X,\|\ \|)\big] \\ & \qquad \qquad \qquad \qquad \qquad \quad + d'd''\Big( \Big[\frac{\|\langle \kappa_j,s^{\rm an}\rangle\|_{L_X,{\rm moy}}^{\lambda_j}}{\lambda_j}\Big]\Big)\Big)\Big) \cdots \Big)\Big) \end{split} \end{equation} where $\nu := \min(m+1,n+1)$, the product of currents being justified by Lemma 4.6.1 of \cite{ChLD} once one takes into account \eqref{vogel1} and the fact that the limits in $\lambda_1,...,\lambda_\nu$ are taken one after the other. } \end{definition} \begin{remark} {\rm Since the measure corresponding to the current $[c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|_{\rm moy})]^{\wedge^m}$ is atomic (a linear combination of Dirac masses), it follows from Lemma \ref{lemmeintersection} that the Vogel current is a current of integration on an analytic cycle (not of pure dimension) of $X^{\rm an}$, which we agree to call the (averaged) Vogel cycle.} \end{remark} \section{A Mellin-type approach to Segre currents in the algebraic setting}\label{sectionsegre} Let $X$ be a projective algebraic variety of dimension $n$ defined over $\mathbb{K}$ and $X^{\rm an}$ its Berkovich analytification. 
We consider an algebraic bundle $E_X \rightarrow X$ of rank $m+1$ over $X$ and equip its analytification $E_X^{\rm an} \rightarrow X^{\rm an}$ with a formal PL metric (see \cite[Definition 6.2.9]{ChLD}), denoted $\|\ \|_{E_X^{\rm an}}$, over the analytification $X^{\rm an}$. \begin{example}{\rm If $X^{\rm an}=(\mathbb{P}^n_\mathbb{K})^{\rm an}$ and $E_X^{\rm an} = \big(\mathscr O_X(d_0)\big)^{\rm an} \oplus \dots \oplus \big(\mathscr O_X(d_m)\big)^{\rm an}$, one may equip each $\big(\mathscr O_X (d_j)\big)^{\rm an}$ with the standard metric $$ \|s_j([z_0:\dots :z_n])\|_{\rm std} = \frac{|s_j(z_0,...,z_n)|}{\max_\ell |z_\ell|^{d_j}}, $$ which is a globally psh-approachable metric \cite[Proposition 6.3.2]{ChLD}, and the bundle $E_X^{\rm an}$ with the metric $$ \|s\|_{E_X^{\rm an}} := \max_{j} \|s_j\|_{\rm std}. $$ } \end{example} \vskip 2mm \noindent Let $s \in \mathscr O_X(E_X)$ be a global section of $E_X$, whose analytification we denote by $s^{\rm an}$. Let $\pi~: \widehat X \rightarrow X$ be the normalized blow-up of $X$ along the ideal sheaf of $\mathcal O_X$ induced by $s$, and $\pi^{\rm an}~: \widehat{X}^{\rm an} \rightarrow X^{\rm an}$ its analytification. We denote by $L_{\widehat X}$ the line bundle corresponding to the exceptional divisor $D_s$ of $\pi~: \widehat X \rightarrow X$, and by $\widehat{\mathscr{L}}^{\rm an}$ the bundle that $L_{\widehat X}$ induces over the analytification $\widehat{X}^{\rm an}$. By the very definition of the normalized blow-up $\pi$, one has $\pi^* (s) = \sigma\otimes \tau$, where $\sigma$ is a global section of the line bundle $L_{\widehat X}$ and $\tau$ a non-vanishing section of the bundle $F_{\widehat X} := L_{\widehat X}^{-1} \otimes \pi^* (E_X)$ (of rank $m+1$, like $E_X$, and whose analytification we denote by $F_{\widehat X}^{\rm an}\rightarrow \widehat{X}^{\rm an}$). 
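To fix ideas about the standard metric in the example above, here is a one-line computation; it is our own illustration, and the monomial section $s_j = z_0 z_1$ is chosen purely for the example (it does not appear in the original text):

```latex
% Our illustration: boundedness of the standard metric on a monomial section.
% On $X=\mathbb{P}^1_\mathbb{K}$, take $d_j=2$ and
% $s_j = z_0 z_1 \in H^0(\mathbb{P}^1_\mathbb{K},\mathscr O(2))$.
% At any point of $(\mathbb{P}^1_\mathbb{K})^{\rm an}$ the seminorm is
% multiplicative, hence
\[
\|s_j([z_0:z_1])\|_{\rm std}
=\frac{|z_0|\,|z_1|}{\max(|z_0|,|z_1|)^{2}}\;\leq\;1,
\]
% with equality exactly at the points where $|z_0|=|z_1|$.
```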
\vskip 1mm \noindent As in Section 4 of \cite{ASWY14}, we equip the bundle $L_{\widehat X}$ with the metric $\|\ \|_\tau$ such that $\|\sigma\|_\tau = \|\pi^*(s)\|_{\pi^*(E_X)}$. We denote by $\sigma^{\rm an}$ and $\tau^{\rm an}$ the holomorphic sections of the bundles $\widehat {\mathscr L}^{\rm an}$ and $F_{\widehat X}^{\rm an}$ over $\widehat {X}^{\rm an}$ obtained from $\sigma$ and $\tau$ by analytification, and by $\|\ \|_{\tau^{\rm an}}$ the formal metric thus defined on the bundle $\widehat{\mathscr{L}}^{\rm an}$. The current $-d'd'' \log \|\tau^{\rm an}\|_{\pi^*(E_X)}$ (computed here locally, after arbitrarily choosing a local trivialization of $(\widehat{\mathscr{L}}^{\rm an})^{-1}$) is the Chern current $c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})$. \\ If $a_0,...,a_\mu$ are regular functions that are globally invertible on an open subset $U$ of $X^{\rm an}$, the function $\log \max(|a_0|,...,|a_\mu|)$ is globally psh-approachable (see \cite[Proposition 6.8.3]{ChLD}) on $U$. Since the metric $\|\ \|_{E^{\rm an}_X}$ is assumed to be PL, the same holds for the metric $\|\ \|_{\pi^*(E^{\rm an}_X)}$ on $\widehat {X}^{\rm an}$ \cite[6.2.15]{ChLD}. Consequently, the function $\log \|\tau^{\rm an}\|_{\pi^*( E^{\rm an}_X)}$ is globally psh-approachable in a neighborhood of every point $\hat x$ where $\sigma$ is not invertible (to see this, it suffices to work in a chart $U_{\pi(\hat x)}$ over which the bundle $E^{\rm an}_X$ admits an orthonormal frame and to consider the neighborhood $\pi^{-1} (U_{\pi(\hat x)})$ of $\hat x$); one can therefore make sense (by approximating this function by smooth psh functions) of the exterior powers $\big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}$, $k=1,...,n$. 
For $1\leq k \leq n$, one can therefore define on $\widehat {X}^{\rm an}$ the current $[{\rm div}(\sigma^{\rm an})] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}$. \noindent Transposing the notion of Segre current $M^s$ introduced in \cite[Section 4]{ASWY14}, one arrives at the following definition: \begin{definition} The Segre current attached to the section $s$ is the current $$ M^s := [1 - \|s^{\rm an}\|^{\lambda}_{ E_X^{\rm an}}]_{\lambda = 0} + \pi_* \Big( \sum\limits_{k=1}^{n} [{\rm div}(\sigma^{\rm an})] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\Big). $$ \end{definition} \noindent We have the following proposition: \begin{prop}\label{propsegre} The Segre current $M^s$ can also be written as $M^s = \sum\limits_{k=0}^n M^s_k$, where \begin{equation}\label{propsegre1} \begin{split} & M^s_0 = \lim\limits_{\lambda_0 \rightarrow 0} \big[1 - \|s^{\rm an}\|^{\lambda_0}_{ E_X^{\rm an}}\big]~; \\ & M^s_k = \lim\limits_{\lambda_k \rightarrow 0} \Big(\lim\limits_{\lambda_{k-1} \rightarrow 0} \Big( \cdots \Big(\lim\limits_{\lambda_1 \rightarrow 0} \\ & \Big(d''[\|s^{\rm an}\|^{\lambda_k}_{ E_X^{\rm an}}] \wedge d'[\log \|s^{\rm an}\|_{E_X^{\rm an}}] \wedge \bigwedge\limits_{\ell =1}^{k-1} d'd'' \Big(\Big[\frac{\|s^{\rm an}\|_{E_X^{\rm an}}^{\lambda_\ell}}{\lambda_\ell}\Big]\Big)\Big)\Big) \cdots \Big) \Big). \end{split} \end{equation} \end{prop} \begin{proof} The proof is directly inspired by the one carried out in the complex setting in \cite[Section 4]{ASWY14}. 
Since one has locally, in the sense of currents, $$ [\log \|\pi^*[s^{\rm an}]\|_{\pi^*(E^{\rm an}_X)}] = [\log |\sigma^{\{\rm an\}}|] + [\log \|\tau^{\rm an}\|] = [\log \|\sigma^{\rm an}\|_{\tau^{\rm an}}], $$ where $\sigma^{\{\rm an\}}$ denotes the coordinate function of $\sigma^{\rm an}$ in a local frame, it follows from the Lelong-Poincar\'e formula that $$ d'd'' \big[\log \|\pi^*[s^{\rm an}]\|_{\pi^*(E^{\rm an}_X)}\big] = [{\rm div}(\sigma^{\rm an})] - c_1(\widehat {\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}}) $$ in the sense of currents. Denote by $M^{s,\lambda}_k$ ($k=0,...,n$) the component of bidegree $(k,k)$ of the current whose limit is taken, as the $\lambda_j$ tend (one after the other) to $0$, on the right-hand side of \eqref{propsegre1}. For $\lambda_0 >0$ one has $$ \pi^* (M_0^{s,\lambda})= 1 - [\|\pi^*[s^{\rm an}]\|^{\lambda_0}_{\pi^*(E_X^{\rm an})}] $$ and, for $\lambda_k >0$ ($k=1,...,n$): $$ \pi^* (M_k^{s,\lambda}) =[{\rm div}(\sigma^{\rm an})] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\,. $$ If one replaces the $(1,1)$-current $-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})$ by a smooth $(1,1)$-form $\hat\omega$ approximating it in the sense of currents (we observed above that this is possible since the function $\log \|\tau^{\rm an}\|_{\pi^*( E^{\rm an}_X)}$ is globally psh-approachable in a neighborhood of every point $\hat x$ where $\sigma$ is not invertible), it follows from Lemma \ref{lemmeintersection} that, for every $1\leq k \leq n$, $$ \lim\limits_{\lambda_k \rightarrow 0_+} \dots \lim\limits_{\lambda_1 \rightarrow 0_+} M_k^{s,\underline\lambda} = \pi_* \Big(\Big(\dots \pi^* (M^{s,\lambda}_k)_{\lambda_1=0}\dots \Big)_{\lambda_k =0}\Big) = \pi_* \Big([{\rm div}(\sigma^{\rm an})] \wedge \hat\omega^{\wedge^{k-1}}\Big). 
$$ The result of Proposition \ref{propsegre} then follows by approximating, in the sense of currents (as the $\lambda_k$ successively tend to $0$), the form $-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})$ by a smooth $(1,1)$-form $\hat \omega$. \end{proof} \section{Lelong numbers and cycles in the non-archimedean context}\label{sectlelong} Let $\mathscr X$ be a complex analytic space of dimension $n$ and $T$ a positive $(k,k)$-current on $\mathscr X$. Let $x_0\in \mathscr X$. The (ordinary) Lelong number $\nu(T,x_0)$ of the current $T$ at the point $x_0$ is defined as the limit, as $\epsilon$ tends to $0^+$, of the increasing function on $]0,\epsilon_0]$ (with $0<\epsilon_0\ll 1$): $$ \epsilon \mapsto \frac{1}{\epsilon^{2(n-k)}}\int_{\|x-x_0\|<\epsilon} T \wedge (dd^c \|x-x_0\|^2)^{\wedge^{n-k}}. $$ Call a generalized cycle of $\mathscr X$ any current of the form $\pi_* (c)$, where $\pi~: \mathscr Y \rightarrow \mathscr X$ is a proper morphism between complex analytic spaces and $c$ is a product of components of smooth Chern forms on $\mathscr Y$, each attached to a holomorphic bundle $(F\rightarrow \mathscr Y,\|\ \|)$ equipped with a smooth metric; such is the case, for example, of the currents $$ \pi_*\big([Y_\iota]\wedge (-c_1(\hat L, \|\ \|_\tau))^{\wedge^{k-1}}\big) = (\pi \circ i_\iota)_* \Big( - \big(c_1(\hat L_{|Y_\iota},(\|\ \|_\tau)_{|Y_\iota})\big)^{\wedge^{k-1}} \Big) $$ ($k=1,...,n$), where $Y_\iota$ denotes one of the irreducible components of the exceptional divisor $[D]$ of the blow-up $\pi~: \hat{\mathscr X} \rightarrow \mathscr X$ along the ideal sheaf of $\mathcal O_{\mathscr X}$ attached to a section $s$ of a hermitian bundle $E_{\mathscr X} \rightarrow \mathscr X$, and $i_\iota~: Y_\iota \rightarrow \hat {\mathscr{X}}$ the embedding of $Y_\iota$ into $\hat{\mathscr{X}}$; the metric $\|\ \|_\tau$ on the line bundle $\hat L=\mathcal O(-[D])$ is here defined by $\|\sigma\|_\tau = \|\pi^* s\|_{\pi^* (E_{\mathscr X})}$. Given a point $x_0$ of $\mathscr X$ and a generalized cycle $T$ on $\mathscr X$, one knows how to attach to $T$ a Lelong number $\nu(T,x_0)\in \mathbb{Z}$ at the point $x_0$. For example, the Lelong number $\nu(T_\iota,x_0)$ of the current $T_\iota = \pi_*\big([Y_\iota]\wedge \big(-c_1(\hat L, \|\ \|_\tau)\big)^{\wedge^{k-1}}\big)$ at the point $x_0$ can be expressed as follows, where $\xi_{x_0} = \xi_{x_0,0},...,\xi_{x_0,m_{x_0}}$ denotes a system of generators of the maximal ideal $\EuFrak M_{x_0}$ of $\mathscr O_{\mathscr X,x_0}$: \begin{equation}\label{nblelongcmplx} \begin{split} & \Big[ \cdots \Big[ \int\limits_{\big(\mathbb{P}^{m_{x_0}}_\mathbb{C}\big)^{\nu}} \Big(\bigwedge\limits_{j=1}^{\nu} \big(c_1(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{C}}(1),\|\ \|_{\rm fs})\big)^{\wedge^{m_{x_0}}}(\kappa_j)\Big) \wedge \\ & \bigwedge\limits_{j=1}^{\nu} \Big(1-|\langle \kappa_j,\xi_{x_0}\rangle|_{\rm fs}^{2\lambda_j} + dd^c \Big( \frac{|\langle \kappa_j,\xi_{x_0}\rangle|^{2\lambda_j}_{\rm fs}}{\lambda_j}\Big)\Big) \wedge T_\iota \Big] _{\lambda_1 =0}\cdots \Big]_{\lambda_{\nu}=0} = \nu(T_\iota,x_0)\, [\{x_0\}] \end{split} \end{equation} where $\nu= \min (n+1,m_{x_0}+1)$ and $|\langle \kappa,\xi\rangle|_{\rm fs} := |\langle \kappa,\xi\rangle|/\|\kappa\|$ if $\kappa = [\kappa_0:\dots : \kappa_{m_{x_0}}]$, with $\|\kappa\|$ the euclidean norm in $\mathbb{C}^{m_{x_0}+1}$ (see Proposition 5.3 of \cite{ASWY14}); the notation $\big[\dots \big]_{\lambda_j=0}$ means here that one continues meromorphically the (current-valued) holomorphic function of $\lambda_j$ (for ${\rm Re}\, \lambda_j \gg 1$) enclosed by the brackets, and then evaluates the coefficient of $\lambda_j^0$ in the Laurent series expansion of this meromorphic continuation in a neighborhood of the origin. 
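For orientation, we recall the classical model example behind this definition (standard complex-analytic material, added here as an illustration and not taken from the text):

```latex
% Classical illustration (Thie's theorem; not from the original text).
% For the integration current $T=[Z]$ on a pure $p$-dimensional analytic
% subset $Z$ of an open set of $\mathbb{C}^n$ (so $k=n-p$ in the definition
% above) and $x_0\in Z$, the limit defining $\nu([Z],x_0)$ computes, up to
% a positive normalization constant depending on conventions, the
% multiplicity of $Z$ at $x_0$:
\[
\nu([Z],x_0)\;=\;{\rm mult}_{x_0}(Z)\;\in\;\mathbb{N}^{*}.
\]
% For instance, for the cuspidal curve $Z=\{w^{2}=z^{3}\}\subset\mathbb{C}^{2}$
% one gets $\nu([Z],0)=2$, the multiplicity of the cusp at the origin.
```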
\vskip 1mm \noindent Now let $X$ be a projective algebraic variety of dimension $n$ defined over a valued field $\mathbb{K}$ and $X^{\rm an}$ its Berkovich analytification. Consider a current $T$ on $X^{\rm an}$ of the form $T = \sum_{\iota,\iota'} (\pi_\iota) _*[\omega_{\iota'}]$, where $\pi_\iota~: Y_\iota^{\rm an} \rightarrow X^{\rm an}$ is an analytic morphism between Berkovich analytifications of projective algebraic varieties defined over $\mathbb{K}$, and $\omega_{\iota'}$ is a product of first Chern forms of line bundles $(\mathscr{L}_{\iota,\iota'}^{\rm an},\|\ \|_{\iota,\iota'}^{\rm an})$, where $\|\ \|_{\iota,\iota'}^{\rm an}$ is a globally psh-approachable formal PL metric on the bundle $\mathscr{L}^{\rm an}_{\iota,\iota'} \rightarrow Y_\iota^{\rm an}$. If $x_0$ is a closed point of $X$, one can analytify the morphism $\iota_{x_0}~: \{x_0\} \rightarrow X$ and regard $\{x_0\}^{\rm an}$ as a $0$-dimensional Zariski subset of $X^{\rm an}$. Let $\xi_{x_0} = (\xi_{x_0,0},...,\xi_{x_0,m_{x_0}})$ be a system of generators of the maximal ideal $\EuFrak M_{x_0}$ of $\mathscr O_{X,x_0}$ and set $\nu = \min(n+1,m_{x_0}+1)$. We consider the bundle $\big(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1)\big)^{\rm an} \otimes \mathbb{K}^{\rm an}$ on $\big(\mathbb{P}^{m_{x_0}}_\mathbb{K} \times \mathscr U\big)^{\rm an}$ ($\mathscr U$ an affine open set containing $x_0$) and analytify the section $(\kappa,x)\mapsto \langle \kappa,\xi_{x_0}(x)\rangle$ into a section of the bundle $\big(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1)\big)^{\rm an} \otimes \mathbb{K}^{\rm an}$ over $\big(\mathbb{P}^{m_{x_0}}_\mathbb{K}\times \mathscr U\big)^{\rm an} = \big(\mathbb{P}^{m_{x_0}}_\mathbb{K}\big)^{\rm an} \times \mathscr U^{\rm an}$. 
We choose a semipositive metric on $\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1)$ inducing a PL metric, denoted $\|\ \|_{\rm moy}$, on $\big(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1)\big)^{\rm an}$, for which the Monge-Amp\`ere measure $\big(c_1(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1),\|\ \|_{\rm moy})\big)^{\wedge^{m_{x_0}}}$ is atomic (for example the Dirac measure at the Gauss point when $\|\ \|_{\rm moy}$ is the metric induced by the choice of the standard metric on $\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1)$). Following the approach \eqref{nblelongcmplx}, one thus defines a current on $X^{\rm an}$ supported on the Zariski subset $\{x_0\}^{\rm an}$: \begin{equation} \begin{split} & \lim\limits_{\lambda_\nu \rightarrow 0} \Big(\lim\limits_{\lambda_{\nu-1} \rightarrow 0} \Big( \cdots \Big(\lim\limits_{\lambda_1 \rightarrow 0} \, \int\limits_{\big((\mathbb{P}^{m_{x_0}}_\mathbb{K})^{\rm an}\big)^{\nu} } \Big(\bigwedge\limits_{j=1}^\nu \big(c_1(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1),\|\ \|_{\rm moy})\big)^{\wedge^{m_{x_0}}}(\kappa_j)\Big) \wedge \\ & \bigwedge\limits_{j=1}^\nu \Big([1 - \|\langle \kappa_j,\xi_{x_0}^{\rm an}\rangle\|_{{\rm moy}}^{\lambda_j}] + d'd''\Big(\Big[ \frac{\|\langle \kappa_j,\xi_{x_0}^{\rm an}\rangle\|_{{\rm moy}}^{\lambda_j}}{\lambda_j}\Big]\Big)\Big)\wedge T(x)\Big) \cdots \Big)\Big). 
\end{split} \end{equation} When the standard metric on $\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1)$ is chosen, the current thus constructed is independent of the choice of the generating system $\xi_{x_0}$ of the maximal ideal: if $\xi_{x_0}$ and $\tilde \xi_{x_0}$ are two systems of generators of the maximal ideal $\EuFrak M_{x_0}$, one may pad them with zero functions so as to obtain two generating systems of the same length $m_{x_0} + \tilde m_{x_0}$, and then compare the two currents constructed using the PL metric induced by the standard metric on $\mathcal O_{\mathbb{P}^{m_{x_0} + \tilde m_{x_0} -1} _\mathbb{K}}(1)$. The current thus constructed corresponds to an analytic cycle of pure dimension $0$ supported on $\{x_0\}^{\rm an}$, which one may call the Lelong cycle of the current $T$ on the $\mathbb{K}$-analytic space $\{x_0\}^{\rm an}$. \section{The King formula in the non-archimedean context}\label{sectionKing} As in Section \ref{sectionsegre}, let $X$ be a projective algebraic variety of dimension $n$ defined over a valued field $\mathbb{K}$ and $X^{\rm an}$ its Berkovich analytification. We consider an algebraic bundle $E_X \rightarrow X$ of finite rank over $X$ and equip its analytification $E_X^{\rm an} \rightarrow X^{\rm an}$ with a formal PL metric \cite[Definition 6.2.9]{ChLD}, assumed here to be globally psh-approachable and denoted $\|\ \|_{E_X^{\rm an}}$, over the analytification $X^{\rm an}$. Let $s\in \mathcal O_X(E_X)$ be a global section of $E_X$, whose analytification we denote by $s^{\rm an}~: X^{\rm an} \rightarrow E_X^{\rm an}$. 
\vskip 1mm \noindent Let $\pi~: \widehat X \longrightarrow X$ be the normalized blow-up of $X$ along the coherent ideal sheaf attached to the global section $s\in \mathcal O_X(E_X)$ and $\pi^{\rm an}~:\widehat{X}^{\rm an} \rightarrow X^{\rm an}$ its analytification. \vskip 1mm \noindent For each $k=0,...,n$, we denote by $(Y_{k,\iota_k})_{\iota_k}$ the list of exceptional components of the normalized blow-up $\pi~: \widehat X \longrightarrow X$ such that ${\rm codim}_X\, \pi(Y_{k,\iota_k}) = k$, and by $( Y_{k,\iota_k}^{\rm an}\hookrightarrow \widehat{X}^{\rm an})_{\iota_k}$ the list of their Berkovich analytifications. We also introduce the analytification $\widehat{\mathscr{L}}^{\rm an}$ induced over $\widehat{X}^{\rm an}$ by the bundle $L_{\widehat X}$ corresponding to the exceptional divisor of the blow-up $\pi$. This bundle $\widehat {\mathscr{L}}^{\rm an}$ is equipped with the metric $\|\ \|_{\tau^{\rm an}}$ induced by the metric defined by $\|\sigma\|_\tau = \|\pi^*(s)\|_{\pi^*(E_X)}$, where $\pi^*(s) = \sigma \otimes \tau$, $\sigma$ being a section of $L_{\widehat X}$ and $\tau$ a non-vanishing section of $L_{\widehat X}^{-1} \otimes \pi^*(E_X)$. \\ For each pair of integers $k,\ell\in \{1,...,n\}$ and each index $\iota_\ell$, we introduce the current $T_{k,\ell,\iota_\ell} := \pi_*^{\rm an} \Big([Y_{\ell,\iota_\ell}^{\rm an}] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\Big)$. The support of this current is contained in the Zariski set $\pi^{\rm an}(Y_{\ell,\iota_\ell}^{\rm an})$, a closed analytic subset of $X^{\rm an}$ of codimension $\ell$. 
\vskip 1mm \noindent When $\ell >k$ and $\omega \in \mathscr{A}_c^{n-k,n-k}(X^{\rm an})$, one has $(j_{\iota_\ell}^{\rm an})^* \omega =0$, where $$ j_{\iota_\ell}^{\rm an}~: Y_{\ell,\iota_\ell}^{\rm an} \rightarrow X^{\rm an} $$ denotes the analytification of the morphism $$ Y_{\ell,\iota_\ell} \hookrightarrow \widehat X \stackrel{\pi}{\longrightarrow} X $$ (for dimension reasons, since ${\rm codim}_X (\pi(Y_{\ell,\iota_\ell}))=\ell >k$). It follows that, as soon as $\ell >k$, one has $T_{k,\ell,\iota_\ell}=0$ for every index $\iota_\ell$. \vskip 1mm \noindent We also observe that if $\ell < k$, the Lelong cycle of the current $T_{k,\ell,\iota_\ell}$ at $\{x_0\}^{\rm an}$ in $X^{\rm an}$ is the zero cycle (for every $x_0\in X$). One argues as follows, after first approximating the $(1,1)$-current $-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})$ by a sequence of smooth $(1,1)$-Chern forms, using the fact that the PL metric involved here is assumed to be globally psh-approachable. \begin{itemize} \item One multiplies the current $T_{k,\ell,\iota_\ell}$ by the ``averaged current'' (recall that the current $\big(c_1(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1),\|\ \|_{\rm moy})\big)^{\wedge^{m_{x_0}}}(\kappa_1)$ corresponds to an atomic measure) $$ \int\limits_{\big(\mathbb{P}^{m_{x_0}}_\mathbb{K}\big)^{\rm an}} \big(c_1(\mathcal O_{\mathbb{P}^{m_{x_0}}_\mathbb{K}}(1),\|\ \|_{\rm moy})\big)^{\wedge^{m_{x_0}}}(\kappa_1) \wedge \Big( [1 - \|\langle \kappa_1,\xi_{x_0}^{\rm an}\rangle\|_{{\rm moy}}^{\lambda_1}] + d'd''\Big( \Big[\frac{\|\langle \kappa_1,\xi_{x_0}^{\rm an}\rangle\|_{{\rm moy}}^{\lambda_1}}{\lambda_1}\Big]\Big)\Big). 
$$ Using the fact that the support of any form $\varphi\in \mathscr{A}_c^{p,n-1}( Y_{\ell,\iota_\ell}^{\rm an})$ ($0\leq \ell\leq n-1$) cannot meet any proper Zariski subset of $ Y_{\ell,\iota_\ell}^{\rm an}$ (one applies again \cite{ChL}, 5.1), one sees that either the current thus obtained is zero, or the analytification of $\mathbb{P}^{m_{x_0}}_\mathbb{K} \times \pi(Y_{\ell,\iota_\ell})$ in $\mathbb{P}^{m_{x_0}}_\mathbb{K} \times\mathscr U$ (we keep here the notation used in Section \ref{sectlelong}) is contained in $\{\langle \kappa_1,\xi_{x_0}^{\rm an}\rangle =0\}$ for generic $\kappa_1$ (the averaging performed here amounts to taking the Dirac measure at the Gauss point). \item If necessary (when $\ell<k-1$), one iterates this operation $k-\ell -1$ times. This process cannot be carried through without the zero current being reached along the way. \end{itemize} \vskip 1mm \noindent Thus one may write, for every $k\in [{\rm codim}_X s^{-1}(0),n]$, \begin{equation}\label{king1} \begin{split} & \pi_*^{\rm an} \Big( [{\rm div}(\sigma^{\rm an})] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\Big) \\ & = \sum_{\iota_k} \pi_*^{\rm an} \Big( [Y^{\rm an}_{k,\iota_k}] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\Big) + \mathscr N_k[s]\,, \end{split} \end{equation} in such a way that the set of points $\{x_0\}^{\rm an}$ of $X^{\rm an}$ at which the $(k,k)$-current $\mathscr N_k[s]$ has a nonzero Lelong cycle is of codimension at least $k+1$. \vskip 1mm \noindent We can therefore state the following version of King's theorem, this time in the non-archimedean setting. This result is the counterpart of Theorem 1.1 of \cite{ASWY14}. We give the statement here only in the algebraic context, the context in which we place ourselves in this article. 
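The dimension argument invoked above can be recorded in a single display; the following is our rephrasing of the text's reasoning, not a statement from the original:

```latex
% Our rephrasing of the dimension argument. For a test form
% $\omega\in\mathscr{A}_c^{n-k,n-k}(X^{\rm an})$, the projection formula gives
\[
\big\langle T_{k,\ell,\iota_\ell},\omega\big\rangle
=\int_{Y^{\rm an}_{\ell,\iota_\ell}}
\big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}
\wedge\,(j^{\rm an}_{\iota_\ell})^{*}\omega,
\]
% and $(j^{\rm an}_{\iota_\ell})^{*}\omega=0$ as soon as $\ell>k$, since an
% $(n-k,n-k)$-form pulls back to zero on a space whose image has dimension
% $n-\ell<n-k$. Hence $T_{k,\ell,\iota_\ell}=0$ for $\ell>k$.
```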
The terminology ``stable'' and ``mobile'' refers here to the terminology classically used in improper intersection theory in complex analytic geometry; see for example the introduction of \cite{ASWY14}, as well as \cite{GafGas}, where this terminology is introduced. \begin{theorem} Let $X$ be a projective algebraic variety of dimension $n$ defined over a valued field $\mathbb{K}$ and $X^{\rm an}$ its Berkovich analytification. We consider an algebraic bundle $E_X \rightarrow X$ of finite rank over $X$ and assume that the bundle $E_X^{\rm an} \rightarrow X^{\rm an}$ is equipped with a formal PL metric, denoted $\|\ \|_{E_X^{\rm an}}$, over the analytification $X^{\rm an}$. Let $s\in \mathcal O_X(E_X)$ be a global section of $E_X$ and $s^{\rm an}\in \mathcal O_{X^{\rm an}} (E_X^{\rm an})$ its analytification. For each $k=0,...,n$, denote by $(Y_{k,\iota_k})_{\iota_k}$ the list of exceptional components of the normalized blow-up $\pi~: \widehat X \longrightarrow X$ (along the coherent ideal sheaf attached to the section $s$) such that ${\rm codim}_X\, \pi(Y_{k,\iota_k}) = k$, and by $(Y_{k,\iota_k}^{\rm an}\hookrightarrow \widehat{X}^{\rm an})_{\iota_k}$ the list of their Berkovich analytifications. 
For $k=1,...,n$, the component of bidegree $(k,k)$ of the Segre current $M^s$ splits into its ``stable'' component $$ (M^s_k)_{\rm stable} = \sum_{\iota_k} \pi_*^{\rm an} \Big( [Y^{\rm an}_{k,\iota_k}] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\Big) $$ and its ``mobile'' component $$ (M^s_k)_{\rm mobile} = \sum_{\ell =0}^{k-1}\sum\limits_{\iota_\ell} \pi_*^{\rm an} \Big( [Y^{\rm an}_{\ell,\iota_\ell}] \wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\Big), $$ in such a way that, for every closed point $x\in X$, the Lelong cycle of the current $(M^s_k)_{\rm mobile}$ on $\{x\}^{\rm an}$ is zero. \end{theorem} \begin{proof} Suppose $E_X$ has rank $m+1$. Let $x^{\rm an}$ be a point of $X^{\rm an}$ and $U_{x^{\rm an}}$ an analytic domain containing $x^{\rm an}$ over which $E^{\rm an}_X$ admits an orthonormal frame $\{e_0,...,e_m\}$. On $U_{x^{\rm an}}$ the section $s^{\rm an}$ can be written as $$ s^{\rm an} = \sum\limits_{\ell=0}^m s_{\ell}^{\rm an}\, e_\ell, $$ where the coordinate functions $s_{\ell}^{\rm an}$, $\ell =0,...,m$, are analytic and $$ \|s^{\rm an}\| = \max\limits_{0\leq \ell\leq m} |s_\ell^{\rm an}|. $$ In that case, one may consider, instead of the factorization $(\pi^{\rm an})^*(s^{\rm an})=\sigma^{\rm an} \otimes \tau^{\rm an}$ (where $\sigma^{\rm an}$ is a section of the bundle $\widehat{\mathscr{L}}^{\rm an}$), each factorization $(\pi^{\rm an})^* (s_\ell^{\rm an}) = \sigma^{\rm an} \otimes \tau^{\rm an}_\ell$ independently, the $\tau^{\rm an}_\ell$ ($\ell=0,...,m$) being sections over $(\pi^{\rm an})^{-1}(U_{x^{\rm an}})$ of the bundle $(\widehat{\mathscr{L}}^{\rm an})^{-1}$. 
Resuming the construction of Vogel currents described in Section \ref{sectionVogel}, one observes that, for every $k=1,...,n$ and every $\iota_k$, one can construct, by means of Theorem \ref{theorempfonctions2}, a $(k-1,k-1)$-current $A_{k,\iota_k} \in \mathscr{D}_{n-(k-1),n-(k-1)}(\pi^{\rm an}(U_{x^{\rm an}}))$ supported in the Zariski set $\pi^{\rm an}(Y_{k,\iota_k}^{\rm an})$ (of codimension $k$ in $X^{\rm an}$, hence in $U_{x^{\rm an}}$), solving the ``averaged'' Green equation \begin{eqnarray*} d'd''A_{k,\iota_k} &=& \lim\limits_{\lambda_{k-1} \rightarrow 0} \Big( \cdots \Big(\lim\limits_{\lambda_1 \rightarrow 0} \, \int\limits_{\big((\mathbb{P}^m_\mathbb{K})^{\rm an}\big)^{k-1}} \Big(\bigwedge\limits_{j=1}^{k-1} \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|_{\rm moy})\big)^{\wedge^m}(\kappa_j)\Big) \wedge \\ & & \bigwedge_{j=1}^{k-1} d'd''\Big(\Big[\frac{\|\langle \kappa_j,\tau^{\rm an}\rangle\|_{(\widehat {\mathscr{L}}^{\rm an})^{-1} ,{\rm moy}}^{\lambda_j}}{\lambda_j}\Big]\Big) \wedge [Y_{k,\iota_k}^{\rm an}]\\ & & \qquad \qquad - [Y_{k,\iota_k}^{\rm an}]\wedge \big(-c_1(\widehat{\mathscr{L}}^{\rm an},\|\ \|_{\tau^{\rm an}})\big)^{\wedge^{k-1}}\Big)\Big). \end{eqnarray*} Each current $\pi^{\rm an}_* (A_{k,\iota_k})\in \mathscr D_{n-(k-1),n-(k-1)}(U_{x^{\rm an}})$ is supported in the Zariski set $\pi^{\rm an}(Y_{k,\iota_k}^{\rm an})$; such a current, by its very construction via analytic continuation, is therefore zero for dimension reasons, and the stable component $(M^s_k)_{\rm stable}$ of the component $M^s_k$ of the Segre current $M^s$ can therefore also be written as \begin{eqnarray*} & & (M^s_k)_{\rm stable} = \\ & & \pi^{\rm an}_*\Big( \sum\limits_{\iota_k} \Big( \lim\limits_{\lambda_{k-1} \rightarrow 0} \Big( \cdots \Big(\lim\limits_{\lambda_1 \rightarrow 0} \, \int\limits_{\big((\mathbb{P}^m_\mathbb{K})^{\rm an}\big)^{k-1}} \Big(\bigwedge\limits_{j=1}^{k-1} \big(c_1(\mathcal O_{\mathbb{P}^m_\mathbb{K}}(1),\|\ \|_{\rm moy})\big)^{\wedge^m}(\kappa_j)\Big) \wedge \\ & & \bigwedge_{j=1}^{k-1} d'd''\Big( \Big[\frac{\|\langle \kappa_j,\tau^{\rm an}\rangle\|_{(\widehat {\mathscr{L}}^{\rm an})^{-1} ,{\rm moy}}^{\lambda_j}}{\lambda_j}\Big]\Big)\Big) \wedge [Y_{k,\iota_k}^{\rm an}]\Big)\Big)\Big). \end{eqnarray*} When $x$ is a closed point of $X$, the Lelong cycle of $M_k^s$ at $x^{\rm an}$ (which is also that of $(M^s_k)_{\rm stable}$) can thus be interpreted as a Vogel current (in the sense introduced in Section \ref{sectionVogel}), in a way analogous to what happens in the archimedean setting (see Sections 7 and 8 of \cite{ASWY14}). \end{proof} \vskip 3mm \noindent \dedicatory{{\bf Acknowledgements} \vskip 1mm \noindent I wish to thank the referee for reading the successive versions of this document extremely carefully and for the many remarks and suggestions that greatly contributed to improving it. \\ The author also wishes to express his deep gratitude to Alain Yger, Professor at the Institut de math\'ematiques de Bordeaux (Universit\'e de Bordeaux, France), for his invaluable help during this research. He is also pleased to thank Salomon Sambou, Professor at the Universit\'e Assane Seck de Ziguinchor (S\'en\'egal), for interesting discussions. }
\section{Introduction} Graphical calculi enable reasoning about quantum computation in an intuitive yet rigorous way. Calculi based on string diagrams are more flexible than circuit-style languages, allowing the description of states and measurement projections as well as unitary operations in one unified framework. Their rigour is ensured by the category-theoretical underpinnings \cite{SelingerCPM}. The best-known of these graphical languages is the ZX-calculus, which was first introduced 10 years ago \cite{CD1,CD2}. It is built around the interactions of two complementary bases, the computational basis and the Hadamard basis, which are graphically represented by so-called \emph{spiders}. A related formalism is the ZW-calculus \cite{hadzihasanovic2017thesis}, which is built around the interactions of generators related to the two different types of three-qubit entangled states: GHZ states and $W$ states. Here, we introduce a new graphical language called the \textit{ZH-calculus}, which roughly follows this analogy with multipartite entanglement: \begin{center} \textit{ZX-calculus} : \textit{ZH-calculus} :: \textit{graph states} : \textit{hypergraph states} \end{center} Graph states are the basic resource for the one-way model of measurement-based quantum computation~\cite{MBQC2}, and have been studied extensively using the ZX-calculus~\cite{CD2,DP1,DP2,RossMBQC}. Hypergraph states were introduced in~\cite{rossi2013hypergraph} as a generalisation of graph states, and have recently gathered some interest due, for example, to the role they play in quantum search algorithms~\cite{HyperGrover}, exponentially-growing Bell violations~\cite{gachechiladze2016extreme}, and universal measurement-based quantum computation~\cite{HyperSPTO}. Like the ZX- and ZW-calculi, the ZH-calculus includes a family of ``Z-spiders'' associated with the computational basis. 
However, its point of departure is the inclusion of ``H-boxes'', which are $n$-ary generalisations of the Hadamard gate satisfying a variation of the spider fusion law, much like the one satisfied by $W$-spiders in the ZW-calculus.\footnote{Despite satisfying a similar variation of the spider fusion rule, this generalisation of the Hadamard node is different from that employed in the original version of the Y-calculus \cite[Definition 2 of Version 1]{jeandel2018y-calculus}.} Whereas Hadamard gates are used to introduce edges between two vertices in a graph state, H-boxes can introduce hyperedges between $n$ vertices in a hypergraph state. Seen from another perspective, H-boxes are closely related to both $n$-ary AND gates in classical logic and to the Hadamard-CCZ family of quantum circuits. As a result, Boolean circuits can be encoded in the ZH-calculus with low constant overhead. In particular, the linear maps corresponding to classical AND and NOT gates can be depicted as follows in terms of the ZH-calculus: \ctikzfig{logic} While the unitary NOT gate has a simple expression in the ZX-calculus, a typical encoding of an AND gate requires $25$ basic generators and non-Clifford phases (cf.~\cite{CKbook}, \S12.1.3.1). Similarly, multiply-controlled phase gates also have very succinct representations, indicating that the ZH-calculus may be useful for analysing Hadamard-CCZ circuits (a.k.a. Hadamard-Toffoli circuits~\cite{ShiToffoli,aharonov2003hadamardtoffoli}; cf. the forthcoming~\cite{Niel2018} for the connection to ZH), as well as diagonal operators at any level of the Clifford hierarchy~\cite{DiagHierarchy}. Our main theorem is that the ZH-calculus is complete with respect to its standard interpretation as matrices. That is, if two ZH-diagrams describe the same matrix, then they are provably equal using the rules of the ZH-calculus. Perhaps one of the most appealing features of the calculus is the simplicity of this completeness proof. 
The core of the proof (Section~\ref{s:completeness}) fits on four pages; only the especially intricate lemmas---which appear in Appendix~\ref{sec:disconnect}---were carried out within the proof assistant Quantomatic~\cite{quanto-cade}. This is due to two main factors. The first is the extensive use of \textit{!-box notation}~\cite{kissinger2014pattern}, which gives an elegant way to reason about diagrams which have arbitrarily-large fan-out-type behaviour. The second is a unique normal form for the ZH-calculus, which expresses any matrix as a Schur product -- i.e.\ entrywise product -- of elementary matrices with the property that all but one entry of each matrix is 1. This multiplicative construction contrasts with the additive construction of the normal form in the ZW-calculus \cite{hadzihasanovic2017thesis}, which arises as a sum of elementary matrices with the property that all but one entry of each matrix is 0. For example, the normal form of the diagram corresponding to the matrix $\left(\begin{smallmatrix}a&b\\c&d\end{smallmatrix}\right)$ is effectively constructed as follows, where the left-hand side represents the approach in the ZH-calculus with $*$ denoting entrywise multiplication, and the right-hand side represents the approach in the ZW-calculus: \[ \begin{pmatrix}a&1\\1&1\end{pmatrix} * \begin{pmatrix}1&b\\1&1\end{pmatrix} * \begin{pmatrix}1&1\\c&1\end{pmatrix} * \begin{pmatrix}1&1\\1&d\end{pmatrix} = \begin{pmatrix}a&b\\c&d\end{pmatrix} = \begin{pmatrix}a&0\\0&0\end{pmatrix} + \begin{pmatrix}0&b\\0&0\end{pmatrix} + \begin{pmatrix}0&0\\c&0\end{pmatrix} + \begin{pmatrix}0&0\\0&d\end{pmatrix}. \] Unlike the completeness proofs for universal versions of the ZX-calculus \cite{LoriaCompleteness,OxfordCompleteness}, which make use of the ZW-completeness proof via suitable translations between the two respective languages, our proofs of soundness, completeness, and universality are self-contained and do not rely on an encoding into another calculus. 
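As a quick numerical sanity check (not part of the paper), the multiplicative and additive constructions displayed above can be verified with numpy for arbitrary sample values of $a,b,c,d$; the variable names are ours:

```python
import numpy as np

# Hypothetical concrete values for the matrix entries a, b, c, d.
a, b, c, d = 2.0, -1.0, 0.5, 3.0
target = np.array([[a, b], [c, d]])

# ZH-style construction: entrywise (Schur) product of matrices that are
# all-ones except for a single entry.
zh = np.ones((2, 2))
for f in [np.array([[a, 1], [1, 1]]), np.array([[1, b], [1, 1]]),
          np.array([[1, 1], [c, 1]]), np.array([[1, 1], [1, d]])]:
    zh = zh * f          # '*' on ndarrays is the entrywise product

# ZW-style construction: sum of matrices that are all-zeros except one entry.
zw = sum([np.array([[a, 0], [0, 0]]), np.array([[0, b], [0, 0]]),
          np.array([[0, 0], [c, 0]]), np.array([[0, 0], [0, d]])])

assert np.allclose(zh, target) and np.allclose(zw, target)
```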
The paper is structured as follows. The generators and relations of the ZH-calculus are introduced in Section~\ref{s:ZH-dfn}, and the calculus is proved to be universal and sound. The completeness proof is given in Section~\ref{s:completeness}. In Section~\ref{s:applications} we survey two potential applications and comment on future work. Omitted proofs and a link to the Quantomatic project used to prove Lemmas~\ref{lem:disconnect-4} and \ref{lem:big-disconnect} are given in the appendix. \section{Definition of the ZH-calculus} \label{s:ZH-dfn} The ZH-calculus is a graphical language expressing operations as \emph{string diagrams}. These are diagrams consisting of dots or boxes, connected by wires. Wires are also allowed to have one or two ``dangling'' ends, which are not connected to a dot or box: these represent inputs of the diagram if oriented towards the bottom, and outputs of the diagram if oriented towards the top. \subsection{The generators and their interpretation} \label{s:ZX-translation} The diagrams of the ZH-calculus are generated by \emph{Z-spiders}, which are represented as white dots, and \emph{H-boxes}, which are represented as white boxes labelled with a complex number $a$. These generators are interpreted as follows, where $\intf{\cdot}$ denotes the map from diagrams to matrices. \[ \intf{\tikzfig{Z-spider}} := \ket{0}^{\otimes n}\bra{0}^{\otimes m} + \ket{1}^{\otimes n}\bra{1}^{\otimes m} \qquad\qquad \intf{\tikzfig{H-spider}} := \sum a^{i_1\ldots i_m j_1\ldots j_n} \ket{j_1\ldots j_n}\bra{i_1\ldots i_m} \] The sum in the second equation is over all $i_1,\ldots, i_m, j_1,\ldots, j_n\in\{0,1\}$, i.e.\ an H-box represents a matrix all but one of whose entries are equal to 1. A label of $-1$ is usually left out and the box is then drawn smaller, e.g.\ $\dotunit{small hadamard}:=\hadastate{-1}$. 
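The interpretations of the two generators can be rendered concretely in numpy; this is an illustration only, the helper names \texttt{z\_spider} and \texttt{h\_box} are ours, and the sketch assumes at least one wire:

```python
import numpy as np

def z_spider(m, n):
    # |0>^{(x)n}<0|^{(x)m} + |1>^{(x)n}<1|^{(x)m}: only the all-zeros and
    # all-ones entries are nonzero (assumes m + n >= 1).
    M = np.zeros((2**n, 2**m))
    M[0, 0] = 1
    M[-1, -1] = 1
    return M

def h_box(m, n, a):
    # a^{i_1...i_m j_1...j_n} equals a when every index is 1 and 1 otherwise,
    # so an H-box is the all-ones matrix with its last entry replaced by a.
    M = np.ones((2**n, 2**m), dtype=complex)
    M[-1, -1] = a
    return M

# The default H-box (a = -1) with one input and one output is the Hadamard
# gate up to a scalar factor of 1/sqrt(2): H^2 = I.
H = h_box(1, 1, -1) / np.sqrt(2)
assert np.allclose(H @ H, np.eye(2))
```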
Straight and curved wires have the following interpretations: \[ \intf{\;|\;} := \ketbra{0}{0}+\ketbra{1}{1} \qquad\qquad\qquad \intf{\tikzfig{wire-cup}} := \ket{00}+\ket{11} \qquad\qquad\qquad \intf{\tikzfig{wire-cap}} := \bra{00}+\bra{11}. \] The juxtaposition of two diagrams is interpreted as the tensor product of the corresponding matrices and the sequential composition of two diagrams is interpreted as the matrix product of the corresponding matrices: \[ \intf{\gendiagram{$D_1$}\;\gendiagram{$D_2$}} := \intf{\gendiagram{$D_1$}}\otimes\intf{\gendiagram{$D_2$}} \qquad\qquad \intf{\tikzfig{sequential-composition}} := \intf{\gendiagram{$D_2$}}\circ\intf{\gendiagram{$D_1$}} \] The statements of the relations of the ZH-calculus will be simplified by introducing two derived generators, called \emph{grey spiders} and NOT, respectively. \begin{equation}\label{eq:grey-spider} \tikzfig{X-spider-dfn} \end{equation}\par\noindent \begin{equation}\label{eq:X-dfn} \tikzfig{negate-dfn} \end{equation}\par\noindent With these definitions, \dotmult{gray dot}\ acts on computational basis states as XOR and \greyphase{\neg} acts as NOT: \[ \intf{\dotmult{gray dot}} = \ketbra{0}{00}+\ketbra{0}{11}+\ketbra{1}{01}+\ketbra{1}{10} \qquad\qquad\qquad \intf{\greyphase{\neg}}=\ketbra{0}{1}+\ketbra{1}{0}. \] There is an evident encoding of the generators of the ZX-calculus into ZH given by the following translation: \[ \tikzfig{green-spider} \qquad\qquad \tikzfig{Hadamard} \qquad\qquad \tikzfig{red-spider} \] Since it is well-known that the ZX-calculus is universal for representing arbitrary linear maps $\mathbb C^{2^m} \to \mathbb C^{2^n}$, the following is immediate: \begin{proposition} Any linear map $\mathbb C^{2^m} \to \mathbb C^{2^n}$ can be expressed using the generators of the ZH-calculus. \end{proposition} We will also give a normal form in Section~\ref{s:completeness} which directly implies universality of the ZH-calculus, without going via ZX. 
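The stated basis-state actions of the derived generators can be checked numerically; a small sketch with our own encodings, reading a bitstring $b_1 b_2$ as a binary column index:

```python
import numpy as np

# The grey "multiply" dot: |0><00| + |0><11| + |1><01| + |1><10|,
# i.e. (b1, b2) -> b1 XOR b2 on computational basis states.
xor = np.zeros((2, 4))
for b1 in (0, 1):
    for b2 in (0, 1):
        xor[b1 ^ b2, 2 * b1 + b2] = 1

# NOT: |0><1| + |1><0|.
NOT = np.array([[0, 1], [1, 0]])

assert np.allclose(xor, [[1, 0, 0, 1], [0, 1, 1, 0]])
assert np.allclose(NOT @ NOT, np.eye(2))   # NOT is an involution
```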
\begin{figure} \centering \begin{tabular}{ccccc} (ZS1) & \tikzfig{Z-spider-rule} & \qquad & (HS1) & \tikzfig{H-spider-rule} \\ &&&& \\ (ZS2) & \tikzfig{Z-special} & & (HS2) & \tikzfig{H-identity} \\ &&&& \\ (BA1) & \tikzfig{ZX-bialgebra} & & (BA2) & \tikzfig{ZH-bialgebra} \\ &&&& \\ (M) & \tikzfig{multiply-rule} & & (U) & \tikzfig{unit-rule} \\ &&&& \\ (A) & \tikzfig{average-rule} & & (I) & \tikzfig{intro-rule} \\ &&&& \\ (O) & \tikzfig{ortho-rule} & & & \end{tabular} \caption{The rules of the ZH-calculus. Throughout, $m,n$ are nonnegative integers and $a,b$ are arbitrary complex numbers. The right-hand sides of both \textit{bialgebra} rules (BA1) and (BA2) are complete bipartite graphs on $(m+n)$ vertices, with an additional input or output for each vertex. The horizontal edges in equation (O) are well-defined because only the topology matters and we do not need to distinguish between inputs and outputs of generators. n.b. the rules (M), (A), (U), (I), and (O) are pronounced \textit{multiply}, \textit{average}, \textit{unit}, \textit{intro}, and \textit{ortho}, respectively.} \label{fig:ZH-rules} \end{figure} \subsection{The relations, and soundness}\label{sec:relations} The rules of the ZH-calculus are given in Figure~\ref{fig:ZH-rules}. We furthermore add one meta rule, often stated as ``only topology matters''. That is, two diagrams are considered equivalent if one can be deformed into the other. Furthermore, the Z-spiders and H-boxes are assumed to be \emph{symmetric} and \emph{undirected}: i.e.\ two inputs or outputs of the same generator can be swapped without changing the interpretation, and an input can be ``bent'' around to become an output, or conversely. Graphically: \ctikzfig{generator-symmetries} \medskip \begin{proposition} The ZH-calculus is sound. \end{proposition} \begin{proof} It is straightforward to check the symmetry properties for each generator and all of the rules in Figure~\ref{fig:ZH-rules} by concrete calculation. 
Soundness of the meta rule ``only the topology matters'' follows by considering the string diagrams as morphisms in a compact closed category~\cite{SelingerCPM}. \end{proof} \subsection{!-box notation}\label{sec:bang-boxes} Many of the calculations in this paper are greatly simplified by the use of \textit{!-box notation}~\cite{kissinger2014pattern}. A !-box (pronounced ``bang box'') in a string diagram represents a part of the diagram that is able to fan out arbitrarily. That is, the contents of a !-box, along with any wires into or out of the !-box, can be copied $n$ times for any non-negative integer $n$. For example, the !-box diagram below represents the following family of (concrete) string diagrams, one for each $n$: \[ \tikzfig{bang-box-example} \quad \longleftrightarrow \quad \left\{ \ \ \tikzfig{bang-box-example0}\ \ ,\quad \ \ \tikzfig{bang-box-example1}\ \ ,\quad \ \ \tikzfig{bang-box-example2}\ \ ,\quad \ \ \tikzfig{bang-box-example3}\ \ ,\quad \ \ \ldots\ \ \right\} \] All of the resulting string diagrams are well-defined because all of our generators can have arbitrary arities. We can also use !-boxes in diagram equations, as long as each !-box on the LHS has a corresponding !-box on the RHS, and the inputs/outputs in each !-box match. Such a rule represents a family of equations where each \textit{pair} of corresponding !-boxes is replicated $n$ times: \[ \tikzfig{unit-bangboxed} \quad \longleftrightarrow \quad \left\{ \ \ \tikzfig{unit-bb0}\ \ ,\quad \ \ \tikzfig{unit-bb1}\ \ ,\quad \ \ \tikzfig{unit-bb2}\ \ ,\quad \ \ \ldots\ \ \right\} \] Note the dashed box on the right-hand side of the first equation denotes an empty diagram. 
With this notation, the definition of grey spiders \eqref{eq:grey-spider} becomes \begin{equation}\label{eq:grey-spider-dfn} \tikzfig{X-spider-dfn-bb} \end{equation}\par\noindent Additionally, the rules (ZS1), (HS1), (BA1), and (BA2) from Figure~\ref{fig:ZH-rules} become \[ \text{(ZS1)}\quad \tikzfig{Z-spider-rule-bb} \qquad \text{(HS1)}\quad \tikzfig{H-spider-rule-bb} \qquad \text{(BA1)}\quad \tikzfig{ZX-bialgebra-bb} \qquad \text{(BA2)}\quad \tikzfig{ZH-bialgebra-bb} \] Using the rules in this form makes it straightforward to prove !-box generalisations of the rules (M), (U), (A), and (I). \begin{lemma}\label{lem:bb-rules} The ZH-calculus satisfies the following rules: \[ \text{(M!)}\;\; \tikzfig{multiply-rule-bb} \qquad \text{(U!)}\;\; \tikzfig{unit-bangboxed} \qquad \text{(A!)}\;\; \tikzfig{avg-lemma} \qquad \text{(I!)}\;\; \tikzfig{intro-rule-bangboxed} \] \end{lemma} \noindent This lemma is proved in Appendix~\ref{sec:bang-rules}. At this point, it is worth highlighting the special cases of (M!) and (U!) where the !-box is expanded $0$ times: \[ \tikzfig{scalar-mult} \qquad\qquad\qquad\qquad\qquad\qquad \tikzfig{scalar-rule} \] These rules enable us to multiply scalars at will, and in particular to eliminate scalars by multiplying by the inverse. Henceforth, we will use this fact without further comment in our proofs. In this paper, we use a mild, but very useful, extension of the usual !-box notation, which allows !-boxes to be indexed by the elements of a finite set. For example, indexing over the finite set $\mathbb B^2 := \{ 00, 01, 10, 11 \}$, we can write expressions such as: \[ \tikzfig{indexed-example} \ \ :=\ \ \ \tikzfig{index-example-rhs} \] This extends to equations in the obvious way: \[ \left(\ \tikzfig{index-example-rule}\ \right) \ \ := \ \ \left( \ \tikzfig{index-example-rule-inst}\ \right) \] where we require corresponding !-boxes on the LHS and RHS to be indexed by the \textit{same} finite set. 
Note that inputs and outputs of a copy associated with the index $x \in X$ on the LHS are matched with inputs and outputs of the \textit{same} copy on the RHS. We recover the behaviour of normal, un-labelled !-boxes by interpreting a !-box without a label as being indexed by an \textit{arbitrary} finite set, e.g. \[ \tikzfig{Z-spider-rule-bb} \qquad \longleftrightarrow \qquad \tikzfig{Z-spider-rule-bb-index} \quad \textrm{(for any finite sets $X$ and $Y$)} \] \section{Completeness} \label{s:completeness} We show that the ZH-calculus is complete by demonstrating the existence of a unique normal form for ZH-diagrams. It is first worth noting that, because we can turn inputs into outputs arbitrarily (cf. the beginning of section~\ref{sec:relations}), it suffices to consider diagrams which have only outputs. We call these \textit{states}. Concretely, these are interpreted as column vectors (i.e. kets). For states $\psi,\phi$, let $\psi * \phi$ be the \textit{Schur product} of $\psi$ and $\phi$ obtained by plugging the $i$-th output of $\psi$ and $\phi$ into \dotmult{white dot}, for each $i$: \ctikzfig{schur} It follows from (ZS1) that $*$ is associative and commutative, so we can write $k$-fold Schur products $\psi_1 * \psi_2 * \ldots * \psi_k$ without ambiguity. For any finite set $J$ with $|J| = k$, let $\prod_{j\in J} \psi_j$ be the $k$-fold Schur product. Let $\mathbb B^n$ be the set of all $n$-bitstrings. For any $\vec{b} := b_1\ldots b_n \in \mathbb B^n$, define the \textit{indexing map} $\iota_{\vec{b}}$ as follows: \begin{equation}\label{eq:iota-dfn} \iota_{\vec{b}} \; = \; \tikzfig{indexing-box} \; = \; \left(\greyphase{\neg}\right)^{1 - b_1} \ldots \left(\greyphase{\neg}\right)^{1 - b_n}. 
\end{equation} Then normal forms are given by the following $2^n$-fold Schur products: \begin{equation}\label{eq:nf-formula} \prod_{\vec{b} \in \mathbb B^n} \big( \iota_{\vec{b}} \circ H_n(a_{\vec{b}}) \big) \end{equation} where $H_n(a_{\vec{b}})$ is the arity-$n$ H-box (considered as a state) labelled by an arbitrary complex number $a_{\vec{b}}$. A normal form diagram can be seen as a collection of $n$ spiders, fanning out to $2^n$ H-boxes, each with a distinct configuration of NOT's corresponding to the $2^n$ bitstrings in $\mathbb B^n$. Diagrammatically, normal forms are: \[ \tikzfig{nf-bbox}\ \ :=\ \ \tikzfig{nf-picture} \] \begin{theorem}\label{thm:nf-unique} Normal forms are unique. In particular: \begin{equation}\label{eq:nf-concrete} \intf{ \, \prod_{\vec{b} \in \mathbb B^n} \big( \iota_{\vec{b}} \circ H_n(a_{\vec{b}}) \big) } = \sum_{\vec{b} \in \mathbb B^n} a_{\vec{b}} \ket{\vec{b}}. \end{equation} \end{theorem} \begin{proof} The map $\iota_{\vec b}$ is a permutation that acts on computational basis elements as $\ket{\vec c} \mapsto \ket{\vec c \oplus \vec b \oplus \vec 1}$. In particular, it sends the basis element $\ket{\vec 1}$ to $\ket{\vec b}$. Hence $\iota_{\vec b} \circ H_n(a_{\vec b})$ is a vector with $a_{\vec b}$ in the $\vec b$-th component and $1$ everywhere else. The Schur product of all such vectors indeed gives the RHS of~\eqref{eq:nf-concrete}. \end{proof} Since equation~\eqref{eq:nf-concrete} gives us a means of constructing any vector in $\mathbb C^{2^n}$, Theorem~\ref{thm:nf-unique} can also be seen as a proof of universality of the ZH-calculus, independent of the encoding into ZX we gave in Section~\ref{s:ZX-translation}. 
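The proof of Theorem~\ref{thm:nf-unique} can be mirrored numerically. The following sketch, with hypothetical helper names (\texttt{h\_state}, \texttt{iota}) and arbitrary sample amplitudes, builds each factor $\iota_{\vec b}\circ H_n(a_{\vec b})$ and takes their Schur product:

```python
import numpy as np
import itertools

NOT = np.array([[0.0, 1.0], [1.0, 0.0]])
I2 = np.eye(2)

def h_state(n, a):
    # n-ary H-box as a state: all amplitudes 1 except the last, which is a.
    v = np.ones(2**n, dtype=complex)
    v[-1] = a
    return v

def iota(bits):
    # indexing map iota_b = NOT^(1-b_1) (x) ... (x) NOT^(1-b_n)
    op = np.array([[1.0]])
    for b in bits:
        op = np.kron(op, I2 if b == 1 else NOT)
    return op

# Build the normal form for hypothetical amplitudes a_b, one per bitstring b.
n = 2
coeffs = {(0, 0): 3, (0, 1): -1j, (1, 0): 0.5, (1, 1): 2}
state = np.ones(2**n, dtype=complex)
for bits, a_b in coeffs.items():
    # iota_b . H_n(a_b) has a_b at position b and 1 everywhere else
    state = state * (iota(bits) @ h_state(n, a_b))   # Schur product

expected = np.array([coeffs[b] for b in itertools.product([0, 1], repeat=n)])
assert np.allclose(state, expected)   # matches the RHS of the theorem
```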
We now prove three lemmas which will assist in manipulating normal forms: \begin{lemma}\label{lem:X-copy} The NOT operator copies through white spiders: \ctikzfig{X-copy} \end{lemma} \begin{proof} Starting from the left-hand side, \[ \tikzfig{X-copy-proof} \qedhere \] \end{proof} \begin{lemma}\label{lem:iota-copy} The $\iota_{\vec{b}}$ operator copies through white spiders, i.e.\ for any $\vec{b}\in\mathbb B^n$: \ctikzfig{iota-copy} \end{lemma} \begin{proof} This follows immediately from Lemma~\ref{lem:X-copy} via the definition of $\iota_{\vec{b}}$ \eqref{eq:iota-dfn}. \end{proof} \begin{lemma}\label{lem:convolution-iota} The ZH-calculus enables the computation of the Schur product of two maps of the form $\iota_{\vec{b}}\circ H_n(x)$ and $\iota_{\vec{b}}\circ H_n(y)$ for any $\vec{b}\in\mathbb B^n$ and $x,y\in\mathbb C$: \ctikzfig{convolution-iota} \end{lemma} \begin{proof} Apply Lemma~\ref{lem:iota-copy}, followed by (M!). \end{proof} We will now show that normal form diagrams, when combined in various ways, can also be put into normal form. Let \tikzfig{nf} denote an arbitrary normal-form diagram. It is straightforward to see that permuting the outputs of a normal-form diagram merely interchanges the bits in the coefficients $a_{\vec b}$. Hence, normal forms are preserved under permutations of outputs. Furthermore: \begin{proposition}\label{prop:extension} A diagram consisting of a normal form diagram juxtaposed with \dotunit{white dot}\ can be brought into normal form using the rules of the ZH-calculus: \ctikzfig{extension} \end{proposition} \begin{proof} Starting from the left-hand side, which we expand using the indexed !-box notation, \ctikzfig{extension-proof} The last diagram is a normal form diagram with $n+1$ outputs, i.e.\ the desired result. \end{proof} \begin{proposition}\label{prop:convolution} The Schur product of two normal form diagrams can be brought into normal form using the rules of the ZH-calculus. 
\ctikzfig{convolution-nf} \end{proposition} \begin{proof} This follows from (ZS1) and Lemma~\ref{lem:convolution-iota}. \end{proof} \begin{corollary}\label{cor:tensor-product} The tensor product of two normal form diagrams can be brought into normal form using the rules of the ZH-calculus. \end{corollary} \begin{proof} A tensor product can be expressed as \ctikzfig{tensor-product} The diagram NF$_1$ and the leftmost $m$ copies of \dotunit{white dot}\ can be combined into one normal-form diagram with $(n+m)$ outputs by successive applications of Proposition~\ref{prop:extension}. Similarly, the rightmost $n$ copies of \dotunit{white dot}\ and NF$_2$ can be combined into one normal-form diagram with $(n+m)$ outputs. The desired result then follows by Proposition~\ref{prop:convolution}. \end{proof} \begin{remark}\label{rem:scalar-juxtaposition} Note that a single scalar H-box is a normal form diagram. Corollary~\ref{cor:tensor-product} thus implies that a diagram consisting of a normal form diagram juxtaposed with a scalar H-box can be brought into normal form. In the following proofs, we will therefore ignore scalars for simplicity: they can be added back in and then incorporated into the normal form without problems. \end{remark} We are now ready to prove the most difficult case, which is contraction. The majority of the work goes into proving Lemma~\ref{lem:big-disconnect}, which we call the Disconnect Lemma. It uses the (O) rule to disconnect the $2^n$-legged $\dotonly{white dot}\xspace$-spider arising from a contraction of a normal form into $2^{n-1}$ separate cups. It was proven with the help of the graphical proof assistant Quantomatic. Details and the full proof are given in Appendix~\ref{sec:disconnect}. 
\begin{proposition}\label{prop:contraction} The diagram resulting from applying \dotcounit{white dot}\ to an output of a normal form diagram can be brought into normal form: \ctikzfig{whitecounit-nf} \end{proposition} \begin{proof} Starting from an arbitrary normal form, with a \dotcounit{white dot} plugged into the rightmost output, we have: \[ \scalebox{0.8}{\tikzfig{contraction-thm-pf}} \] Then, we can apply Lemma~\ref{lem:big-disconnect}: \[ \scalebox{0.8}{\tikzfig{contraction-thm-pf2}} \] The final diagram is in normal form, which completes the proof. \end{proof} Our strategy will now be to show that any diagram can be decomposed into H-boxes, combined via the operations of extension, convolution, and contraction. This will give us a completeness proof, thanks to the following lemma. \begin{lemma}\label{lem:H-box-nf} Any H-box can be brought into normal form using the rules of the ZH-calculus. \end{lemma} \begin{proof} The matrix of an H-box $H_n(a)$ has 1's in every entry but the very last one. Hence, to bring an H-box into normal form, we just need to introduce `dummy' 1's for every other matrix entry. We demonstrate the principle using a binary H-box, but the argument is analogous for any other arity: \[ \tikzfig{H-nf-example} \qedhere \] \end{proof} To simplify the decomposition of diagrams into H-boxes, we prove a few corollaries. \begin{corollary}\label{cor:cup-nf} The diagram of a single cup can be brought into normal form: \[ \tikzfig{cup-nf} \] \end{corollary} \begin{proof} We can rewrite the cup as a pair of H-boxes using (HS2). This can then be written in terms of extension, convolution, and contraction as follows: \ctikzfig{binary-Z-decomposition} Hence, we can apply Lemma~\ref{lem:H-box-nf} and Propositions \ref{prop:extension}, \ref{prop:convolution}, and \ref{prop:contraction} to get a normal form. 
\end{proof} \begin{corollary}\label{cor:whitemult-nf} The diagram resulting from applying \dotmult{white dot}\ to a pair of outputs of a normal form diagram can be brought into normal form. \begin{equation}\label{eq:whitemult-nf} \tikzfig{whitemult-nf} \end{equation} \end{corollary} \begin{proof} Applying a \dotmult{white dot}\ to a pair of outputs has the same result as convolving with a cup, then contracting one of the outputs. That is, we can decompose \eqref{eq:whitemult-nf} as follows: \ctikzfig{whitemult-decomp} then apply Corollary \ref{cor:cup-nf} and Propositions \ref{prop:extension}, \ref{prop:convolution}, and \ref{prop:contraction}. \end{proof} \begin{corollary}\label{cor:cap-nf} Applying a cap to a normal form diagram results in another normal form diagram: \ctikzfig{cap-nf} \end{corollary} \begin{proof} Since the cap can be decomposed as $\dotcounit{white dot} \circ \dotmult{white dot}$, the result follows immediately from Corollary~\ref{cor:whitemult-nf} and Proposition~\ref{prop:contraction}. \end{proof} Thanks to Corollaries~\ref{cor:tensor-product} and \ref{cor:cap-nf}, we are able to turn any diagram of normal forms into a normal form. It only remains to show that the generators of the ZH-calculus can themselves be made into normal forms. We have already shown the result for H-boxes, so we only need the following. \begin{lemma}\label{lem:Z-spider-nf} Any Z-spider can be brought into normal form using the rules of the ZH-calculus. \end{lemma} \begin{proof} We can turn \dotunit{white dot}{} into an H-box using (U) and then bring it into normal form via Lemma~\ref{lem:H-box-nf}. By (ZS1), $\dotonly{white dot}\xspace = \tikzfig{dot-nf}$, which can be brought into normal form using (U), Lemma~\ref{lem:H-box-nf}, and Corollaries~\ref{cor:tensor-product} and \ref{cor:cap-nf}. This covers the cases of Z-spiders with 0 or 1 incident wires. 
We can decompose any Z-spider with $n\geq 2$ incident wires as a tensor product of $(n-1)$ cups, with each cup \dotmult{white dot}-ed with its neighbours: \ctikzfig{n-ary-Z-decomposition} If $n=2$, no \dotmult{white dot}\ is needed and the equality is by (ZS2) instead of (ZS1). In either case, the diagram can be brought into normal form by applying Corollaries~\ref{cor:tensor-product}, \ref{cor:cup-nf}, and \ref{cor:whitemult-nf}. \end{proof} \begin{theorem} The ZH-calculus is complete: for any ZH diagrams $D_1$ and $D_2$, if $\llbracket D_1 \rrbracket = \llbracket D_2 \rrbracket$ then $D_1$ is convertible into $D_2$ using the rules of the ZH-calculus. \end{theorem} \begin{proof} By Theorem~\ref{thm:nf-unique}, it suffices to show that any ZH diagram can be brought into normal form. Lemmas~\ref{lem:H-box-nf} and \ref{lem:Z-spider-nf} suffice to turn any generator into normal form. Corollary~\ref{cor:tensor-product} lets us turn any tensor product of generators into a normal form and Corollary~\ref{cor:cap-nf} lets us normalise an arbitrary wiring. \end{proof} \section{Applications and future work}\label{s:applications} We will now briefly survey some of the potential applications for the ZH-calculus. We begin with the simple observation that $n$-ary H-boxes let us generalise the usual string diagrammatic description of the controlled-Z gate (as in e.g. the ZX-calculus) to an $n$-controlled-Z gate: \ctikzfig{n-controlled-Z} Using the decomposition of controlled-Z gates above, a representation of graph states as ZX-diagrams was given in~\cite{DP1}, which in turn gave a fully diagrammatic derivation of the local complementation law for graph states~\cite{DP1} and a new procedure for extracting circuits from certain computations in the one-way model~\cite{DP2}. 
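As a matrix, the $n$-controlled-Z gate described above is diagonal with a single $-1$ entry on $\ket{1\cdots 1}$; a brief numeric illustration (ours, not from the paper), with the helper name \texttt{n\_controlled\_Z} hypothetical:

```python
import numpy as np

def n_controlled_Z(n):
    # diagonal gate on n qubits with -1 only on |1...1>, i.e. the
    # (n-1)-controlled Z gate (n = 2 gives CZ, n = 3 gives CCZ)
    d = np.ones(2**n)
    d[-1] = -1
    return np.diag(d)

CZ = n_controlled_Z(2)
CCZ = n_controlled_Z(3)
assert np.allclose(CZ @ CZ, np.eye(4))      # self-inverse, hence unitary
assert np.allclose(np.abs(CCZ), np.eye(8))  # diagonal with unit-modulus entries
```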
Passing from $\wedge Z$ to $\wedge^n Z$ gives an analogous representation for \textit{hypergraph states}: \[ \tikzfig{gs-graph-s} \qquad\textrm{\Large $\leadsto$}\qquad \tikzfig{gs-hypergraph} \] Indeed, this was the original motivation for considering H-boxes of arbitrary arity. Using a method analogous to the proofs in Appendix~\ref{sec:bang-rules}, we can routinely introduce !-boxes to known rules involving graph states (e.g. local complementation and feed-forward rules) to generalise them to hypergraph states. For example, introducing !-boxes to the local complementation rule enables complementing hyperedges of arbitrary arity overlapping on a single vertex: \[ \scalebox{0.8}{\tikzfig{lc1}} \ \ = \ \ \scalebox{0.8}{\tikzfig{lc2}} \qquad\textrm{\Large $\leadsto$}\qquad \scalebox{0.8}{\tikzfig{lc-bb1}} \ \ = \ \ \scalebox{0.8}{\tikzfig{lc-bb2}} \] This potentially gives a powerful new language and set of techniques for working with hypergraph states. Exploring these techniques, and their relationship to known rules for manipulating hypergraph states, is a topic of future work. In another direction, if we consider diagrams whose H-boxes are labelled by a fixed root of unity $\omega := \exp(i \pi/2^m)$, we obtain an encoding for unitary gates described by arbitrary \textit{phase polynomials}~\cite{moscamatroid}, i.e. gates of the form $U_{\phi} \ket{\vec b} = \omega^{\phi(\vec b)} \ket{\vec b}$ for some polynomial $\phi(\vec b)$ over $n$ boolean variables. These have a simple graphical representation, where Z-spiders represent variables and $\omega$-labelled H-boxes represent terms in the phase polynomial. For example: \[ \tikzfig{phase-poly} \qquad\qquad \textrm{where}\qquad \phi(\vec b) = {\color{purple} b_1 b_2} + {\color{purple} b_1 b_2 b_3} + {\color{purple} b_3 b_4} \] One can then straightforwardly show basic properties of these unitaries (e.g.~composition, commutation, and replacement of non-linear AND terms by linear XOR terms) using the rules of the ZH-calculus. 
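The phase-polynomial unitaries $U_\phi$ can be rendered concretely; the following sketch (ours, using the example polynomial above with $m=2$, so $\omega = e^{i\pi/4}$) confirms that $U_\phi$ is diagonal and unitary:

```python
import numpy as np
import itertools

m = 2                                  # omega = exp(i*pi/4), the T-gate phase
omega = np.exp(1j * np.pi / 2**m)

def phi(b1, b2, b3, b4):
    # the example phase polynomial from the text
    return b1*b2 + b1*b2*b3 + b3*b4

# U_phi |b> = omega^{phi(b)} |b> as a 16x16 diagonal matrix
diag = np.array([omega**phi(*bits)
                 for bits in itertools.product([0, 1], repeat=4)])
U = np.diag(diag)

assert np.allclose(U.conj().T @ U, np.eye(16))   # unitary
```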
The phase polynomial formalism for $m = 2$ has been used extensively in studying optimisation problems for Clifford+T circuits~\cite{MeetInMiddle,moscamatroid,AmyMoscaReedMuller,campbelltcount}, and it was recently shown that all diagonal gates in the Clifford hierarchy are of the form $U_\phi$, where the level of the hierarchy depends on $m$ and the degree of $\phi$~\cite{DiagHierarchy}. Gaining access to this phase polynomial structure diagrammatically could therefore yield new methods for quantum circuit optimisation and/or fault tolerant computation through automated diagram rewriting in tools like Quantomatic. \bigskip \noindent \textbf{Acknowledgements.} The authors would like to thank Simon Perdrix and Mariami Gachechiladze for the fruitful conversations in which the foundations of the ZH-calculus were developed. We are also grateful to Niel de Beaudrap for interesting discussions about applications of the ZH-calculus and to Sal Wolffs for careful reading of our proofs (and pointing out a major omission in Corollary~\ref{cor:whitemult-nf}). The research leading to these results has received funding from the European Research Council under the European Union's Seventh Framework Programme (FP7/2007-2013) ERC grant agreement no.\ 334828 (Backens) and 320571 (Kissinger). The paper reflects only the authors' views and not the views of the ERC or the European Commission. The European Union is not liable for any use that may be made of the information contained therein. \bigskip \bibliographystyle{eptcs}
\section{Introduction and main results} \label{Sec1} We consider the stationary Stokes system with variable coefficients \begin{equation} \label{171230@A1} \left\{ \begin{aligned} \mathcal{L} u+\nabla p=D_\alpha f_\alpha &\quad \text{in }\, \Omega,\\ \operatorname{div}u=g &\quad \text{in }\, \Omega, \end{aligned} \right. \end{equation} where $\Omega$ is a bounded domain in $\mathbb{R}^d$, $d\ge 2$. The differential operator $\mathcal{L}$ is in divergence form acting on column vector-valued functions $u=(u^1,\ldots,u^d)^\top$ as follows: $$ \mathcal{L} u=D_\alpha (A^{\alpha\beta}D_\beta u), $$ where the coefficients $ A^{\alpha\beta}= A^{\alpha\beta}(x)$ are $d\times d$ matrix-valued functions on $\Omega$, which satisfy the strong ellipticity condition, i.e., there is a constant $\lambda\in (0,1]$ such that for any $x\in \mathbb{R}^d$ and $ \xi_\alpha \in \mathbb{R}^d$, $\alpha\in \{1,\ldots,d\}$, we have $$ |A^{\alpha\beta}(x)|\le \lambda^{-1}, \quad \sum_{\alpha,\beta=1}^dA^{\alpha\beta}(x)\xi_\beta\cdot \xi_\alpha\ge \lambda \sum_{\alpha=1}^d|\xi_\alpha|^2. $$ In a recent paper \cite{arXiv:1803.05560}, we investigated minimal regularity assumptions on the coefficients and data for $W^{1,\infty}$ and $C^1$ regularity of weak solutions to the Stokes system in a ball and a half ball. One of the results in \cite{arXiv:1803.05560} is that every weak solution of \eqref{171230@A1} satisfies $$ (u,p)\in C^1(\Omega')^d\times C(\Omega'), \quad \Omega' \Subset \Omega $$ provided that the coefficients and data are of {\emph{Dini mean oscillation}}. We say that a function is of Dini mean oscillation if its $L^1$-mean oscillation satisfies the Dini condition; see Definition \ref{D2} for a more precise definition. This class of functions was first introduced by Dong and Kim in \cite{MR3620893} for $C^1$ and $C^2$ regularity of solutions to elliptic equations in divergence and nondivergence form. A local weak type-$(1,1)$ estimate for $(Du, p)$ was also proved in \cite{arXiv:1803.05560}. 
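The Dini condition $\int_0^a \omega(t)/t\,dt<\infty$ mentioned above can be illustrated numerically; this sketch is ours, not taken from the paper. A H\"older modulus $\omega(t)=t^{1/2}$ satisfies the condition, while the logarithmic modulus $\omega(t)=1/\log(e/t)$ does not: its truncated integrals grow like $\log\log(1/\varepsilon)$ as $\varepsilon\to 0$.

```python
import math

def dini_integral(omega, eps, a=1.0, n=100000):
    # crude geometric-grid Riemann sum for the truncated Dini integral
    # \int_eps^a omega(t)/t dt
    ts = [eps * (a / eps) ** (k / n) for k in range(n + 1)]
    total = 0.0
    for t0, t1 in zip(ts, ts[1:]):
        tm = math.sqrt(t0 * t1)          # geometric midpoint of the cell
        total += omega(tm) / tm * (t1 - t0)
    return total

holder = lambda t: t ** 0.5                     # Hölder modulus: Dini holds
log_mod = lambda t: 1.0 / math.log(math.e / t)  # decays too slowly: Dini fails

# \int_eps^1 t^{-1/2} dt = 2(1 - sqrt(eps)) stays bounded as eps -> 0,
bounded = dini_integral(holder, 1e-12)
# while for the logarithmic modulus the truncated integral keeps growing.
grow_6 = dini_integral(log_mod, 1e-6)
grow_12 = dini_integral(log_mod, 1e-12)

assert bounded < 2.05
assert grow_12 > grow_6 + 0.5
```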
In this paper, we extend the aforementioned results in \cite{arXiv:1803.05560} up to the boundary of the domain. More precisely, we prove that weak solutions of the Stokes system \eqref{171230@A1} with zero Dirichlet boundary condition satisfy \begin{equation} \label{180103@A1} (u,p)\in C^1(\overline{\Omega})^d\times C(\overline{\Omega}) \end{equation} provided that the coefficients and data are of Dini mean oscillation and that $\Omega$ has $C^{1,\rm{Dini}}$ boundary. As an application, we obtain a Schauder estimate and regularity results for weak solutions, which were studied in \cite[Theorem 1.3, p. 198]{MR0641818}. We also prove a global weak type-$(1,1)$ estimate for $(Du, p)$ under a stronger assumption on the coefficients and the boundary. Our argument in establishing \eqref{180103@A1} is based on the approach used in \cite{MR3747493}, where the authors proved boundary $C^1$-estimates for divergence type elliptic equations $$ D_i (a^{ij}D_j u)=\operatorname{div}f $$ with Dini mean oscillation coefficients on a domain having $C^{1,\rm{Dini}}$ boundary. The key ingredients are $L^q$-mean oscillation estimates with $q\in (0,1)$ for derivatives of solutions on the boundary. In \cite{MR3747493}, such mean oscillation estimates were obtained near a flat boundary, and the boundary $C^1$-estimate then follows from that on the half ball, since the boundary-flattening map preserves the regularity assumptions on the coefficients and data. However, this argument does not work for the Stokes system because, after the mapping, the pressure term and the divergence equation give rise to extra terms which are {\em not} of Dini mean oscillation. In this paper, we establish the $L^q$-mean oscillation estimate near a curved boundary. To this end, we fix a point $x_0=(x_{01},x_0')\in \partial \Omega$ and a coordinate system so that the $C^{1,\rm{Dini}}$ function $\chi$ defining $\partial\Omega$ near $x_0$ satisfies $|\nabla_{x'}\chi(x_0')|=0$. 
Then, in this coordinate system, we employ the boundary-flattening map to control the $L^q$-mean oscillation at $x_0$. Therefore, our mean oscillation estimate at the boundary point $x_0$ depends on the coordinate system and the $C^{1,\rm{Dini}}$ function $\chi$ associated with $x_0$; see Lemma \ref{171101@lem1}. This makes the arguments much more involved. The remainder of the paper is organized as follows. In the rest of this section, we state our main results along with some definitions and assumptions. In Section \ref{Sec2}, we provide the proofs of the main theorems. In the Appendix, we provide the proofs of some lemmas used in the paper. For any $x\in \overline{\Omega}$ and $r>0$, we denote $\Omega_r(x)=\Omega\cap B_r(x)$, where $B_r(x)$ is the usual Euclidean ball of radius $r$ centered at $x$. We denote $B_r^+(x)=B_r(x)\cap \mathbb{R}^d_+$, where $$ \mathbb{R}^d_+=\{x=(x_1,x')\in \mathbb{R}^d:x_1>0, \, x'\in \mathbb{R}^{d-1}\}. $$ For $0 < q \le \infty$, let $L^q(\Omega)$ be the space consisting of measurable functions on $\Omega$ that are $q$-th power integrable. We define $$ \tilde{L}^q(\Omega)=\{f\in L^q(\Omega): (f)_\Omega=0\}, $$ where $(f)_\Omega$ is the average of $f$ over $\Omega$, i.e., $$ (f)_\Omega=\dashint_\Omega f\,dx=\frac{1}{|\Omega|}\int_\Omega f\,dx. $$ For $1\le q\le \infty$, we denote by $W^{1,q}(\Omega)$ the usual Sobolev space and by $W^{1,q}_0(\Omega)$ the completion of $C^\infty_0(\Omega)$ in $W^{1,q}(\Omega)$. We define the H\"older semi-norm by $$ [f]_{C^{\gamma}(\Omega)}:=\sup_{\substack{x,y\in \Omega \\ x\neq y}} \frac{|f(x)-f(y)|}{|x-y|^\gamma}, \quad 0<\gamma<1.
$$ We say that a measurable function $\omega:(0,a]\to [0,\infty)$ is a Dini function provided that there are constants $c_1, c_2>0$ such that \begin{equation} \label{171006@eq1} c_1\omega(t)\le \omega(s)\le c_2\omega(t) \quad \text{whenever }\, \frac{t}{2}\le s\le t\le a \end{equation} and that $\omega$ satisfies the Dini condition \begin{equation} \label{180315@A1} \int_0^{a} \frac{\omega (t)}{t} \,dt<\infty. \end{equation} \begin{definition} \label{D2} Let $f\in L^1(\Omega)$. \begin{enumerate}[$(i)$] \item We say that $f$ is {\em{uniformly Dini continuous}} in $\Omega$ if the function $\varrho_{f}:(0,1] \to [0,\infty)$ defined by $$ \varrho_{f}(r):=\sup_{x_0\in \Omega} \sup_{x,y\in \Omega_r(x_0)}|f(x)-f(y)| $$ is a Dini function. \item We say that $f$ is of {\em{Dini mean oscillation}} in $\Omega$ if the function $\omega_{f}:(0, 1]\to [0,\infty)$ defined by $$ \omega_{f}(r):=\sup_{x\in \overline{\Omega}}\dashint_{\Omega_r(x)} \big|f-(f)_{\Omega_r(x)}\big|\,dy $$ satisfies the Dini condition $$ \int_0^{1} \frac{\omega_{f}(t)}{t}\,dt<\infty. $$ \end{enumerate} \end{definition} \begin{remark} \label{171020@rmk1} Assume that $|\Omega_r(x)|\ge A_0 r^d$ for all $x\in \overline{\Omega}$ and $0<r\le 1$. If $f$ is of Dini mean oscillation in $\Omega$, then $f$ is uniformly continuous in $\Omega$ with its modulus of continuity controlled by $\omega_f$. Moreover, since $\omega_{f}$ satisfies the condition \eqref{171006@eq1} with $(c_1,c_2)=(c_1,c_2)(d,A_0)$ (see, for instance, \cite[p. 495]{MR3615500}), we have that $\omega_{f}:(0,1]\to [0,\infty)$ is a Dini function. \end{remark} \begin{definition} \label{D3} Let $\Omega$ be a domain in $\mathbb{R}^d$. 
We say that $\Omega$ has $C^{1, \rm{Dini}}$ boundary if there exist a constant $R_0\in (0,1]$ and a Dini function $\varrho_0:(0, 1]\to [0,\infty)$ such that the following holds: For any $x_0=(x_{01},x_0')\in \partial \Omega$, there exist a $C^{1,\rm{Dini}}$ function (i.e., a $C^1$ function whose derivatives are uniformly Dini continuous) $\chi:\mathbb{R}^{d-1}\to \mathbb{R}$ and a coordinate system depending on $x_0$ such that \begin{equation} \label{171101@E1} \varrho_{\nabla_{x'}\chi}(r)\le \varrho_0(r) \quad \text{for all }\, r\in (0,R_0), \end{equation} and that in the new coordinate system, we have \begin{equation} \label{171101@E2} |\nabla_{x'}\chi(x_0')|=0, \quad \Omega_{R_0}(x_0)=\{x\in B_{R_0}(x_0): x_1>\chi(x')\}. \end{equation} \end{definition} Now, we state our main theorems. \begin{theorem} \label{M4} Let $\Omega$ be a bounded domain in $\mathbb{R}^d$ having $C^{1,\rm{Dini}}$ boundary. Assume that $(u,p)\in W^{1,2}_0(\Omega)^d\times \tilde{L}^2(\Omega)$ is the weak solution of \begin{equation} \label{171006@eq2} \left\{ \begin{aligned} \mathcal{L} u+\nabla p=D_\alpha f_\alpha \quad \text{in }\, \Omega,\\ \operatorname{div} u=g-(g)_\Omega \quad \text{in }\, \Omega, \end{aligned} \right. \end{equation} where $f_\alpha\in L^{2}(\Omega)^d$ and $g\in L^2(\Omega)$. \begin{enumerate}[$(a)$] \item If $A^{\alpha\beta}$, $f_\alpha$, and $g$ are of Dini mean oscillation in $\Omega$, then we have $$ (u,p)\in C^1(\overline{\Omega})^d\times C(\overline{\Omega}). $$ \item Let $0<\gamma_0<1$ and $\partial \Omega$ be $C^{1,\gamma_0}$, i.e., $\varrho_0 (r) = N r^{\gamma_0}$ for some constant $N>0$. If it holds that $[A^{\alpha\beta}]_{C^{\gamma_0}(\Omega)}+[f_\alpha]_{C^{\gamma_0}(\Omega)}+[g]_{C^{\gamma_0}(\Omega)}<\infty$, then we have $$ (u,p)\in C^{1,\gamma_0}(\overline{\Omega})^d\times C^{\gamma_0}(\overline{\Omega}).
$$ \end{enumerate} \end{theorem} \begin{remark} By the same reasoning as \cite[Remark 2.4]{arXiv:1803.05560}, one can extend the results in Theorem \ref{M4} to the solution of the system $$ \left\{ \begin{aligned} \mathcal{L} u+\nabla p=f+D_\alpha f_\alpha \quad \text{in }\, \Omega,\\ \operatorname{div} u=g-(g)_\Omega \quad \text{in }\, \Omega, \end{aligned} \right. $$ where $f\in L^q(\Omega)^d$ with $q>d$. \end{remark} In the next theorem, we prove the global weak type-$(1,1)$ estimate for $Du$ and $p$. \begin{theorem} \label{M5} Let $\Omega$ be a bounded domain in $\mathbb{R}^d$ having $C^{1,\rm{Dini}}$ boundary. Assume that $(u, p)\in W^{1,q}_0(\Omega)^d\times \tilde{L}^q(\Omega)$ is the weak solution of \eqref{171006@eq2}, where $f_\alpha\in L^q(\Omega)^d$, $g\in L^q(\Omega)$, and $q\in (1,\infty)$. If $A^{\alpha\beta}$ are of Dini mean oscillation in $\Omega$ and \begin{equation} \label{171127@B1} \varrho_0(r)+\omega_{A^{\alpha\beta}}(r)\le C_0 (\ln r)^{-2}, \quad \forall r\in (0,1/2), \end{equation} then for any $t>0$, we have \begin{equation} \label{180315@eq2} \big|\{x\in \Omega:|Du(x)|+|p(x)|>t\}\big|\le \frac{C}{t}\int_\Omega (|f_\alpha|+|g|)\,dx, \end{equation} where the constant $C$ depends only on $d$, $\lambda$, $\Omega$, $R_0$, $\varrho_0$, $\omega_{A^{\alpha\beta}}$, and $C_0$. \end{theorem} \begin{remark} \label{180419@rmk1} Under the hypothesis of Theorem \ref{M4} $(a)$, the unique solvability of the problem \eqref{171006@eq2} is available in the solution space $W^{1,2}_0(\Omega)^d\times \tilde{L}^2(\Omega)$ as well as $W^{1,q}_0(\Omega)^d\times \tilde{L}^q(\Omega)$ with $q\in (1,\infty)$, when $f_\alpha\in L^{q}(\Omega)^d$ and $g\in L^q(\Omega)$; see the proof of Theorem \ref{M5}. Therefore, in Theorems \ref{M4} and \ref{M5}, the weak solutions indeed exist. 
\end{remark} We present the $W^{1,q}$-estimate for a $W^{1,1}$-weak solution, which follows from Theorem \ref{M4}, the solvability result mentioned in Remark \ref{180419@rmk1}, and the argument in Brezis \cite{MR2465684} (see also \cite[Appendix]{MR2548032}). For a proof, one may refer to the proofs of \cite[Theorems 2.5 and 5.4]{arXiv:1803.05560}, where we proved the $W^{1,q}$-estimates for $W^{1,1}$-weak solutions to the Stokes system with partially Dini mean oscillation coefficients in a ball and a half ball. \begin{corollary} \label{180419@cor1} Let $q\in (1,\infty)$ and $\Omega$ be a bounded domain in $\mathbb{R}^d$ having $C^{1,\rm{Dini}}$ boundary. Assume that $(u, p)\in W^{1,1}_0(\Omega)^d\times \tilde{L}^1(\Omega)$ is a weak solution of \eqref{171006@eq2}, where $f_\alpha\in L^q(\Omega)^d$ and $g\in L^q(\Omega)$. If $A^{\alpha\beta}$ are of Dini mean oscillation in $\Omega$, then we have $(u, p)\in W^{1,q}_0(\Omega)^d\times \tilde{L}^q(\Omega)$ with the estimate $$ \|u\|_{W^{1,q}(\Omega)}+\|p\|_{L^q(\Omega)}\le C\big(\|u\|_{W^{1,1}(\Omega)}+\|p\|_{L^1(\Omega)}+\|f_\alpha\|_{L^q(\Omega)}+\|g\|_{L^q(\Omega)}\big), $$ where the constant $C$ depends only on $d$, $\lambda$, $\Omega$, $R_0$, $\varrho_0$, and $\omega_{A^{\alpha\beta}}$. \end{corollary} We finish this section with the remark that, by Corollary \ref{180419@cor1}, the results in Theorems \ref{M4} and \ref{M5} still hold under the assumption that $(u, p)\in W^{1,1}_0(\Omega)^d\times \tilde{L}^1(\Omega)$. \section{Proofs of the main theorems} \label{Sec2} Hereafter in the paper, we use the following notation. \begin{notation} For nonnegative (variable) quantities $A$ and $B$, we denote $A\lesssim B$ if there exists a generic positive constant $C$ such that $A \le CB$. We add subscript letters like $A\lesssim_{a,b} B$ to indicate the dependence of the implicit constant $C$ on the parameters $a$ and $b$.
\end{notation} \subsection{Proof of Theorem \ref{M4}} We shall derive a priori estimates for $Du$ and $p$ by assuming that $(u,p)\in C^1(\overline{\Omega})^d\times C(\overline{\Omega})$. The general case follows from a standard approximation argument. Throughout this proof, we use the following notation and properties. Recall that $\varrho_0$ is the Dini function from Definition \ref{D3}. \begin{enumerate}[i.] \item We set $q=1/2$ and $$ \Phi(x_0, r):=\inf_{\substack{\theta\in \mathbb{R} \\ \Theta\in \mathbb{R}^{d\times d}}}\bigg(\dashint_{\Omega_r(x_0)}|Du-\Theta|^q+|p-\theta|^q\,dx\bigg)^{1/q}. $$ \item For any $x\in \overline{\Omega}$ and $r\in (0,1]$, we have \begin{equation} \label{171127@eq3} r^d\lesssim_{d,R_0, \varrho_0} |\Omega_r(x)|. \end{equation} \item For $\gamma\in (0,1)$ and $\kappa\in (0,1/2]$, we define $$ \tilde{\varrho}_{0}(r):=\varrho_0(r)+\sum_{i=1}^\infty \kappa^{\gamma i}\big(\varrho_{0}(\kappa^{-i}r)[\kappa^{-i} r<1]+\varrho_{0}(1)[\kappa^{-i}r\ge 1]\big), $$ where we use the Iverson bracket notation, i.e., $[P]=1$ if $P$ is true and $[P]=0$ otherwise. By Lemma \ref{171024@lem1}, $\tilde{\varrho}_0:(0, 1]\to [0,\infty)$ is a Dini function satisfying \begin{equation} \label{171102@eq3} \tilde{\varrho}_0(t)\lesssim_{\varrho_0} \tilde{\varrho}_0(s)\lesssim_{\varrho_0}\tilde{\varrho}_0(t) \quad \text{whenever }\, \frac{t}{2}\le s\le t\le 1. \end{equation} Moreover, by the comparison principle for Riemann integrals, we have $$ \sum_{j=0}^\infty \tilde{\varrho}_0(\kappa^j r)\lesssim_{\varrho_0,\kappa} \int_0^r \frac{\tilde{\varrho}_0(t)}{t}\,dt<\infty $$ for all $r\in (0,1]$. \item For $\gamma\in (0,1)$, $\kappa\in (0,1/2]$, and $f\in L^1(\Omega)$, we denote $$ \tilde{\omega}_{f}(r):=\sum_{i=1}^\infty \kappa^{\gamma i}\big(\omega_{f}(\kappa^{-i}r)[\kappa^{-i} r<1]+\omega_{f}(1)[\kappa^{-i}r\ge 1]\big).
$$ By Remark \ref{171020@rmk1}, \eqref{171127@eq3}, and Lemma \ref{171024@lem1}, if $f$ is of Dini mean oscillation in $\Omega$, then $\tilde{\omega}_{f}:(0, 1]\to [0,\infty)$ is a Dini function satisfying $$ \tilde{\omega}_{f}(t)\lesssim_{d,R_0, \varrho_0} \tilde{\omega}_{f}(s)\lesssim_{d,R_0, \varrho_0} \tilde{\omega}_{f}(t) \quad \text{whenever }\,\frac{t}{2}\le s\le t\le 1. $$ Moreover, we have \begin{equation} \label{171229@eq1a} \sum_{j=0}^\infty \tilde{\omega}_{f}(\kappa^j r)\lesssim_{d ,R_0, \varrho_0, \kappa} \int_0^{r} \frac{\tilde{\omega}_{f}(t)}{t}\,dt <\infty \end{equation} for all $r\in (0,1]$. \end{enumerate} To prove Theorem \ref{M4}, we will use the following three lemmas related to $L^q$-mean oscillation estimates for $Du$ and $p$. The first lemma concerns interior estimates and is an adaptation of \cite[Lemma 4.3]{arXiv:1803.05560}. \begin{lemma} \label{171228@lem1} Let $x_0\in \Omega$ and $\gamma\in (0,1)$. Under the same hypotheses as in Theorem \ref{M4} $(a)$, there exists a constant $\kappa_1\in (0,1/2]$ depending only on $d$, $\lambda$, and $\gamma$, such that the following hold. \begin{enumerate}[$(i)$] \item For any $0<\kappa\le \kappa_1$ and $0<r\le \min\{1,\operatorname{dist}(x_0, \partial \Omega)/4\}$, we have $$ \begin{aligned} \sum_{j=0}^\infty \Phi(x_0, \kappa^j r)&\lesssim_{d,\lambda,\gamma,R_0, \varrho_0,\kappa} \Phi(x_0, r)\\ &\quad +\|Du\|_{L^\infty(B_r(x_0))}\int_0^r \frac{\tilde{\omega}_{A^{\alpha\beta}}(t)}{t}\,dt+\int_0^r \frac{\tilde{\omega}_{f_\alpha}(t)+\tilde{\omega}_{g}(t)}{t}\,dt. \end{aligned} $$ \item For any $0<\kappa\le \kappa_1$ and $0<\rho\le r\le \min\{1,\operatorname{dist}(x_0, \partial \Omega)/4\}$, we have $$ \Phi(x_0, \rho) \lesssim_{d,\lambda,\gamma,\kappa} \left(\frac{\rho}{r}\right)^\gamma \Phi(x_0, r)+ \|Du\|_{L^\infty(B_r(x_0))}\tilde{\omega}_{A^{\alpha\beta}}(\rho)+\tilde{\omega}_{f_\alpha}(\rho)+\tilde{\omega}_g (\rho).
$$ \end{enumerate} \end{lemma} \begin{proof} By following the proof of \cite[Lemma 4.3]{arXiv:1803.05560}, we see that $$ \begin{aligned} \Phi(x_0, \kappa r) &\le C_0 \kappa \Phi(x_0, r)\\ &\quad +C_0\kappa^{-d/q}\big(\|Du\|_{L^\infty (B_r(x_0))}\omega_{A^{\alpha\beta}}(r)+\omega_{f_\alpha}(r)+\omega_g(r)\big) \end{aligned} $$ for all $0<\kappa\le 1/2$ and $0<r\le\min\{1,\operatorname{dist}(x_0, \partial \Omega)/4\}$, where $C_0=C_0(d,\lambda)>0$. We take $\kappa_1=\kappa_1(d,\lambda,\gamma)\in (0, 1/2]$ such that $C_0 \kappa_1^{1-\gamma}\le 1$. Then for any $0<\kappa\le \kappa_1$, we have $$ \Phi(x_0, \kappa r) \le \kappa^\gamma \Phi(x_0, r)+C\big( \|Du\|_{L^\infty (B_r(x_0))}\omega_{A^{\alpha\beta}}(r) + \omega_{f_\alpha}(r)+\omega_g(r)\big), $$ where $C=C(d,\lambda,\gamma,\kappa)$. By iterating, we obtain for $j\in \{1,2,\ldots\}$ that \begin{equation} \label{171229@eq2} \begin{aligned} \Phi(x_0, \kappa^j r)&\le \kappa^{\gamma j} \Phi(x_0, r)\\ &\quad +C\big(\|Du\|_{L^\infty(B_r(x_0))}\tilde{\omega}_{A^{\alpha\beta}}(\kappa^j r)+\tilde{\omega}_{f_\alpha}(\kappa^j r)+\tilde{\omega}_g(\kappa^j r)\big), \end{aligned} \end{equation} where we used the fact that \begin{equation} \label{171229@eq3} \sum_{i=1}^j \kappa^{\gamma(i-1)}\omega_{\bullet}(\kappa^{j-i}r)\le \kappa^{-\gamma} \tilde{\omega}_{\bullet}(\kappa^j r). \end{equation} Summing both sides of \eqref{171229@eq2} over $j=0,1,2,\ldots$ and using \eqref{171229@eq1a}, we see that assertion $(i)$ holds. For a given $\rho\in (0, r]$, let $j$ be an integer such that $$ \kappa^{j+1}<\frac{\rho}{r}\le \kappa^j. $$ If $j=0$, then obviously we have $$ \Phi(x_0, \rho)\lesssim_{d,\kappa} \Phi(x_0, r)\lesssim_{d,\kappa,\gamma} \left(\frac{\rho}{r}\right)^{\gamma}\Phi(x_0,r).
$$ On the other hand, if $j\ge 1$, then by \eqref{171229@eq2} with $\rho$ in place of $\kappa^j r$, we get $$ \begin{aligned} \Phi(x_0, \rho)&\lesssim \kappa^{\gamma j}\Phi(x_0, \kappa^{-j}\rho)+\|Du\|_{L^\infty(B_{\kappa^{-j}\rho}(x_0))}\tilde{\omega}_{A^{\alpha\beta}}(\rho)+\tilde{\omega}_{f_\alpha}(\rho)+\tilde{\omega}_g(\rho)\\ &\lesssim \left(\frac{\rho}{r}\right)^\gamma\Phi(x_0, r)+\|Du\|_{L^\infty(B_r(x_0))}\tilde{\omega}_{A^{\alpha\beta}}(\rho)+\tilde{\omega}_{f_\alpha}(\rho)+\tilde{\omega}_g(\rho). \end{aligned} $$ Therefore, assertion $(ii)$ holds. The lemma is proved. \end{proof} In the next lemma, we prove $L^q$-mean oscillation estimates for linear combinations of $Du$ and $p$ at $x_0\in \partial \Omega$. We note that the $L^q$-mean oscillation and its estimates depend on the coordinate system associated with $x_0$. \begin{lemma} \label{171101@lem1} Let $x_0\in \partial \Omega$ and $\gamma\in (0,1)$. Let us fix a $C^{1,\rm{Dini}}$ function $\chi:\mathbb{R}^{d-1}\to \mathbb{R}$ and a coordinate system associated with $x_0$ satisfying \eqref{171101@E1} and \eqref{171101@E2} in Definition \ref{D3}. In this coordinate system, we define $$ \Psi(x_0, r):=\inf_{\substack{\theta\in \mathbb{R} \\ \Theta\in \mathbb{R}^{d}}}\bigg(\dashint_{\Omega_r(x_0)}|D_1 u-\Theta|^q +\sum_{i=2}^d|D_i \chi D_1 u+D_i u|^q+|p-\theta|^q\,dx\bigg)^{1/q}. $$ Then, under the same hypotheses as in Theorem \ref{M4} $(a)$, there exist constants $$ R_1=R_1(\varrho_0, R_0)\in (0, R_0/4) \quad \text{and}\quad \kappa_{2}=\kappa_{2}(d,\lambda,\gamma, R_0,\varrho_0)\in (0,1/8] $$ such that the following hold.
\begin{enumerate}[$(i)$] \item For any $0<\kappa\le \kappa_2$ and $0<r\le 2R_1$, we have $$ \begin{aligned} \sum_{j=0}^\infty \Psi(x_0, \kappa^j r) &\lesssim_{d,\lambda,\gamma, R_0,\varrho_0 ,\kappa} \Psi(x_0, r)\\ &\quad + \big(\|Du\|_{L^\infty(\Omega_r(x_0))}+\|p\|_{L^\infty(\Omega_r(x_0))}\big)\int_0^r \frac{\tilde{\varrho}_0(t)+\tilde{\omega}_{A^{\alpha\beta}}(t)}{t}\,dt\\ &\quad +\|f_\alpha\|_{L^\infty(\Omega_r(x_0))}\int_0^r \frac{\tilde{\varrho}_0(t)}{t}\,dt +\int_0^r \frac{\tilde{\omega}_{f_\alpha}(t)+\tilde{\omega}_g(t)}{t}\,dt. \end{aligned} $$ \item For any $0<\kappa\le \kappa_2$ and $0<\rho\le r\le 2R_1$, we have $$ \begin{aligned} \Psi(x_0,\rho) &\lesssim_{d,\lambda,\gamma, R_0,\varrho_0,\kappa} \left(\frac{\rho}{r}\right)^\gamma \Psi(x_0, r)\\ &\quad +\big(\|Du\|_{L^\infty(\Omega_r(x_0))}+\|p\|_{L^\infty(\Omega_r(x_0))}\big) \big(\tilde{\varrho}_0(\rho)+\tilde{\omega}_{A^{\alpha\beta}}(\rho)\big)\\ &\quad +\|f_\alpha\|_{L^\infty(\Omega_r(x_0))}\tilde{\varrho}_0(\rho)+\tilde{\omega}_{f_\alpha}(\rho)+\tilde{\omega}_g(\rho). \end{aligned} $$ \end{enumerate} \end{lemma} \begin{proof} Recall that we use $0=(0,0')$, $x=(x_1,x')$, and $y=(y_1,y')$ to denote points in $\mathbb{R}^d$. Without loss of generality, we assume that $x_0=0\in \partial \Omega$ and $\chi(0')=0$. We denote $B_R=B_R(0)$, $B_R^+=B_R^+(0)$, and $\Omega_R=\Omega_R(0)$. Since $|\nabla_{x'}\chi(0')|=0$, it follows from \eqref{171101@E1} that there exists a constant $R_1=R_1(\varrho_0,R_0)\in (0, R_0)$ satisfying \begin{equation} \label{171101@eq4} |\nabla_{x'}\chi(x')|\le 1/2 \quad \text{if }\, |x'|\le R_1. \end{equation} Let $\Gamma(y)=(y_1+\chi(y'),y')$ and $\Lambda(x)=\Gamma^{-1}(x)=(x_1-\chi(x'),x')$. We divide the proof into several steps. 
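Before proceeding, we record an elementary observation that will be used implicitly in the changes of variables below: the maps $\Gamma$ and $\Lambda$ are volume preserving. Indeed,
$$
D\Gamma(y)=\begin{pmatrix} 1 & \nabla_{y'}\chi(y')^{\top}\\ 0 & I_{d-1} \end{pmatrix}, \quad \det D\Gamma(y)\equiv 1,
$$
and similarly $\det D\Lambda(x)\equiv 1$, so integrals and averages transform under $x=\Gamma(y)$ without Jacobian factors.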
{\em{Step 1}.} In this step, we prove that \begin{equation} \label{171101@D2} B_{R_1/2}^+\subset \Lambda (\Omega_{R_1}), \end{equation} \begin{equation} \label{171101@D2a} \Omega_{r/2}\subset \Gamma(B_{r}^+) \subset \Omega_{2r} \quad \text{for }\, r\in (0, R_1/2]. \end{equation} To prove \eqref{171101@D2}, assume that $y\in B_{R_1/2}^+$. Then we have $$ \begin{aligned} |y_1+\chi(y')|^2+|y'|^2&\le 2|y_1|^2+2|\chi(y')|^2+|y'|^2\\ &\le |y|^2+|y_1|^2+2|\chi(y')|^2\\ &< \frac{R_1^2}{2} +2|\chi(y')|^2. \end{aligned} $$ Notice from \eqref{171101@eq4} that $$ |\chi(y')|^2=|\chi(y')-\chi(0')|^2\le \frac{|y'|^2}{4}<\frac{R_1^2}{4}. $$ Combining the above two inequalities, we have $|y_1+\chi(y')|^2+|y'|^2< R_1^2$, which implies that $y\in \Lambda(\Omega_{R_1})$. Thus we get \eqref{171101@D2}. Using a similar argument, we have \eqref{171101@D2a}. {\em{Step 2}.} In this step, we use the standard technique of flattening the boundary. We denote $$ v(y)=u(\Gamma(y)), \quad \pi(y)=p(\Gamma(y)), \quad b(y)=(0, D_2 \chi(y'),\ldots,D_d\chi(y'))^{\top}. $$ Since $(u,p)$ satisfies \eqref{171006@eq2}, we have that $$ \left\{ \begin{aligned} D_\alpha(\mathcal{A}^{\alpha\beta}D_\beta v)+\nabla \pi=D_\alpha F_\alpha +D_1 (\pi b) &\quad \text{in }\, B_{R_1}^+,\\ \operatorname{div} v =G + D_1v \cdot b &\quad \text{in }\, B_{R_1}^+,\\ v=0 &\quad \text{on }\, B_{R_1}\cap \partial \mathbb{R}^d_+, \end{aligned} \right. $$ where we set $$ \mathcal{A}^{\alpha\beta}=D_\ell \Lambda^\beta D_k \Lambda^\alpha {A}^{k\ell}(\Gamma), \quad F_\alpha=D_k \Lambda^\alpha {f}_k(\Gamma), \quad G=g(\Gamma)-(g)_\Omega. $$ Let $0<r\le R_1/4$. For a given function $f$, we denote $ \overline{f}=(f)_{B_r^+}$. 
Define an elliptic operator $\mathcal{L}_0$ by $$ \mathcal{L}_0 v=D_\alpha(\overline{\mathcal{A}^{\alpha\beta}}D_\beta v), $$ and observe that $(v,\pi)$ satisfies $$ \left\{ \begin{aligned} \mathcal{L}_0 v+\nabla \pi=D_\alpha \mathcal{F}_\alpha &\quad \text{in }\, B_{R_1}^+,\\ \operatorname{div} v =\mathcal{G} + \overline{G} &\quad \text{in }\, B_{R_1}^+,\\ v=0 &\quad \text{on }\, B_{R_1}\cap \partial \mathbb{R}^d_+, \end{aligned} \right. $$ where $$ \mathcal{F}_\alpha=\big(\overline{\mathcal{A}^{\alpha\beta}}-\mathcal{A}^{\alpha\beta}\big)D_\beta v+F_\alpha-\overline{F_\alpha} + \delta_{1\alpha} \pi b, \quad \mathcal{G}=G-\overline{G} +D_1 v\cdot b. $$ Here, $\delta_{ij}$ is the usual Kronecker delta symbol. We decompose \begin{equation} \label{171101@D1a} (v,\pi)=(v_1,\pi_1)+(v_2,\pi_2), \end{equation} where $(v_1,\pi_1)\in W^{1,2}_0(B_{4r}^+)^d\times \tilde{L}^2(B_{4r}^+)$ is the weak solution of the problem $$ \left\{ \begin{aligned} \mathcal{L}_0 v_1+\nabla \pi_1=D_\alpha(I_{B_r^+}\mathcal{F}_\alpha) &\quad \text{in }\, B_{4r}^+,\\ \operatorname{div} v_1=I_{B_r^+}\mathcal{G}-\big(I_{B_r^+}\mathcal{G}\big)_{B_{4r}^+} &\quad \text{in }\, B_{4r}^+. \end{aligned} \right. $$ Here, $I_{B_r^+}$ is the characteristic function. By \cite[Lemma 6.5]{arXiv:1803.05560} with scaling, we have for $t>0$ that $$ \big|\{y\in B_r^+:|Dv_1(y)|+|\pi_1(y)|>t\}\big|\lesssim_{d,\lambda} \frac{1}{t}\int_{B_r^+} ( |\mathcal{F}_\alpha|+|\mathcal{G}|)\,dy. $$ This inequality implies that for $\tau>0$, $$ \begin{aligned} &\int_{B_r^+}(|Dv_1|+|\pi_1|)^q\,dy\\ &=\bigg(\int_0^\tau+\int_\tau^\infty \bigg)q t^{q-1} \big|\{y\in B_r^+ : |Dv_1(y)|+|\pi_1(y)|>t\}\big|\,dt\\ &\lesssim |B_r^+|\tau^q +\bigg(\int_{B_r^+}|\mathcal{F}_\alpha|+|\mathcal{G}|\,dy\bigg)\tau^{q-1}. 
\end{aligned} $$ By optimizing over $\tau$ and taking the $q$-th root, we have \begin{equation} \label{171101@D1} \bigg(\dashint_{B_r^+} (|Dv_1|+|\pi_1|)^q\,dy\bigg)^{1/q}\lesssim \dashint_{B_r^+} (|\mathcal{F}_\alpha|+|\mathcal{G}|)\,dy. \end{equation} Since $(v_2,\pi_2)=(v,\pi)-(v_1,\pi_1)$ satisfies $$ \left\{ \begin{aligned} \mathcal{L}_0 v_2+\nabla \pi_2=0 &\quad \text{in }\, B_r^+,\\ \operatorname{div} v_2=\big(I_{B_r^+}\mathcal{G}\big)_{B_{4r}^+}+\overline{G} &\quad \text{in }\, B^+_r,\\ v_2=0 &\quad \text{on }\, B_r\cap \partial \mathbb{R}^d_+, \end{aligned} \right. $$ by \cite[Lemma 6.3]{arXiv:1803.05560}, we have for any $\kappa\in (0,1/2]$, \begin{equation} \label{171101@D1b} \begin{aligned} &\bigg(\dashint_{B_{\kappa r}^+}\big|D_1v_2-(D_1v_2)_{B_{\kappa r}^+}\big|^q+|D_{y'}v_2|^q+\big|\pi_2-(\pi_2)_{B_{\kappa r}^+}\big|^q\,dy\bigg)^{1/q}\\ &\lesssim_{d,\lambda} \kappa \inf_{\Theta\in \mathbb{R}^d}\bigg(\dashint_{B_r^+} |D_1v_2-\Theta |^q+|D_{y'}v_2|^q\,dy\bigg)^{1/q}. \end{aligned} \end{equation} Observe from \eqref{171101@D1a} that $$ \begin{aligned} &\bigg(\dashint_{B_{\kappa r}^+} \big|D_1v-(D_1v_2)_{B_{\kappa r}^+}\big|^q+|D_{y'}v|^q+\big|\pi-(\pi_2)_{B_{\kappa r}^+}\big|^q\,dy\bigg)^{1/q}\\ &\lesssim \bigg(\dashint_{B_{\kappa r}^+} \big|D_1v_2-(D_1v_2)_{B_{\kappa r}^+}\big|^q+|D_{y'}v_2|^q +\big|\pi_2-(\pi_2)_{B_{\kappa r}^+}\big|^q\,dy \bigg)^{1/q}\\ &\quad +\bigg(\dashint_{B_{\kappa r}^+} |Dv_1|^q+|\pi_1|^q\,dy\bigg)^{1/q}. \end{aligned} $$ Using this inequality together with \eqref{171101@D1} and \eqref{171101@D1b}, we obtain that $$ \begin{aligned} &\inf_{\substack{\theta\in \mathbb{R} \\ \Theta\in \mathbb{R}^d}}\bigg(\dashint_{B_{\kappa r}^+} |D_1v-\Theta |^q+|D_{y'}v|^q+|\pi-\theta |^q\,dy\bigg)^{1/q}\\ &\lesssim_{d,\lambda} \kappa\inf_{\Theta\in \mathbb{R}^d} \bigg(\dashint_{B_r^+}|D_1 v-\Theta|^q+|D_{y'}v|^q\,dy\bigg)^{1/q} + \kappa^{-d/q}\dashint_{B_r^+}(|\mathcal{F}_\alpha|+|\mathcal{G}|)\,dy. 
\end{aligned} $$ Thus, from the definitions of $\mathcal{F}_\alpha$ and $\mathcal{G}$, and the fact that $$ \dashint_{B_r^+}|b|\,dy=\dashint_{B_r^+}|b-b(0)|\,dy\le \varrho_0(r), $$ we get \begin{equation} \label{171101@D4} \begin{aligned} &\inf_{\substack{\theta\in \mathbb{R} \\ \Theta\in \mathbb{R}^d}}\bigg(\dashint_{B_{\kappa r}^+} |D_1v-\Theta |^q+|D_{y'}v|^q+|\pi-\theta |^q\,dy\bigg)^{1/q}\\ &\lesssim \kappa\inf_{\Theta\in \mathbb{R}^d} \bigg(\dashint_{B_r^+}|D_1 v-\Theta|^q+|D_{y'}v|^q\,dy\bigg)^{1/q}\\ &\quad + \kappa^{-d/q}\big(\|Dv\|_{L^\infty(B_r^+)}+\|\pi\|_{L^\infty(B_r^+)}\big)\bigg(\varrho_0(r)+\dashint_{B_r^+} \big|\mathcal{A}^{\alpha\beta}-\overline{\mathcal{A}^{\alpha\beta}}\big|\,dy\bigg)\\ &\quad + \kappa^{-d/q}\dashint_{B_r^+} \big( \big|F_\alpha-\overline{F_\alpha} \big|+\big|G-\overline{G}\big| \big)\,dy. \end{aligned} \end{equation} We note that $$ \sup_{y,z\in B_r^+}|D\Lambda(y)-D\Lambda(z)|\le \varrho_0(r), \quad \sup_{y\in B_r^+} |D\Lambda(y)|\le 1/2. $$ Using this and following the proof of \cite[Lemma 2.1]{MR3747493}, we have $$ \dashint_{B_r^+} \big|\mathcal{A}^{\alpha\beta}-\overline{\mathcal{A}^{\alpha\beta}}\big|\,dy\lesssim_{d,\lambda} \varrho_0(r)+\dashint_{B_r^+}\big|A^{\alpha\beta}(\Gamma)-\overline{A^{\alpha\beta}(\Gamma)}\big|\,dy. $$ Hence, by the change of variables, \eqref{171101@D2a}, and $\varrho_0(r)\lesssim_{\varrho_0} \varrho_0(2r)$, we see that $$ \dashint_{B_r^+} \big|\mathcal{A}^{\alpha\beta}-\overline{\mathcal{A}^{\alpha\beta}}\big|\,dy\lesssim_{d,\lambda,\varrho_0} \varrho_0(2r)+ \omega_{A^{\alpha\beta}}(2r). $$ Similarly, we have $$ \dashint_{B_r^+}\big(\big|F_\alpha-\overline{F_\alpha}\big|+\big|G-\overline{G}\big|\big)\,dy\lesssim_{d,\varrho_0} \|f_\alpha\|_{L^\infty(\Omega_{2r})}\varrho_0(2r)+\omega_{f_\alpha}(2r)+\omega_g(2r). 
$$ Therefore, using the change of variables, \eqref{171127@eq3}, and \eqref{171101@D2a}, we get from \eqref{171101@D4} that \begin{equation} \label{171101@D5} \begin{aligned} &\inf_{\substack{\theta\in \mathbb{R} \\ \Theta\in \mathbb{R}^d}}\left(\dashint_{\Omega_{\kappa r/2}}|D_1 u-\Theta|^q+\sum_{i=2}^d|D_i \chi D_1u +D_i u|^q+|p-\theta|^q\,dx\right)^{1/q}\\ &\lesssim_{d,\lambda, R_0, \varrho_0} \kappa\inf_{\Theta\in \mathbb{R}^d} \left(\dashint_{\Omega_{2r}}|D_1 u-\Theta|^q+\sum_{i=2}^d|D_i \chi D_1u +D_i u|^q\,dx\right)^{1/q}\\ &\quad +\kappa^{-d/q} \big(\|Du\|_{L^\infty(\Omega_{2r})}+\|p\|_{L^\infty(\Omega_{2r})}\big)(\varrho_0(2r)+\omega_{A^{\alpha\beta}}(2r))\\ &\quad +\kappa^{-d/q}(\|f_\alpha\|_{L^\infty(\Omega_{2r})}\varrho_0(2r)+\omega_{f_\alpha}(2r)+\omega_{g}(2r)) \end{aligned} \end{equation} for $0<r\le R_1/4$ and $\kappa\in (0,1/2]$. {\em{Step 3}.} We are ready to prove the lemma. By replacing $\kappa/4$, $2r$, and $R_1/2$ by $\kappa$, $r$, and $2R_1$ in \eqref{171101@D5}, we obtain for $0<r\le 2R_1$ and $\kappa\in (0,1/8]$ that $$ \begin{aligned} \Psi(0,\kappa r) &\le C_0 \kappa \Psi(0, r)\\ &\quad + C_0\kappa^{-d/q} \big(\|Du\|_{L^\infty(\Omega_{r})}+\|p\|_{L^\infty(\Omega_{r})}\big)(\varrho_0(r)+\omega_{A^{\alpha\beta}}(r))\\ &\quad +C_0\kappa^{-d/q}(\|f_\alpha\|_{L^\infty(\Omega_r)}\varrho_0(r)+\omega_{f_\alpha}(r)+\omega_{g}(r)), \end{aligned} $$ where $C_0=C_0(d,\lambda, R_0,\varrho_0)>0$. We take $\kappa_2=\kappa_2(d,\lambda,\gamma,R_0,\varrho_0)\in (0,1/8]$ so that $C_0\kappa_2^{1-\gamma}\le 1$. Then for any $0<\kappa\le \kappa_2$, we have $$ \begin{aligned} \Psi(0,\kappa r) & \le \kappa^{\gamma} \Psi(0, r)+ C \big(\|Du\|_{L^\infty(\Omega_{r})}+\|p\|_{L^\infty(\Omega_{r})}\big)(\varrho_0(r)+\omega_{A^{\alpha\beta}}(r))\\ &\quad +C(\|f_\alpha\|_{L^\infty(\Omega_r)}\varrho_0(r)+\omega_{f_\alpha}(r)+\omega_{g}(r)), \end{aligned} $$ where $C=C(d,\lambda,\gamma, R_0,\varrho_0,\kappa)>0$.
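We also record the elementary fact behind the iteration that follows (the same bookkeeping was used in the proof of Lemma \ref{171228@lem1}): if nonnegative numbers $a_j$, $b_j$ satisfy $a_{j+1}\le \kappa^{\gamma}a_j+b_j$ for all $j\ge 0$, then, by induction,
$$
a_j\le \kappa^{\gamma j}a_0+\sum_{i=1}^{j}\kappa^{\gamma(i-1)}b_{j-i}, \quad j\ge 1.
$$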
By iterating, we obtain for $j\in \{1,2,\ldots\}$ that \begin{equation} \label{171229@eq3a} \begin{aligned} \Psi(0, \kappa^j r) & \le \kappa^{\gamma j} \Psi(0, r) \\ &\quad + C \big(\|Du\|_{L^\infty(\Omega_{r})}+\|p\|_{L^\infty(\Omega_{r})}\big)(\tilde{\varrho}_0(\kappa^j r)+\tilde{\omega}_{A^{\alpha\beta}}(\kappa^j r))\\ &\quad +C(\|f_\alpha\|_{L^\infty(\Omega_r)} \tilde{\varrho}_0(\kappa^j r)+\tilde{\omega}_{f_\alpha}(\kappa^j r)+\tilde{\omega}_{g}(\kappa^j r)), \end{aligned} \end{equation} where we used \eqref{171229@eq3} and $$ \sum_{i=1}^j \kappa^{\gamma(i-1)}\varrho_0(\kappa^{j-i}r)\le \kappa^{-\gamma} \tilde{\varrho}_{0}(\kappa^j r). $$ The estimate \eqref{171229@eq3a} corresponds to \eqref{171229@eq2}. The rest of the proof is identical to that of Lemma \ref{171228@lem1} and is omitted. \end{proof} By combining Lemmas \ref{171228@lem1} and \ref{171101@lem1}, we obtain the following $L^q$-mean oscillation estimates for $Du$ and $p$. \begin{lemma} \label{171102@lem5} Let $x_0\in \Omega$ and $\gamma\in (0,1)$. Under the same hypotheses as in Theorem \ref{M4} $(a)$, if $R_1=R_1(\varrho_0,R_0)$ is the constant from Lemma \ref{171101@lem1} and $$ \kappa=\kappa(d,\lambda,\gamma,R_0, \varrho_0)=\min\{\kappa_1,\kappa_2\}, $$ where $\kappa_1$ and $\kappa_2$ are the constants from Lemmas \ref{171228@lem1} and \ref{171101@lem1}, then the following hold.
\begin{enumerate}[$(i)$] \item For any $0<r\le R_1$, we have \begin{equation} \label{171103@eq5} \begin{aligned} &\sum_{j=0}^\infty\Phi(x_0, \kappa^j r)\lesssim_{d,\lambda,\gamma, R_0, \varrho_0} r^{-d} \big(\|Du\|_{L^1(\Omega_{3r}(x_0))}+\|p\|_{L^1(\Omega_{3r}(x_0))}\big) \\ &\quad +\big(\|Du\|_{L^\infty(\Omega_{3r}(x_0))}+\|p\|_{L^\infty(\Omega_{3r}(x_0))}\big) \int_0^r \frac{\varrho^\sharp_0(t) + \omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt \\ &\quad + \|f_\alpha\|_{L^\infty(\Omega_{3r}(x_0))}\int_0^r \frac{\varrho_0^\sharp(t)}{t}\,dt+\int_0^r \frac{\omega_{f_\alpha}^\sharp(t)+{\omega}_{g}^\sharp(t)}{t}\,dt, \end{aligned} \end{equation} where each integral is finite; see Remark \ref{180106@rmk1}. \item For any $0<\rho\le r\le R_1$, we have \begin{equation} \label{171102@eq2a} \begin{aligned} &\Phi(x_0, \rho)\lesssim_{d,\lambda,\gamma,R_0, \varrho_0} \left(\frac{\rho}{r}\right)^\gamma r^{-d} \big(\|Du\|_{L^1(\Omega_{3r}(x_0))}+\|p\|_{L^1(\Omega_{3r}(x_0))}\big)\\ &\quad +\big(\|Du\|_{L^\infty(\Omega_{3r}(x_0))}+\|p\|_{L^\infty(\Omega_{3r}(x_0))}\big)(\varrho^\sharp_0(\rho)+{\omega}^\sharp_{A^{\alpha\beta}}(\rho))\\ &\quad + \|f_\alpha\|_{L^\infty(\Omega_{3r}(x_0))}{\varrho}_0^\sharp(\rho)+{\omega}_{f_\alpha}^\sharp(\rho)+{\omega}_{g}^\sharp(\rho). \end{aligned} \end{equation} \end{enumerate} Here, we set $$ \varrho^\sharp_0(\rho):=\sup_{\rho\le R\le R_1}\left(\frac{\rho}{R}\right)^\gamma\tilde{\varrho}_0(R), \quad \omega^\sharp_{\bullet}(\rho):=\sup_{\rho\le R\le R_1}\left(\frac{\rho}{R}\right)^\gamma \tilde{\omega}_{\bullet}(R). $$ \end{lemma} \begin{remark} \label{180106@rmk1} Note that $\varrho^\sharp_0$ is a Dini function; see \cite[pp. 463--464]{MR3747493}. By the definition of $\varrho_0^\sharp$ and \eqref{171102@eq3}, we have $$ 2^{-\gamma}\varrho_0^\sharp(t)\le \varrho_0^\sharp(s)\lesssim_{\gamma,\varrho_0} \varrho_0^\sharp(t), \quad \frac{t}{2}\le s\le t \le R_1.
$$ Therefore, using the comparison principle for Riemann integrals, we get \begin{equation} \label{171103@eq1} \sum_{j=0}^\infty \varrho_0^\sharp(\kappa^j r)\lesssim_{\gamma, \varrho_0,\kappa}\int_0^r \frac{\varrho_0^\sharp(t)}{t}\,dt<\infty, \quad 0<r \le R_1. \end{equation} Similarly, we have \begin{equation} \label{171103@eq1a} \sum_{j=0}^\infty \omega_{f}^\sharp(\kappa^j r)\lesssim_{d,\gamma,R_0, \varrho_0,\kappa}\int_0^r \frac{\omega_{f}^\sharp(t)}{t}\,dt<\infty, \quad 0<r\le R_1, \end{equation} for any $f$ having Dini mean oscillation in $\Omega$. \end{remark} \begin{proof}[Proof of Lemma \ref{171102@lem5}] The estimate \eqref{171103@eq5} is an easy consequence of the estimate \eqref{171102@eq2a}. Indeed, for $j\in \{0,1,2,\ldots\}$, by taking $\rho=\kappa^j r$ in \eqref{171102@eq2a}, we have \begin{equation} \label{171103@eq5a} \begin{aligned} &\Phi(x_0, \kappa^j r)\lesssim \kappa^{\gamma j} r^{-d} \big(\|Du\|_{L^1(\Omega_{3r}(x_0))}+\|p\|_{L^1(\Omega_{3r}(x_0))}\big)\\ &\quad +\big(\|Du\|_{L^\infty(\Omega_{3r}(x_0))}+\|p\|_{L^\infty(\Omega_{3r}(x_0))}\big)(\varrho^\sharp_0(\kappa^j r)+{\omega}^\sharp_{A^{\alpha\beta}}(\kappa^j r))\\ &\quad + \|f_\alpha\|_{L^\infty(\Omega_{3r}(x_0))}{\varrho}_0^\sharp(\kappa^j r)+{\omega}_{f_\alpha}^\sharp(\kappa^j r)+{\omega}_{g}^\sharp(\kappa^j r). \end{aligned} \end{equation} Summing both sides of \eqref{171103@eq5a} over $j=0,1,2,\ldots$ and using \eqref{171103@eq1} and \eqref{171103@eq1a}, we conclude \eqref{171103@eq5}. To complete the proof, it suffices to prove that \eqref{171102@eq2a} holds. Without loss of generality, we assume that $x_0=0\in {\Omega}$. We denote $B_R=B_R(0)$ and $\Omega_R=\Omega_R(0)$. Let $0<\rho\le r \le R_1$. Note that if $r/6< \rho\le r$, then \eqref{171102@eq2a} follows from the definition of $\Phi$. Hence we only need to consider the case $0<\rho\le r/6$.
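For completeness, we indicate why the case $r/6<\rho\le r$ is immediate. Comparing the averages in the definition of $\Phi$ and using \eqref{171127@eq3}, we have
$$
\Phi(0,\rho)\le \left(\frac{|\Omega_r|}{|\Omega_\rho|}\right)^{1/q}\Phi(0,r)\lesssim_{d,R_0,\varrho_0}\Phi(0,r)\le 6^{\gamma}\left(\frac{\rho}{r}\right)^{\gamma}\Phi(0,r),
$$
and $\Phi(0,r)\lesssim_{d,R_0,\varrho_0} r^{-d}\big(\|Du\|_{L^1(\Omega_{r})}+\|p\|_{L^1(\Omega_{r})}\big)$, which together give \eqref{171102@eq2a} in this regime.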
We consider the following three cases: $$ r\le \operatorname{dist}(0,\partial \Omega), \quad \operatorname{dist}(0,\partial \Omega)\le 4\rho , \quad 4\rho < \operatorname{dist}(0,\partial \Omega) < r. $$ \begin{enumerate}[i.] \item $r\le \operatorname{dist}(0,\partial \Omega)$: Set $R=r/4$. Since $B_{4R}\subset \Omega$, by Lemma \ref{171228@lem1} $(ii)$, we have $$ \Phi(0,\rho)\lesssim \left(\frac{\rho}{R}\right)^{\gamma} \Phi(0, R) +\|Du\|_{L^\infty(B_R)}\tilde{\omega}_{A^{\alpha\beta}}(\rho)+\tilde{\omega}_{f_\alpha}(\rho)+\tilde{\omega}_{g}(\rho). $$ Thus from the fact that $$ \tilde{\omega}_{\bullet}(\rho)\le \omega^\sharp_{\bullet}(\rho), \quad \Phi(0, R)\lesssim R^{-d}\big(\|Du\|_{L^1(\Omega_R)}+\|p\|_{L^1(\Omega_R)}\big), $$ we get \eqref{171102@eq2a}. \item $\operatorname{dist}(0,\partial \Omega)\le 4\rho$: We take $y_0\in \partial \Omega$ such that $\operatorname{dist}(0,\partial \Omega)=|y_0|$. We fix a $C^{1,\rm{Dini}}$ function $\chi$ and a coordinate system associated with $y_0$ satisfying \eqref{171101@E1} and \eqref{171101@E2}. In this coordinate system, using \eqref{171127@eq3} and the fact that $\Omega_\rho \subset \Omega_{5\rho}(y_0)$, we have $$ \Phi(0, \rho) \lesssim_{d,R_0, \varrho_0} \Psi(y_0,5\rho)+\bigg(\dashint_{\Omega_{5\rho}(y_0)} \sum_{i=2}^d|D_i \chi D_1 u|^q\,dx\bigg)^{1/q}, $$ where $\Psi$ is given in Lemma \ref{171101@lem1}. Note that $$ |D_{x'} \chi(x')|=|D_{x'}\chi(x')-D_{x'} \chi(y_0')|\le \varrho_0(5\rho), \quad x'\in B_{5\rho}'(y_0'). 
$$ Using this together with Lemma \ref{171101@lem1} $(ii)$, we obtain that \begin{align} \label{171129@eq2} \Phi(0,\rho)&\lesssim \Psi(y_0, 5\rho)+ \varrho_0(5\rho) \|Du\|_{L^\infty(\Omega_{5\rho}(y_0))}\\ \nonumber &\lesssim \left(\frac{\rho}{r}\right)^{\gamma}\Psi(y_0,r)+\big(\||Du|+|p|\|_{L^\infty(\Omega_r(y_0))}\big) \big(\tilde{\varrho}_0(5\rho)+\tilde{\omega}_{A^{\alpha\beta}}(5\rho)\big)\\ \label{171129@eq1} & \quad + \|f_\alpha\|_{L^\infty(\Omega_r(y_0))}\tilde{\varrho}_0(5\rho)+\tilde{\omega}_{f_\alpha}(5\rho)+\tilde{\omega}_g(5\rho). \end{align} Since it holds that $$ \Omega_r(y_0)\subset \Omega_{3r}, \quad \tilde{\varrho}_0(5\rho)\lesssim_{\gamma} \varrho_0^\sharp(\rho), \quad \tilde{\omega}_{\bullet}(5\rho)\lesssim_{\gamma} \omega_{\bullet}^\sharp(\rho), $$ $$ \Psi(y_0, r)\lesssim r^{-d}\big(\|Du\|_{L^1(\Omega_{3r})}+\|p\|_{L^1(\Omega_{3r})}\big), $$ we get \eqref{171102@eq2a} from \eqref{171129@eq1}. \item $4\rho < \operatorname{dist}(0, \partial \Omega) < r$: Set $R=\operatorname{dist}(0, \partial \Omega)/4$, and observe that $$ \rho < R, \quad 5R < 2r\le 2R_1. $$ Since $B_{4R}\subset \Omega$, by Lemma \ref{171228@lem1} $(ii)$, we have \begin{equation} \label{171129@eq1b} \Phi(0,\rho)\lesssim \left(\frac{\rho}{R}\right)^{\gamma} \Phi(0, R) + \|Du\|_{L^\infty(B_R)} \tilde{\omega}_{A^{\alpha\beta}}(\rho)+\tilde{\omega}_{f_\alpha}(\rho)+\tilde{\omega}_{g}(\rho). \end{equation} We take $y_0\in \partial \Omega$ such that $\operatorname{dist}(0,\partial \Omega)=|y_0|$. We fix a $C^{1,\rm{Dini}}$ function $\chi$ and a coordinate system associated with $y_0$ satisfying \eqref{171101@E1} and \eqref{171101@E2}. 
In this coordinate system, similar to \eqref{171129@eq1}, we have \begin{align} \nonumber \Phi(0, R) &\lesssim \Psi(y_0, 5R)+\varrho_0(5R)\|Du\|_{L^\infty(\Omega_{5R}(y_0))}\\ \nonumber &\lesssim \left(\frac{R}{r}\right)^\gamma \Psi(y_0, 2r)+\big(\||Du|+|p|\|_{L^\infty(\Omega_{2r}(y_0))}\big)\big(\tilde{\varrho}_0(5R)+\tilde{\omega}_{A^{\alpha\beta}}(5R)\big)\\ \label{171129@eq1a} &\quad + \|f_\alpha\|_{L^\infty(\Omega_{2r}(y_0))}\tilde{\varrho}_0(5R)+\tilde{\omega}_{f_\alpha}(5R)+\tilde{\omega}_g(5R). \end{align} Combining \eqref{171129@eq1b} and \eqref{171129@eq1a}, and using the fact that $$ \Omega_{2r}(y_0)\subset \Omega_{3r}, \quad \Psi(y_0, 2r)\lesssim r^{-d}\big(\|Du\|_{L^1(\Omega_{3r})}+\|p\|_{L^1(\Omega_{3r})}\big), $$ we get \eqref{171102@eq2a}. \end{enumerate} The lemma is proved. \end{proof} Now we are ready to prove the assertion $(a)$ in the theorem. \begin{proof}[Proof of Theorem \ref{M4} $(a)$] In this proof, we fix $\gamma\in (0, 1)$. Let $R_1=R_1(\varrho_0, R_0)\in (0, R_0/4)$ be the constant from Lemma \ref{171101@lem1} and $\kappa=\kappa(d,\lambda, \gamma,R_0, \varrho_0)\in (0,1/8]$ be the constant from Lemma \ref{171102@lem5}. We denote $$ \mathcal{U}=|Du|+|p|, \quad \mathcal{G}(r)=\int_0^r \frac{\omega^\sharp_{f_\alpha}(t)+\omega^\sharp_{g}(t)}{t}\,dt. $$ We first derive $L^\infty$-estimates for $Du$ and $p$. Let $x_0\in {\Omega}$ and $0<r \le R_1$. We take $\theta_{x_0, r}\in \mathbb{R}$ and $\Theta_{x_0,r}\in \mathbb{R}^{d\times d}$ to be such that $$ \Phi(x_0, r)=\bigg(\dashint_{\Omega_r(x_0)} |Du-\Theta_{x_0, r}|^q+|p-\theta_{x_0,r}|^q\, dx\bigg)^{1/q}. $$ Similarly, we find $\theta_{x_0,\kappa^ir}\in \mathbb{R}$ and $\Theta_{x_0, \kappa^i r}\in \mathbb{R}^{d\times d}$ for $i\in \{1,2,\ldots\}$. Recall the assumption that $(u,p)\in C^1(\overline{\Omega})^d\times C(\overline{\Omega})$. 
Thus, since the right-hand side of \eqref{171103@eq5a} goes to zero as $j\to \infty$, we see that \begin{equation} \label{171103@eq7d} \lim_{i\to \infty} \theta_{x_0,\kappa^i r}=p(x_0), \quad \lim_{i\to \infty}\Theta_{x_0, \kappa^i r}=Du(x_0). \end{equation} By averaging the inequality $$ |\Theta_{x_0, \kappa r}-\Theta_{x_0,r}|^q\le |Du-\Theta_{x_0,\kappa r}|^q+|Du-\Theta_{x_0,r}|^q $$ on $\Omega_{\kappa r}(x_0)$ and taking the $q$-th root, we have $$ |\Theta_{x_0,\kappa r}-\Theta_{x_0,r}| \lesssim \Phi(x_0, \kappa r)+\Phi(x_0,r). $$ Similarly, we have $ |\theta_{x_0,\kappa r}-\theta_{x_0,r}| \lesssim \Phi(x_0, \kappa r)+\Phi(x_0,r)$. Thus by iterating and \eqref{171103@eq7d}, we have \begin{equation} \label{171103@eq7e} |Du(x_0)-\Theta_{x_0,r}|+|p(x_0)-\theta_{x_0,r}| \lesssim \sum_{j=0}^\infty \Phi(x_0,\kappa^j r). \end{equation} This inequality together with Lemma \ref{171102@lem5} $(i)$ implies $$ \begin{aligned} &|Du(x_0)-\Theta_{x_0,r}|+|p(x_0)-\theta_{x_0,r}| \\ &\lesssim r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{3r}(x_0))}+\|\mathcal{U}\|_{L^\infty(\Omega_{3r}(x_0))} \int_0^r \frac{\varrho^\sharp_0(t) + \omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt \\ &\quad + \|f_\alpha\|_{L^\infty(\Omega_{3r}(x_0))}\int_0^r \frac{\varrho_0^\sharp(t)}{t}\,dt+\mathcal{G}(r). \end{aligned} $$ Note that $$ |\Theta_{x_0,r}|+|\theta_{x_0,r}|\lesssim \Phi(x_0,r)+r^{-d}\|\mathcal{U}\|_{L^1(\Omega_r(x_0))} \lesssim r^{-d}\|\mathcal{U}\|_{L^1(\Omega_r(x_0))}. $$ Combining the above two inequalities, we have $$ \begin{aligned} \mathcal{U}(x_0) &\le C_1 r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{3r}(x_0))}+ C_1 \|\mathcal{U}\|_{L^\infty(\Omega_{3r}(x_0))} \int_0^r \frac{\varrho^\sharp_0(t) + \omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt \\ &\quad + C_1 \|f_\alpha\|_{L^\infty(\Omega_{3r}(x_0))}\int_0^r \frac{\varrho_0^\sharp(t)}{t}\,dt+ C_1\mathcal{G}(r), \end{aligned} $$ where $C_1=C_1(d,\lambda,\gamma, R_0, \varrho_0)$. 
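The iteration step behind \eqref{171103@eq7e} can be written out as a telescoping sum: by \eqref{171103@eq7d},
$$
|Du(x_0)-\Theta_{x_0,r}|=\bigg|\sum_{i=0}^\infty \big(\Theta_{x_0,\kappa^{i+1} r}-\Theta_{x_0,\kappa^i r}\big)\bigg| \lesssim \sum_{i=0}^\infty \big(\Phi(x_0,\kappa^{i+1}r)+\Phi(x_0,\kappa^i r)\big)\lesssim \sum_{j=0}^\infty \Phi(x_0,\kappa^j r),
$$
and the same computation applies to $|p(x_0)-\theta_{x_0,r}|$.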
We take $r_0\in (0, R_1]$ so that $$ C_1\int_0^{r_0} \frac{\varrho^\sharp_0(t) + \omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt\le \frac{1}{3^d}. $$ Then for any $x_0 \in \Omega$ and $0 < r \le r_0$, we have that \begin{equation} \label{171127@eq2} \begin{aligned} \mathcal{U}(x_0) &\le C_1 r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{3r}(x_0))}+ 3^{-d} \|\mathcal{U}\|_{L^\infty(\Omega_{3r}(x_0))} \\ &\quad + 3^{-d} \|f_\alpha\|_{L^\infty(\Omega_{3r}(x_0))}+ C_1\mathcal{G}(r). \end{aligned} \end{equation} Here, the constant $r_0$ depends only on $d$, $\lambda$, $\gamma$, $R_0$, $\varrho_0$, and $\omega_{A^{\alpha\beta}}$. Now let us fix $x_0\in \Omega$ and $0<R\le R_1$. For $k\in \{2,3,\ldots\}$, we denote $r_k=R(1-2^{1-k})$. Since $r_{k+1}-r_k=2^{-k}R$, we have $\Omega_{4r}(y)\subset \Omega_{r_{k+1}}(x_0)$ for any $y\in \Omega_{r_k}(x_0)$ and $r= 2^{-k-2} R$. We take $k_0$ sufficiently large such that $ 2^{-k_0-2} R_1\le r_0$. Then by \eqref{171127@eq2} with $r=2^{-k-2} R$, we have for $k\ge k_0$ that $$ \begin{aligned} \|\mathcal{U}\|_{L^\infty(\Omega_{r_{k}}(x_0))} &\le C_1 \left(\frac{2^{k+2}}{R}\right)^{d} \|\mathcal{U}\|_{L^1(\Omega_{r_{k+1}}(x_0))}+3^{-d} \|\mathcal{U}\|_{L^\infty(\Omega_{r_{k+1}}(x_0))}\\ &\quad +3^{-d} \|f_\alpha\|_{L^\infty(\Omega_{r_{k+1}}(x_0))}+C_1\mathcal{G}(R). \end{aligned} $$ By multiplying both sides of the above inequality by $3^{-dk}$ and summing the terms with respect to $k=k_0,k_0+1,\ldots$, we see that $$ \begin{aligned} \sum_{k=k_0}^\infty 3^{-dk}\|\mathcal{U}\|_{L^\infty(\Omega_{r_{k}}(x_0))} &\le C R^{-d} \|\mathcal{U}\|_{L^1(\Omega_{R}(x_0))} +\sum_{k=k_0+1}^\infty 3^{-dk} \|\mathcal{U}\|_{L^\infty(\Omega_{r_{k}}(x_0))}\\ &\quad +C \|f_\alpha\|_{L^\infty(\Omega_{R}(x_0))}+C\mathcal{G}(R), \end{aligned} $$ where each summation is finite and $C=C(d,\lambda, \gamma,R_0, \varrho_0)>0$. 
By subtracting $$ \sum_{k=k_0+1}^\infty 3^{-dk}\|\mathcal{U}\|_{L^\infty(\Omega_{r_k}(x_0))} $$ from both sides of the above inequality, we get the following $L^\infty$-estimate for $Du$ and $p$: \begin{equation} \label{171103@eq7} \|\mathcal{U}\|_{L^\infty(\Omega_{R/2}(x_0))} \le C \big(R^{-d} \|\mathcal{U}\|_{L^1(\Omega_{R}(x_0))} +\|f_\alpha\|_{L^\infty(\Omega_{R}(x_0))}+\mathcal{G}(R)\big) \end{equation} for any $x_0\in {\Omega}$ and $R\in (0,R_1]$, where $C=C(d,\lambda,\gamma, R_0, \varrho_0, \omega_{A^{\alpha\beta}})$. Next, we shall derive estimates of the modulus of continuity of $Du$ and $p$. We first claim that for any $x\in \Omega$ and $0<\rho\le r\le R_1/4$, we have \begin{equation} \label{171103@eq7a} \begin{aligned} &\sum_{j=0}^\infty \Phi(x,\kappa^j \rho) \lesssim_{d,\lambda,\gamma,R_0, \varrho_0} \left(\frac{\rho}{r}\right)^{\gamma} r^{-d}\|\mathcal{U}\|_{L^1(\Omega_{10r}(x))}\\ &\quad +\big(\|\mathcal{U}\|_{L^\infty(\Omega_{10r}(x))}+\|f_\alpha\|_{L^\infty(\Omega_{10r}(x))}\big)\int_0^\rho \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(\rho). \end{aligned} \end{equation} We consider the following two cases: $$ 4\rho\le \operatorname{dist}(x,\partial \Omega) \quad \text{and} \quad 4\rho>\operatorname{dist}(x,\partial \Omega). $$ \begin{enumerate}[i.] \item $4\rho\le \operatorname{dist}(x,\partial \Omega)$: Since $B_{4\rho}(x) \subset \Omega$, by Lemma \ref{171228@lem1} $(i)$, we have $$ \begin{aligned} \sum_{j=0}^\infty \Phi(x, \kappa^j \rho)&\lesssim \Phi(x,\rho)+\|Du\|_{L^\infty(B_\rho(x))}\int_0^\rho \frac{\tilde{\omega}_{A^{\alpha\beta}}(t)}{t}\,dt\\ &\quad +\int_0^\rho\frac{\tilde{\omega}_{f_\alpha}(t)+\tilde{\omega}_g(t)}{t}\,dt. 
\end{aligned} $$ From Lemma \ref{171102@lem5} $(ii)$, it follows that $$ \begin{aligned} \Phi(x,\rho)&\lesssim \left(\frac{\rho}{r}\right)^{\gamma}r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{3r}(x))}+\|\mathcal{U}\|_{L^\infty(\Omega_{3r}(x))}(\varrho^\sharp_0(\rho)+\omega^\sharp_{A^{\alpha\beta}}(\rho))\\ &\quad +\|f_\alpha\|_{L^\infty(\Omega_{3r}(x))}\varrho^\sharp_0(\rho)+\omega^\sharp_{f_\alpha}(\rho)+\omega_g^\sharp(\rho). \end{aligned} $$ Combining the above two inequalities, and using the fact that \begin{equation} \label{180314@A1} \tilde{\omega}_{\bullet}(\rho)\le \omega_\bullet^\sharp(\rho) \lesssim \int_0^\rho \frac{\omega_\bullet^\sharp(t)}{t}\,dt, \quad \varrho_0^\sharp(\rho) \lesssim \int_0^\rho \frac{\varrho_0^\sharp(t)}{t}\,dt, \end{equation} we get \begin{equation} \label{171124@eq1} \begin{aligned} &\sum_{j=0}^\infty \Phi(x, \kappa^j \rho)\lesssim \left(\frac{\rho}{r}\right)^{\gamma} r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{3r}(x))}\\ &\quad +\big(\|\mathcal{U}\|_{L^\infty(\Omega_{3r}(x))}+\|f_\alpha\|_{L^\infty(\Omega_{3r}(x))}\big) \int_0^\rho \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(\rho). \end{aligned} \end{equation} This inequality implies \eqref{171103@eq7a}. \item $4\rho>\operatorname{dist}(x, \partial \Omega)$: Let $i_0$ be the integer such that $4\kappa^{i_0+1}\rho\le \operatorname{dist}(x, \partial \Omega)<4\kappa^{i_0}\rho$. Since $B_{4\kappa^{i_0+1}\rho}(x)\subset \Omega$, by the same reasoning as in \eqref{171124@eq1}, we have $$ \begin{aligned} &\sum_{j=i_0+1}^\infty \Phi(x,\kappa^j \rho)=\sum_{j=0}^\infty \Phi(x, \kappa^{j+i_0+1} \rho) \lesssim \left(\frac{\kappa^{i_0+1}\rho}{r}\right)^{\gamma} r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{3r}(x))} \\ &\quad +\big(\|\mathcal{U}\|_{L^\infty(\Omega_{3r}(x))}+\|f_\alpha\|_{L^\infty(\Omega_{3r}(x))}\big) \int_0^{\kappa^{i_0+1}\rho} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(\kappa^{i_0+1}\rho). 
\end{aligned} $$ Thus we get (using $\kappa^{i_0+1}\rho\le \rho$) \begin{equation} \label{171124@eq1a} \begin{aligned} &\sum_{j=i_0+1}^\infty \Phi(x,\kappa^j \rho) \lesssim \left(\frac{\rho}{r}\right)^{\gamma} r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{3r}(x))} \\ &\quad +\big(\|\mathcal{U}\|_{L^\infty(\Omega_{3r}(x))}+\|f_\alpha\|_{L^\infty(\Omega_{3r}(x))}\big) \int_0^{\rho} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(\rho). \end{aligned} \end{equation} We take $y_0\in \partial \Omega$ such that $|y_0|=\operatorname{dist}(x, \partial \Omega)$. We fix a coordinate system associated with $y_0$ satisfying \eqref{171101@E2}. Observe that for $j\in \{0,1,\ldots,i_0\}$, we have $$ \Omega_{\kappa^j \rho}(x) \subset \Omega_{5\kappa^j \rho}(y_0). $$ Then similar to \eqref{171129@eq2}, we obtain $$ \Phi(x, \kappa^j \rho)\lesssim \Psi(y_0, 5\kappa^j \rho)+\varrho_0(5\kappa^j \rho)\|Du\|_{L^\infty(\Omega_{5\rho}(y_0))}. $$ Summing the terms with respect to $j=0,1,\ldots,i_0$, and using the fact that $$ \sum_{j=0}^{i_0} \varrho_0(5\kappa^j \rho)\le \sum_{j=0}^\infty \tilde{\varrho}_0(5\kappa^j \rho)\lesssim \int_0^{5\rho}\frac{\varrho^\sharp_0(t)}{t}\,dt , $$ we have \begin{equation} \label{180314@A2} \sum_{j=0}^{i_0} \Phi(x,\kappa^j \rho)\lesssim \sum_{j=0}^{i_0} \Psi(y_0, 5\kappa^j \rho)+\|Du\|_{L^\infty(\Omega_{5\rho}(y_0))}\int_0^{5\rho}\frac{\varrho^\sharp_0(t)}{t}\,dt. \end{equation} Recall that $0<5\rho\le 5r\le 2R_1$. 
Hence, by Lemma \ref{171101@lem1} and \eqref{180314@A1}, we get the following two inequalities: $$ \begin{aligned} &\sum_{j=0}^{i_0} \Psi(y_0,5\kappa^j \rho) \lesssim \Psi(y_0, 5\rho)\\ &\quad +\big(\|\mathcal{U}\|_{L^\infty(\Omega_{5\rho}(y_0))}+\|f_\alpha\|_{L^\infty(\Omega_{5\rho}(y_0))}\big) \int_0^{5\rho} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(5\rho), \end{aligned} $$ $$ \begin{aligned} &\Psi(y_0, 5\rho)\lesssim \left(\frac{\rho}{r}\right)^\gamma\Psi(y_0, 5r)\\ &\quad +\big(\|\mathcal{U}\|_{L^\infty(\Omega_{5r}(y_0))}+\|f_\alpha\|_{L^\infty(\Omega_{5r}(y_0))}\big) \int_0^{5\rho} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(5\rho). \end{aligned} $$ Combining these, we get from \eqref{180314@A2} that \begin{equation} \label{171104@eq1} \begin{aligned} &\sum_{j=0}^{i_0} \Phi(x,\kappa^j \rho) \lesssim \left(\frac{\rho}{r}\right)^{\gamma} r^{-d} \|\mathcal{U}\|_{L^1(\Omega_{10r}(x))}\\ &\quad+\big(\|\mathcal{U}\|_{L^\infty(\Omega_{10r}(x))}+\|f_\alpha\|_{L^\infty(\Omega_{10r}(x))}\big) \int_0^{\rho} \frac{{\varrho}^\sharp_0(t)+ {\omega}^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(\rho), \end{aligned} \end{equation} where we used the fact that $\Omega_{5r}(y_0)\subset \Omega_{10r}(x)$, $$ \int_0^{5\rho}\frac{\varrho^\sharp_0(t)}{t}\,dt \lesssim \int_0^\rho \frac{\varrho_0^\sharp(t)}{t}\,dt, \quad \int_0^{5\rho}\frac{\omega^\sharp_\bullet (t)}{t}\,dt \lesssim \int_0^\rho \frac{\omega_{\bullet}^\sharp(t)}{t}\,dt. $$ Therefore, we get \eqref{171103@eq7a} from \eqref{171124@eq1a} and \eqref{171104@eq1}. \end{enumerate} Now we are ready to estimate the modulus of continuity of $Du$ and $p$. Let $x_0\in {\Omega}$ and $0<R\le R_1$. Let $x,y\in \Omega_{R/4}(x_0)$ with $\rho:=|x-y|\le R/40$. 
Then for any $z\in \Omega_\rho(x)\cap \Omega_\rho(y)$, we have \begin{align*} &|Du(x)-Du(y)|^q \\ &\le |Du(x)-\Theta_{x,\rho}|^q+|\Theta_{x,\rho}-\Theta_{y,\rho}|^q + |Du(y)-\Theta_{y,\rho}|^q\\ &\le 2\sup_{y_0\in \Omega_{R/4}(x_0)}|Du(y_0)-\Theta_{y_0,\rho}|^q+|Du(z)-\Theta_{x,\rho}|^q+|Du(z)-\Theta_{y,\rho}|^q. \end{align*} Averaging over $z\in \Omega_\rho(x)\cap \Omega_\rho(y)$ and taking the $q$-th root, we have \begin{align*} |Du(x)-Du(y)|& \lesssim \sup_{y_0\in \Omega_{R/4}(x_0)}|Du(y_0)-\Theta_{y_0,\rho}|+\Phi(x,\rho)+\Phi(y,\rho)\\ &\lesssim \sup_{y_0\in \Omega_{R/4}(x_0)} \Bigg(\sum_{j=0}^\infty \Phi(y_0, \kappa^j \rho)+\Phi(y_0, \rho)\Bigg)\\ &\lesssim \sup_{y_0\in \Omega_{R/4}(x_0)} \sum_{j=0}^\infty \Phi(y_0, \kappa^j \rho), \end{align*} where we used \eqref{171103@eq7e} in the second inequality. Similarly, we get the same bound for $p$, and thus, by using \eqref{171103@eq7a} and the fact that $$ \Omega_{R/4}(y_0)\subset \Omega_{R/2}(x_0) \quad \text{for }\, y_0\in \Omega_{R/4}(x_0), $$ we obtain $$ \begin{aligned} &|Du(x)-Du(y)|+|p(x)-p(y)| \lesssim \left(\frac{\rho}{R}\right)^{\gamma} R^{-d}\|\mathcal{U}\|_{L^1(\Omega_{R/2}(x_0))}\\ &\quad +\big(\|\mathcal{U}\|_{L^\infty(\Omega_{R/2}(x_0))}+\|f_\alpha\|_{L^\infty(\Omega_{R/2}(x_0))}\big) \int_0^\rho \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+\mathcal{G}(\rho). 
\end{aligned} $$ Therefore, by \eqref{171103@eq7}, we have \begin{equation} \label{171128@C1} \begin{aligned} &|Du(x)-Du(y)|+|p(x)-p(y)|\\ &\le C R^{-d}\|\mathcal{U}\|_{L^1(\Omega_{R}(x_0))}\left(\left(\frac{|x-y|}{R}\right)^{\gamma} +\int_0^{|x-y|} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt\right) \\ &\quad +C\|f_\alpha\|_{L^\infty(\Omega_{R}(x_0))}\int_0^{|x-y|} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt\\ &\quad + C\mathcal{G}(R) \int_0^{|x-y|} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt+C\mathcal{G}(|x-y|) \end{aligned} \end{equation} for any $x,y\in \Omega_{R/4}(x_0)$ with $|x-y|\le R/40$, where $x_0\in {\Omega}$, $0<R\le R_1$, and $C>0$ is a constant depending only on $d$, $\lambda$, $\gamma$, $R_0$, $\varrho_0$, and $\omega_{A^{\alpha\beta}}$. We note that if $x,y\in \Omega_{R/4}(x_0)$ with $|x-y|>R/40$, then by \eqref{171103@eq7}, we have \begin{equation} \label{180315@eq1} \begin{aligned} &|Du(x)-Du(y)|+|p(x)-p(y)| \\ &\le C \left(\frac{|x-y|}{R}\right)^\gamma \big(R^{-d}\|\mathcal{U} \|_{L^1(\Omega_{R}(x_0))}+\|f_\alpha \|_{L^\infty(\Omega_{R}(x_0))}+\mathcal{G}(R)\big). \end{aligned} \end{equation} The assertion $(a)$ in Theorem \ref{M4} is proved. \end{proof} We now turn to the proof of the assertion $(b)$ in the theorem. \begin{proof}[Proof of Theorem \ref{M4} $(b)$] In this proof, we set $\gamma=\frac{1+\gamma_0}{2}$ and $\varrho_0(r) = Nr^{\gamma_0}$, where $\gamma_0\in (0,1)$ and $N>0$. Let $R_1=R_1(\varrho_0,R_0)\in (0, R_0/4)$ be the constant from Lemma \ref{171101@lem1} and $\kappa=\kappa(d,\lambda,\gamma,R_0, \varrho_0)\in (0,1/8]$ be the constant from Lemma \ref{171102@lem5}. Here, we note that $$ R_1=R_1(\gamma_0, N, R_0) \quad \text{and}\quad \kappa=\kappa(d,\lambda, \gamma_0, N, R_0). 
$$ By the same reasoning as in \cite[Lemma 8.1 $(b)$]{arXiv:1803.05560}, we have $$ \tilde{\varrho}_0(r)=\varrho_0(r)+\sum_{i=1}^\infty \kappa^{\gamma i}\big(\varrho_{0}(\kappa^{-i}r)[\kappa^{-i} r<1]+\varrho_{0}(1)[\kappa^{-i}r\ge 1]\big)\lesssim_{\kappa,\gamma_0,N} r^{\gamma_0} $$ and $$ \tilde{\omega}_f(r)=\sum_{i=1}^\infty \kappa^{\gamma i}\big(\omega_{f}(\kappa^{-i}r)[\kappa^{-i} r<1]+\omega_{f}(1)[\kappa^{-i}r\ge 1]\big)\lesssim_{\kappa,\gamma_0} [f]_{C^{\gamma_0}(\Omega)} r^{\gamma_0} $$ for any function $f$ satisfying $[f]_{C^{\gamma_0}(\Omega)}<\infty$ and $0<r\le R_1$. Then it follows from the definitions of $\varrho_0^\sharp$ and $\omega^\sharp_f$ that $$ {\varrho}_0^\sharp (r)\lesssim r^{\gamma_0} , \quad {\omega}^\sharp_f(r)\lesssim [f]_{C^{\gamma_0}(\Omega)} r^{\gamma_0}. $$ Therefore, by \eqref{171103@eq7}, \eqref{171128@C1}, and \eqref{180315@eq1}, we conclude that $$ \begin{aligned} &\|Du\|_{L^\infty(\Omega_{R/2}(x_0))}+\|p\|_{L^\infty(\Omega_{R/2}(x_0))} +R^{\gamma_0}\big([Du]_{C^{\gamma_0}(\Omega_{R/4}(x_0))}+[p]_{C^{\gamma_0}(\Omega_{R/4}(x_0))}\big)\\ &\le C R^{-d} \big(\|Du\|_{L^1(\Omega_{R}(x_0))}+\|p\|_{L^1(\Omega_R(x_0))}\big)\\ &\quad +C\|f_\alpha\|_{L^\infty(\Omega_{R}(x_0))}+C R^{\gamma_0} \big([f_\alpha]_{C^{\gamma_0}(\Omega)}+[g]_{C^{\gamma_0}(\Omega)}\big) \end{aligned} $$ for any $x_0\in {\Omega}$ and $R\in (0,R_1]$, where $C>0$ is a constant depending only on $d$, $\lambda$, $\gamma_0$, $N$, $R_0$, and $[A^{\alpha\beta}]_{C^{\gamma_0}(\Omega)}$. This completes the proof of the assertion $(b)$ in Theorem \ref{M4}, and that of Theorem \ref{M4}. \end{proof} \subsection{Proof of Theorem \ref{M5}} To prove the theorem, we consider the following two cases: $$ 2\le q<\infty, \quad 1<q<2. $$ \begin{enumerate}[i.] \item $2\le q<\infty$: We only need to consider the case when $q=2$. 
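The reduction to $q=2$ is the standard one: since $\Omega$ is bounded, H\"older's inequality gives
$$
\|f\|_{L^2(\Omega)}\le |\Omega|^{\frac{1}{2}-\frac{1}{q}}\|f\|_{L^q(\Omega)}, \quad q\ge 2,
$$
so for $q>2$ the data $f_\alpha$ and $g$ belong to $L^2(\Omega)$ as well, the corresponding $W^{1,q}_0$-solution coincides with the unique $W^{1,2}_0$-solution, and the weak type-$(1,1)$ estimate obtained below for $q=2$ applies verbatim.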
We adapt the arguments in the proof of \cite[Theorem 1.9]{MR3747493}, where the authors proved the weak type-$(1,1)$ estimate for $W^{1,2}$-weak solutions to elliptic equations. By the hypothesis of the theorem, $\Omega$ is a Lipschitz domain, which implies that the $W^{1,2}_0$-solvability of the problem \begin{equation} \label{171127@eq5} \left\{ \begin{aligned} \mathcal{L} u+\nabla p=D_\alpha f_\alpha \quad &\text{in }\, \Omega\\ \operatorname{div} u=g-(g)_\Omega \quad &\text{in }\, \Omega \end{aligned} \right. \end{equation} is available (see, for instance, \cite[Lemma 3.2]{MR3693868}). Define a bounded linear operator $T$ on $L^2(\Omega)^{d\times d}\times L^2(\Omega)$ by $$ T(f_1,\ldots,f_d,g)=(D_1u,\ldots,D_d u,p), $$ where $(u,p)\in W^{1,2}_0(\Omega)^d\times \tilde{L}^2(\Omega)$ is the weak solution of \eqref{171127@eq5}. To get the desired estimate \eqref{180315@eq2}, it suffices to show that $T$ satisfies the hypothesis of the following lemma. \begin{lemma} \label{171127@lem1} Let $\Omega$ be a bounded domain in $\mathbb{R}^d$ satisfying \begin{equation} \label{180315@eq4} |\Omega_r(x)|\ge A_0 r^d \quad \text{for all }\, x\in \overline{\Omega} \, \text{ and }\, r\in (0, \operatorname{diam}\Omega]. \end{equation} Let $T$ be a bounded linear operator from $L^2(\Omega)^k$ to $L^2(\Omega)^k$, where $k\in \{1,2,\ldots\}$. Suppose that for any $x_0\in \Omega$, $0<r<\mu \operatorname{diam}\Omega$, and $g\in \tilde{L}^2(\Omega)^k$ with $\operatorname{supp}g\subset \Omega_r(x_0)$, we have $$ \int_{\Omega\setminus B_{cr}(x_0)}|Tg|\,dx\le C\int_{\Omega_r(x_0)}|g|\,dx, $$ where $\mu\in (0,1)$, $c\in (1,\infty)$, and $C\in (0, \infty)$. Then for any $t>0$ and $f\in L^2(\Omega)^k$, we have $$ \big|\{x\in \Omega:|Tf(x)|>t\}\big|\lesssim_{d,\Omega, k,\mu,c,C,A_0} \frac{1}{t}\int_{\Omega}|f|\,dx. $$ \end{lemma} \begin{proof} See \cite[Lemma 4.1]{MR3747493}. 
\end{proof} We note that by \eqref{171127@eq3}, $\Omega$ satisfies \eqref{180315@eq4} with $A_0=A_0(d,R_0, \varrho_0, \operatorname{diam}\Omega)$. We claim that $T$ satisfies the hypothesis of Lemma \ref{171127@lem1} with $$ \mu=\frac{1}{4}\min\left\{1, \frac{R_1}{\operatorname{diam}\Omega}\right\}, \quad c=4 , \quad C=C(d,\lambda,\Omega, R_0, \varrho_0, \omega_{A^{\alpha\beta}}, C_0)>0. $$ Here and in this proof, $R_1$, $\kappa$, $\tilde{\varrho}_0$, $\tilde{\omega}_\bullet$, $\varrho^\sharp_0$, and $\omega^\sharp_{\bullet}$ are those in the proof of Theorem \ref{M4}. Fix $x_0\in \Omega$ and $0<r<\mu\operatorname{diam}\Omega$. Assume that $(u,p)\in W^{1,2}_0(\Omega)^d\times \tilde{L}^2(\Omega)$ is the weak solution of \eqref{171127@eq5}, where $f_\alpha\in \tilde{L}^2(\Omega)^d$ and $g\in \tilde{L}^2(\Omega)$ are supported in $\Omega_r(x_0)$. Let $R\in [4 r, \operatorname{diam}\Omega)$ so that $\Omega\setminus B_R(x_0)\neq \emptyset$, and let $\mathcal{L}^*$ be the adjoint operator of $\mathcal{L}$, i.e., $$ \mathcal{L}^*v=D_\alpha(A^{\alpha\beta}_*D_\beta v), \quad A^{\alpha\beta}_*=(A^{\beta \alpha})^{\top}. $$ Then by \cite[Lemma 3.2]{MR3693868}, for given $$ \phi_\alpha\in C^\infty_0(\Omega_{2R}(x_0)\setminus B_R(x_0))^d, \quad \psi\in C^\infty_0(\Omega_{2R}(x_0)\setminus B_R(x_0)), $$ there exists a unique $(v,\pi)\in W^{1,2}_0(\Omega)^d\times \tilde{L}^2(\Omega)$ satisfying \begin{equation} \label{171127@eq6a} \left\{ \begin{aligned} \mathcal{L}^*v+\nabla \pi=D_\alpha \phi_\alpha \quad \text{in }\, \Omega,\\ \operatorname{div} v=\psi-(\psi)_{\Omega} \quad \text{in }\, \Omega, \end{aligned} \right. \end{equation} and \begin{equation} \label{171127@eq6} \||Dv|+|\pi|\|_{L^2(\Omega)}\lesssim_{d,\lambda,\Omega} \||\phi_\alpha|+|\psi|\|_{L^2(\Omega_{2R}(x_0)\setminus B_R(x_0))}. 
\end{equation} By applying $u$ and $v$ as test functions to \eqref{171127@eq6a} and \eqref{171127@eq5}, respectively, we have \begin{equation} \label{171127@eq6b} \begin{aligned} &\int_\Omega (D_\alpha u\cdot \phi_\alpha+p \psi )\,dx\\ &=\int_{\Omega_r(x_0)} \big(D_\alpha v- (D_\alpha v)_{\Omega_r(x_0)}\big)\cdot f_\alpha+\big(\pi-(\pi)_{\Omega_r(x_0)}\big) g\,dx. \end{aligned} \end{equation} Observe that $$ 4r \le \min\{R_1, R\}<\operatorname{diam}\Omega. $$ Since $\phi_\alpha=\psi=0$ in $\Omega_R(x_0)$, by \eqref{171128@C1}, \eqref{180315@eq1}, and H\"older's inequality, we obtain that for any $x,y\in \Omega_r(x_0)$, \begin{equation} \label{180320@A1} \begin{aligned} &|Dv(x)-Dv(y)|+|\pi (x)- \pi(y)|\\ &\le C R^{-d/2}\||Dv|+|\pi|\|_{L^2(\Omega_{R}(x_0))}\bigg(\left(\frac{r}{R}\right)^{\gamma} +\int_0^{2r} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt\bigg), \end{aligned} \end{equation} where $\gamma=1/2$ and $C=C(d,\lambda, \Omega,R_0, \varrho_0, \omega_{A^{\alpha\beta}})$. Combining \eqref{171127@eq6} -- \eqref{180320@A1}, and then using the duality, we see that \begin{equation} \label{180315@eq6} \int_{\Omega_{2R}(x_0)\setminus B_R(x_0)} (|Du|+|p|)\,dx \lesssim M\bigg(\left(\frac{r}{R}\right)^{\gamma} +\int_0^{2r} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt\bigg), \end{equation} where we set $$ M=\int_{\Omega_r(x_0)}( |f_\alpha|+|g|)\,dx. $$ Notice from \eqref{171127@B1} and \cite[Eq. (3.5)]{MR3620893} that $$ \tilde{\varrho}_0(\rho)+\tilde{\omega}_{A^{\alpha\beta}}(\rho)\le C (\ln \rho)^{-2}, \quad \forall \rho\in (0,1/2), $$ where $C=C(\gamma,\kappa,C_0)=C(d,\lambda,R_0, \varrho_0, C_0)$. Then it is routine to verify that $$ {\varrho}^\sharp_0(\rho)+{\omega}^\sharp_{A^{\alpha\beta}}(\rho)\le C(\ln \rho)^{-2}, \quad \forall \rho\in (0,R_1], $$ and thus, we have $$ \int_0^{2r} \frac{\varrho^\sharp_0(t)+\omega^\sharp_{A^{\alpha\beta}}(t)}{t}\,dt \lesssim \left(\ln \frac{1}{r}\right)^{-1}. 
$$ This inequality together with \eqref{180315@eq6} yields $$ \int_{\Omega_{2R}(x_0)\setminus B_R(x_0)} (|Du|+|p|)\,dx \lesssim \bigg(\left(\frac{r}{R}\right)^\gamma +\left(\ln \frac{1}{r}\right)^{-1}\bigg) M. $$ Let $N$ be the smallest positive integer such that $\Omega\subset B_{2^{N+1}r}(x_0)$. By taking $R=2^{k+1} r$, $k\in \{1,2,\ldots,N-1\}$, and using $N-1\lesssim \ln (1/r)$, we have $$ \int_{\Omega\setminus B_{4 r}(x_0)} (|Du|+|p|)\,dx\le C \sum_{k=1}^{N-1} \big(2^{-k\gamma}+(\ln(1/r))^{-1}\big)M\le C M, $$ where $C=C(d,\lambda,\Omega, R_0, \varrho_0, \omega_{A^{\alpha\beta}}, C_0)$. Therefore, the map $T$ satisfies the hypothesis of Lemma \ref{171127@lem1}. \item $1<q<2$: In this case, we use an approximation argument together with the result in the first case, and the $W^{1,q}$-estimate for the Stokes system in \cite{MR3693868} (see also \cite{MR3758532}). By \cite[Theorem 5.1 and Corollary 5.3]{MR3693868}, the $W^{1,q}$-estimate and solvability are available when the domain $\Omega$ has Lipschitz boundary with a small Lipschitz constant and the coefficients $A^{\alpha\beta}$ have vanishing mean oscillation (VMO): \begin{equation} \label{180316@eq1} \lim_{\delta\to 0} \sup_{x\in \overline{\Omega}} \sup_{r\in (0, \delta]}\dashint_{B_r(x)}|A^{\alpha\beta}-(A^{\alpha\beta})_{B_r(x)}|\,dy=0. \end{equation} The coefficients $A^{\alpha\beta}$ considered in this paper are VMO in the sense that (see Remark \ref{180213@rmk1}) \begin{equation} \label{180213@A1} \lim_{\delta\to 0} \sup_{x\in \overline{\Omega}} \sup_{r\in (0, \delta]}\dashint_{\Omega_r(x)}|A^{\alpha\beta}-(A^{\alpha\beta})_{\Omega_r(x)}|\,dy=0, \end{equation} which is slightly weaker than \eqref{180316@eq1}. However, it is easy to check that the proofs of \cite[Theorem 5.1 and Corollary 5.3]{MR3693868} still work under the condition \eqref{180213@A1}. Now, we are ready to prove \eqref{180315@eq2} when $q\in (1,2)$. 
Assume that $(u, p)\in W^{1,q}_0(\Omega)^d\times \tilde{L}^q(\Omega)$ is the weak solution of \eqref{171006@eq2}, where $f_\alpha\in L^q(\Omega)^d$ and $g\in L^q(\Omega)$. Let $\{f_{\alpha, k}\}\subset L^2(\Omega)^d$ and $\{g_k\}\subset L^2(\Omega)$ be sequences such that \begin{equation} \label{180316@eq5} f_{\alpha, k}\to f_\alpha, \quad g_k \to g \quad \text{in }\, L^q(\Omega) \, \text{ as }\, k\to \infty. \end{equation} By the $W^{1,2}_0$-solvability of the problem \eqref{171006@eq2}, for $k\in \{1,2,\ldots\}$, there exists a unique weak solution $(u_k, p_k)\in W^{1,2}_0(\Omega)^d\times \tilde{L}^2(\Omega)$ of \eqref{171006@eq2} with $f_{\alpha, k}$ and $g_k$ in place of $f_\alpha$ and $g$. Then by the result in the first case, we see that $$ \big|\{x\in \Omega:|Du_k(x)|+|p_k(x)|>t\}\big|\le \frac{C'}{t}\int_\Omega (|f_{\alpha,k}|+|g_k| )\,dx, \quad \forall t>0, $$ where $C'=C'(d,\lambda, \Omega, R_0, \varrho_0, \omega_{A^{\alpha\beta}}, C_0)$. Moreover, since $(u-u_k, p-p_k)\in W^{1,q}_0(\Omega)^d\times \tilde{L}^q(\Omega)$ satisfies $$ \left\{ \begin{aligned} \mathcal{L} (u-u_k)+\nabla (p-p_k)=D_\alpha (f_\alpha-f_{\alpha,k}) \quad &\text{in }\, \Omega,\\ \operatorname{div} (u-u_k)=g-g_k-(g)_\Omega +(g_k)_{\Omega} \quad &\text{in }\, \Omega, \end{aligned} \right. $$ by the $W^{1,q}$-estimate and \eqref{180316@eq5}, we have $$ \begin{aligned} &\|Du-Du_k\|_{L^q(\Omega)}+\|p-p_k\|_{L^q(\Omega)}\\ &\lesssim \|f_\alpha-f_{\alpha,k}\|_{L^q(\Omega)}+\|g-g_k\|_{L^q(\Omega)} \to 0 \quad \text{as }\, k\to \infty. \end{aligned} $$ Observe that $$ \begin{aligned} &\big|\{x\in \Omega:|Du(x)|+|p(x)|>t\}\big|\\ &\le \big|\{x\in \Omega:|Du_k(x)|+|p_k(x)|>t/2\}\big|\\ &\quad +\big|\{x\in \Omega:|Du(x)-Du_k(x)|+|p(x)-p_k(x)|>t/2\}\big|\\ &\lesssim_{C'} \frac{1}{t}\int_\Omega (|f_{\alpha,k}|+|g_k|)\,dx+\frac{1}{t^q}\int_\Omega (|Du-Du_k|+|p-p_k|)^q\,dx. 
\end{aligned} $$ Since the right-hand side of the above inequality converges to $$ \frac{1}{t}\int_\Omega (|f_{\alpha}|+|g|)\,dx, $$ we get the desired estimate \eqref{180315@eq2}. \end{enumerate} The theorem is proved. \qed \section{Appendix} \label{Sec3} In this appendix, we provide the proofs of some lemmas used in the previous section. \begin{lemma} \label{171024@lem1} Let $\omega:(0,a]\to [0,\infty)$ be a Dini function satisfying \eqref{171006@eq1} and \eqref{180315@A1}. Set $$ \tilde{\omega}(r):=\sum_{i=1}^\infty \kappa^{\gamma i}\big(\omega(\kappa^{-i}r)[\kappa^{-i}r<a]+\omega(a)[\kappa^{-i}r\ge a]\big), $$ where $\gamma\in (0,1)$ and $\kappa\in (0,1/2]$. Then $\tilde{\omega}:(0,a]\to [0,\infty)$ is also a Dini function satisfying \begin{equation} \label{171024@eq2} \tilde{\omega}(t)\lesssim_{c_1} \tilde{\omega}(s) \lesssim_{c_2} \tilde{\omega}(t) \quad \text{whenever }\, \frac{t}{2}\le s\le t\le a \end{equation} and \begin{equation} \label{171101@eq1} \int_0^a \frac{\tilde{\omega}(t)}{t}\,dt<\infty. \end{equation} \end{lemma} \begin{proof} Set $$ \hat{\omega}(r)= \left\{ \begin{aligned} \omega(r) &\quad \text{if }\, r<a,\\ \omega(a) &\quad \text{if }\, r\ge a, \end{aligned} \right. $$ and observe that $$ \tilde{\omega}(r)=\sum_{i=1}^\infty \kappa^{\gamma i}\hat{\omega}(\kappa^{-i}r). $$ Let $\frac{t}{2}\le s\le t\le a$. To prove \eqref{171024@eq2}, it suffices to show that for any $i\in \{1,2,\ldots\}$, we have \begin{equation} \label{180103@eq1a} \hat{\omega}(\kappa^{-i}t)\lesssim_{c_1} \hat{\omega}(\kappa^{-i}s)\lesssim_{c_2}\hat{\omega}(\kappa^{-i}t). \end{equation} For $i$ satisfying $\kappa^{-i}t< a$, by \eqref{171006@eq1} and the fact that $$ \frac{\kappa^{-i}t}{2}\le \kappa^{-i}s\le \kappa^{-i} t, $$ we have $$ \hat{\omega}(\kappa^{-i}t)=\omega(\kappa^{-i}t)\lesssim_{c_1} \omega(\kappa^{-i}s)=\hat{\omega}(\kappa^{-i}s)\lesssim_{c_2} \omega(\kappa^{-i}t)=\hat{\omega}(\kappa^{-i}t), $$ which gives \eqref{180103@eq1a}. 
On the other hand, for $i$ satisfying $\kappa^{-i} t\ge a$, we consider the two cases: $$ \kappa^{-i}s< a, \quad \kappa^{-i}s\ge a. $$ If $\kappa^{-i}s< a$, then by \eqref{171006@eq1} and the fact that $$ \frac{a}{2}\le \kappa^{-i}s< a, $$ we have $$ \hat{\omega}(\kappa^{-i}t)=\omega(a)\lesssim_{c_1}\omega(\kappa^{-i}s)=\hat{\omega}(\kappa^{-i}s) \lesssim_{c_2} \omega(a)=\hat{\omega}(\kappa^{-i}t), $$ which implies \eqref{180103@eq1a}. If $\kappa^{-i}s\ge a$, then by the definition of $\hat{\omega}$, we obtain that $$ \hat{\omega}(\kappa^{-i}t)=\hat{\omega}(\kappa^{-i}s). $$ Thus we prove that \eqref{180103@eq1a} holds. For the proof of \eqref{171101@eq1}, we refer to \cite[Lemma 1]{MR2927619}. The lemma is proved. \end{proof} \begin{lemma} \label{180213@lem1} Let $\omega:(0,a]\to [0,\infty)$ be a Dini function satisfying \eqref{171006@eq1} and \eqref{180315@A1}. Then for any $\varepsilon>0$, there exists $\delta\in (0,1)$, depending only on $c_1$ and $\varepsilon$, such that $$ \sup_{r\in (0, \delta]} \omega(r)<\varepsilon. $$ \end{lemma} \begin{proof} Observe that $$ \omega(r)\le C_0 \inf_{s\in[r/2,r]}\omega (s)\le C_0 \int_{r/2}^r \frac{\omega (s)}{s}\,ds $$ for any $r\in (0,a]$, where $C_0=C_0(c_1)$. Therefore, for given $\varepsilon>0$, if we take $\delta=\delta(c_1,\varepsilon)>0$ such that $$ \int_0^\delta \frac{\omega(s)}{s}\,ds<\frac{\varepsilon}{C_0}, $$ then $\omega(r)<\varepsilon$ for all $r\in (0,\delta]$. \end{proof} \begin{remark} \label{180213@rmk1} From Remark \ref{171020@rmk1} and Lemma \ref{180213@lem1}, it follows that if $f$ is of Dini mean oscillation in $\Omega$ satisfying Definition \ref{D2} $(ii)$, then $f$ has vanishing mean oscillation in the sense that $$ \lim_{\delta\to 0} \sup_{x\in \overline{\Omega}} \sup_{r\in (0, \delta]} \dashint_{\Omega_r(x)} |f-(f)_{\Omega_r(x)}|\,dy=0. $$ \end{remark} \bibliographystyle{plain}
\section{Introduction} Spin exchange (SE) is among the most elementary two-body interactions in quantum many-body systems. Between two neutral atoms, this exchange can occur within valence electron spins, within nuclear spins, or between the electron and nuclear spins. Its coherent teeterboard-like coupling facilitates excitation exchange between two spinor particles and plays an important role in quantum phenomena ranging from versatile magnetically ordered states, such as ferromagnetic or antiferromagnetic phases \cite{Ho1998,Ohmi1998}, to collective atomic spin-mixing dynamics in both bosonic~\cite{Pechkis2013,Kuwamoto2004,Schmaljohann2004,Chang2004,Chang2005,Widera2005, Kronjager2006,Black2007,Klempt2009,He2015} and fermionic~\cite{Krauser2012,Krauser2014,PhysRevLett.110.250402,PhysRevA.87.043610} quantum gases. SE can also be employed for spin squeezing and entangled-state generation and preparation in atomic spinor systems \cite{Luo620,Lucke773,Gross2011,PhysRevLett.107.210406}, and for coherence and quantum state transfer in quantum information studies using color centers or NMR techniques \cite{Chen2015,Neumann542,PhysRevLett.102.057403,PhysRevLett.93.130501,Plenio2013,Cai2013}. The SE interaction between heteronuclear atoms is typically small or even minute in magnitude compared to other energy scales, such as the density-dependent mean field and the linear or even quadratic Zeeman shifts. Controlled SE is thus difficult unless a resonance is encountered. Between atoms of the same species, this exchange resonance naturally appears due to their identical pseudo-spin construct, i.e., with the same level spacing, as has already been studied extensively for spin mixing in $^{87}$Rb atomic Bose-Einstein condensates (BEC) \cite{Kronjager2006,Widera2005}. If two atoms in the $F=1$ ground state are initially prepared in the $m_F=0$ state, SE flips one atom spin up into the $m_F=+1$ state while the other is flipped down into the $m_F=-1$ state, or {\it vice versa}.
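The flip process just described can be made concrete with spin-1 matrices: the exchange part of $\mathbf{F}^{(a)}\cdot\mathbf{F}^{(b)}$ connects $|0,0\rangle$ directly to $|{+}1,{-}1\rangle$ and $|{-}1,{+}1\rangle$ with unit matrix element. A minimal numerical sketch (our own illustration, in units where the coupling constant is 1):

```python
import numpy as np

# Spin-1 matrices in the basis (|+1>, |0>, |-1>)
Fz = np.diag([1.0, 0.0, -1.0])
Fp = np.sqrt(2) * np.array([[0.0, 1.0, 0.0],
                            [0.0, 0.0, 1.0],
                            [0.0, 0.0, 0.0]])   # raising operator F_+
Fm = Fp.T                                        # lowering operator F_-

# Exchange interaction F^(a).F^(b) = Fz Fz + (F+ F- + F- F+)/2 on two atoms
H_SE = (np.kron(Fz, Fz)
        + 0.5 * (np.kron(Fp, Fm) + np.kron(Fm, Fp)))

def idx(ma, mb):
    """Index of |m_a, m_b> in the 9-dimensional product basis."""
    return (1 - ma) * 3 + (1 - mb)

ket_00    = np.eye(9)[idx(0, 0)]    # both atoms in m_F = 0
ket_up_dn = np.eye(9)[idx(1, -1)]   # one atom flipped up, the other down
ket_dn_up = np.eye(9)[idx(-1, 1)]

# SE couples |0,0> to |+1,-1> and |-1,+1>, both with matrix element ~ 1
print(ket_up_dn @ H_SE @ ket_00, ket_dn_up @ H_SE @ ket_00)
```

Both matrix elements come out equal to 1 (up to floating-point rounding), which is the two-channel flip described in the text.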
For $^{87}$Rb atoms, this interaction is characterized by a spin-dependent scattering length $c_2$, with $|c_2|\sim 0.3\,a_B$ and $c_2<0$, denoting a ferromagnetic interaction (with $a_B$ the Bohr radius). It is much smaller than the spin-independent scattering length $c_0\sim 100\,a_B>0$. At realized condensate densities, the SE coupling set by $|c_2|$ typically amounts to no more than a few Hz. The quadratic Zeeman shift, which differentially detunes the level spacings of the up ($|m_F=0\rangle\to |m_F=1\rangle$) and down ($|m_F=0\rangle\to |m_F=-1\rangle$) flips, causes the SE to be off resonant. Thus, although the linear Zeeman shifts for the up and down spin flips cancel, observing coherent spin mixing requires limiting the background bias $B$ field to around $1$ Gauss. Further tuning around the resonance can be accomplished via the ac-Stark shifts from a dressing microwave coupled to the $F=2$ manifold \cite{Luo620,Gerbier2006,Zhao2014}. In NMR physics, spin exchange between electronic and nuclear spins can be tuned by Hartmann-Hahn double resonance (HHDR) \cite{HHDR1962,Plenio2013,Cai2013}, since the nuclear spin is insensitive to the external field. In addition to spin-mixing dynamics, recent studies of SE also concern the physics associated with interspecies SE interactions in mixtures of heteronuclear atoms and their properties, such as the ground-state phases and entanglement~\cite{Shi2006,Luo2007,Xu2009,Xu2010,Shi2010,Zhang2010,Xu2010b,Xu2011,Shi2011,Xu2012,Li2015}. The first SE-driven coherent heteronuclear spin dynamics were observed in an ultracold bosonic mixture of ($F=1$) $^{87}$Rb and $^{23}$Na atoms~\cite{Li2015}, and are nicely described by mean-field based theories, as for a single atomic species~\cite{Xu2009,Xu2012}. The dynamical effect of the SE interaction $\propto (s_+^{(a)}s_-^{(b)}+s_-^{(a)}s_+^{(b)})$ between two unlike ($\eta=a,b$) spin-$1/2$ atoms ($\vec s^{(\eta)}$) depends heavily on their differential Zeeman shifts.
For the case of $^{87}$Rb and $^{23}$Na atoms in the $F=1$ ground states mentioned above, their Land{\'e} g-factors are essentially the same because of their equal nuclear and electron spins. Hence, an accidental interspecies SE resonance occurs at $B_c\sim 1.69\,\rm G$, a small but non-zero $B$ field. More generally, the Land{\'e} g-factors of unlike atoms can be very different, leading to a large Zeeman level-spacing mismatch ($\sim 1$ MHz) even at a moderately low magnetic field ($\sim 1$ Gauss). Such a large detuning can completely overwhelm the typical SE rate set by $|c_2|$. The other option of working at a near-zero bias $B$ field is difficult due to the experimental challenge of controlling the (fluctuating) ambient magnetic field. This paper presents a general scheme for promoting resonant SE between heteronuclear atoms by compensating for their energy-level mismatch with an appropriately modulated $B$ field or rf field. The basic idea is illustrated in Fig.~\ref{fig1}, with the modulation frequency resonant with the level-spacing mismatch. Such a scheme is of course limited to the frequency ranges realizable with available technologies. The different Land{\'e} g-factors of the heteronuclear atoms result in different couplings to the modulated $B$ field. As we will show in the following, tuning the amplitude and/or the frequency of the driving field controls the interspecies SE dynamics. We will first illustrate the basic operation of our scheme for a simple model of two unlike atoms. The result obtained is then applied to a realistic experiment with a $^{87}$Rb and $^{23}$Na mixture, accompanied by detailed numerical simulations. Prospective applications to more general cases are then discussed, together with a realistic assessment of the potential restrictions. \begin{figure}[tbp] \centering \includegraphics[width=\linewidth]{fig1.pdf} \caption{(color online). (a) A schematic illustration of interspecies SE assisted by periodic driving.
(b) The time-dependent detuning $\delta(t)$ (black solid line) in the presence of the drive with period $T=2\pi/\omega$. Effective interspecies SE occurs when $|\delta(t)|\le c$ (blue shaded region) in the (red) highlighted time windows, for driving amplitudes $\Omega\lesssim \delta$, $\Omega=\delta$, $\Omega\gtrsim \delta$, and $\Omega\gg \delta$.} \label{fig1} \end{figure} \section{Two atom physics} Without loss of generality, we assume an isotropic interspecies spin-spin interaction (SSI) of strength $c$ between the two heteronuclear atoms. The model Hamiltonian thus becomes \begin{eqnarray} H &=&\hbar\omega_a s_z^{(a)}+\hbar\omega_b s_z^{(b)}+c\,\bold{s}^{(a)}\cdot\bold{s}^{(b)}+H_D(t), \label{sec1:model_Hamiltonian}\\ H_D(t) &=& \hbar\Omega_a\, s_z^{(a)}\cos{\omega t}+\hbar\Omega_b\, s_z^{(b)}\cos{\omega t}, \label{sec1:effective_Hamiltonian} \end{eqnarray} where $s_\mu^{(\eta)}$ ($\mu=x,y,z$, and $\eta=a,b$) denotes the spin-1/2 matrix for atom $\eta$ with level spacing $\hbar\omega_\eta$ between the spin-up $|e_\eta\rangle$ and spin-down $|g_\eta\rangle$ states. $H_D(t)$ describes the coupling between the atoms and an external periodic driving ($B$) field along the $z$-axis. Other forms of coupling, such as $\propto s_x^{(\eta)}$ or $\propto s_y^{(\eta)}$, give similar results and will not be discussed here explicitly. Even at a small $B$ field, the mismatch between the pseudo-spin level spacings of two unlike atoms can be much larger than their SE interaction, i.e., $\delta=\omega_a-\omega_b\gg |c|/\hbar$, assuming $\omega_a>\omega_b$. Efficient SE dynamics thus calls for suitable level shifts to compensate for this mismatch. The ac-Stark shift from a microwave field is often employed, although it can compensate only a small $\delta$ \cite{Gerbier2006,Zhao2014}. Our idea is instead to apply an external $\pi$-polarized oscillating rf or microwave field with frequency $\omega\sim \delta$.
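The mechanism can be checked by integrating the two-atom Schr\"odinger dynamics of this model Hamiltonian directly. The sketch below is our own illustration, not the paper's code: it uses rescaled parameters ($\delta=300$, $c=10$ in units where $\hbar=1$, so $\delta\gg c$), drives on resonance $\omega=\delta$, and takes $\Omega=1.84\,\omega$, an amplitude comparable to $\delta$ that turns out to be near-optimal. Starting from $|g_a,e_b\rangle$, the population is almost fully exchanged into $|e_a,g_b\rangle$ despite the large bare mismatch:

```python
import numpy as np

# Spin-1/2 operators in the basis (|e>, |g>)
sz = np.diag([0.5, -0.5])
sp = np.array([[0.0, 1.0], [0.0, 0.0]])   # s_+ = |e><g|
sm = sp.T                                  # s_- = |g><e|
I2 = np.eye(2)

# Rescaled illustrative parameters (hbar = 1), chosen so that delta >> c
delta = 300.0            # level-spacing mismatch omega_a - omega_b
c     = 10.0             # SSI strength
omega = delta            # drive resonant with the mismatch
Omega = 1.84 * omega     # differential drive amplitude, near the max of J_1

# Common s_z terms are dropped: |e_a,g_b> and |g_a,e_b> both have zero total
# magnetization, so only the differential parts of H act within this pair.
s_dot_s = (np.kron(sz, sz)
           + 0.5 * (np.kron(sp, sm) + np.kron(sm, sp)))
H0 = delta * np.kron(sz, I2) + c * s_dot_s
Hd = Omega * np.kron(sz, I2)

def rhs(t, y):
    return -1j * ((H0 + np.cos(omega * t) * Hd) @ y)

psi = np.zeros(4, dtype=complex)
psi[2] = 1.0             # initial state |g_a, e_b>

# Fixed-step RK4 over half an effective SE period, ~ pi / (c J_1(1.84))
t, dt = 0.0, 2e-4
t_end = np.pi / (c * 0.5819)     # J_1(1.84) ~ 0.5819
p_exchanged = []
while t < t_end:
    k1 = rhs(t, psi)
    k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
    k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
    k4 = rhs(t + dt, psi + dt * k3)
    psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    t += dt
    p_exchanged.append(abs(psi[1]) ** 2)   # population of |e_a, g_b>

# Near-complete exchange even though delta >> c
print(max(p_exchanged))
```

Without the drive ($\Omega=0$), the same integration leaves the transferred population at the perturbative level $\sim(c/\delta)^2$.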
As illustrated in Fig.~\ref{fig1}(a), when the above condition is satisfied, the interspecies SE $|g_a,e_b\rangle \leftrightarrow |e_a,g_b\rangle$ can hit a resonance assisted by the absorption or emission of an oscillation quantum (or photon) of energy $\hbar\omega$. The instantaneous level mismatch between the two-atom states $|g_a,e_b\rangle$ and $|e_a,g_b\rangle$ reduces to $\delta(t)=(\omega_a+\Omega_a\cos{\omega t})-(\omega_b+\Omega_b\cos{\omega t})=\delta+\Omega \cos{\omega t}$. The differential coupling $\Omega\equiv\Omega_a-\Omega_b$ tunes the SE into resonance, $\delta(t)\sim c$, analogous to the way differential Zeeman shifts tune a magnetic Feshbach resonance, albeit at selected instants due to the explicit time dependence here. At a fixed $\omega$, the windows for near-resonant SE within one driving period are highlighted (red) in Fig.~\ref{fig1}(b) for various driving amplitudes. The largest time window appears for $\Omega\gtrsim \delta$, which is confirmed more rigorously by Floquet theory. \begin{figure}[tbp] \centering \includegraphics[width=0.96\linewidth]{fig2.pdf} \caption{(color online). Numerical results compared to analytical ones for $\delta=\omega_a-\omega_b=3\,\rm kHz$ and $c/\hbar=10\,\rm Hz$ with detuning $\Delta=\omega-\delta$. (a) Time evolution of fractional populations for $\Delta=5$ Hz and $\Omega=\omega=\delta$. Spin oscillation periods (b) and amplitudes (c) from numerical evolutions with the original Hamiltonian Eq.~(\ref{sec1:model_Hamiltonian}) (black solid lines) and the effective Hamiltonian (red dashed lines). (d) The dependence of $c_{\text{eff}}$ on $\Omega$ at $\omega=\delta$.
The red dashed line denotes the analytic formula $c_{\rm eff}=cJ_1(\Omega/\omega)$, while the black solid line is based on the oscillation periods computed from the dynamics of the original Hamiltonian.} \label{fig2} \end{figure} In the high-frequency limit $\omega\sim\delta\gg c/\hbar$, an effective time-independent Hamiltonian emerges, \begin{eqnarray} H_{\rm eff}&=& \hbar(\omega_a-{\omega}/{2})s_z^{(a)}+\hbar(\omega_b+{\omega}/{2})s_z^{(b)} \nonumber \\ &&-c_{\text{eff}}\,\bold{s}^{(a)}\cdot\bold{s}^{(b)} +\tilde{c}\,s_z^{(a)}s_z^{(b)}, \label{eqn13} \end{eqnarray} as detailed in the appendix below, with $c_{\rm eff}=cJ_1(\Omega/\omega)$ and $\tilde{c}=c[1-J_1(\Omega/\omega)]$. The minus sign in front of $c_{\text{eff}}$ does not imply that the SSI changes sign outright, because of the accompanying term $\propto s_z^{(a)} s_z^{(b)}$. For our idea to work, the coupling amplitudes for the two atoms must be different, i.e., $\Omega_a\neq\Omega_b$, or $\Omega\neq 0$, as otherwise $c_{\text{eff}}=0$. Our proposal thus can be applied whenever the two atoms are coupled to a driving field with different strengths, a condition that is almost always satisfied for heteronuclear atoms when their pseudo-spin states exhibit different Land{\'e} g-factors. The analytical results above are confirmed by numerical simulations of the full dynamics, including the periodic drive, at $\delta=\omega_a-\omega_b=3\,\rm kHz$ and $c/\hbar=10\,\rm Hz$ (satisfying $\delta\gg c/\hbar$). The simulation starts with the two atoms initially in the state $|g_a,e_b\rangle$. Figure~\ref{fig2} shows good agreement between the analytical and numerical results. The peaks of both the period and the amplitude are located at $\Delta=\omega-\delta=0$, as expected. The numerical result for the effective SE interaction strength, as shown in Fig.
\ref{fig2}(d) (black solid line), is obtained by matching the frequency of the spin-population oscillation (from a Fourier analysis) to the analytical result $\sqrt{4c_{\text{eff}}^2+\Delta^2}/2$ given by the effective Hamiltonian (\ref{eqn13}). We fix $\Delta=0$ and vary $\Omega$, so that the spin-oscillation frequency reduces simply to $c_{\text{eff}}$. \section{Spinor mixture of ${}^{87}$$\text{Rb}$ and ${}^{23}$$\text{Na}$} We next extend the above discussion for two atoms to a mixture of bosonic spinor $^{23}$Na ($\eta=a$) and $^{87}$Rb ($\eta=b$) atoms in the ground $F=1$ states~\cite{Li2015}. This represents a special case, as the level-spacing mismatch is smaller because the nuclear and electronic spins of both atoms are the same. Their near-resonant interspecies spin dynamics were recently observed around $B_c\sim 1.69\,\rm G$. In the off-resonant case, when the energy-level mismatch is much larger than the interspecies SE strength, this combination still represents a nice example with which to test our idea of periodic-driving-assisted resonant SE. \begin{figure*}[!htp] \centering \includegraphics[scale=0.65]{fig3.pdf} \caption{The dependence of SE dynamics on $\omega$ for Rb (red line) and Na (blue line) atoms at $B_0=2.2\,\rm G$, where the Zeeman energy-level-spacing mismatch between the two spin states $|-1,0\rangle$ and $|0,-1\rangle$ is $\delta\simeq 2\pi\times 227$ Hz and $\Omega=\delta$. (a1-a2) Coherent spin oscillations of balanced (a1) and unbalanced (a2) atomic populations at different detunings. The black dashed lines denote populations of state $|1\rangle$. (b1-b2) The dependence of the oscillation amplitude on $\Delta$ for balanced (b1) and unbalanced (b2) mixtures. (c1-c2) The same as above but for the oscillation period in balanced (c1) and unbalanced (c2) mixtures. } \label{fig3} \end{figure*} The model Hamiltonian is detailed in the appendix, with $m_\eta$ the atomic mass and $\mu=m_1m_2/(m_1+m_2)$ the interspecies reduced mass.
$V_\eta$ denotes the trap potential, and $p_\eta$ and $q_\eta$ are respectively the linear and quadratic Zeeman shifts, while $c_0^{(\eta)}$ and $c_2^{(\eta)}$ label the intra-atomic density-density and SE interaction strengths. The interspecies spin-independent, spin-exchange, and spin-singlet pairing interaction strengths are denoted by $\alpha$, $\beta$, and $\gamma$ as before in studies of binary mixture SE dynamics \cite{Xu2009} and their values are known to be $(\alpha,\beta,\gamma)=2\pi\hbar^2a_B/\mu\times(78.9,-2.5,0.06)$ for this mixture. The experiments of Ref. \cite{Li2015} are carried out for a $^{23}$Na atomic BEC with a cold thermal $^{87}$Rb atomic gas in an optical dipole trap. Their dynamics are governed by the following coupled equations \begin{widetext} \begin{eqnarray} i\hbar\frac{\partial}{\partial t}\phi&=& \left[ -\frac{\hbar^2}{2m_a}\nabla^2-p_aF_z +q_aF_z^2+V_a +c_0^{(a)}\text{Tr}(n_{a})+c_2^{(a)} (\phi^{\dagger}\mathbf{F}\phi)\cdot\mathbf{F} \right]\phi\nonumber\\ &&+[\alpha\text{Tr}(n_b) +\beta\text{Tr}(\mathbf{F}n_b) \cdot\mathbf{F}+\gamma\mathcal{U}_{b}]\phi,\\ \frac{\partial}{\partial t}f&=&-\frac{\bold{p}}{m_b}\cdot \nabla_{\bold{r}}f+\nabla_{\mathbf{r}}V_b\cdot\nabla_{\mathbf{p}}f +\frac{1}{i\hbar}[U,f]+\frac{1}{2}\{\nabla_{\mathbf{r}}U,\nabla_{\mathbf{r}}f\}, \end{eqnarray} with \begin{eqnarray} U&=&-p_bF_z+q_bF_{z}^2+c_0^{(b)}\text{Tr}(n_b) +c_0^{(b)}n_b+c_2^{(b)}\text{Tr}(\mathbf{F}n_b)\cdot\mathbf{F}+c_2^{(b)}\mathbf{F}n_b\cdot\mathbf{F}\nonumber\\ &&+\alpha\text{Tr}(n_a)+\beta\text{Tr}(\mathbf{F}n_a)\cdot\mathbf{F}+\gamma\mathcal{U}_a, \end{eqnarray} \end{widetext} where the Na condensate is described by its mean field $\phi=\langle\hat{\phi}_a \rangle=(\phi_{1},\phi_0,\phi_{-1})^{T}$ and $(n_{a})_{ij}\equiv\phi^{*}_j\phi_i$, the Rb gas is described by the collisionless Boltzmann equation in terms of the Wigner function $f_{ij}(\bold{r},\bold{p},t)=\langle e^{iHt/\hbar}\hat{f}_{ij}(\bold{r},\bold{p})e^{-iHt/\hbar}\rangle$ and 
$\hat{f}_{ij}\equiv\int d\bold{r}' e^{-i\bold{p}\cdot\bold{r}'/\hbar}\hat{\psi}_j^{\dagger}(\bold{r}-\bold{r}'/2) \hat{\psi}_i(\bold{r}+\bold{r}'/2)$. We define $(n_b(\mathbf{r},t))_{ij}=\int d\mathbf{p}\, f_{ij}(\mathbf{r},\mathbf{p},t)/(2\pi\hbar)^3$, $(\mathcal{U}_b)_{ij}=(-1)^{i-j}(n_b)_{\bar{j}\bar{i}}/3$, and $(\mathcal{U}_a)_{ij}=(-1)^{i-j}(n_a)_{\bar{j}\bar{i}}/3$ with $\bar{i}=-i$. When one atomic species is non-condensed, the single-mode approximation (SMA)~\cite{Li2015} is well satisfied for both atomic species. The resulting simplified equations form the basis of our numerical study. The accidental resonance reported in Ref.~\cite{Li2015} at $B_c\sim 1.69\,\rm G$ is between the two-atom states $|m_F^{(a)}=0,m_F^{(b)}=-1\rangle \leftrightarrow |-1,0\rangle$. Away from this resonance, with either increasing or decreasing $B$ field, the interspecies SE dynamics are suppressed. Our scheme comes with a $\pi$-polarized periodic rf or microwave field coupled to the atoms, \begin{eqnarray} H_D(t)=\cos{\omega t}\int d\bold{r}\left\lbrace \hbar\Omega_a\hat{\phi}^{\dagger}F_{z}^{(a)}\hat{\phi} + \hbar\Omega_b\hat{\psi}^{\dagger}F_{z}^{(b)}\hat{\psi} \right\rbrace . \hskip 12pt \end{eqnarray} At $B=2.2\,\rm G$, for instance, the level-spacing mismatch between the two-atom spin states $|0,-1\rangle$ and $|-1,0\rangle$ is $\delta\simeq 2\pi\times 227~\rm Hz$, which is much larger than the typical SE strength set by $\beta$. The intra-species spin dynamics are also suppressed, due to the large quadratic Zeeman shifts at this $B$ field. We numerically explore this case for both balanced and imbalanced populations of $^{87}$Rb and $^{23}$Na atoms, starting from a coherent superposition of internal states for both species. To promote strong effective interspecies SE, $\Omega=\delta$ is taken, and $\omega$ is varied in the vicinity of the two-atom resonance $\sim\delta$.
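Under the SMA the spatial degrees of freedom drop out and each species reduces to a three-component spinor. The fragment below is a deliberately stripped-down toy version of such coupled SMA equations (our own sketch with arbitrary illustrative parameters; it keeps only the quadratic Zeeman terms and the interspecies vector exchange, and omits the drive, the linear Zeeman shifts, and all intraspecies and singlet-pairing terms), mainly to illustrate the structure of the coupled equations and their conserved quantities:

```python
import numpy as np

# Spin-1 matrices, basis (|+1>, |0>, |-1>)
Fz = np.diag([1.0, 0.0, -1.0])
Fp = np.sqrt(2) * np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=complex)
Fx, Fy = 0.5 * (Fp + Fp.conj().T), -0.5j * (Fp - Fp.conj().T)
F = [Fx, Fy, Fz]

q_a, q_b, beta = 2.0, 3.0, 1.0   # illustrative quadratic Zeeman and exchange

def mean_spin(z):
    return np.array([np.real(np.conj(z) @ f @ z) for f in F])

def rhs(za, zb):
    # i dz_eta/dt = [q_eta Fz^2 + beta <F>_other . F] z_eta  (hbar = 1)
    Ha = q_a * Fz @ Fz + beta * sum(s * f for s, f in zip(mean_spin(zb), F))
    Hb = q_b * Fz @ Fz + beta * sum(s * f for s, f in zip(mean_spin(za), F))
    return -1j * (Ha @ za), -1j * (Hb @ zb)

# Non-collinear coherent superpositions for the two species
za = np.array([0.2, 0.4, 0.4], dtype=complex)
zb = np.array([0.1, 0.6, 0.3], dtype=complex) * np.exp(1j * np.array([0.0, 0.5, 1.0]))
za, zb = za / np.linalg.norm(za), zb / np.linalg.norm(zb)

mz_init = mean_spin(za)[2] + mean_spin(zb)[2]
p0_init = abs(za[1]) ** 2
dev_max, dt = 0.0, 1e-3
for _ in range(4000):              # fixed-step RK4
    k1a, k1b = rhs(za, zb)
    k2a, k2b = rhs(za + dt / 2 * k1a, zb + dt / 2 * k1b)
    k3a, k3b = rhs(za + dt / 2 * k2a, zb + dt / 2 * k2b)
    k4a, k4b = rhs(za + dt * k3a, zb + dt * k3b)
    za = za + dt / 6 * (k1a + 2 * k2a + 2 * k3a + k4a)
    zb = zb + dt / 6 * (k1b + 2 * k2b + 2 * k3b + k4b)
    dev_max = max(dev_max, abs(abs(za[1]) ** 2 - p0_init))

# Populations evolve, while the total longitudinal magnetization is conserved
mz_final = mean_spin(za)[2] + mean_spin(zb)[2]
print(dev_max, mz_final - mz_init)
```

The conserved total $\langle F_z\rangle_a+\langle F_z\rangle_b$ is the toy analogue of the magnetization conservation behind the $|0,-1\rangle\leftrightarrow|-1,0\rangle$ channel.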
For the balanced case with $N_a=N_b=6\times 10^4$ atoms, we consider an initial configuration with $50\%$ of the Rb (Na) atoms in the state $|-1\rangle$ ($|0\rangle$), $40\%$ in $|0\rangle$ ($|-1\rangle$), and $10\%$ in $|+1\rangle$ ($|+1\rangle$). For the unbalanced case of $N_b=6.33\times 10^4$ and $N_a=10.40\times 10^4$, the initial states of both atoms are prepared with approximately $36\%$ population in state $|0\rangle$, $57\%$ in $|-1\rangle$, and $7\%$ in $|+1\rangle$. The resulting near-resonant interspecies SE dynamics are shown in Fig.~\ref{fig3}. Both the amplitude and the period of the spin oscillations are found to vary with $\omega$. The resonance peak is seen to be shifted from the two-atom value $\Delta=0$ due to mean-field interactions, while the width of the resonance remains of the same order as that induced by the bare SE interaction strength at weak $B$ field, as shown in Ref.~\cite{Li2015}. It is interesting to point out that, for the controlled SE dynamics, the periodic external drive does not seem to affect other SSI channels, since it does not induce single-particle excitations, as shown in Figs.~\ref{fig3}(a1,a2) (black dashed lines). Finally, we note that our idea for controlled SE differs from both the recently demonstrated scenario~\cite{Li2015} and the widely known HHDR applied to NV centers \cite{Plenio2013,Cai2013}. The first scenario is based on shifting the resonance field $B_c$ with an optically induced, species-dependent (time-independent) static synthetic $B$ field. Complications in balancing the amount of species- and spin-dependent vector light shifts do not arise in our scheme. In the second scenario, at least one of the atomic systems is in the strong-driving limit and is dressed by the external field. Resonant spin exchange occurs when the dressed-state splitting matches the level spacing of the other atom.
In our case, by contrast, the spin states are neither dressed nor flipped by the driving field, and the collective spin dynamics occur due to the inherent SSI between the atoms. Our idea thus belongs more generally to Floquet engineering and can be applied to tune the effective interspecies SE for various types of spinor atomic mixtures. \section{Conclusion} In conclusion, we present a general scheme to engineer resonant heteronuclear atomic spin dynamics by applying a periodic coupling field. It applies to interspecies spin dynamics when the Zeeman energy-level-spacing mismatch between the two species is much larger than their SSI strength. Our method is applicable to several ongoing mixture experiments, and is illustrated for the mixture of $^{23}$Na and $^{87}$Rb atoms, where spin dynamics were previously observed in the $F=1$ ground states at near-zero field. A simple calculation using Fermi's golden rule shows that the inelastic decay rate associated with SE collisions is about $10^{-14}$ $\text{cm}^{3}\cdot \text{s}^{-1}$ for the $^{23}\text{Na}-^{87}\text{Rb}$ atom mixture, which should provide a sufficiently long lifetime to carry out the proposed periodic-modulation experiment. Another promising candidate system for applying our idea is the $^6$Li-$^{23}$Na (Fermi-Bose) mixture, which exhibits two zero crossings of the Zeeman level mismatch, at $B=0$ G and $B=70.2$ G, between the $|-1/2,1\rangle\leftrightarrow |1/2,0\rangle$ states \cite{ArnoTrautmann2016}. \section*{Acknowledgement} This work is supported by the National Basic Research Program of China (973 program) (No. 2013CB922004), NSFC (No. 91421305, No. 11574100, No. 11654001, and No. 11374176) and the National Thousand-Young-Talents Program.
\textit{Introduction.---} Electron shelving occurs in atoms when the stream of photons emitted by a laser-driven strong transition is interrupted by quantum jumps to metastable states; these jumps introduce finite dark periods, hence blinking, in the resonance fluorescence scattering. The blinking or intermittency of the fluorescence is a stationary random process whose statistics of bright and dark periods are well studied \cite{NaSD86,SNBT86,BHIW86,PlKn98}. Recently, it was shown to be possible to reverse the onset of a dark period \cite{Minev18}. The photon statistics \cite{MeSc90} and phase-dependent fluctuations \cite{CaRG16} of blinking resonance fluorescence have also been studied in some detail. The atom's ensemble-averaged resonance fluorescence shows signatures of shelving. The population of the excited state of the strong transition, for example, reaches a short-term quasi-stationary state (typical of the two-level system) followed by a long decay to the final steady state at nearly the decay rate of the weak transition \cite{PlKn98}. Stationary spectra of blinking resonance fluorescence have also been studied: Hegerfeldt and Plenio \cite{HePl95} and Garraway et al.\ \cite{GaKK95} found that for a bichromatically driven V- and $\Lambda$-type three-level atom (3LA) the spectrum consists of a delta-peaked coherent term, an incoherent Mollow-like spectrum \cite{Mollow69}, and a novel feature given by a narrow inelastic peak. This narrow peak is the spectral signature of the slow decay of the atomic populations, caused by the presence of a slow decay channel that randomly interrupts the fluorescence of a strongly driven transition. The narrow peak was measured by B\"uhner and Tamm with a single $^{171} \mathrm{Yb}^+$ ion by heterodyne detection \cite{BuTa00}.
Evers and Keitel \cite{EvKe02} then proved that the narrow peak grows at the expense of the coherent peak, its intensity being the difference between the intensities of the coherent peaks of a two-level atom (2LA) and of the 3LA. Little attention has been paid to the spectrum of blinking resonance fluorescence as a dynamical observable. Only the spectrum during a single bright period, of variable length, has been considered so far \cite{HePl96}; it was found to be the Mollow spectrum, proving that the narrow peak is a feature of the random interruption of the fluorescence. One then asks how the narrow peak emerges if the dark periods are taken into account during the ensemble-averaged measurement of the spectrum. In this paper we investigate time-dependent spectra of a single three-level atom undergoing blinking resonance fluorescence, that is, including both bright and dark periods in the ensemble evolution. Our main result is that the narrow inelastic component due to electron shelving develops much later than the two-level Mollow spectrum, but before the average dark time has passed. For this purpose we calculate the Eberly-W\'odkiewicz (EW) physical spectrum \cite{EbWo77}, which gives the most rigorous theoretical description of time-dependent spectra. In this approach, the source field is scanned by a nonzero-bandwidth filter prior to photodetection, properly handling the time-energy uncertainty that arises when both time and frequency are to be resolved. The EW spectrum has been applied to study nontrivial dynamics of optical systems, for example: the effects of switching on \cite{EbKW80} and switching off the laser \cite{HuTE82}, initial atomic coherence \cite{GoMo87}, and coherent population trapping \cite{JLDS89} in resonance fluorescence; spontaneous emission (the first prediction of the Rabi doublet) \cite{SaNE83}, Dicke superradiance \cite{CaSC96} and frequency-filtered photon correlations \cite{Valle12} in cavity QED.
The EW spectrum has also been applied to the spontaneous emission in front of a moving mirror \cite{GHD+10,Mirza15} and two-atom entanglement \cite{HoFi10} in QED. \begin{figure}[h] \includegraphics[width=3.7cm,height=3.5cm]{Fig1_atom.png} \caption{\label{fig:3LA} Scheme of the three-level atom showing laser excitation of the $|e \rangle - |g \rangle$ transition with Rabi frequency $\Omega$, detuning $\Delta$, and spontaneous decay rate $\gamma$, and spontaneous decay via the metastable state $| a \rangle$ at rates $ \gamma_d, \gamma_a$. } \end{figure} \textit{Model.---} Our system, depicted in Fig.~\ref{fig:3LA}, consists of a three-level atom with one laser-driven transition with Rabi frequency $\Omega$, detuning $\Delta$ and decay rate $\gamma$, whose fluorescence is monitored. The excited state $|e \rangle$ also decays to a long-lived intermediate state $|a \rangle$ at the rate $\gamma_d$, and from this to the ground state at the rate $\gamma_a$. The Markovian master equation in the frame rotating at the laser frequency is \begin{eqnarray} \dot{\rho} &=& -i [\mathcal{H},\rho] +{\gamma}\mathcal{L}[\sigma_{ge}] \rho +{\gamma_d} \mathcal{L}[\sigma_{ae}] \rho + {\gamma_a} \mathcal{L}[\sigma_{ga}] \rho , \end{eqnarray} where $\mathcal{H} = \Delta \, \sigma_{eg} \sigma_{ge} +\Omega ( \sigma_{eg} +\sigma_{ge})/2$ is the atom-laser Hamiltonian in the rotating wave approximation and $\mathcal{L}[\mathcal{O}]\rho \equiv \mathcal{O}\rho\mathcal{O}^\dagger - (\mathcal{O}^\dagger\mathcal{O}\rho +\rho\mathcal{O}^\dagger\mathcal{O})/2$ are spontaneous decay superoperators. The atomic operators $\sigma_{jk} = |j \rangle\langle k|$ obey $\sigma_{jk} \sigma_{lm} = \sigma_{jm} \delta_{kl}$. Because of the pure spontaneous emission decay, the incoherent nature of the $|e \rangle - |a \rangle - |g \rangle$ channel decouples the equations for the coherences involving the $|a \rangle$ state from those of the laser driven $|e \rangle - |g \rangle$ transition \cite{CaRG16,EvKe02}. 
The Bloch equations of the effective two-level system can then be written in compact form as \begin{eqnarray} \label{eq:BlochEqs} \langle \dot{\mathbf{s}} (t) \rangle &=& \mathbf{M} \langle \mathbf{s}(t) \rangle +\mathbf{b} , \\ \mathbf{s} &\equiv& \left( \sigma_{ge}, \sigma_{eg}, \sigma_{ee}, \sigma_{gg} \right)^T , \\ \mathbf{b} &=& (0,0,0,\gamma_a)^T , \end{eqnarray} \begin{eqnarray} \label{eq:matrixM} \mathbf{M} &=& \left( \begin{array}{cccc} -i \Delta -\gamma_+/2 & 0 & i\Omega/2 & -i\Omega/2 \\ 0 & i \Delta -\gamma_+/2 & -i\Omega/2 & i\Omega/2 \\ i\Omega/2 & -i\Omega/2 & - \gamma_+ & 0 \\ -i\Omega/2 & i\Omega/2 & \gamma_- & -\gamma_a \end{array} \right) , \nonumber \\ \end{eqnarray} \begin{eqnarray} \gamma_+ = \gamma + \gamma_d , \qquad \gamma_- = \gamma - \gamma_a . \end{eqnarray} Above, $\dot{\mathbf{s}}$ is the derivative of $\mathbf{s}$ with respect to time. In general, the Bloch equations are solved numerically. However, accurate approximate analytical solutions in the resonant case, $\Delta =0$, in the regime (\ref{eq:shelvingCond}) were obtained by two of us in \cite{CaRG16}. The populations and coherences show the typical short-term decay at the rate $3\gamma_+/4$ reminiscent of the 2LA dynamics and a long-term decay, at roughly $\gamma_a$, that signals shelving in the metastable state $|a \rangle$ \cite{PlKn98}. The solutions in the steady state are \begin{subequations} \begin{eqnarray} \langle \sigma_{eg} \rangle_{st} &=& \frac{i \Omega[ \gamma_+ + i 2\Delta ]} {(2 +q)\Omega^2 +\gamma_+^2 +4\Delta^2} , \\ \langle \sigma_{gg} \rangle_{st} &=& \frac{\Omega^2 +\gamma_+^2 +4\Delta^2} {(2 +q)\Omega^2 +\gamma_+^2 +4\Delta^2} , \\ \langle \sigma_{ee} \rangle_{st} &=& \frac{\Omega^2} {(2 +q)\Omega^2 +\gamma_+^2 +4\Delta^2} , \label{eq:rho_eeSS} \\ \langle \sigma_{aa} \rangle_{st} &=& \frac{q \Omega^2} {(2 +q)\Omega^2 +\gamma_+^2 +4\Delta^2} , \end{eqnarray} \end{subequations} where \begin{eqnarray} q &=& \gamma_d / \gamma_a . 
\end{eqnarray} and $\langle \sigma_{ge} \rangle_{st} = \langle \sigma_{eg} \rangle_{st}^{\ast}$. This system features blinking, with long bright and dark periods in the fluorescence of the $|e \rangle - |g \rangle$ transition due to electron shelving in the metastable state $| a \rangle$, if the decay rates obey the relation \begin{eqnarray} \label{eq:shelvingCond} \gamma \gg \gamma_d , \gamma_a . \end{eqnarray} A random telegraph model can be used to calculate the average length of the bright and dark periods \cite{EvKe02,PeKn88}. For this derivation the equation for the metastable state, $\dot{\rho}_{aa} = \gamma_d \rho_{ee} - \gamma_a \rho_{aa}$, is needed ($\rho_{jk} = \langle \sigma_{kj} \rangle$). During a bright period the state $|a \rangle$ is never occupied, $\rho_{aa}(t) =0$. The average bright time $T_B$ is defined as $T_B^{-1} = (\dot{\rho}_{aa})_{t \to \infty}$, where the limit means a time long enough for the two-level transition $|g \rangle - |e \rangle$ to reach the steady state, so $\rho_{ee}(\infty) \to (\rho_{ee}^{st})_{2LA}$. Thus, with $q=0$ and $\gamma_+ \to \gamma$ in Eq.~(\ref{eq:rho_eeSS}), we have \begin{eqnarray} \label{eq:AvBrightPeriod} T_B = \frac{2\Omega^2 +\gamma^2 +4\Delta^2} {\gamma_d \Omega^2} . \end{eqnarray} Similarly, the average dark time $T_D$ is defined as $T_D^{-1} = (\dot{\rho}_{aa})_{t \to \infty}$ but, during a dark period $\rho_{aa}(t) =1$ and $\rho_{ee}(t) = 0$, hence \begin{eqnarray} \label{eq:AvDarkPeriod} T_D = \gamma_a^{-1} . \end{eqnarray} The three-level scheme of Fig.~\ref{fig:3LA} is a simplified theoretical representation of the complex energy level structure of an $^{171} \mathrm{Yb}^+$ ion under the driving configuration presented in \cite{BuTa00}. 
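The closed-form steady state quoted above can be cross-checked by solving the linear system $\mathbf{M}\langle\mathbf{s}\rangle_{st}+\mathbf{b}=0$ numerically. A short sketch (our own check, with arbitrary test parameters, in units of $\gamma$, that respect the shelving condition):

```python
import numpy as np

# Test parameters (arbitrary, units of gamma) obeying gamma >> gamma_d, gamma_a
gamma, gamma_d, gamma_a = 1.0, 0.02, 0.01
Omega, Delta = 0.7, 0.3
gp, gm, q = gamma + gamma_d, gamma - gamma_a, gamma_d / gamma_a

# Bloch matrix M for s = (s_ge, s_eg, s_ee, s_gg), as in Eq. (matrixM)
M = np.array([
    [-1j * Delta - gp / 2, 0.0,                  1j * Omega / 2, -1j * Omega / 2],
    [0.0,                  1j * Delta - gp / 2, -1j * Omega / 2,  1j * Omega / 2],
    [1j * Omega / 2,      -1j * Omega / 2,      -gp,              0.0           ],
    [-1j * Omega / 2,      1j * Omega / 2,       gm,             -gamma_a       ]],
    dtype=complex)
b = np.array([0.0, 0.0, 0.0, gamma_a], dtype=complex)

s_st = np.linalg.solve(M, -b)        # steady state: M s + b = 0

# Closed-form steady-state expressions from the text
D = (2 + q) * Omega**2 + gp**2 + 4 * Delta**2
s_eg = 1j * Omega * (gp + 2j * Delta) / D
s_ee = Omega**2 / D
s_gg = (Omega**2 + gp**2 + 4 * Delta**2) / D
s_aa = q * Omega**2 / D

print(np.allclose(s_st, [np.conj(s_eg), s_eg, s_ee, s_gg]))
print(abs(s_ee + s_gg + s_aa - 1.0) < 1e-9)   # populations sum to one
```

The numerical solve reproduces the closed forms, including the population normalization $\langle\sigma_{ee}\rangle_{st}+\langle\sigma_{gg}\rangle_{st}+\langle\sigma_{aa}\rangle_{st}=1$.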
In that work, the stationary spectrum of $^{171} \mathrm{Yb}^+$ was measured; in order to reduce the dark periods in the ion's fluorescence, additional incoherent pumping from $| a \rangle$ to a fourth level (not shown) with faster decay to $| g \rangle$ was applied. Thus, $\gamma_d$ is considered an effective decay rate that includes such pumping. \textit{Stationary Power Spectrum.---} The stationary Wiener-Khintchine power spectrum is given by the Fourier transform of the field autocorrelation function \cite{FiTa17}, \begin{eqnarray} S(\omega) &=& \mathrm{Re} \int_0^{\infty} d\tau e^{-i \omega \tau} \langle \sigma_{eg} (0) \sigma_{ge} (\tau) \rangle_{st} . \end{eqnarray} By writing the atomic operators as the sum of a mean, $\langle \sigma_{jk} \rangle_{st}$, plus fluctuations, $\tilde{\Delta} \sigma_{jk}(t)$, that is, $\sigma_{jk} (t) = \langle \sigma_{jk} \rangle_{st} +\tilde{\Delta} \sigma_{jk} (t) $, we can separate the spectrum into a coherent part \begin{eqnarray} \label{eq:I_coh} S_{coh}(\omega) &=& |\langle \sigma_{eg} \rangle_{st} |^2 \mathrm{Re} \int_0^{\infty} e^{-i \omega \tau} d\tau = \pi |\langle \sigma_{eg} \rangle_{st} |^2 \delta(\omega) , \nonumber \\ &=& \frac{ \pi \Omega^2 (\gamma_+^2 +4\Delta^2) } {[(2+q)\Omega^2 +\gamma_+^2 +4\Delta^2 ]^2 } \delta(\omega) , \end{eqnarray} due to elastic scattering, and an incoherent part \begin{eqnarray} \label{eq:Sinc} S_{inc}(\omega) &=& \mathrm{Re} \int_0^{\infty} d\tau e^{-i \omega \tau} \langle \tilde{\Delta} \sigma_{eg}(0) \tilde{\Delta} \sigma_{ge}(\tau) \rangle_{st} , \end{eqnarray} due to atomic fluctuations.
For the strong transition of the V and $\Lambda$ 3LA's, $S_{inc}(\omega)$ consists of a spectrum nearly identical to the 2LA Mollow one (peaks of width of the order of $\gamma$, a single one in the weak driving limit and a triplet in the strong excitation regime \cite{Mollow69}) plus a narrow peak of nearly Lorentzian shape at the laser frequency due to the presence of electron shelving \cite{HePl95,GaKK95}. B\"uhner and Tamm experimentally measured the narrow peak near the saturation regime by heterodyne detection \cite{BuTa00}. Later, Evers and Keitel \cite{EvKe02} studied the narrow peak in detail and found that it comes at the expense of the coherent peak of the 2LA spectrum. Since $q >0$ in Eq.~(\ref{eq:I_coh}), the coherent peak of the 3LA is smaller than that of the 2LA. Writing $\left(S_{coh} \right)_{NLA} =I_{NLA} \delta(\omega)$, for $N=2,3$, the relative intensity of the narrow inelastic peak is given by the difference in the size of the coherent peaks of the two- and three-level atoms, $I_{np} = I_{2LA} - I_{3LA}$, \begin{eqnarray} \label{eq:I_narrowpeak} I_{np} &=& \left( |\langle \sigma_{eg} \rangle_{st} |^2 \right)_{2LA} - \left( |\langle \sigma_{eg} \rangle_{st} |^2 \right)_{3LA} \nonumber \\ &=& \frac{ \Omega^4 [(2+q) \gamma^2 -2\gamma_+^2 +4\Delta^2 q ]} {[2\Omega^2 +\gamma^2 +4\Delta^2 ]^2 [(2+q)\Omega^2 +\gamma_+^2 +4\Delta^2 ]^2 } . \nonumber \\ \end{eqnarray} The narrow peak becomes smaller for increasing Rabi frequencies, but at large Rabi frequencies a finite detuning enhances the peak \cite{EvKe02}; the peak is largest for a detuning $\Delta_{max}^2 = \left[ (q-2) \Omega^2 -2 \gamma^2 \right]/8$. The width of the narrow peak is accurately given by \cite{HePl95,EvKe02} \begin{eqnarray} \label{eq:widthextra} \Gamma_{np} &=& T_D^{-1} +T_B^{-1} \nonumber \\ &=& \gamma_a \left[ 1+ \frac{ q\Omega^2 } {2\Omega^2 +\gamma^2 +4\Delta^2} \right] .
\end{eqnarray} An analytic formula for the full stationary spectrum on resonance in the regime (\ref{eq:shelvingCond}) has been given in \cite{CaRG16}. \textit{Time-Dependent Spectrum.---} We calculate time-dependent spectra (TDS) using the physical spectrum of Eberly and W\'odkiewicz \cite{EbWo77} \begin{eqnarray} S(D,t,\Gamma) &=& \Gamma \int_{t_0}^t dt_1 \int_{t_0}^t dt_2 \,\, e^{-(\Gamma/2 -i D)(t-t_1)} \nonumber \\ && \times e^{-(\Gamma/2 +i D)(t-t_2)} \langle \sigma_{eg}(t_1) \sigma_{ge}(t_2) \rangle , \end{eqnarray} where $D=\omega - \omega_l$ is the detuning of the laser frequency $\omega_l$ from the filter's frequency $\omega$, and $\Gamma$ is the filter's bandwidth. Admittedly, the calculation of TDS is not a simple task, and a numerical solution is required more often than not. Some authors wish to avoid the filter effects and resort to simpler, but potentially flawed, approaches \cite{EbWo77,FiTa17}. The inclusion of the filter ensures that the time-energy uncertainty is properly accounted for in theoretical calculations. An additional benefit of filtering is that it can enhance important features and the signal-to-noise ratio in the measured TDS of weak signals. For computational purposes it is convenient to rewrite the double integral in terms of integrals over $t_2$ and $\tau = t_1 -t_2$ \cite{EbKW80}; setting $t_0 =0$ we have \begin{eqnarray} S(\omega, t, \Gamma) &=& 2\Gamma \mathrm{Re} \Big{[} \int_{0}^t dt_2 e^{-\Gamma(t-t_2)}\int_{0}^{t-t_2} d\tau \, e^{(\Gamma/2 -i D)\tau} \nonumber \\ && \times \langle \sigma_{eg}(t_2+\tau)\sigma_{ge}(t_2)\rangle \Big{]} .
\end{eqnarray} To solve for the two-time correlations we apply the quantum regression formula \cite{Carm02} to Eq.~(\ref{eq:BlochEqs}) obtaining \begin{eqnarray} \partial_{\tau} \langle \mathbf{u}(t_2,\tau) \rangle &=& \mathbf{M} \langle \mathbf{u} (t_2,\tau) \rangle +\mathbf{c} (t_2) , \end{eqnarray} where \begin{eqnarray*} \mathbf{u} (t_2,\tau) &=& \big[ \sigma_{ge} (t_2+\tau) \sigma_{ge} (t_2) , \sigma_{eg} (t_2+\tau) \sigma_{ge} (t_2) , \\ && \sigma_{ee} (t_2+\tau) \sigma_{ge} (t_2) , \sigma_{gg} (t_2+\tau) \sigma_{ge} (t_2) \big]^T , \\ \mathbf{c}(t_2) &=& (0,0,0, \gamma_a \langle \sigma_{ge} (t_2) \rangle)^T , \nonumber \end{eqnarray*} which we solve numerically with initial condition $\mathbf{u}(t_2,0) = \left(0, \sigma_{ee} (t_2), 0, \sigma_{ge} (t_2) \right)^T$. The number of parameters in our system makes it very difficult to obtain analytical expressions for the TDS. Figures~\ref{fig2:tds_sat}-\ref{fig4:tds_strong_det} show the resulting TDS of the blinking system. Figure~\ref{fig2:tds_sat} displays the spectra in the excitation regime near saturation, $\Omega = \gamma_+/4$. A narrow peak develops for long times, $\gamma t \gg 1$, above a background given by the usual broad peak of width $\sim \gamma$ formed on a shorter time scale of several lifetimes, $\gamma^{-1}$. \begin{figure}[b] \includegraphics[width=8.5cm,height=5.5cm]{Fig2_tds_sat_inset.pdf} \caption{\label{fig2:tds_sat} Time-dependent spectra for moderate laser field strength, $\Omega =\gamma_+/4 =0.2625 \gamma$, $\gamma_d=0.05 \gamma$ and $\gamma_a=0.015 \gamma$. The filter's bandwidth is $\Gamma = 0.1 \gamma$. The inset shows the spectrum at $\gamma t =150$ on a semi-log scale and over a wider frequency range to reveal the broad component. } \end{figure} To better appreciate the different time scales for the appearance of the spectral components, we show the TDS in the strong field regime, $\Omega = 3.5 \gamma$.
In Fig.~\ref{fig3:tds_strong}, while the triplet is well developed by times $\gamma t \sim 10$, the narrow peak arises only at about $\gamma t \sim 20$. As expected from the stationary spectrum, the narrow peak in the strong field regime is smaller than in the saturation regime \cite{EvKe02,CaRG16}. Hence, as suggested in \cite{EvKe02}, some detuning notably enhances the narrow peak against the spectral background of the Mollow triplet, as shown in Fig.~\ref{fig4:tds_strong_det}. A slight asymmetry, which vanishes in the long-time limit, occurs in the detuned case \cite{EbKW80}: the sideband closer to the atomic resonance is larger than the other \cite{EbKW80}, while the asymmetry in the center of the spectrum gets smaller (see inset). More pronounced spectral asymmetries are found, for example, in detuned pulsed laser resonance fluorescence \cite{GuMH18}. \begin{figure}[t] \includegraphics[width=8.5cm, height=5.1cm]{Fig3_tds_strong.pdf} \caption{\label{fig3:tds_strong} Same as Fig.~\ref{fig2:tds_sat} but for strong driving, $\Omega=3.5\gamma$.} \end{figure} \begin{figure}[b] \includegraphics[width=8.5cm, height=5.5cm]{Fig4_tds_strong_det1_inset.pdf} \caption{ TDS for strong field, $\Omega=3.5\gamma$, but detuning $\Delta=\gamma$. The other parameters are as in Fig.~\ref{fig2:tds_sat}. The inset shows the diminishing asymmetry of the center of the spectrum for increasing time. } \label{fig4:tds_strong_det} \end{figure} It is important to note that while the narrow peak develops much later than the Mollow spectrum, it does actually emerge, if not stabilize, well before an average dark time $T_D$ has passed. The presence of dark periods in the fluorescence is felt early in the ensemble's evolution: in some realizations of the ensemble the dark period may occur before the bright one.
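As a minimal illustration of the numerical procedure behind these spectra, the regression equation for $\mathbf{u}(t_2,\tau)$ can be integrated in closed form for each $t_2$, since $\mathbf{M}$ is constant in $\tau$. The sketch below (illustrative parameters; $t_2$ is taken deep in the steady state) checks that the two-time correlation factorizes, $\langle \mathbf{u} \rangle \to \langle \mathbf{s} \rangle_{st} \langle \sigma_{ge} \rangle_{st}$, for $\tau$ much longer than the shelving time $\gamma_a^{-1}$:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative parameters in units of gamma = 1 (as in the figure captions)
gamma, gamma_d, gamma_a = 1.0, 0.05, 0.015
gp, gm = gamma + gamma_d, gamma - gamma_a
Omega, Delta = 0.2625, 0.0

M = np.array([
    [-1j*Delta - gp/2, 0.0,               1j*Omega/2, -1j*Omega/2],
    [0.0,              1j*Delta - gp/2,  -1j*Omega/2,  1j*Omega/2],
    [1j*Omega/2,      -1j*Omega/2,       -gp,          0.0       ],
    [-1j*Omega/2,      1j*Omega/2,        gm,         -gamma_a   ],
])
b = np.array([0.0, 0.0, 0.0, gamma_a], dtype=complex)
s_st = np.linalg.solve(M, -b)            # one-time steady-state averages

# Quantum regression: d<u>/dtau = M <u> + c(t2); take t2 -> infinity so
# sigma_ee(t2) and sigma_ge(t2) are at their steady-state values.
sge, see = s_st[0], s_st[2]
u0 = np.array([0.0, see, 0.0, sge], dtype=complex)     # u(t2, 0)
c = np.array([0.0, 0.0, 0.0, gamma_a * sge], dtype=complex)

def u(tau):
    # Exact solution of the linear ODE with constant drive c
    p = np.linalg.solve(M, c)            # M^{-1} c
    return expm(M * tau) @ (u0 + p) - p

# For tau >> 1/gamma_a the correlation factorizes: u -> s_st * <sigma_ge>_st
assert np.allclose(u(2000.0), s_st * sge, atol=1e-8)
print("long-time factorization holds")
```

The remaining $t_2$ integration of the filtered spectrum is then a straightforward quadrature over these correlations.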
From Eqs.~(\ref{eq:AvBrightPeriod}) and (\ref{eq:AvDarkPeriod}) it is seen that the average bright time depends on both laser and atomic parameters, while the average dark period depends only on the effective lifetime $\gamma_a^{-1}$ of the metastable state $|a \rangle$. In the TDS sequences of Figs.~\ref{fig2:tds_sat} -- \ref{fig4:tds_strong_det}, $\gamma T_D \simeq 67$, and $\gamma T_B \simeq 330, 42$, and 48, respectively. These values reveal the time scale of the dark and bright periods in the ensemble evolution. We must also discuss the effects of the filter on the EW time-dependent spectrum. First, it could be argued that the observed narrow peak is the filter-broadened coherent spectral component. This is not the case because the delta peak is a steady state feature of the spectrum \cite{FiTa17}; it should not appear in a TDS, no matter how long the \textit{finite} observation time is. What we undoubtedly see is the incoherent narrow peak produced by random interruptions in the fluorescence of the strong transition $|g \rangle - |e \rangle$ caused by the atom's excursions into the weak transition channel $|e \rangle \to |a \rangle \to |g \rangle$ \cite{HePl95}. Moreover, the narrow peak grows at the expense of the coherent peak \cite{EvKe02}: its intensity is the difference between the intensities of the coherent peaks of the two- and three-level systems, Eq.~(\ref{eq:I_narrowpeak}). Another issue is the choice of filter bandwidth $\Gamma$. On the one hand, it must be able to resolve the different spectral components, therefore $\Gamma$ should be a fraction of the width, $\sim \gamma$, of the Mollow spectral peaks. On the other, $\Gamma$ cannot be infinitely small, as is assumed for the stationary spectrum \cite{EbWo77}. The filter bandwidth in our plots, $\Gamma = 0.1 \gamma$, was chosen to focus on the narrow peak: for $\Gamma > \Gamma_{np}$ the filter sets the observed width of the narrow peak.
The filter bandwidth also has dynamical consequences due to the time-energy uncertainty; the filter has to saturate in order to finish its transient effect and begin to produce stable spectra. This occurs after a time $\Gamma t >1$. Hence, a narrow filter $\Gamma < \gamma$ causes a delay in the stabilization of the fast-forming Mollow-like spectrum \cite{EbKW80}, while the narrow peak stabilizes early since $\Gamma > \Gamma_{np}$. The transient effects on a spectrum are therefore felt for very long times, as seen in the temporary reduction of the spectra of Figs.~\ref{fig3:tds_strong} and \ref{fig4:tds_strong_det}. The different time scales due to atomic and filter parameters make it very difficult to fully assess the TDS analytically. Finally, we have used a density-operator-based approach, for which the TDS is the statistical average of infinitely many realizations. However, while the individual records of bright and dark periods are buried in the ensemble average, their impact on the TDS is evident in the emergence of the incoherent narrow peak. \textit{Conclusions.---} We have investigated the time-dependent spectrum of intermittent resonance fluorescence and found that the narrow incoherent peak due to electron shelving emerges and stabilizes much later than the Mollow spectrum. We trust that an experimental observation of blinking resonance fluorescence TDS is within reach. TDS of two-level atom resonance fluorescence have been observed \cite{GoMo87}, and measurements of shelving fluorescence have reached the accuracy required for applications such as precision measurements of fundamental constants and optical ion clocks \cite{GNJ+14,HLT+14}. We think that even for nonergodic blinking such as that of quantum dots or molecules \cite{StHB09}, whose TDS have been studied in \cite{LeBa15}, the Eberly-W\'odkiewicz physical spectrum would be of great benefit.
The observation and interpretation of TDS could help to describe the dynamics of other systems with separate time scales such as super- and sub-radiance \cite{vLFL+13} and entanglement \cite{HoFi10} in collective atomic dynamics. \textit{Acknowledgments.---} R.~R.-A. wishes to thank CONACyT, Mexico for scholarship 379732. H.M.C.-B. thanks Prof. J. R\'ecamier for hospitality at ICF-UNAM.
\section{Introduction} Network embedding has emerged as an important research topic in recent years, aiming to represent nodes by low-dimensional vectors while maintaining the structures and properties of the network \cite{cui2017survey}. Many methods have been proposed for network embedding, such as those based on random walks \cite{perozzi2014deepwalk}, matrix factorization \cite{zhang2018arbitrary} and deep learning \cite{wang2016structural}. With these methods, many network analysis tasks can be fulfilled in vector spaces and benefit from off-the-shelf machine learning models. Despite such progress, the targeted networks of the existing methods are often at the thousand or million scale. However, many real networks, such as social networks, e-commerce networks and the Internet, have billions of nodes and edges. Billion-scale networks pose great computational challenges to the existing methods. The bottleneck lies in the fact that the existing methods are all learning-based and thus involve computationally expensive optimization procedures. For example, Stochastic Gradient Descent (SGD) is a commonly used optimization method in network embedding \cite{tang2015line}, but it requires a great number of iterations to converge, which may not be feasible for billion-scale networks. One way to accelerate is to resort to distributed computing, but optimization methods like SGD often require global embedding information to compute gradients, leading to heavy communication costs. As a result, how to design an efficient and effective billion-scale network embedding method that is friendly to distributed computing is still an open problem. Different from learning-based methods, random projection is a simple and powerful technique for forming low-dimensional embedding spaces while preserving the structures of the original space. It is also friendly to distributed computing, and thus widely exploited in large-scale data scenarios \cite{vempala2005random}.
However, the extremely sparse structures of real networks pose great challenges to applying random projection to network embedding. Existing work \cite{cui2017survey} has demonstrated that high-order proximities between nodes are essential to preserve in network embedding and can effectively address the sparsity issue. Hence, how to design a high-order proximity preserved random projection method while maintaining its efficiency is the key problem of billion-scale network embedding. In this paper, we propose RandNE\footnote{The code is available at https://github.com/ZW-ZHANG/RandNE.} (Iterative Random Projection Network Embedding), a novel and simple billion-scale network embedding method based on high-order proximity preserved random projection. Specifically, we propose using Gaussian random projection to minimize the matrix factorization objective function of preserving the high-order proximity. To avoid the explicit calculation of high-order proximities, which would incur a high computational cost, we design an iterative projection procedure, enabling arbitrary high-order proximity preserved random projection with a linear time complexity. Theoretical analysis is provided to guarantee that i) RandNE is much more computationally efficient than the existing methods, ii) it can well support distributed computing without any communication cost between different servers during the calculation, and iii) it can efficiently incorporate the dynamic changes of the networks without error aggregation. These merits make RandNE a promising solution for billion-scale network embedding, even in dynamic environments. Extensive experiments are conducted on network reconstruction, link prediction and node classification tasks on multiple datasets with different scales, ranging from thousands to billions of nodes and edges.
The results show that RandNE can boost the efficiency of network embedding by about 2 orders of magnitude over state-of-the-art methods\footnote{These algorithms are tested using the source code published by their authors.} while achieving a superior or comparable accuracy. For the WeChat\footnote{One of the largest social network platforms in China.} network with 250 million nodes and 4.8 billion edges, RandNE can produce 512-dimensional embeddings within 7 hours with 16 distributed servers. The contributions of our paper are summarized as follows: \begin{itemize} \item We propose RandNE, a novel and simple random projection based network embedding method that enables billion-scale network embedding. \item We design an iterative projection procedure to realize high-order proximity preserved random projection efficiently without explicitly calculating the high-order proximities. \item We theoretically and empirically prove that RandNE can well support distributed computing without communication cost and can efficiently deal with dynamic networks without error aggregation. \end{itemize} The rest of this paper is organized as follows. In Section 2, we briefly review related work. We give our problem formulation in Section 3 and introduce our proposed method in Section 4. Experimental results are reported in Section 5. Finally, we summarize in Section 6. \section{Related Work} Network embedding has attracted considerable research attention in the past few years, aiming to bridge the gap between network analysis and off-the-shelf machine learning techniques. Here, we briefly review some representative network embedding methods, and readers are referred to \cite{cui2017survey} for a comprehensive survey.
The flourishing of network embedding research began when DeepWalk \cite{perozzi2014deepwalk} first proposed using truncated random walks to explore the network structure, utilizing the skip-gram model \cite{mikolov2013efficient} from word embedding to derive the embedding vectors of nodes. LINE \cite{tang2015line} adopts a similar idea with an explicit objective function by setting the walk length as one, and introduces the negative sampling strategy \cite{mikolov2013distributed} to accelerate the training procedure. Node2vec \cite{grover2016node2vec} generalizes these two methods by taking potentially biased random walks for more flexibility. These random walk based methods are proven equivalent to factorizing a high-order proximity matrix \cite{qiu2018network}. On the other hand, explicit matrix factorization methods have been proposed for network embedding. GraRep \cite{cao2015grarep} directly applies SVD to preserve high-order proximity matrices. HOPE \cite{ou2016asymmetric} proposes using generalized SVD to preserve the asymmetric transitivity in directed networks. Community structure, an important mesoscopic structure of the network, is preserved by non-negative matrix factorization in \cite{wang2017community}. \cite{chen2017fast} introduces a unified framework for matrix factorization and utilizes a sparsification technique to speed up SVD. Another approximate matrix factorization technique is introduced in \cite{yang2017fast}. AROPE \cite{zhang2018arbitrary} improves these works by preserving arbitrary-order proximity simultaneously. Deep learning models have also been applied to network embedding. SDNE \cite{wang2016structural} first considers the high non-linearity in network embedding and proposes a deep auto-encoder to preserve the first- and second-order proximities. DHNE \cite{tu2018structural} extends this framework for preserving the indecomposability in hyper-networks.
Besides static networks, how to embed dynamic networks where nodes and edges change over time has also attracted research attention. DHPE \cite{zhu2018high} and DANE \cite{li2017attributed} propose using matrix perturbation to handle the changes of edges. DepthLGP \cite{ma2018depthlgp} adopts a Gaussian process to handle out-of-sample nodes. DynamicTriad \cite{zhou2018dynamic} considers the triangle closure characteristic of network evolution. Despite their remarkable performance, these methods are all learning-based and the targeted networks are often at the thousand or million scale. In \cite{zhou2017scalable}, a modification of DeepWalk is applied to a billion-scale network aliItemGraph. However, their method has the same time complexity as DeepWalk, which is more computationally expensive than our method by over two orders of magnitude (see Figure \ref{fig:time} in the experiments section). Besides, it does not address the problem of distributed computing or handling dynamic networks. How to embed networks with side information has also been explored. For instance, \cite{dong2017metapath2vec,chen2017task} utilize metapaths to embed heterogeneous information networks where node and edge types are available. Node attributes and node labels are taken into consideration in \cite{yang2015network,li2017attributed} and \cite{tu2016max,pan2016tri,yang2016revisiting} respectively. In this paper, we focus on the most fundamental case where only the network structure is available. Another closely related topic is random projection \cite{vempala2005random,arriaga2006algorithmic,shi2012margin,choromanski2017unreasonable}, which is widely adopted in dimension reduction. However, existing random projection methods do not consider the sparsity problem in network embedding, and thus cannot be directly applied. \section{Notations and Problem Formulation} \subsection{Notations} First, we summarize the notations used in this paper.
For a network $G$ with $N$ nodes and $M$ edges, we use $\mathbf{A}$ to denote the adjacency matrix. In this paper, we mainly consider undirected networks, so $\mathbf{A}$ is symmetric. $\mathbf{A}(i,:)$ and $\mathbf{A}(:,j)$ denote its $i^{th}$ row and $j^{th}$ column respectively. $\mathbf{A}(i,j)$ is the element in the $i^{th}$ row and $j^{th}$ column. $\mathbf{A}^T$ denotes the transpose of $\mathbf{A}$. Throughout the paper, we use bold uppercase characters to denote matrices and bold lowercase characters to denote vectors, e.g. $\mathbf{X}$ and $\mathbf{x}$ respectively. We use a dot to denote the matrix product of two matrices, e.g. $\mathbf{B} \cdot \mathbf{C}$. Functions are denoted by calligraphic letters, e.g. $\mathcal{F}(\cdot)$. \subsection{Problem Formulation} To represent nodes in a network by low-dimensional vectors, one commonly adopted objective function in network embedding is matrix factorization, which decomposes a targeted similarity function of the adjacency matrix $\mathcal{F}(\mathbf{A}) \in \mathbb{R}^{N \times N}$ into the product of two low-dimensional matrices $\mathbf{U},\mathbf{V} \in \mathbb{R}^{N \times d}$ with the following objective function: \begin{equation}\label{eq:obj1} \min_{\mathbf{U},\mathbf{V}} \left\| \mathcal{F}(\mathbf{A})- \mathbf{U} \cdot \mathbf{V}^T \right\|_p, \end{equation} where $p$ denotes the norm and $d$ is the dimensionality of the embedding. In this paper, we only consider undirected networks and symmetric similarities, so $\mathbf{U} = \mathbf{V}$. We also focus on the spectral norm, i.e. $p=2$, which is widely adopted \cite{liberty2013simple}. The adjacency matrix $\mathbf{A}$ could be replaced by other variants, such as the Laplacian matrix or transition matrix \cite{qiu2018network}. Here, we focus on the adjacency matrix unless stated otherwise.
The previous work has shown that high-order proximities are essential to be preserved in network embedding, which can be formulated as a polynomial function of the adjacency matrix \cite{yang2017fast,chen2017fast}. In this paper, we assume that $\mathcal{F}(\mathbf{A})$ is a positive semi-definite function, so it can be formulated as $\mathcal{F}(\mathbf{A}) = \mathbf{S} \cdot \mathbf{S}^T$. Then, we can rewrite Eq. \eqref{eq:obj1} as: \begin{equation}\label{eq:obj2} \begin{gathered} \min_{\mathbf{U}} \left\| \mathbf{S} \cdot \mathbf{S}^T - \mathbf{U} \cdot \mathbf{U}^T \right\|_2\\ \mathbf{S} = \alpha_0 \mathbf{I} + \alpha_1 \mathbf{A} + \alpha_2 \mathbf{A}^2 + ... + \alpha_q \mathbf{A}^q, \end{gathered} \end{equation} where $\mathbf{S}$ is the high-order proximity matrix, $\alpha_0, \alpha_1, ... ,\alpha_q$ are pre-defined weights and $q$ is the order. From Eckart-Young theorem \cite{eckart1936approximation}, it is well known that Singular Value Decomposition (SVD) can lead to the optimal solution of Eq. \eqref{eq:obj2}. However, SVD is computationally expensive and thus not suitable for large-scale networks. \section{RandNE: the Proposed Method} \subsection{Gaussian Random Projection Embedding} To minimize the objective function in Eq. \eqref{eq:obj2}, an extremely simple yet effective method is random projection, and Gaussian random projection is widely used \cite{vempala2005random}. Formally, let $\mathbf{R} \in \mathbb{R}^{N \times d}$ and each element of $\mathbf{R}$ follows an i.i.d Gaussian distribution $\mathbf{R}(i,j) \sim \mathcal{N}\left(0,\frac{1}{d} \right)$. Then, the embeddings $\mathbf{U}$ can be obtained by performing a matrix product: \begin{equation}\label{eq:emb} \mathbf{U} = \mathbf{S} \cdot \mathbf{R} = \left( \alpha_0 \mathbf{I} + \alpha_1 \mathbf{A} + \alpha_2 \mathbf{A}^2 + ... + \alpha_q \mathbf{A}^q \right) \mathbf{R}, \end{equation} i.e. we randomly project the proximity matrix $\mathbf{S}$ into a low-dimensional subspace. 
Gaussian random projection has a theoretical guarantee, as we specify in the following theorem. \begin{theorem}\label{theorem:bound} For any similarity matrix $\mathbf{S}$, denote its rank as $r_{\mathbf{S}}$. Then, for any $\epsilon \in \left(0,\frac{1}{2} \right)$, the following inequality holds: \begin{equation} P \left[ \; \left\| \mathbf{S} \cdot \mathbf{S}^T - \mathbf{U} \cdot \mathbf{U}^T \right\|_2 > \epsilon \left\| \mathbf{S}^T \cdot \mathbf{S} \right\|_2 \right] \leq 2 r_{\mathbf{S}} e^{-\frac{\left( \epsilon^2 - \epsilon^3 \right) d}{4} }, \end{equation} where $\mathbf{U} = \mathbf{S} \cdot \mathbf{R}$ and $\mathbf{R}$ is a Gaussian random matrix. \end{theorem} \begin{IEEEproof} See appendix. \end{IEEEproof} The theorem basically shows that the residual of our projection $\mathbf{S} \cdot \mathbf{S}^T - \mathbf{U} \cdot \mathbf{U}^T$ has a much smaller spectral radius compared to the spectral radius of the original high-order proximity $\mathbf{S} \cdot \mathbf{S}^T$. In other words, the embedding $\mathbf{U}$ captures the ``core component'' of the high-order proximity. As a result, performing a Gaussian random projection can effectively minimize the objective function in Eq. \eqref{eq:obj2}. Gaussian random projection is also known to have other merits, such as preserving the margin for classification \cite{shi2012margin}, which we omit for brevity. For the projection matrix, it has been proven that an orthogonal Gaussian random matrix, which can easily be obtained by performing a Gram-Schmidt process on the columns of a Gaussian random matrix, can further improve the accuracy \cite{choromanski2017unreasonable}. In this paper, we use the orthogonal Gaussian random matrix as the projection matrix. However, since $\mathbf{S}$ may not be a sparse matrix, directly calculating $\mathbf{S}$ and performing the projection is still time consuming and not scalable to large-scale networks.
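To make the projection step concrete, the following sketch (a toy random graph with assumed weights, not any dataset used in the paper) builds the high-order proximity $\mathbf{S}$ explicitly, applies a Gaussian random projection as in the theorem, and checks that the residual $\mathbf{S} \cdot \mathbf{S}^T - \mathbf{U} \cdot \mathbf{U}^T$ is smaller in spectral norm than $\mathbf{S} \cdot \mathbf{S}^T$ itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy undirected graph (assumption for illustration only)
N, d, q = 200, 32, 3
alphas = [0.0, 1.0, 0.1, 0.01]          # assumed weights alpha_0..alpha_q
A = (rng.random((N, N)) < 0.05).astype(float)
A = np.triu(A, 1)
A = A + A.T                              # symmetric, no self-loops

# High-order proximity S = alpha_0 I + alpha_1 A + ... + alpha_q A^q
S = sum(a * np.linalg.matrix_power(A, i) for i, a in enumerate(alphas))

# Gaussian random projection: U = S . R, with R_ij ~ N(0, 1/d)
R = rng.normal(0.0, 1.0 / np.sqrt(d), size=(N, d))
U = S @ R

# Residual is small in spectral norm relative to S S^T (cf. the theorem);
# an orthogonalized R (Gram-Schmidt on its columns) would tighten this further.
res = np.linalg.norm(S @ S.T - U @ U.T, 2)
ref = np.linalg.norm(S @ S.T, 2)
assert res < ref
print(f"relative residual: {res / ref:.2f}")
```

Explicitly forming $\mathbf{S}$, as done here, is exactly the step the iterative procedure of the next subsection avoids.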
\subsection{Iterative Projection} To address the efficiency problem, we design an iterative projection procedure to avoid the explicit calculation of the high-order proximity matrix $\mathbf{S}$. Specifically, from Eq. \eqref{eq:emb}, we can decompose $\mathbf{U}$ into matrices of different orders: \begin{equation}\label{eq:combine} \mathbf{U} = \alpha_0 \mathbf{U}_0 + \alpha_1 \mathbf{U}_1 + ... + \alpha_q \mathbf{U}_q, \end{equation} where $\mathbf{U}_i = \mathbf{A}^i \cdot \mathbf{R}, 0\leq i \leq q$. Then, the decomposed parts, $\mathbf{U}_1 ...\mathbf{U}_q$, can be calculated iteratively: \begin{equation}\label{eq:irp} \mathbf{U}_i = \mathbf{A} \cdot \mathbf{U}_{i-1} \ , \ \forall 1 \leq i \leq q. \end{equation}Note that in Eq. \eqref{eq:irp}, we only need to calculate the matrix product of the adjacency matrix and a low-dimensional matrix. Since the adjacency matrix is sparse, we can use sparse matrix products, which are highly scalable and efficient. \begin{algorithm}[t] \caption{RandNE: Iterative Random Projection Network Embedding}\label{alg1} \begin{algorithmic}[1] \REQUIRE Adjacency Matrix $\mathbf{A}$, Dimensionality $d$, Order $q$, Weights $\alpha_0,\alpha_1,...,\alpha_q$ \ENSURE Embedding Results $\mathbf{U}$ \STATE Generate $\mathbf{R} \in \mathbb{R}^{N \times d} \sim \mathcal{N}(0,\frac{1}{d})$ \label{step1} \STATE Perform a Gram Schmidt process on $\mathbf{R}$ to obtain the orthogonal projection matrix $\mathbf{U}_0$ \label{step2} \FOR{i in 1:q}\label{step3} \STATE Calculate $\mathbf{U}_i = \mathbf{A} \cdot \mathbf{U}_{i-1}$ \ENDFOR\label{step4} \STATE Calculate $\mathbf{U} = \alpha_0 \mathbf{U}_0 + \alpha_1 \mathbf{U}_1 + ... + \alpha_q \mathbf{U}_q$ \label{step5} \end{algorithmic} \end{algorithm} \subsection{Time Complexity and Distributed Computing} We show our algorithm framework in Algorithm \ref{alg1}. Then, we analyze the time complexity of Algorithm \ref{alg1}. 
The complexity of line \ref{step1} is $O(N\cdot d)$, the complexity of line \ref{step2} is $O(N\cdot d^2)$, the complexity of each iteration from line \ref{step3} to line \ref{step4} is $O(M \cdot d)$ and the complexity of line \ref{step5} is $O(q \cdot N \cdot d)$, where $N$ and $M$ are the number of nodes and edges in the network respectively, $q$ is the preset order and $d$ is the dimensionality of the embedding space. As a result, the overall time complexity is $O\left(N \cdot d^2 + M \cdot q \cdot d\right)$, i.e. our method is linear with respect to the network size. From the above analysis, we can also see that our method is extremely efficient because it only needs to iterate $q$ times, and within each iteration, only a simple matrix product needs to be calculated. In contrast, although some existing network embedding methods are also proven to have linear time complexities, such as the embedding methods based on SGD \cite{tang2015line} or SVD \cite{ou2016asymmetric}, they inevitably need dozens or hundreds of iterations in the optimization. As a result, our method is more efficient than these methods by orders of magnitude, and is thus more suitable for billion-scale network embedding. In addition, according to the property of matrix products, each dimension (i.e. column) of $\mathbf{U}_i$ can be calculated separately without any information from other dimensions. We formalize this property in the following theorem. \begin{theorem}\label{The:distributed} For any $j \neq l$, the calculation of $\mathbf{U}_i(:,j)$ and $\mathbf{U}_i(:,l), 1 \leq i \leq q$ from line \ref{step3} to line \ref{step4} in Algorithm \ref{alg1} are independent, if $\mathbf{A}$ and $\mathbf{U}_0$ are known to all the servers. \end{theorem} \begin{IEEEproof} Straightforward from the property of matrix products. 
\end{IEEEproof} The theorem shows that our method naturally supports distributed computing by allocating the calculation of different dimensions to different servers, and no communication is needed during the calculation process if $\mathbf{A}$ and $\mathbf{U}_0$ are known to all the servers. We design a simple distributed protocol based on the theorem, as specified in Algorithm \ref{alg3}. \begin{algorithm}[t] \caption{Distributed Calculation of RandNE}\label{alg3} \begin{algorithmic}[1] \REQUIRE Adjacency matrix $\mathbf{A}$, Initial Projection $\mathbf{U}_0$, Parameters of RandNE, $K$ Distributed Servers \ENSURE Embedding Results $\mathbf{U}$ \STATE Broadcast $\mathbf{A}$, $\mathbf{U}_0$ and parameters to the $K$ servers \STATE Set i = 1 \REPEAT \IF{There is an idle server $k$} \STATE Calculate $\mathbf{U}(:,i)$ in server $k$ \STATE Gather $\mathbf{U}(:,i)$ from server $k$ after calculation \STATE Set i = i + 1 \ENDIF \UNTIL{$i > d$} \STATE Return $\mathbf{U}$ \end{algorithmic} \end{algorithm} For networks that cannot be stored in memory, or when the number of servers exceeds the dimensionality, we can use more advanced distributed matrix multiplication algorithms, such as \cite{vastenhouw2005two,boman2013scalable}, which we leave as future work. This is in contrast with existing methods, which can be parallelized within one server but are hard to distribute because of communication costs. This merit lays another foundation for applying RandNE to billion-scale networks. \subsection{Dynamic Updating}\label{sec:update} As many real networks are dynamic, we next show how to efficiently update RandNE to incorporate dynamic changes. First, we focus on changes of edges. From Algorithm \ref{alg1}, to update the final embedding vectors, we only need to update the decomposed parts $\mathbf{U}_i, 1 \leq i \leq q$.
Formally, we denote the changes in the adjacency matrix as $\Delta \mathbf{A}$ and the changes in $\mathbf{U}_i$ as $\Delta \mathbf{U}_i, 1\leq i \leq q$. From Eq. \eqref{eq:irp}, we have: \begin{equation}\label{eq:update} \begin{gathered} \mathbf{U}_i + \Delta \mathbf{U}_i = \left( \mathbf{A} + \Delta \mathbf{A} \right) \cdot \left( \mathbf{U}_{i-1} + \Delta \mathbf{U}_{i-1} \right) \\ \Rightarrow \Delta \mathbf{U}_i = \mathbf{A} \cdot \Delta \mathbf{U}_{i-1} + \Delta \mathbf{A} \cdot \mathbf{U}_{i-1} + \Delta \mathbf{A} \cdot \Delta \mathbf{U}_{i-1}. \end{gathered} \end{equation} Then, we can iteratively calculate $\Delta \mathbf{U}_i$ using Eq. \eqref{eq:update}. Besides, nodes may also be added to or deleted from the network. Deleting nodes can be treated as changes of edges: we simply delete all edges incident to the removed nodes. For newly added nodes, we first add some empty nodes (i.e., without any edges) so that the dimensionalities of the matrices match. Specifically, we denote by $N'$ the number of added nodes. For the projection matrix $\mathbf{U}_0$, we generate an additional orthogonal Gaussian random matrix $\hat{\mathbf{U}}_0 \in \mathbb{R}^{N' \times d}$ and concatenate it with the current projection matrix $\mathbf{U}_0$ to form the new projection matrix $\mathbf{U}_0'$. For the other $\mathbf{U}_i,1\leq i \leq q$, the rows corresponding to the empty nodes are all zero, i.e., we only need to append $N'$ all-zero rows to match the dimensionality. Then, we can add the edges of the newly added nodes using Eq. \eqref{eq:update}.
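The incremental rule in Eq. \eqref{eq:update} can be checked numerically on a toy example: updating the decomposed parts after an edge change reproduces exactly what a full recomputation on $\mathbf{A} + \Delta\mathbf{A}$ would give. The small dense matrices and random graph below are illustrative assumptions, not the paper's data.

```python
# Toy numerical check of Eq. (update): incremental updates of U_i match a
# static recomputation on A + dA (the no-error-aggregation property).
import numpy as np

rng = np.random.default_rng(2)
n, d, q = 8, 3, 3
A = np.triu((rng.random((n, n)) < 0.3).astype(float), 1)
A = A + A.T                                      # symmetric toy adjacency
dA = np.zeros((n, n)); dA[0, 5] = dA[5, 0] = 1.0  # one newly added edge
U0 = np.linalg.qr(rng.normal(size=(n, d)))[0]    # shared initial projection

# old decomposed parts: U_i = A^i U_0
U = [U0]
for i in range(q):
    U.append(A @ U[-1])

# incremental update: dU_i = A dU_{i-1} + dA U_{i-1} + dA dU_{i-1}
dU = np.zeros((n, d))
updated = [U0]
for i in range(1, q + 1):
    dU = A @ dU + dA @ U[i - 1] + dA @ dU  # RHS uses the old dU throughout
    updated.append(U[i] + dU)

# static recomputation on the changed network
static = [U0]
for i in range(q):
    static.append((A + dA) @ static[-1])

print(all(np.allclose(u, s) for u, s in zip(updated, static)))  # True
```

The cost of each update step is proportional to the number of nonzeros in $\Delta\mathbf{A}$, which is why only the changed edges matter.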
\begin{algorithm}[t] \caption{Dynamic Updating of RandNE}\label{alg2} \begin{algorithmic}[1] \REQUIRE Adjacency Matrix $\mathbf{A}$, Dynamic Changes $\Delta \mathbf{A}$, Previous Projection Results $\mathbf{U}_0,\mathbf{U}_1,...,\mathbf{U}_q$ \ENSURE Updated Projection Results $\mathbf{U}^\prime_0,\mathbf{U}^\prime_1,...,\mathbf{U}^\prime_q$ \IF{$\Delta \mathbf{A}$ includes $N'$ new nodes} \label{step11} \STATE Generate an orthogonal projection $\hat{\mathbf{U}}_0 \in \mathbb{R}^{N' \times d}$ \STATE Concatenate $\hat{\mathbf{U}}_0$ with $\mathbf{U}_0$ to obtain $\mathbf{U}^\prime_0$ \STATE Add $N'$ all-zero rows in $\mathbf{U}_1...\mathbf{U}_q$ \ENDIF \label{step12} \STATE Set $\Delta \mathbf{U}_0 = 0$ \label{step13} \FOR{i in 1:q} \STATE Calculate $\Delta \mathbf{U}_i$ using Eq. \eqref{eq:update} \STATE Calculate $\mathbf{U}^\prime_i = \mathbf{U}_i + \Delta \mathbf{U}_i$ \ENDFOR \label{step14} \end{algorithmic} \end{algorithm} We show the framework of dynamic updating in Algorithm \ref{alg2}. As only local changes are involved, the updating is computationally efficient, as we specify in the following theorem. \begin{theorem}\label{thm3} The time complexity of Algorithm \ref{alg2} is linear with respect to the number of changed nodes and the number of changed edges, respectively. \end{theorem} \begin{IEEEproof} See appendix. \end{IEEEproof} Another merit of our updating method is that it has no error aggregation, i.e., the dynamic updating algorithm leads to the same results as rerunning the static algorithm. We formalize this property in the following theorem. \begin{theorem}\label{thm4} Denote $\mathbf{A}_0$ and $\Delta \mathbf{A}_1, \Delta \mathbf{A}_2,..., \Delta \mathbf{A}_t$ as the initial adjacency matrix and its dynamic changes in $t$ time steps respectively.
Denote $\mathbf{U}$ as the final embedding results of applying Algorithm \ref{alg1} to $\mathbf{A}_0$ and then updating $t$ times using $\Delta \mathbf{A}_1, \Delta \mathbf{A}_2,..., \Delta \mathbf{A}_t$ and Algorithm \ref{alg2}. Denote $\mathbf{U}'$ as the embedding results of applying Algorithm \ref{alg1} to $\mathbf{A} = \mathbf{A}_0 + \Delta \mathbf{A}_1 + \Delta \mathbf{A}_2 + ... + \Delta \mathbf{A}_t$. If $\mathbf{U}$ and $\mathbf{U}'$ are calculated using the same hyper-parameters and random seed, then $\mathbf{U} = \mathbf{U}'$. \end{theorem} \begin{IEEEproof} Since the same random seed is used, $\mathbf{U}_0$ = $\mathbf{U}_0'$. Then, applying Eqs. \eqref{eq:combine} \eqref{eq:irp} \eqref{eq:update} leads to the results. \end{IEEEproof} Combining Theorem \ref{thm3} and Theorem \ref{thm4}, our updating method can effectively incorporate the dynamic changes of networks with high computational efficiency. \section{Experiments} \subsection{Experimental Setting} To comprehensively evaluate the efficacy of RandNE, we first conduct experiments on 3 moderate-scale social networks\footnote{http://socialcomputing.asu.edu/pages/datasets}: BlogCatalog, Flickr, Youtube, and then evaluate our method on a billion-scale network, WeChat. The statistics of the datasets are summarized in Table \ref{Datasets}. \begin{table} \centering \caption{The Statistics of Datasets}\label{Datasets} \begin{tabular}{ l | c | c | c } \hline Dataset & \# Nodes & \# Edges & \# Labels \\ \hline BlogCatalog & 10,312 & 667,966 & 39 \\ \hline Flickr & 80,513 & 11,799,764 & 47 \\ \hline Youtube & 1,138,499 & 5,980,886 & 195 \\ \hline WeChat & 250 million & 4.8 billion & - \\ \hline \end{tabular} \end{table} We compare our method with the following baselines: \begin{itemize} \item DeepWalk \cite{perozzi2014deepwalk}\footnote{https://github.com/phanein/deepwalk} uses random walks and the skip-gram model to learn embeddings. 
We use two parameter settings: one suggested in the paper and one used in the implementation of the authors, and report the best results. \item LINE \cite{tang2015line}\footnote{https://github.com/tangjianpku/LINE} explicitly preserves the first- and second-order proximities, denoted as LINE$_{1st}$ and LINE$_{2nd}$ respectively. We exclude the results of concatenating them because no obvious improvement is observed. We use the default parameter settings except for the number of training samples, for which we conduct a line search to find the optimal value. \item Node2vec \cite{grover2016node2vec}\footnote{https://github.com/snap-stanford/snap} generalizes DeepWalk and LINE by using potentially biased random walks. We use the default settings for all parameters except the bias parameters $p,q$, for which we conduct a grid search over $\left\{0.5,1,2\right\}$. \item SDNE \cite{wang2016structural}\footnote{https://github.com/suanrong/SDNE} proposes a deep auto-encoder to preserve the first- and second-order proximities simultaneously. We use the default parameter settings and the auto-encoder structure in the implementation of the authors. \end{itemize} There are also other methods like GraRep \cite{cao2015grarep} and M-NMF \cite{wang2017community}, but we exclude them here due to their scalability issues. We also exclude the results of SDNE on Youtube because it fails to terminate within one week. On the WeChat network, as none of these baselines can terminate within acceptable time, we mainly compare our method with other simpler graph-based methods. For our method RandNE, we set the order $q=3$ with a grid search for the weights. Please note that the weights only affect the last step of our algorithm (i.e., line \ref{step5} in Algorithm \ref{alg1}), so tuning them is very efficient. For the node classification task, we use the transition matrix in place of the adjacency matrix because a substantial improvement is observed.
All hyper-parameters of our method and the baselines are tuned using a small validation set, which we set as 10\% for the moderate-scale networks and 1\% for the billion-scale network. For all the methods, we uniformly set the dimensionality to $d = 128$ unless stated otherwise. All experiments are conducted on a single PC with two Intel i7 processors and 48GB of memory, except for Section \ref{sec:wechat}, where we run our method on a distributed cluster. \subsection{Moderate-scale Networks} \subsubsection{Running Time Comparison} To compare the efficiency of different methods, we first report the running time of all the methods in Figure \ref{fig:time}. The results show that RandNE boosts efficiency by more than 24 times over the baselines on all networks, which is consistent with our time complexity analysis. Note that the baselines are tested using the source code published by their authors. We realize that there might be slight differences in the programming languages and implementation details, but the impact of these factors can be safely ignored given the 24-fold improvement, and using the published code directly aids reproducibility. This extreme efficiency lays the foundation for applying RandNE to billion-scale networks. \begin{figure} \centering \hspace{0.6cm} \includegraphics[width=7.5cm]{Running_Time.eps} \caption{The running time comparison of different methods.
Our method RandNE can boost the efficiency by more than 24 times over state-of-the-art methods on all networks.} \label{fig:time} \end{figure} \subsubsection{Network Reconstruction}\label{sec:recon} \begin{table} \caption{AUC scores of Network Reconstruction}\label{table:recon} \centering \begin{tabular}{ c | c | c | c } \hline Dataset & BlogCatalog & Flickr & Youtube \\ \hline RandNE & $\mathbf{0.958}$ & $\mathbf{0.953}$ & 0.982 \\ \hline DeepWalk & 0.843 & 0.951 & 0.995 \\ \hline LINE$_{1st}$ & 0.901 & 0.947 & $\mathbf{0.999}$ \\ \hline LINE$_{2nd}$ & 0.761 & 0.936 & 0.970 \\ \hline Node2vec & 0.805 & 0.890 & 0.952 \\ \hline SDNE & 0.950 & 0.919 & - \\ \hline \end{tabular} \end{table} \begin{figure*}[t] \centering \hspace{-0.5cm} \includegraphics[width=12cm]{Recon.eps} \caption{The Precision@K of network reconstruction on moderate-scale networks. We train embedding vectors and rank pairs of nodes according to their inner-product similarities. The results show that our proposed method can well preserve the network structure and reconstruct the given network.} \label{fig:recon} \end{figure*} One basic objective for network embedding is to reconstruct the network. Specifically, we train embedding vectors and rank pairs of nodes according to their inner product similarities. Then, the top ranking pairs are used to reconstruct the network because large similarities indicate high probabilities of having edges. For the evaluation metrics, we use Area Under Curve (AUC) \cite{fawcett2006introduction} and Precision@K \cite{wang2016structural} defined as: \begin{equation}\label{eq:prec} Precision@K = \frac{1}{K} \sum_{i=1}^{K} \delta_i, \end{equation} where $\delta_i = 1$ means the $i^{th}$ reconstructed pair is correct, $\delta_i = 0$ represents a wrong reconstruction and $K$ is the number of evaluated pairs. On Youtube, the number of possible pairs of nodes $\frac{N(N-1)}{2}$ is too large to evaluate, so we sample 1\% for evaluation, as in \cite{ou2016asymmetric}. 
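The ranking-based evaluation just described can be sketched directly from Eq. \eqref{eq:prec}: score every node pair by the inner product of its embeddings, take the top-$K$ pairs, and count how many are true edges. The embeddings and edge set below are toy values for illustration.

```python
# A minimal sketch of Precision@K (Eq. (prec)) for network reconstruction.
import numpy as np
from itertools import combinations

def precision_at_k(emb, edges, K):
    """Rank all node pairs by inner-product similarity; fraction of the
    top-K ranked pairs that are actual edges."""
    n = emb.shape[0]
    pairs = list(combinations(range(n), 2))
    scores = [emb[u] @ emb[v] for u, v in pairs]
    ranked = [pairs[i] for i in np.argsort(scores)[::-1]]
    hits = sum(1 for p in ranked[:K] if p in edges)
    return hits / K

# toy embeddings: nodes 0,1 and nodes 2,3 are close pairs
emb = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
edges = {(0, 1), (2, 3)}
print(precision_at_k(emb, edges, K=2))  # 1.0
```

Enumerating all $\frac{N(N-1)}{2}$ pairs as above is exactly what becomes infeasible at scale, hence the 1\% sampling on Youtube.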
The results are shown in Table \ref{table:recon} and Figure \ref{fig:recon}. Our proposed method consistently outperforms the baselines on the Precision@K metric. On AUC, our method achieves the best performance on BlogCatalog and Flickr, and comparable performance on Youtube. Considering the significant improvement in efficiency and the simplicity of our model, we regard the accuracy of RandNE as satisfactory and somewhat beyond expectation, which demonstrates the effectiveness of random projection in preserving network structure. \subsubsection{Link Prediction}\label{sec:lp} Link prediction, which aims to predict future links using the current network structure, is an important task for network embedding. In our experiments, we randomly hide 30\% of the edges for testing. After training embedding vectors on the rest of the network, we rank pairs of nodes in the same way as for network reconstruction and evaluate the results on the testing network. The process is repeated 5 times and the average results are reported. From Table \ref{table:lp} and Figure \ref{fig:lp}, we can see that our proposed method still outperforms the baselines in nearly all cases, the only exception being the AUC score on Youtube, as in network reconstruction. The results demonstrate that besides reconstructing the network, RandNE also has good inference abilities, which we attribute to effectively preserving the high-order proximity.
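The AUC scores used in both tasks admit an equally compact sketch: with inner-product scoring, AUC is the probability that a randomly chosen positive (edge) pair outranks a randomly chosen negative (non-edge) pair. The embeddings and sampled pairs below are made-up toy values, not the experimental data.

```python
# A minimal sketch of the AUC evaluation: probability that a positive pair
# scores higher than a negative pair under inner-product similarity.
import numpy as np

def auc_score(emb, pos_pairs, neg_pairs):
    pos = [emb[u] @ emb[v] for u, v in pos_pairs]
    neg = [emb[u] @ emb[v] for u, v in neg_pairs]
    # ties count half, following the standard pairwise AUC definition
    wins = sum((p > n_) + 0.5 * (p == n_) for p in pos for n_ in neg)
    return wins / (len(pos) * len(neg))

emb = np.array([[1.0, 0.0], [0.8, 0.2], [0.0, 1.0], [0.2, 0.8]])
pos = [(0, 1), (2, 3)]   # observed edges
neg = [(0, 2), (1, 2)]   # sampled non-edges
print(auc_score(emb, pos, neg))  # 1.0
```

Sampling positive and negative pairs, rather than enumerating them, is what keeps this metric usable on large networks.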
\begin{table} \caption{AUC scores of Link Prediction.}\label{table:lp} \centering \begin{tabular}{ c | c | c | c } \hline Dataset & BlogCatalog & Flickr & Youtube \\ \hline RandNE & $\mathbf{0.944}$ & $\mathbf{0.940}$ & 0.887 \\ \hline DeepWalk & 0.760 & 0.938 & 0.909 \\ \hline LINE$_{1st}$ & 0.667 & 0.909 & 0.847 \\ \hline LINE$_{2nd}$ & 0.762 & 0.932 & $\mathbf{0.959}$ \\ \hline Node2vec & 0.650 & 0.865 & 0.778 \\ \hline SDNE & 0.940 & 0.926 & - \\ \hline \end{tabular} \end{table} \begin{figure*}[t] \centering \hspace{-0.3cm} \includegraphics[width=12cm]{LP.eps} \caption{The Precision@K of link prediction on moderate-scale networks. We randomly split the network into training and testing sets. After training embedding vectors on the training network, we predict links by ranking similarities of node pairs and evaluate on the testing network. The results show that our proposed method outperforms the baselines in link prediction.} \label{fig:lp} \end{figure*} \subsubsection{Node Classification}\label{sec:nodeclassification} Node classification is a typical application of network embedding. Specifically, we follow the experimental setting of the baselines and randomly split the nodes into training and testing sets. Then, a one-vs-all logistic regression with L2 regularization \cite{fan2008liblinear} is trained using the embeddings of the training set and tested on the testing set. Following \cite{tang2015line}, we normalize each row of the embedding vectors. We use two measurements, Macro-F1 and Micro-F1 \cite{perozzi2014deepwalk}, to evaluate the performance. The average results of 5 runs are reported. \begin{figure*}[t] \centering \hspace{0.5cm} \includegraphics[width=13.5cm]{Classification.eps} \caption{The results of node classification on moderate-scale networks. We train a one-vs-all logistic regression on the embedding vectors as the classifier.
The results show that our proposed method achieves comparable performance to the baselines.} \label{fig:classification} \end{figure*} From Figure \ref{fig:classification}, different networks show different patterns in terms of node classification performance. On Flickr, RandNE achieves the best results, while Node2vec and LINE show good performance on BlogCatalog and Youtube respectively. One plausible explanation for this inconsistency is that different networks have different inherent structures with respect to the specific classification task, and no single method can dominate the others on all datasets. In general, however, we can safely conclude that RandNE achieves comparable results to the baselines in node classification while being significantly faster. \subsubsection{Structural Role Classification}\label{sec:class} Recently, preserving the structural role of nodes in network embedding has attracted research attention \cite{ribeiro2017struc2vec}, with important applications such as influence maximization and measuring node centrality. To validate the effectiveness of our method in structural role classification, we conduct experiments on three air-traffic networks\footnote{https://github.com/leoribeiro/struc2vec} from Brazil, Europe and the United States, as in \cite{ribeiro2017struc2vec}; the networks have 131 nodes and 2,006 edges, 399 nodes and 11,986 edges, and 1,190 nodes and 27,198 edges respectively. The networks are constructed by taking airports as nodes and airlines as edges. Each node is assigned a ground-truth label ranging from 1 to 4 to indicate the level of activity of the airport. The experimental setting is similar to node classification in Section \ref{sec:nodeclassification}, except that we use accuracy, i.e., the percentage of nodes whose labels are correctly predicted, as the measurement, since the label classes are of equal size. We uniformly set the dimensionality of the embedding to 16 since the networks are small.
The average results of 20 runs are reported. From Figure \ref{fig:structural}, we can see that RandNE consistently achieves the best results on European Flights Network. On Brazilian and American Flights Network, RandNE is only second to SDNE with tiny differences. However, RandNE is much more efficient than SDNE by about 4 orders of magnitude (see Figure \ref{fig:time}). The results demonstrate that RandNE can effectively capture the structural role of nodes. \begin{figure*}[t] \centering \hspace{0cm} \includegraphics[width=13.5cm]{Structural.eps} \caption{The accuracy of structural role classification. We use embedding vectors to predict the structural role of nodes in air-traffic networks. The results show that RandNE effectively captures the structural role of nodes.} \label{fig:structural} \end{figure*} \subsubsection{Analysis} In RandNE, we use iterative random projection to preserve high-order proximities. Here we analyze the effect of the proximity order, or equivalently, the number of iterations $q$. We report the results of varying $q$ from 1 to 3 with the same experimental settings. For brevity, we only report AUC scores of link prediction and the accuracy of structural role classification on American Flights in Figure \ref{fig:ParaOrder}, while other datasets and tasks show similar patterns. The results show that iterative random projection ($q > 1$) greatly and consistently outperforms the simple random projection ($q=1$), demonstrating the importance of preserving high-order proximities in network embedding. \begin{figure}[t] \centering \includegraphics[width=8.5cm]{Para_Order.eps} \caption{Parameter analysis. Figure shows that the high-order proximity ($q > 1$) greatly outperforms the simple random projection ($q=1$), demonstrating the importance of preserving high-order proximities.} \label{fig:ParaOrder} \end{figure} \begin{figure}[t] \centering \includegraphics[width=7.5cm]{Para_Scale.eps} \caption{Scalability analysis. 
Figure shows the linear time complexity of our method with respect to the number of nodes and number of edges respectively. } \label{fig:ParaScale} \end{figure} To verify the scalability of RandNE, we conduct experiments on random networks (i.e. the Erdos Renyi model \cite{erdos1960evolution}). We record the running time when fixing the number of nodes (as one million) or fixing the number of edges (as ten million) while varying the other. Figure \ref{fig:ParaScale} shows that the running time grows linearly with respect to the number of nodes and number of edges respectively, verifying the scalability of RandNE. Since our method is based on random projection, we also empirically analyze the impact of randomization. Specifically, we set different random seeds for RandNE with the same experimental setting, and report the mean value and standard deviation of 10 runs. For brevity, we only report the AUC scores of network reconstruction and link prediction in Table \ref{table:randomization}, while other tasks show similar patterns. The results show that our method is quite stable with respect to randomization. \begin{table}[t] \caption{Mean value and standard deviation of AUC with 10 different initializations.}\label{table:randomization} \centering \begin{tabular}{ c | c | c | c } \hline Dataset & BlogCatalog & Flickr & Youtube \\ \hline Reconstruction & $0.958\pm0.001$ & $0.954\pm0.003$ & $0.981\pm$ 0.002\\ \hline Link Prediction & $0.945\pm0.001$ & $0.941\pm0.002$ & $0.887\pm0.001$ \\ \hline \end{tabular} \end{table} \subsection{A Billion-scale Network}\label{sec:wechat} WeChat\footnote{http://www.wechat.com/en/} is one of the largest online social networks in China with more than one billion active users. We use the friendships data provided by WeChat from January 21, 2011 (the launch day of WeChat) to January 20, 2013, which contains 250 million nodes and 4.8 billion edges in total. The data is strictly anonymized for privacy purposes. 
Since no node label information is available, we mainly conduct experiments on network reconstruction and link prediction. As none of the aforementioned network embedding methods can be applied to a network of this scale, we compare our method with two widely used graph-based measures: Common Neighbors and Adamic-Adar \cite{liben2007link}. The benchmark accuracy of random guessing is also included. The experiments are conducted on a distributed cluster with 16 computing servers, where each server has two Xeon E5 CPUs and 128GB of memory. For RandNE, we set the embedding dimensionality to $d=512$. \begin{table}[t] \caption{AUC scores of network reconstruction on WeChat network.}\label{table:auclarge} \centering \begin{tabular}{ c | c } \hline Method & AUC \\ \hline RandNE & $\mathbf{0.989}$ \\ \hline Common Neighbors & 0.783 \\ \hline Adamic Adar & 0.783 \\ \hline Random & 0.500 \\ \hline \end{tabular} \end{table} \subsubsection{Network Reconstruction} The experimental setting is similar to that for moderate-scale networks (Section \ref{sec:recon}), i.e., we rank pairs of nodes according to their inner product similarities and reconstruct the network. We report the AUC scores in Table \ref{table:auclarge}. Here we omit the other metric, Precision@K, because the number of possible node pairs $\frac{N(N-1)}{2} \approx 10^{16}$ is so large that even sampling is infeasible. The results show that our proposed method greatly outperforms Common Neighbors and Adamic-Adar. A plausible reason is that our method preserves high-order proximity information in the embedding vectors through the iterative random projection, while the baselines only count local proximities. Adamic-Adar, a frequency-weighted modification of Common Neighbors, has the same accuracy as Common Neighbors because on the billion-scale network, the AUC score mainly depends on whether two nodes have common neighbors at all rather than on the weights.
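The two graph-based baselines are simple enough to state in code: Common Neighbors scores a pair $(u,v)$ by the number of shared neighbors, and Adamic-Adar down-weights each shared neighbor $w$ by $1/\log|N(w)|$. This is a toy sketch on a made-up four-node graph, not the distributed implementation used on WeChat.

```python
# Minimal sketches of the Common Neighbors and Adamic-Adar scores.
import math

# toy adjacency lists
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1, 3}, 3: {0, 2}}

def common_neighbors(u, v):
    """Number of neighbors shared by u and v."""
    return len(adj[u] & adj[v])

def adamic_adar(u, v):
    """Shared neighbors, each down-weighted by 1/log(degree)."""
    return sum(1.0 / math.log(len(adj[w])) for w in adj[u] & adj[v])

print(common_neighbors(1, 3))        # 2 shared neighbors: nodes 0 and 2
print(round(adamic_adar(1, 3), 3))
```

Both scores depend only on the 1-hop neighborhoods, which illustrates why they miss the high-order structure that the embedding captures.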
\subsubsection{Dynamic Link Prediction} To simulate the evolving scenarios of real networks, we first randomly split the network into 30\% training and 70\% testing, train embedding vectors on the training set and evaluate the link prediction results on the testing set. Then, we randomly select 10\% (with respect to the whole network) from the testing set as the evolving part and add it to the training set. After updating the embedding vectors using the new training data, we evaluate the link prediction results on the rest of the network. The process is repeated until 70\% of the network is used for training and the remaining 30\% for testing. We adopt two versions of our method: one that dynamically updates as the network evolves (RandNE-D), and one that reruns the algorithm at each step (RandNE-R). To fairly compare the effectiveness of our dynamic updating method against rerunning the algorithm, we use the same random seed for RandNE-D and RandNE-R. The results in Table \ref{table:auclarge2} show that our proposed method again consistently outperforms the baselines on the AUC scores. Besides, RandNE-D produces results identical to RandNE-R, verifying that our dynamic updating has no error aggregation. All methods achieve larger AUC scores as the training data increases because more information is provided.
\begin{table}[t] \caption{AUC scores of dynamic link prediction on WeChat.}\label{table:auclarge2} \centering \begin{tabular}{ c | c | c | c | c | c} \hline Observed Edges & 30\% & 40\% & 50\% & 60\% & 70\% \\ \hline RandNE-D & $\mathbf{0.646}$ & $\mathbf{0.689}$ & $\mathbf{0.726}$ & $\mathbf{0.756}$ & $\mathbf{0.780}$\\ \hline RandNE-R & $\mathbf{0.646}$ & $\mathbf{0.689}$ & $\mathbf{0.726}$ & $\mathbf{0.756}$ & $\mathbf{0.780}$\\ \hline Common Neighbors & 0.575 & 0.611 & 0.647 & 0.681 & 0.712 \\ \hline Adamic Adar & 0.575 & 0.611 & 0.647 & 0.681 & 0.712 \\ \hline Random & 0.500 & 0.500 & 0.500 & 0.500 & 0.500 \\ \hline \end{tabular} \end{table} \subsubsection{Speedup via Distributed Computing} We evaluate the performance of our method in distributed computing by reporting the speedup ratio. We divide the 16 servers into 4 sub-clusters, with each sub-cluster containing 4 servers. We vary the number of sub-clusters used for distributed computing and record the running time and speedup ratio. Figure \ref{fig:ParaSpeedup} shows that our method has a linear speedup ratio with a slope of approximately 0.8. The slope is slightly less than 1 because of subtle differences in the servers and some extra costs, e.g. reading data. We also report the exact running time in Table \ref{table:time}. It shows that RandNE can learn all the node embeddings of WeChat within 7 hours with 16 normal servers, which is promising for real billion-scale networks. 
\begin{table}[t] \caption{The running time of RandNE via distributed computing.}\label{table:time} \centering \begin{tabular}{ c | c | c | c | c} \hline Number of Sub-clusters & 1 & 2 & 3 & 4 \\ \hline Running Time(s) & 82157 & 46029 & 33965 & 24757 \\ \hline \end{tabular} \end{table} \begin{figure}[t] \centering \includegraphics[width=4.5cm]{Para_Speedup.eps} \caption{The speedup ratio of RandNE via distributed computing.} \label{fig:ParaSpeedup} \end{figure} We also analyze the impact of the proximity order $q$ on WeChat network, which shows similar patterns as moderate-scale networks in Figure \ref{fig:ParaOrder}. We omit the results here for brevity. \section{Conclusion} In this paper, we study the problem of embedding billion-scale networks while preserving high-order proximities. We propose RandNE, a novel and simple network embedding method based on random projection and design an iterative projection procedure to efficiently preserve the high-order proximities. Theoretical analysis shows that i) RandNE is more computationally efficient than the existing methods by orders of magnitude, ii) it can well support distributed computing without any communication cost in the calculation, and iii) it can efficiently incorporate the dynamic changes of the networks without error aggregation. Extensive experimental results on multiple datasets with different scales demonstrate the efficiency and efficacy of our proposed method. One future direction is to generalize this idea to incorporate node attributes and node labels. It is also interesting to explore other random projection methods beyond Gaussian projection. \section*{Acknowledgment} This work was supported in part by National Program on Key Basic Research Project (No. 2015CB352300), National Natural Science Foundation of China (No. 61772304, No. 61521002, No. 61531006, No. 61702296), National Natural Science Foundation of China Major Project (No. 
U1611461), the research fund of Tsinghua-Tencent Joint Laboratory for Internet Innovation Technology, and the Young Elite Scientist Sponsorship Program by CAST. All opinions, findings and conclusions in this paper are those of the authors and do not necessarily reflect the views of the funding agencies. \bibliographystyle{IEEEtran}
\subsection{$(\Delta+1)$ Coloring for $\Delta\leq n^{3/4}$}\label{sec:threequarter} In this section, we show that one can modify the algorithm of the previous section so that it can be implemented efficiently already for larger values of $\Delta \in [\sqrt{n},n^{3/4}]$. \begin{lemma}\label{lem:det-small-deg} There exists a deterministic algorithm that given a graph $G=(V,E)$ of maximum degree $\Delta=O(n^{3/4})$ where each node $u$ has a palette $\mathsf{Pal}(u)$ of at least $\deg(u,G)+1$ colors in the range $[1,\ldots, \Delta+1]$ computes a list-coloring in $O(\log \Delta)$ rounds. \end{lemma} We will have $O(\log \Delta)=O(\log n)$ phases, each of which colors at least a constant fraction of the remaining uncolored nodes. Each phase $j$ will be implemented in two steps. The first step computes a large subset of nodes $A'$ that contains at least a constant fraction of the currently uncolored nodes, along with a set of colors $S(u)\subseteq \mathsf{Pal}(u)$ for each node $u \in A'$. These $S(u)$ sets will have two desired properties. On the one hand, they will be large enough so that when running a single (slightly modified) step of Algorithm $\mathsf{SimpleRandColor}$ using the $S(u)$ sets as the palettes, each node will be colored with constant probability. On the other hand, they will be sufficiently small to allow each node to send its set to all its \emph{relevant} neighbors. We will say that $u$ and $v$ are relevant neighbors if they are neighbors in $G$ and $S(u) \cap S(v) \neq \emptyset$. The number of relevant neighbors of a node serves as an upper bound on the number of its competitors for a given color. The second step of the phase will then simulate a single round of Algorithm $\mathsf{SimpleRandColor}$ in a similar way to the algorithm of Sec. \ref{sec:deg-very-small}, up to minor modifications. We first describe a randomized algorithm that uses only $O(1)$-wise independence and then explain how to derandomize it efficiently.
\subsubsection{Randomized Algorithm with $O(1)$-Wise Independence (Phase $j$).} For each node $u$, let $\mathsf{Pal}(u)$ be the set of currently free colors in the palette of $u$ at the beginning of phase $j$, and denote $F_u=|\mathsf{Pal}(u)|$. Let $V_j$ be the set of uncolored nodes at the beginning of the phase and $\Gamma_j(u)=\Gamma(u)\cap V_j$, i.e., the set of currently uncolored neighbors of $u$. Clearly, $F_u > |\Gamma_j(u)|$. \paragraph{Step 1.} The goal is to compute a subset $A'$ and a set of colors $S(v) \subseteq \mathsf{Pal}(v)$ for each $v \in A'$, which will be used as the palettes in Step 2. Every uncolored node $u$ partitions its palette $\mathsf{Pal}(u)$ into $\Delta^{1/3}$ bins, each corresponding to a consecutive range of $\Delta^{2/3}$ colors. Specifically, for every $i \in \{1,\ldots, \Delta^{1/3}-1\}$, let $B_{i,u}$ be the set of free colors in $\mathsf{Pal}(u)$ restricted to the range $[(i-1)\Delta^{2/3}+1, i \cdot\Delta^{2/3}]$. Formally, $$B_{i,u}=[(i-1)\Delta^{2/3}+1, i \cdot\Delta^{2/3}] \cap \mathsf{Pal}(u) \mbox{~~and~~} B_{\Delta^{1/3},u}=[\Delta-\Delta^{2/3}+1,\Delta+1] \cap \mathsf{Pal}(u)~.$$ A bin $B_{i,u}$ is \emph{small} if $|B_{i,u}|\leq \Delta^{1/12}$ and otherwise it is \emph{large}. Let $A_0$ be the subset of all currently uncolored nodes for which at least a $1/10$ fraction of their free colors lie in small bins. That is, $A_0=\{ u \in V_j ~\mid~ \sum_{i: |B_{i,u}|\leq \Delta^{1/12}}|B_{i,u}|\geq F_u/10\}$. Let $A_1=V_j \setminus A_0$ be the set of remaining uncolored nodes. First assume that $|A_0|\geq |A_1|$. In this case, $A'=A_0$, and for every $u \in A'$, let $S(u)=\bigcup_{i: |B_{i,u}|\leq \Delta^{1/12}} B_{i,u}$ be the set of all the colors in the small bins of $u$. From now on, assume that $|A_1| > |A_0|$. We will describe how to compute $A' \subseteq A_1$ and the $S(u)$ sets for each $u \in A'$. Each node $u \in A_1$ sets $S(u)=B_{i,u}$ with probability $|B_{i,u}|/F_u$ for every $i \in \{1,\ldots, \Delta^{1/3}\}$.
The decisions of the nodes are $O(1)$-wise independent. Then, each node $u$ sends to its $A_1$-neighbors the index $i_u$ of its chosen bin. Let $r_{i_u}$ be the total number of $u$'s neighbors in $A_1$ that pick the $i_u$'th bin. We then say that $u$ is \emph{happy} if $r_{i_u}\leq 11|B_{i_u,u}|$. That is, the number of relevant neighbors of $u$ w.r.t.\ $S(u)$ is at most $11|S(u)|$. Otherwise, the node is \emph{sad}. The final set $A'$ contains all the happy nodes. This completes the description of the first step (of the $j$'th phase). \paragraph{Step 2.} In the second step, we apply a single round of Algorithm $\mathsf{SimpleRandColor}$ on the nodes in $A'$ using the $S(u)$ sets as their palettes. The only modification is that we reduce the probability that a node picks a color to $1/22$ (rather than $1/2$). That is, each $u \in A'$ first flips a biased coin such that w.p. $21/22$, $c_u=0$; and with the remaining probability, $u$ picks a color uniformly at random in $S(u)$. The nodes' decisions are pairwise independent. \paragraph{Analysis.} We first show that each phase can be implemented in $O(1)$ rounds. In the first step, each node in $A_1$ sends to each of its at most $\Delta$ neighbors the statistics on the number of free colors in each of its bins, i.e., $|B_{i,u}|$ for every $i \in \{1,\ldots, \Delta^{1/3}\}$. The total number of sent messages is $O(\Delta^{4/3})=O(n)$. Thus, this can be done in $O(1)$ rounds. We next claim that the number of uncolored nodes is reduced by a constant factor in expectation in each phase, and thus after $O(\log n)$ phases all nodes are colored w.h.p.
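A centralized sketch of this modified trial step may help to fix ideas. The function below is purely illustrative (names are ours): it uses fully independent random choices, whereas the algorithm only needs pairwise independence, and it checks conflicts against all uncolored neighbors, which is stricter than checking only relevant neighbors.

```python
import random

def color_trial_round(nodes, adj, S, colored, rng, p=1/22):
    """One centralized trial round in the spirit of the modified
    SimpleRandColor step: each uncolored node u picks, with probability p,
    a uniform color from its candidate set S[u], and keeps it only if no
    uncolored neighbor picked the same color."""
    pick = {}
    for u in nodes:
        if u not in colored and rng.random() < p:
            pick[u] = rng.choice(sorted(S[u]))
    newly = {u: c for u, c in pick.items()
             if all(pick.get(v) != c for v in adj[u] if v not in colored)}
    colored.update(newly)  # commit the successful trials
    return newly
```

Repeating such rounds colors each participating node with constant probability per round, mirroring the $1/44$ bound shown below for the pairwise-independent version.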
We start by showing the following: \begin{lemma}\label{cor:final} With high probability, at the end of the first step, we have computed a subset of nodes $A'$ and a subset of colors $S(u) \subseteq \mathsf{Pal}(u)$ for each $u \in A'$ such that (i) $A'$ contains at least a constant fraction of the uncolored nodes $V_j$ in expectation, and (ii) the number of $u$'s relevant neighbors in $A'$ is at most $11|S(u)|$, where a node $v \in \Gamma_j(u) \cap A'$ is a relevant neighbor of $u$ if $S(u) \cap S(v) \neq \emptyset$. \end{lemma} To prove the lemma, we will first consider the more interesting case where $|A_1|\geq |A_0|$. We will show that there are many happy nodes in $A_1$ after the first step. Let $R_{i,u}$ be the random variable denoting the number of $u$'s neighbors that chose the $i$'th bin, for every $i \in \{1,\ldots, \Delta^{1/3}\}$. We call a bin $B_{i,u}$ \emph{bad} if $|B_{i,u}| \leq \mathsf{Exp}(R_{i,u})/10$, and otherwise it is \emph{good}. Note that the classification into bad and good does not depend on the random choices. \begin{claim}\label{cl:clonce} W.h.p., for every good and large bin $B_{i,u}$, it holds that $R_{i,u}\leq 11 \cdot |B_{i,u}|$. \end{claim} \begin{proof} Fix a node $u \in A_1$, and consider a good and large bin $B_{i,u}$ with $\mu=\mathsf{Exp}(R_{i,u})$. Since $|B_{i,u}|=\Omega(\Delta^{1/12})$ (as it is large) and since $|B_{i,u}|=\Omega(\mathsf{Exp}(R_{i,u}))$ (as it is good), using the concentration inequality for $c$-wise independence from \cite{Vadhan12}, we get that $$\mathsf{Prob}(|R_{i,u}-\mu|\geq |B_{i,u}|)\leq ((c\mu+c^2)/|B_{i,u}|^2)^{c}\leq 1/n^{10}.$$ Therefore, with high probability it holds that $R_{i,u}\leq \mu+|B_{i,u}|\leq 11|B_{i,u}|$. The claim holds by applying the union bound over all $\Delta^{1/3}$ bins of $u$, and over all $n$ nodes. \end{proof} \begin{claim}\label{cl:cltwo} The probability that $u$ chooses a bad bin is at most $1/10$.
\end{claim} \begin{proof} Since $\mathsf{Exp}(R_{i,u})=\sum_{v \in \Gamma_j(u)\cap A_1} |B_{i,v}|/F_v$, we have that $$\sum_i \mathsf{Exp}(R_{i,u})\leq \sum_i \sum_{v \in \Gamma_j(u)} |B_{i,v}|/F_v =\sum_{v \in \Gamma_j(u)}\sum_i |B_{i,v}|/F_v \leq F_u-1~.$$ Therefore, $$\mathsf{Prob}(u \mbox{ chooses a bad bin})=\sum_{i: |B_{i,u}| \leq \mathsf{Exp}(R_{i,u})/10} |B_{i,u}|/F_u \leq \sum_{i: |B_{i,u}| \leq \mathsf{Exp}(R_{i,u})/10} \mathsf{Exp}(R_{i,u})/(10F_u) \leq 1/10~.$$ \end{proof} \begin{corollary}\label{cor:happy} The probability that $u \in A_1$ is happy is at least $1/2$. \end{corollary} \begin{proof} There are three bad events that prevent $u$ from being happy. (i) $u$ chose a bad bin. By Claim \ref{cl:cltwo}, this happens with probability at most $1/10$.\\ (ii) $u$ chose a small bin. Since $u \in A_1$, at least a $9/10$ fraction of its colors are in large bins. The probability of choosing a small bin is at most $\sum_{i: |B_{i,u}|\leq \Delta^{1/12}}|B_{i,u}|/F_u\leq 1/10$.\\ (iii) $u$ chose a large and good bin $B_{i,u}$, but with $r_{i_u}> 11 \cdot |B_{i,u}|$ relevant neighbors. By Claim \ref{cl:clonce}, this happens with probability at most $1/n^5$. Therefore, the probability that none of the bad events happened is at least $1/2$. \end{proof} \begin{proof}[Proof of Lemma \ref{cor:final}] First consider the case where $|A_1|\geq |A_0|$. By Cor. \ref{cor:happy}, $A'$ contains at least half of the nodes in $A_1$ in expectation. Since each node in $A'$ is happy, claim (ii) holds as well. Next, consider the complementary case where $A'=A_0$ and, for every $u \in A_0$, the set $S(u)$ is the union of the colors in the small bins of $u$. Claim (i) holds since $|A_0|\geq |V_j|/2$. By the definition of $A_0$, $|S(u)|\geq F_u/10 \geq \frac{1}{10}|\Gamma_j(u)| \geq \frac{1}{10}|\Gamma_j(u)\cap A_0|$, where $\Gamma_j(u)$ is the set of $u$'s uncolored neighbors at the beginning of the $j$'th phase. Hence, claim (ii) holds as well.
\end{proof} Due to Lemma \ref{cor:final}, we know that $A'$ is large in expectation, and thus it is sufficient to show that Step 2 colors each node $u \in A'$ with constant probability (when using pairwise independence). Assume that $c_u \neq 0$. Then, for any neighbor $v$, w.p. $1/22$ it holds that $c_v\neq 0$. In addition, given that $c_v \neq 0$, the probability (over the uniform choice of $c_u$ in $S(u)$) that $c_u=c_v$ is at most $1/|S(u)|$. Overall, $\mathsf{Prob}(c_u =c_v ~\mid~ c_u\neq 0)\leq 1/22 \cdot 1/|S(u)|$. Since the number of $u$'s relevant neighbors is at most $11|S(u)|$, by applying the union bound, we get that the probability that $u$'s color is legal, given that $c_u \neq 0$, is at least $1/2$. The probability that $c_u\neq 0$ is $1/22$, and thus the probability that $u$ is colored is at least $1/44$. \paragraph{Derandomization for the First Step.} In the case where $|A_0|\geq |A_1|$ (i.e., many nodes have lots of colors in small bins), the first step is deterministic. So, we will consider the case where $|A_1| > |A_0|$. Since the arguments are based on $O(1)$-wise independence, the total seed length is $O(\log n)$. By letting each node $v$ send the values of $|B_{i,v}|$ for every $i$, for any possible seed, each node $u$ can simulate the selection of the bins made by its neighbors. Our goal is to maximize the number of happy nodes: nodes $u$ that picked a bin $B_{i,u}$ with low competition, i.e., such that the number of their $A_1$-neighbors that picked that bin is at most $11|B_{i,u}|$. Since, in expectation, the number of these nodes is at least a constant fraction of the currently uncolored nodes, using the method of conditional expectation, we can compute the desired $O(\log n)$-bit seed within $O(1)$ rounds. \paragraph{Derandomization for the Second Step.} The second step starts by letting each node $u \in A'$ send its set $S(u)$ to each of its relevant neighbors in $A'$. In the case where $A'=A_0$, each node sends $S(u)$ to all its $A'$-neighbors.
In the other case, for each node $u \in A'$, $S(u)=B_{i,u}$ for some $i \in \{1,\ldots, \Delta^{1/3}\}$. The node $u$ will then send $S(u)$ to any $A'$-neighbor $v$ that also picked its $i$'th bin, i.e., to any neighbor $v \in A'$ with $S(v)=B_{i,v}$. These are the only potential competitors of $u$ on its colors. We are now in the situation where each node knows the ``palettes'' of its relevant neighbors. From that point on, the derandomization of the single step of the modified $\mathsf{SimpleRandColor}$ Algorithm is basically the same as in Sec. \ref{sec:deg-very-small}. As shown above, when using pairwise independence, a node in $A'$ gets colored w.p. at least $1/44$. As each node $u$ knows the palettes $S(v)$ of its relevant neighbors, using a seed (and the nodes' IDs), it can simulate the decisions of its relevant neighbors. The goal is then to maximize the number of colored nodes using the method of conditional expectation, in the exact same manner as in Sec. \ref{sec:deg-very-small}. \paragraph{Round Complexity.} Unlike the randomized algorithm, our deterministic solution requires that the nodes be able to simulate the decisions made by their neighbors. For that purpose, we will need to send the set $S(u)$ of colors to all relevant neighbors of $u$. We will show that this is possible both for $A_0$ and for the case where $A' \subseteq A_1$. First assume that $|A_0|\geq |A_1|$. The total number of messages sent and received\footnote{As each node in $A_0$ has at most $O(\Delta^{5/12})$ neighbors in $A_0$ by definition.} by a node $u \in A_0$ is $O(\Delta^{5/12}\cdot \Delta^{5/12})=O(\Delta^{5/6})=O(n)$. So sending all these palettes can be done in $O(1)$ rounds. Now consider the case where $|A_1|\geq |A_0|$ and let $A' \subseteq A_1$ be the set of happy nodes. Each node $u \in A'$ sends its chosen bin $B_{i,u}$ to $O(|B_{i,u}|)$ relevant neighbors. Since $|B_{i,u}|=O(\Delta^{2/3})$, each node sends at most $O(\Delta^{4/3})=O(n)$ messages.
Again, this can be done in $O(1)$ rounds. \subsection{Handling the General Case} We will have an $O(1)$-round procedure to partition the nodes into $\ell=O(\Delta^{1/4})$ sub-graphs $G_1,\ldots, G_{\ell}$ and one additional sub-graph $G^*$ that will be list-colored after coloring the $G_i$ subgraphs. Each subgraph $G_i$ will be assigned a disjoint set of $\Delta(G_i)+1$ colors, and thus we will be able to color all these subgraphs in parallel in $O(\log \Delta)$ rounds using the algorithm of Lemma \ref{lem:det-small-deg}. We first show that the desired partitioning can be done with $O(1)$-wise independence and then explain how to derandomize it. Each node picks a subgraph $i \in [1,\ell]$ w.p. $p_i=1/\ell-q/\ell$ for $\ell=\lceil \Delta^{1/4} \rceil$ and $q=2\Delta^{-5/16}$, and joins the left-over subgraph $G^*$ with the remaining probability $q$. The decisions of the nodes are $O(1)$-wise independent. Using the basic concentration bound for $O(1)$-wise independence, we get that w.h.p., $$\Delta(G_i)\leq \Delta \cdot p_i+(\Delta\cdot p_i)^{7/12} \leq \Delta^{3/4}-q \cdot \Delta^{3/4}+\Delta^{7/16}\leq \Delta^{3/4}-1~.$$ Therefore, the total number of colors consumed by the $\ell$ subgraphs is at most $\Delta$. By using concentration bounds again, w.h.p., the maximum degree of the left-over subgraph $G^*$ is at most $O(\Delta \cdot q)=O(\Delta^{11/16})=O(n^{3/4})$. Thus, after coloring the graphs $G_1,\ldots, G_\ell$, the final sub-graph $G^*$ can be colored in $O(\log \Delta)$ rounds by applying again the algorithm of Lemma \ref{lem:det-small-deg}. We now show that the random partitioning can be derandomized. Given a seed of $O(\log n)$ bits, each vertex $u$ can determine its degree in its chosen subgraph. We have $\ell+1$ subgraphs $G_1, \ldots, G_\ell, G^*$ such that, w.h.p. over the $O(\log n)$-bit random seed, the maximum degree of each $G_i$ subgraph is at most $\Delta^{3/4}$ and the maximum degree of $G^*$ is $O(\Delta^{11/16})$.
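The random partitioning step can be sketched centrally as follows (a minimal illustration with names of our own choosing; the paper uses $O(1)$-wise independent choices, whereas the sketch draws fully independent ones):

```python
import math
import random

def partition_nodes(nodes, delta, rng):
    """Centralized sketch of the random partition: each node joins one of
    ell = ceil(delta^(1/4)) subgraphs, each chosen with probability
    p_i = (1 - q)/ell, or the left-over subgraph G* (labeled '*') with the
    remaining probability q = 2 * delta^(-5/16)."""
    ell = math.ceil(delta ** 0.25)
    q = 2 * delta ** (-5 / 16)
    return {u: '*' if rng.random() < q else rng.randrange(ell)
            for u in nodes}
```

The w.h.p. degree bounds $\Delta(G_i)\leq \Delta^{3/4}-1$ and $\Delta(G^*)=O(\Delta^{11/16})$ are concentration statements about this distribution and are not checked by the sketch.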
For a given seed, we say that the subgraph $G_i$ is bad if its maximum degree exceeds $\Delta^{3/4}$. In the same manner, $G^*$ is bad if its maximum degree exceeds $\Delta^{11/16}$. Our goal is to minimize the number of bad subgraphs. For each partial assignment, the node $u$ will compute the probability that its degree in its subgraph violates the desired bound (i.e., $\Delta^{3/4}$ for $G_i$ and $\Delta^{11/16}$ for $G^*$). We will then pick the assignment that minimizes the probability (or expectation) of these violations. Since the expected number of bad subgraphs over the random seeds is less than one, using this conditional expectation method we will find a seed whose total number of violations is $0$. Next, the algorithm applies the procedure of Lemma \ref{lem:det-small-deg} to color the subgraphs $G_1,\ldots, G_\ell$, and then applies the same procedure to list-color the final subgraph $G^*$. We next observe that these subgraphs can be colored simultaneously within the same number of rounds. This clearly holds for the randomized procedure described in Subsec. \ref{sec:threequarter}. The derandomization procedure is based on revealing a chunk of $\lfloor \log n \rfloor$ bits at a time by sending $O(\log n)$ bits to each node in the graph. Since we need to run at most $n^{1/4}$ instances in parallel, the derandomization will be slightly changed. Instead of revealing a chunk of $\lfloor \log n \rfloor$ bits, the algorithm of Lemma \ref{lem:det-small-deg} will reveal only $\lfloor \log n/2 \rfloor$ bits in $O(1)$ rounds. Since the seed length is $O(\log n)$, the derandomization will still be done in $O(1)$ rounds. As revealing the values of $\lfloor \log n/2 \rfloor$ bits of the seed requires only $2^{\lfloor \log n/2 \rfloor}\leq n^{1/2}$ nodes (one per assignment to these bits), all the $O(n^{1/4})$ instances can now be derandomized simultaneously.
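The derandomization pattern used throughout -- fix a small pseudorandom family, note that a random seed achieves the expected progress, and deterministically select a seed at least that good -- can be sketched centrally. The hash family $h_{a,b}(x)=(ax+b)\bmod p$ and all names below are illustrative; the actual algorithm computes the seed via conditional expectations in chunks rather than by exhaustive search.

```python
def best_seed(progress, p=31):
    """Exhaustively search the (illustrative) hash family
    h_{a,b}(x) = (a*x + b) mod p for a seed whose progress is at least
    the average over all seeds; such a seed must exist, which is the
    guarantee the conditional-expectation method exploits.
    `progress` maps a hash function to a score, e.g. the number of
    happy (or legally colored) nodes under that seed."""
    seeds = [(a, b) for a in range(p) for b in range(p)]
    scores = {s: progress(lambda x, s=s: (s[0] * x + s[1]) % p)
              for s in seeds}
    average = sum(scores.values()) / len(scores)
    best = max(seeds, key=lambda s: scores[s])
    assert scores[best] >= average  # the maximum is never below the mean
    return best
```

In the congested clique, the search over the seed space is distributed: each node evaluates the progress of a chunk of candidate seed prefixes, which is what makes the chunk-by-chunk revealing described above possible.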
\section{Introduction} Graph coloring is one of the most central symmetry breaking problems, with a wide range of applications to distributed systems and wireless networks. The most studied coloring problem is the $(\Delta+1)$ \emph{vertex} coloring problem, in which all nodes are given the \emph{same} palette of $\Delta+1$ colors, where $\Delta$ is the maximum degree in the graph. Vertex coloring, like other LCL problems\footnote{LCL stands for Locally Checkable Labelling problems, see \cite{naor1995can}.} (e.g., MIS, matching), is traditionally studied in the ${\mathsf{LOCAL}}$\ model in which any two neighboring vertices in the input graph can exchange arbitrarily long messages. \par In recent years there has been tremendous progress in the understanding of the randomized and the deterministic complexities of many LCL problems in the ${\mathsf{LOCAL}}$\ model \cite{ChangKP16,ghaffari2016complexity,chang2017time,fischer2017sublogarithmic,newclasses18}. Putting our focus on the $(\Delta+1)$ coloring problem, in a seminal work, Schneider and Wattenhofer \cite{schneider2010new} showed that increasing the number of colors from $\Delta+1$ to $(1+\epsilon)\Delta$ has a dramatic effect on the round complexity: a coloring can be computed in just $O(\log^* n)$ rounds when $\epsilon=\Omega(1)$ and $\Delta > \mathsf{poly}\log n$. This has led to two recent breakthroughs. Harris, Schneider and Su \cite{harris2016distributed} showed an $O(\sqrt{\log \Delta})$-round algorithm for $(\Delta+1)$ coloring, providing, for the first time, a separation between MIS and coloring (due to the MIS lower bound of \cite{kuhn2016local}).
In a recent follow-up breakthrough, Chang, Li and Pettie \cite{CHP18} extended the technique of \cite{harris2016distributed} to obtain the remarkable round complexity of $O(\log^* n+\mathsf{Det}_{\deg}(\mathsf{polylog} n))$ for the $(\Delta+1)$-list coloring problem, where $\mathsf{Det}_{\deg}(n')$ is the deterministic round complexity of a $(\deg+1)$ list coloring algorithm\footnote{In the $(\deg+1)$ list coloring problem, each vertex $v$ is given a palette with $\deg(v,G)+1$ colors.} in $n'$-vertex graphs. Both of these recent breakthroughs use messages of large size, potentially of $\Omega(n)$ bits. In view of these recent advances, the understanding of LCL problems in bandwidth-restricted models is much more lacking. Among these models, the congested clique model \cite{lotker2005minimum}, which allows all-to-all communication, has attracted a lot of attention in the last decade and, more recently, in the context of LCL problems \cite{berns2012super,hegeman2014near,hegeman2015lessons,Censor-HillelPS17,GhaffariMIS17,gregorylocal18}. In the congested clique model, each node can send $O(\log n)$ bits of information to any node in the network (i.e., even if they are not connected in the input graph). The ubiquity of overlay networks and large-scale distributed networks makes the congested clique model far more relevant (compared to the ${\mathsf{LOCAL}}$\ and the ${\mathsf{CONGEST}}$\ models) in certain settings. \paragraph{Randomized LCL in the Congested Clique Model.} Starting with Barenboim et al. \cite{barenboim2016locality}, all currently known efficient randomized algorithms for classical LCL problems have the following structure: an initial randomized phase and a \emph{post-shattering} deterministic phase. The shattering effect of the randomized phase, which dates back to Beck \cite{beck1991algorithmic}, breaks the graph into subproblems of $\mathsf{poly} \log n$ size to be solved deterministically.
In the congested-clique model, shattering has an even more dramatic effect. Usually, a node survives (i.e., remains undecided after) the randomized phase with probability $1/\mathsf{poly}(\Delta)$. Hence, in expectation, the size (in edges) of the remaining unsolved graph is\footnote{Using the bounded dependencies between decisions, this holds also with high probability.} $O(n)$. At that point, the entire unsolved subgraph can be solved in $O(1)$ rounds, using standard congested clique tools (e.g., the routing algorithm by Lenzen \cite{lenzen2013route}). Thus, as long as the main randomized part uses short messages, the congested clique model ``immediately'' enjoys an improved round complexity compared to that of the ${\mathsf{LOCAL}}$\ model. In a recent work \cite{GhaffariMIS17}, Ghaffari took this a few steps further and showed an $\widetilde{O}(\sqrt{\log \Delta})$-round randomized algorithm for MIS in the congested clique model, improving upon the state-of-the-art complexity of $O(\log \Delta+2^{O(\sqrt{\log\log n})})$ rounds in the ${\mathsf{LOCAL}}$\ model, also by Ghaffari \cite{ghaffari2016improved}. When considering the $(\Delta+1)$ coloring problem, the picture is somewhat puzzling. On the one hand, in the ${\mathsf{LOCAL}}$\ model, $(\Delta+1)$ coloring is provably simpler than MIS. However, since all existing $o(\log \Delta)$-round algorithms for $(\Delta+1)$ coloring in the ${\mathsf{LOCAL}}$\ model use large messages, it is not even clear if the power of all-to-all communication in the congested clique model can compensate for its bandwidth limitation and outperform the ${\mathsf{LOCAL}}$\ round complexity, let alone merely match it. We note that, in hindsight, the situation for MIS in the congested clique was somewhat more hopeful (compared to coloring), for the following reason.
The randomized phase of Ghaffari's MIS algorithm \cite{ghaffari2016improved}, although designed for the ${\mathsf{LOCAL}}$\ model, uses \emph{small} messages and hence can be implemented in the ${\mathsf{CONGEST}}$\ model with the same round complexity. To sum up, currently, there is no $o(\log \Delta)$-round algorithm for $(\Delta+1)$ coloring in any bandwidth-restricted model, not even in the congested clique. \paragraph{Derandomization of LCL in the Congested-Clique Model.} There exists a curious gap between the known complexities of randomized and deterministic solutions for local problems in the ${\mathsf{LOCAL}}$\ model (\cite{ChangKP16,ghaffari2016complexity}). Censor-Hillel et al. \cite{Censor-HillelPS17} initiated the study of \emph{deterministic} LCL algorithms in the congested clique model by means of derandomization. The main take-home message of \cite{Censor-HillelPS17} is as follows: for most of the classical LCL problems there are $\mathsf{poly}\log n$-round randomized algorithms (even in the ${\mathsf{CONGEST}}$\ model). For these algorithms, it is usually sufficient that the random choices made by vertices are \emph{almost} independent. This implies that each round of the randomized algorithm can be simulated by giving all nodes a shared random seed of $\mathsf{poly} \log n$ bits. To derandomize a single round of the randomized algorithm, nodes should compute (deterministically) a seed which is at least as ``good''\footnote{A random seed is usually shown to provide a large progress in expectation. The deterministically computed seed should provide a progress at least as large as the expected progress of a random seed.} as a random seed would be. To compute this seed, they need to estimate their ``local progress'' when simulating the random choices using that seed.
Combining the techniques of conditional expectation, pessimistic estimators and bounded independence leads to a simple ``voting''-like algorithm in which the bits of the seed are computed \emph{bit-by-bit}. Once all bits of the seed are computed, it is used to simulate the random choices of that round. For a recent work on other complexity aspects in the congested clique, see \cite{korhonen2017brief}. \subsection{Main Results and Our Approach} We show that the power of all-to-all communication compensates for the bandwidth restriction of the model: \begin{mdframed}[hidealllines=true,backgroundcolor=gray!25] \vspace{-8pt} \begin{theorem}\label{thm:maincol} There is a randomized algorithm that computes a $(\Delta+1)$ coloring in $O(\log\log \Delta \cdot \log^* n)$ rounds of the congested clique model, with high probability\footnote{As usual, by high probability we mean $1-1/n^c$ for some constant $c\geq 1$.}. \vspace{-3pt} \end{theorem} \end{mdframed} This significantly improves over the state-of-the-art $O(\log \Delta)$-round algorithm for $(\Delta+1)$ coloring in the congested clique model. It should also be compared with the round complexity of $2^{O(\sqrt{\log\log n})}$ in the ${\mathsf{LOCAL}}$\ model, due to \cite{CHP18}. As noted by the authors, reducing the ${\mathsf{LOCAL}}$\ complexity to below $O((\log\log n)^2)$ requires a radically new approach. Our $O(\log\log \Delta \cdot \log^* n)$-round algorithm is based on a recursive degree reduction technique which can be used to color any graph with $\Delta=\widetilde{O}(n^{1-o(1)})$ in essentially $O(\log^* \Delta)$ rounds.
\begin{mdframed}[hidealllines=true,backgroundcolor=gray!25] \vspace{-8pt} \begin{theorem}\label{thm:coleps} (i) For every $\epsilon \in (0,1)$, there is a randomized algorithm that computes a $(\Delta+1)$ coloring in $O(\log(1/\epsilon) \cdot \log^* n)$ rounds for graphs with $\Delta=O((n/\log n)^{1-\epsilon})$. (ii) This also yields a $(\Delta+\Delta^{1/2+o(1)})$ coloring in $O(\log^*\Delta)$ rounds, with high probability. \vspace{-3pt} \end{theorem} \end{mdframed} Claim (ii) improves over the $O(\Delta)$-coloring algorithm of \cite{hegeman2015lessons} that takes $O(\log\log \log n)$ rounds in \emph{expectation}. We also provide fast \emph{deterministic} algorithms for $(\Delta+1)$ list coloring. The state-of-the-art in the ${\mathsf{LOCAL}}$\ model is $\widetilde{O}(\sqrt{\Delta})+\log^* n$ rounds, due to Fraigniaud, Heinrich and Kosowski \cite{fraigniaud2016local}. \begin{mdframed}[hidealllines=true,backgroundcolor=gray!25] \vspace{-8pt} \begin{theorem}\label{thm:coldet} There is a deterministic algorithm that computes a $(\Delta+1)$ coloring in $O(\log \Delta)$ rounds of the congested clique model and an $O(\Delta^2)$ coloring in $O(1)$ rounds. \vspace{-3pt} \end{theorem} \end{mdframed} In \cite{Censor-HillelPS17}, a deterministic algorithm for $(\Delta+1)$ coloring in $O(\log \Delta)$ rounds was shown only for the case where $\Delta=O(n^{1/3})$. Here, it is extended to $\Delta=\Omega(n^{1/3})$. This is done by derandomizing a $(\Delta+1)$-list coloring algorithm which runs in $O(\log n)$ rounds. Similarly to \cite{Censor-HillelPS17}, we first show that this algorithm can be simulated when the random choices made by the nodes are pairwise independent. Then, we enjoy the small search space and employ the method of conditional expectations. Instead of computing the seed bit by bit, we compute it in chunks of $\lfloor \log n\rfloor$ bits at a time, by fully exploiting the all-to-all power of the model.
\paragraph{The Challenges and the Degree Reduction Technique.} Our starting observation is that the CLP algorithm \cite{CHP18} can be implemented in $O(\log^* \Delta)$ rounds in the congested clique model for $\Delta=O(\sqrt{n})$. When $\Delta=O(\sqrt{n})$, using Lenzen's routing algorithm \cite{lenzen2013route}, each node can learn, in $O(1)$ rounds, the palettes of all its neighbors along with the neighbors of its neighbors. Such knowledge is mostly sufficient for the CLP algorithm to go through. To handle large degree graphs, we design a graph sparsification technique that essentially reduces the problem of $(\Delta+1)$ coloring for an arbitrarily large $\Delta=\widetilde{O}(n^{1-\epsilon})$ into $\ell=O(\log(1/\epsilon))$ (non-independent) subproblems. In each subproblem, one has to compute a $(\Delta'+1)$ coloring for a subgraph with $\Delta'=O(\sqrt{n})$, which can be done in $O(\log^* \Delta)$ rounds using a modification of the CLP algorithm, which we describe later on. \setlength{\columnsep}{18pt}% \begin{wrapfigure}{r}{7cm} \centering \vspace{-5pt} \includegraphics[width=0.28\textwidth]{coloringcol.pdf} \vspace{-5pt} \caption{{\footnotesize Illustration of the recursive sparsification. Gray boxes correspond to subgraphs with maximum degree $O(\sqrt{\Delta})$.} \vspace{-2pt}} \label{fig:recursivecol} \end{wrapfigure} Since there are many dependencies between these $\ell$ sub-problems, our algorithm must solve them one by one, leading to a round complexity of $O(\log(1/\epsilon) \log^* \Delta)$. See \Cref{fig:recursivecol} for an illustration of the recursion levels. To gain intuition about our approach and the challenges involved, consider an input graph $G$ with maximum degree $\Delta=(n/\log n)^{1-\epsilon}$ and a palette $\mathsf{Pal}(G)=\{1,\ldots, \Delta+1\}$ given to each node in $G$.
A natural approach (also taken in \cite{hegeman2015lessons}) for handling a large degree graph is to decompose it (say, randomly) into $k$ vertex-disjoint graphs $G_1, G_2, \ldots , G_{k}$, allocate to each of the subgraphs a \emph{distinct} set of colors taken from $\mathsf{Pal}(G)$, and solve the problem recursively on each of them, enjoying (hopefully) smaller degrees in each $G_i$. Intuitively, assigning a \emph{disjoint} set of colors to each $G_i$ has the effect of \emph{removing} all edges connecting nodes in different subgraphs. Thus, the input graph $G$ is sparsified into a graph $G'=\bigcup G_i$ such that a legal coloring of $G'$ (with the corresponding palettes given to the nodes) is a legal coloring for $G$. The main obstacle in implementing this approach is that assigning a distinct set of $\Delta(G_i)+1$ colors to each of the $G_i$ subgraphs might be beyond the budget of $\Delta+1$ colors. Indeed, in \cite{hegeman2015lessons}, this approach led to an $O(\Delta)$ coloring rather than $(\Delta+1)$. To reduce the number of colors allocated to each subgraph $G_i$, it is desirable that the maximum degree $\Delta(G_i)$ be as small as possible, for each $G_i$. This is exactly the problem of $(k,p)$ \emph{defective coloring}, in which one needs to color the graph with $k$ colors such that the number of neighbors with the same color is at most $p$. To this point, the best defective coloring algorithm for large degrees is the randomized one: let each node pick a subgraph $G_i$ (i.e., a color in the defective coloring language) uniformly at random. By a simple application of the Chernoff bound, it is easy to see that the partitioning is ``almost'' perfect: w.h.p., for every $i$, $\Delta(G_i)\leq \Delta/k+\sqrt{\log n\cdot \Delta/k}$. Hence, allocating $\Delta(G_i)+1$ colors to each subgraph consumes $\Delta+\widetilde{O}(\sqrt{\Delta k})$ colors. To add insult to injury, this additive penalty of $\widetilde{O}(\sqrt{\Delta k})$ is only for one recursion call!
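The uniform random split just described can be illustrated with a short centralized sketch (names are ours and purely illustrative; the w.h.p. bound $\Delta(G_i)\leq \Delta/k+\sqrt{\log n\cdot \Delta/k}$ is a Chernoff statement and is not asserted by the code):

```python
import random

def random_split(adj, k, rng):
    """Place each node into one of k parts uniformly at random and return
    the partition together with the maximum degree of each induced
    subgraph, i.e. the number of colors k subgraphs would require."""
    part = {u: rng.randrange(k) for u in adj}
    max_deg = [0] * k
    for u in adj:
        induced = sum(1 for v in adj[u] if part[v] == part[u])
        max_deg[part[u]] = max(max_deg[part[u]], induced)
    return part, max_deg
```

Summing `d + 1` over the entries of `max_deg` gives the total color budget the naive approach would spend, which is exactly where the $\widetilde{O}(\sqrt{\Delta k})$ overshoot appears.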
It is interesting to note that the parameter $k$ -- the number of subgraphs (colors) -- plays a key role here. Having a large $k$ has the benefit of sharply decreasing the degree (i.e., from $\Delta$ to $\Delta/k$). However, it has the drawback of increasing the standard deviation and hence the total number of colors used. Despite these opposing effects, it seems that for any choice of $k$, increasing the number of colors to $\Delta+\Delta^{\epsilon}$ is unavoidable. Our approach bypasses this obstacle by partitioning only a large fraction of the vertices into small-degree subgraphs, \emph{but not all of them}. Keeping in mind that we can handle efficiently graphs with maximum degree $\sqrt{n}$, in every level of the recursion, roughly a $1-1/\sqrt{\Delta}$ fraction of the vertices are partitioned into subgraphs $G_1,\ldots, G_k$. Let $\Delta(G_i)$ be the maximum degree of $G_i$. The remaining vertices join a left-over subgraph $G^*$. The number of subgraphs, $k$, is chosen carefully so that, on the one hand, allocating $\Delta(G_i)+1$ colors to each of the $k$ subgraphs consumes at most $\Delta$ colors; and, on the other hand, the degree reduction in each recursion level is large enough. These subgraphs are then colored recursively, until all remaining subgraphs have degree $O(\sqrt{n})$. Once all vertices in these subgraphs are colored, the algorithm turns to color the left-over subgraph $G^*$. Since the maximum degree in $G^*$ is $O(\sqrt{n})$, it is tempting to use the CLP algorithm to complete the coloring, as this can be done in $O(\log^* n)$ rounds for such a bound on the maximum degree. This is not so immediate, for the following reasons. Although the degree of $v$ in $G^*$ is $O(\sqrt{n})$, the graph $G^*$ cannot be colored independently (as, at that point, we have run out of colors to allocate solely to $G^*$).
Instead, the coloring of $G^*$ should agree with the coloring of the rest of the graph, and each $v$ might have $\Omega(\Delta)=\widetilde{\Omega}(n^{1-\epsilon})$ neighbors in $G$. At first glance, it seems that this obstacle is easily solved by letting each $v \in G^*$ pick a subset of $\deg(v,G^*)+1=O(\sqrt{n})$ colors from its palette (i.e., removing the colors taken by its neighbors in $G\setminus G^*$). Now, one can consider only the graph $G^*$ with maximum degree $\sqrt{n}$, where each vertex has a palette of $\deg(v,G^*)+1$ colors. Unfortunately, this seemingly plausible approach has a subtle flaw: for the CLP algorithm, it is essential that each vertex receives a palette with \emph{exactly} $\Delta(G^*)+1$ colors. This is indeed crucial, and as noted by the authors, adapting their algorithm to a $(\deg+1)$ coloring algorithm is highly non-trivial and probably calls for a different approach. In our setting, allocating to each vertex $v \in G$ the exact same number of colors seems to be impossible, as the number of available colors of each $v$ depends on the number of its neighbors in $G\setminus G^*$, and this number has some fluctuations due to the random partitioning of the vertices. To get out of this impasse, we show that after coloring all vertices in $G\setminus G^*$, every vertex $v \in G^*$ has $r_v \in [\Delta(G^*) \pm (\Delta(G^*))^{3/5}]$ available colors in its palette, where $r_v \geq \deg(v,G^*)$. In other words, all vertices can be allocated ``almost'' the same number of colors, but not exactly the same. We then carefully revise the basic definitions of the CLP algorithm and show that the analysis still goes through (up to minor changes) for this narrow range of variation in the size of the palettes. \paragraph{Paper Organization.} In \Cref{sec:clpcc}, we explain how the CLP algorithm of \cite{CHP18} can be simulated in $O(\log^* \Delta)$ congested-clique rounds when $\Delta=O(\sqrt{n})$.
In \Cref{sec:deltathreefour}, we illustrate the degree-reduction technique on the case where $\Delta=O((n/\log n)^{3/4})$. \Cref{sec:deltaeps} extends this approach to $\Delta=O((n/\log n)^{1-\epsilon})$ for any $\epsilon \in (0,1)$, and \Cref{sec:generalcol} handles the general case and provides the complete algorithm. Finally, \Cref{sec:detcol} discusses deterministic coloring algorithms. \section{The Chang-Li-Pettie (CLP) Alg. in the Congested Clique}\label{sec:clpcc} \paragraph{High-level Description of the CLP Alg. in the ${\mathsf{LOCAL}}$\ Model.} In the description below, we focus on the main randomized part of the CLP algorithm \cite{CHP18}, and start by providing key definitions and ideas from the Harris-Schneider-Su algorithm \cite{harris2016distributed}. The Harris-Schneider-Su algorithm is based on partitioning the graph into an $\epsilon$-sparse subgraph and a collection of vertex-disjoint $\epsilon$-dense components, for a given input parameter $\epsilon$. Since the CLP algorithm extends this partitioning, we next formally provide the basic definitions from \cite{harris2016distributed}. For an $\epsilon \in (0,1)$, an edge $e=(u,v)$ is an $\epsilon$-friend edge if $|N(u) \cap N(v)|\geq (1-\epsilon)\cdot \Delta$. The endpoints of an $\epsilon$-friend edge are $\epsilon$-friends. A vertex $v$ is $\epsilon$-dense if $v$ has at least $(1-\epsilon)\Delta$ $\epsilon$-friends, otherwise it is $\epsilon$-sparse. A key structure that arises from the definition of $\epsilon$-dense vertices is that of an $\epsilon$-\emph{almost clique}, which is a connected component of the subgraph induced by the $\epsilon$-dense vertices and the $\epsilon$-friend edges. The dense components, the $\epsilon$-\emph{almost} cliques, have some nice properties: each component $C$ has at most $(1+\epsilon)\Delta$ vertices, each vertex $v \in C$ has $O(\epsilon \Delta)$ neighbors outside $C$ (called \emph{external} neighbors) and $O(\epsilon\Delta)$ vertices in $C$ which are not its neighbors.
In addition, $C$ has weak diameter at most $2$. Coloring the dense vertices consists of $O(\log_{1/\epsilon} \Delta)$ phases. The efficient coloring of dense regions is made possible by generating a random proper coloring inside each clique so that each vertex has only a small probability of receiving the same color as one of its external neighbors. To do that, in each cluster a random permutation is computed and each vertex selects a tentative color from its palette, excluding the colors selected by lower-rank vertices. Since each component has weak diameter at most $2$, this process is implemented in $2$ rounds of the ${\mathsf{LOCAL}}$\ model. The remaining \emph{sparse} subgraph is colored using a Schneider-Wattenhofer style algorithm \cite{schneider2010new} within $O(\log(1/\epsilon))$ rounds. In the Chang-Li-Pettie algorithm, the vertices are partitioned into $\ell=\lceil \log\log \Delta \rceil$ layers in decreasing level of density. This hierarchical partitioning is based on a sequence of $\ell$ sparsity thresholds $\epsilon_1,\ldots,\epsilon_\ell$ where $\epsilon_i=\sqrt{\epsilon_{i-1}}$. Roughly speaking, level $i$ consists of the vertices which are $\epsilon_i$-dense but $\epsilon_{i-1}$-sparse. Instead of coloring the vertices layer by layer, the algorithm partitions the vertices in level $i$ into large and small components and partitions the layers into $O(\log^* \Delta)$ strata; vertices in the same stratum are colored simultaneously. The algorithm colors vertices in $O(\log^* \Delta)$ phases, giving priority to vertices in small components. The procedures that color the dense vertices are of the same flavor as those of Harris-Schneider-Su. The key benefit of the hierarchical structure is that the dense-coloring procedure is applied for only $O(1)$ phases on each stratum, rather than for $O(\log_{1/\epsilon}\Delta)$ phases as in \cite{harris2016distributed}. \paragraph{An $O(\log^* \Delta)$-Round Alg.
for $\Delta=O(\sqrt{n})$ in the Congested Clique.} We next observe that the randomized part of the CLP algorithm \cite{CHP18} can be implemented in the congested clique model in $O(\log^* \Delta)$ rounds when $\Delta=O(\sqrt{n})$. We note that we obtain a round complexity of $O(\log^* \Delta)$ rather than $O(\log^* n)$ as in \cite{CHP18}, due to the fact that the only part of the CLP algorithm that requires $O(\log^* n)$ rounds is the coloring of a subgraph with constant maximum degree. In the congested-clique model, such a step can be implemented in $O(1)$ rounds using Lenzen's routing algorithm. We show: \begin{theorem}\label{thm:colorsqrtn} For every graph with maximum degree $\Delta=O(\sqrt{n})$, there is an $O(\log^* \Delta)$-round randomized algorithm that computes a $(\Delta+1)$-list coloring in the congested clique model. \end{theorem} The main advantage of having small degrees is that each node can collect its $2$-neighborhood in $O(1)$ rounds (i.e., using Lenzen's routing \cite{lenzen2013route}). As we will see, this is sufficient in order to simulate the CLP algorithm in $O(\log^* \Delta)$ rounds. The hierarchical decomposition of the vertices depends on the computation of $\epsilon$-dense vertices. By collecting the neighbors of its neighbors, every vertex can learn its $\epsilon$-friends and, based on that, deduce whether it is an $\epsilon$-dense vertex for every $\epsilon$. In particular, for every edge $(u,v)$, $v$ can learn the minimum $i$ such that $u$ and $v$ are $\epsilon_i$-friends (i.e., the ``threshold" $\epsilon$). To allow each vertex $v$ to compute the $\epsilon$-almost cliques to which it belongs, we do as follows. Each vertex $v$ sends to each of its neighbors, for every $u \in N(v)$, the minimum $\epsilon_i$ such that $u$ and $v$ are $\epsilon_i$-friends.
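The per-edge ``threshold" computation is simple once $v$ knows $N(u)$ for each neighbor $u$; a centralized sketch (the function name and the increasing sequence `eps_seq` are our illustrative choices):

```python
# Sketch: once v has collected its 2-neighborhood, it can compute, for
# each neighbor u, the smallest index i with
# |N(u) ∩ N(v)| >= (1 - eps_i) * delta, i.e., the "threshold" sparsity
# parameter of the edge (u, v). Since eps_seq is increasing, the first
# index satisfying the test is the minimum one.
def friend_threshold(adj, delta, eps_seq, u, v):
    common = len(adj[u] & adj[v])
    for i, eps in enumerate(eps_seq, start=1):
        if common >= (1 - eps) * delta:
            return i
    return None  # u and v are not eps_i-friends for any i
```

For example, on a triangle ($\Delta=2$) with thresholds $(0.25, 0.5, 0.75)$, any edge has one common neighbor, so the minimum index is $2$.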
Since the weak diameter of each almost-clique is at most $2$, each vertex has collected all the required information from its $2^{nd}$ neighborhood to locally compute its $\epsilon_i$-almost cliques for every $\epsilon_i$. Overall, each vertex sends $O(\Delta)$ messages and receives $O(\Delta^2)=O(n)$ messages, so collecting this information can be done in $O(1)$ rounds for all nodes using Lenzen's routing algorithm. The next obstacle is the simulation of the algorithm that colors the $\epsilon$-dense vertices. Since each $\epsilon$-almost clique $C$ has at most $(1+\epsilon)\Delta=O(\sqrt{n})$ vertices, we can make the leader of each such $C$ learn the palettes of all the vertices in its clique, as well as those of their neighbors, in $O(1)$ rounds. The leader can then locally simulate the dense-coloring procedure and notify each of its almost-clique vertices of its output color. Finally, coloring the sparse regions in a Schneider-Wattenhofer style uses messages of size $O(\Delta)$, and hence each vertex is the target of $O(\Delta^2)=O(n)$ messages, which again can be implemented in $O(1)$ rounds. By the above description, we also have: \begin{corollary}\label{cor:colorsqrtnparallel} Given $q$ \emph{vertex}-disjoint subgraphs $G_1,\ldots, G_q$, each with maximum degree $\Delta=O(\sqrt{n})$, a $(\Delta+1)$-coloring can be computed in $O(\log^* \Delta)$ rounds for all subgraphs simultaneously. \end{corollary} A more detailed description of the algorithm and the proof of Cor.~\ref{cor:colorsqrtnparallel} appear in \Cref{sec:clpdetails}. \def\APPENDFULLCLPSMALL{ The hierarchy is based on a sequence of $\ell=O(\log\log \Delta)$ sparsity parameters $(\epsilon_1,\ldots, \epsilon_\ell)$ and a set of vertices $V^*$ that are not yet colored. The sequence of sparsity parameters satisfies $\epsilon_1=\Delta^{-1/10}$, $\epsilon_i=\sqrt{\epsilon_{i-1}}$ and $\epsilon_\ell=1/K$ for a large enough constant $K$.
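For concreteness, a short sketch of how quickly this sequence grows, yielding $O(\log\log \Delta)$ layers, together with the stratum boundaries following the rule $\xi_k=1/\log(1/\xi_{k-1})$, yielding $O(\log^* \Delta)$ strata. The constant $K=16$, base-2 logarithms, and the concrete $\Delta$ in the example are our illustrative assumptions:

```python
import math

# Illustrative computation of the hierarchy parameters: layer thresholds
# eps_i (repeated square roots, so O(log log Delta) of them) and stratum
# boundaries xi_k with xi_k = 1/log(1/xi_{k-1}) (so O(log* Delta) of them).
def hierarchy_params(delta, K=16):
    eps = [delta ** (-1 / 10)]          # eps_1 = Delta^{-1/10}
    while eps[-1] < 1 / K:              # eps_i = sqrt(eps_{i-1}), up to ~1/K
        eps.append(math.sqrt(eps[-1]))
    xi = [eps[0]]                       # xi_1 = eps_1
    while xi[-1] < 1 / K:               # xi_k = 1 / log(1/xi_{k-1})
        xi.append(1 / math.log2(1 / xi[-1]))
    return eps, xi
```

For $\Delta=2^{200}$ this produces only $4$ layer thresholds and $3$ stratum boundaries, illustrating the doubly and iterated logarithmic growth.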
For a sparsity parameter $\epsilon_i$, let $V^d_{\epsilon_i},V^{s}_{\epsilon_i}$ be the sets of vertices which are $\epsilon_i$-dense (resp., $\epsilon_i$-sparse). Based on its $2$-hop neighborhood, each vertex locally computes whether it is $\epsilon_i$-dense (in $V^d_{\epsilon_i}$) or $\epsilon_i$-sparse. These $\ell=O(\log\log \Delta)$ bits of information (i.e., there are $\ell$ sparsity parameters) are exchanged between neighbors on $G$ in a single round\footnote{In fact, it is even sufficient to send the first $i$ such that $v$ is $\epsilon_i$-dense.}. The sparsity of the vertices defines a hierarchy of $\ell$ \emph{levels} $V_1, \ldots, V_\ell$, where $V_1=V^* \cap V_{\epsilon_1}^d$, $V_i=V^* \cap (V_{\epsilon_i}^d \setminus V_{\epsilon_{i-1}}^d)$ and $V_{sp}=V^* \cap V_{\epsilon_\ell}^{s}$. These $\ell$ levels are further grouped into $s=O(\log^* \Delta)$ \emph{strata} $W_1,\ldots, W_s$, where $W_1=V_1$ and $$W_k=\bigcup_{i:\, \epsilon_i \in (\xi_{k-1},\xi_k]} V_i \mbox{~~where~~} \xi_1=\epsilon_1 \mbox{~and~} \xi_k=1/\log(1/\xi_{k-1})~.$$ The $\epsilon_i$-\emph{almost} cliques for every $\epsilon_i$ can be computed using the $O(1)$-round connectivity algorithm of \cite{Jurdzinski018}. That is, the $\epsilon_i$-\emph{almost} cliques are the connected components of the graph $G[V_{\epsilon_i}^d]$. Thus, in $O(\log\log \Delta)$ rounds, each vertex knows the members of its $\epsilon_i$-almost clique for every $\epsilon_i$. Each $i$-layer $V_i$ is partitioned into \emph{blocks} based on these clique structures. In particular, letting $\{C_1, C_2, \ldots \}$ be the $\epsilon_i$-almost cliques, each clique $C_j$ defines a block $B_j=C_j \cap V_i$; that is, the block $B_j$ contains the subset of vertices in $C_j$ that are $\epsilon_i$-dense but are \emph{not} $\epsilon_{i-1}$-dense. These blocks can be easily computed in $O(1)$ rounds as they only depend on the layer of the vertex and on the knowledge of the cliques.
This block collection $(B_{i,1},B_{i,2}, \ldots )$ is a partition of $V_i$; we call a block $B_{i,j}$ a layer-$i$ block (and also a stratum-$k$ block for the appropriate $k$). The algorithm uses a tree structure on these blocks: a layer-$i$ block $B$ is a \emph{descendant} of a layer-$j$ block $B'$ if $j>i$ and both $B, B'$ are subsets of the same $\epsilon_{j}$-almost clique. The root consists of the subset of sparse vertices, $V_{\epsilon_\ell}$. In $O(1)$ rounds, each vertex $v$ knows, for each vertex $u \in G$, the first index $i$ such that $u$ is $\epsilon_i$-dense. In addition, in $O(\log\log\Delta)$ rounds, each vertex can tell the $\epsilon_i$-almost clique ID of all the vertices in the graph, for every $i \in \{1,\ldots, \ell\}$. Thus the entire block tree $\mathcal{T}$ is known to all the vertices. The algorithm colors vertices according to their stratum. The vertices in the stratum are divided into two sets: those belonging to \emph{large} blocks and those belonging to \emph{small} blocks. A stratum-$k$ block $B$ is a \emph{large} block if $|B|\geq \Delta/\log^2(1/\xi_k)$ and there is no other stratum-$k'$ block $B'$ for $k'\geq k$ such that $B$ is a descendant of $B'$ and $|B'|\geq \Delta/\log^2(1/\xi_{k'})$. Otherwise, $B$ is \emph{small}. Define $V_i^S,V_i^L,W_k^S$ and $W_k^L$ to be the sets of all vertices in layer-$i$ small blocks, layer-$i$ large blocks, stratum-$k$ small blocks, and stratum-$k$ large blocks, respectively. Finally, the algorithm defines a partition of $W_k$ into super-blocks $(R_1,R_2, \ldots )$ in the following manner. Let $i',i'+1,\ldots, i$ be the set of layers of stratum $k$. Let $C_1,\ldots, C_t$ be the set of $\epsilon_i$-almost cliques. Then for each $C_j$, define $R_j=C_j \cap W_k$ as a \emph{stratum}-$k$ super-block. \paragraph{Main Steps of the Coloring Algorithm.} The first step of the algorithm applies Alg. $\mathsf{OneShotColoring}$ for $O(1)$ rounds, which colors a small constant fraction of the vertices.
In this algorithm the vertices simply send one color to their neighbors, and hence it is trivially implemented in $O(1)$ rounds. Let $V^*$ be the remaining uncolored vertices. By the description above, the vertices compute the hierarchical decomposition into layers, blocks, strata and super-blocks. In addition, all vertices know the virtual tree $\mathcal{T}$ which defines the relation between the blocks. We have that $V^*=(W_1^S, \ldots, W_s^S,W_1^L,\ldots, W_s^L,V_{sp})$. The vertices are colored in $s+2$ stages. First, all the small blocks are colored in $s$ phases, stratum by stratum. In other words, all the vertices in small blocks are colored in the order $W_s^S,\ldots, W_1^S$. Next, the algorithm colors the vertices of $W'=\bigcup_{j=2}^s W^L_j$, i.e., all the vertices in large blocks, except for those belonging to blocks of the first layer $V_1$. Lastly, the vertices of the large blocks in $W_1^L$ are colored. At the end of this hierarchy-based coloring, there is a (small) subset of vertices that failed to be colored, as well as the subset of sparse vertices $V_{sp}$. These vertices are colored by a similar procedure in a final cleanup phase. At a high level, the CLP algorithm consists of two main procedures: a coloring procedure for the dense vertices and a coloring procedure for the sparse vertices. \paragraph{Coloring $\epsilon$-Dense Vertices.} In the general setting, one is given $g$ subgraphs $S_1,\ldots, S_g$ (e.g., a collection of $\epsilon$-almost cliques), each with weak diameter $2$. The coloring algorithm attempts to legally color a large fraction (as a function of the density) of the vertices in these subgraphs. In Alg. $\mathsf{DenseColoringStep}$ of CLP, all vertices agree on a value $Z_{ex}$ which is a lower bound on the number of excess colors w.r.t.\ $S_j$. In addition, each $v \in S_j$ is associated with a parameter $D_v$. The important parameter is $\delta_v=D_v /Z_{ex}$.
The algorithm attempts to color a $(1-\delta_v)$-fraction of the vertices. To color $S_j$, the vertices are permuted in increasing order of their $D$-value, and are colored one by one based on the order given by the permutation. In the ${\mathsf{LOCAL}}$\ model, since each subgraph $S_j$ has weak diameter $2$, this coloring step can be done in $O(1)$ rounds. In the congested clique model, we cannot enjoy the small diameter of each clique. Instead, we use the fact that each of the subgraphs $S_1,\ldots, S_g$ is \emph{small}. Since each $S_j$ is a \emph{super-block}, it is a subset of some $\epsilon_i$-almost clique $C$, and each such clique has at most $(1+\epsilon)\Delta=O(\Delta)$ vertices. By letting each vertex $v \in S_j$ send its palette of colors and its neighbors to the leader $v_j$ of $S_j$, the leader can locally simulate this permutation-based coloring. Overall, each leader needs to receive $O(\Delta^2)=O(n)$ messages, which can be done in $O(1)$ rounds for all the subgraphs $S_j$ simultaneously, using Lenzen's routing algorithm. The vertices in the small blocks of each stratum $W_k$ for $k\geq 2$ are colored by $O(1)$ applications of Alg. $\mathsf{DenseColoringStep}$. At that point, each vertex $v \in V^*$ will have at most $O(\epsilon^5 \Delta)$ uncolored neighbors in each layer $V_i$, with probability at least $1-\exp(-\Omega(\mathsf{poly}(\Delta)))$. When $\Delta=\Omega(\log^4 n)$, after applying $\mathsf{DenseColoringStep}$ $O(1)$ times, the remaining uncolored part is partitioned into two subgraphs: the first has maximum degree $O(1)$ and hence can be colored locally in $O(1)$ rounds, by collecting it at one leader using Lenzen's routing. The second subgraph has, w.h.p., the property that each of its connected components has size $\mathsf{poly}\log n$. Then, by computing the connected components and collecting the topology of each connected component at the component's leader, this remaining graph can be colored in $O(1)$ rounds.
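The leader-side simulation of the permutation-based coloring can be sketched as follows. This is a simplified illustration: we use a uniformly random order in place of the $D$-value order of the actual procedure, and the function name and data layout are ours.

```python
import random

# Simplified leader-side sketch of one dense-coloring step inside an
# almost-clique: vertices pick colors one by one in a (here: uniformly
# random) order, each avoiding the colors already taken by earlier
# vertices adjacent to it. Vertices left uncolored retry in a later phase.
def dense_coloring_step(component, adj, palette, rng):
    order = list(component)
    rng.shuffle(order)
    color = {}
    for v in order:
        taken = {color[u] for u in adj[v] if u in color}
        free = [c for c in palette[v] if c not in taken]
        if free:
            color[v] = free[0]
    return color
```

When every vertex of the component has a palette of at least $\deg+1$ colors (as in a clique on $4$ vertices with $4$ colors each), every vertex gets colored and the resulting coloring is proper by construction.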
Coloring the vertices in large blocks (other than those of layer $1$) is done by applying Alg. $\mathsf{DenseColoringStep}$ $6$ times. Coloring the vertices in $V_1$ that belong to large blocks is more involved, but with respect to its implementation in the congested clique, it uses the same tools as above. I.e., after applying $\mathsf{DenseColoringStep}$ several times, the remaining subgraph can be colored locally in $O(1)$ rounds (due to small components or constant maximum uncolored degree). \paragraph{Coloring Sparse Vertices and Vertices with Many Excess Colors.} The main procedure colors most of the vertices in $W_1,\ldots, W_s$ and leaves only a small fraction of uncolored vertices $U$, each with a large number of excess colors. The vertices in $U \cup V_{sp}$ are colored as follows. Consider first the set of vertices in $U$, and orient the edges of $G[U]$ from the sparser endpoint to the denser endpoint. Since each vertex knows the layer of its neighbors, this is done in one round. If both endpoints are in the same layer, the edge is oriented towards the endpoint of smaller ID. The coloring of the previous steps guarantees that the outgoing degree of each vertex $v \in V_i$ is at most $O(\epsilon_{i-1}^{2.5}\cdot \Delta)$. By the initial coloring step, a vertex has $\Omega(\epsilon_{i-1}^2 \Delta)$ excess colors in its palette. The vertices of $U$ are then colored using Alg. $\mathsf{ColorBidding}$, which runs in $O(\log^* \Delta)$ rounds in the ${\mathsf{LOCAL}}$\ model (see Section 3.3 in \cite{CHP18}). It is easy to see that each vertex sends messages of size $O(\log n/\epsilon)$, and hence it can be easily implemented in the congested clique model within the same number of rounds when $\Delta=O(\sqrt{n})$. The set of sparse vertices $V_{sp}$ is colored in a similar manner. All vertices in $U \cup V_{sp}$ that failed to be colored are added to the set of bad vertices $V_{bad}$.
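The orientation rule for $G[U]$ is easy to state as a local function of the two endpoints (a sketch; the tuple convention `(tail, head)` and the `layer` map are our illustrative choices — recall that a lower layer index means denser):

```python
# Sketch of the orientation rule: each edge points from its sparser
# endpoint to its denser one (lower layer index = denser); if both
# endpoints lie in the same layer, the head is the endpoint of smaller ID.
def orient_edge(u, v, layer):
    if layer[u] != layer[v]:
        return (u, v) if layer[u] > layer[v] else (v, u)
    return (max(u, v), min(u, v))  # tie: head is the smaller ID
```

Since both endpoints apply the same rule to the same information, the two endpoints of an edge always agree on its orientation without any communication beyond the initial layer exchange.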
Overall, coloring the dense regions is done in $O(\log^* \Delta)$ rounds, and the clean-up phase of coloring the sparse region and the remaining uncolored vertices from strata at least $2$ is done in $O(\log^* \Delta)$ rounds as well. Coloring the remaining uncolored vertices from stratum $1$ is done in $O(1)$ rounds (this is the only step of the CLP algorithm that requires $O(\log^* n)$ rounds). We next turn to prove \Cref{cor:colorsqrtnparallel} by showing that given a collection of vertex-disjoint subgraphs with maximum degree $O(\sqrt{n})$, all subgraphs can be colored simultaneously in $O(\log^* \Delta)$ rounds. There are three main subroutines. Coloring the dense regions requires collecting the $2$-neighborhood topology, which can be done in $O(1)$ rounds as long as $\Delta^2=O(n)$. Coloring the sparse regions and the remaining uncolored vertices from strata at least $2$ also only requires a small maximum degree. Finally, consider the coloring of the remaining uncolored vertices from stratum $1$. There are two types of left-over subgraphs. One has constant maximum degree -- all these subgraphs can be collected at a single leader in $O(1)$ rounds. The other contains components of poly-logarithmic size; since we color such a subgraph by collecting information at the local leader of each component, this step can be done for all the vertex-disjoint subgraphs simultaneously. \paragraph{Handling Non-Equal Palette Sizes for $\Delta=O(\sqrt{n})$.} The CLP algorithm assumes that each vertex is given a list of \emph{exactly} $(\Delta+1)$ colors. Our coloring algorithm requires a more relaxed setting where each vertex $v$ is allowed to be given a list of $r_v \in [\Delta-\Delta^{3/5},\Delta+1]$ colors where $r_v \geq \deg(v,G)+1$.
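How a vertex forms such a shortened list is straightforward; a minimal sketch, assuming the vertex already knows the colors taken by its colored neighbors (the function name and arguments are ours):

```python
# Sketch of the palette-reduction step: drop the colors already taken by
# the colored neighbors, then truncate to at most delta_bound + 1 colors.
# The concentration argument of the analysis guarantees that, w.h.p., the
# result still has at least delta_bound - delta_bound**(3/5) colors.
def reduced_palette(full_palette, neighbor_colors, delta_bound):
    free = [c for c in full_palette if c not in neighbor_colors]
    return free[:delta_bound + 1]
```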
In this subsection we show: \vspace{-5pt} \begin{lemma}\label{lem:modclp} Given a graph $G$ with $\Delta=O(\sqrt{n})$, if every vertex $v$ has a palette with $r_v\geq \deg(v,G)+1$ colors and $r_v \in [\Delta-\Delta^{3/5},\Delta+1]$, then a list coloring can be computed in $O(\log^* \Delta)$ rounds in the congested clique model. \end{lemma} The key modification for handling non-equal palette sizes is in the definition of an $\epsilon$-friend (which affects the entire decomposition of the graph). Throughout, let $q=\Delta^{3/5}$ and say\footnote{The value of $q$ is chosen to be a bit above the standard deviation of $\sqrt{\Delta\log n}$ that will occur in our algorithm.} that $u,v$ are $(\epsilon,q)$-friends if $|N(u)\cap N(v)|\geq (1-\epsilon)\cdot (\Delta-q)$. Clearly, if $u,v$ are $\epsilon$-friends, they are also $(\epsilon,q)$-friends. A vertex $v$ is $(\epsilon,q)$-\emph{dense} if it has at least $(1-\epsilon)\cdot (\Delta-q)$ neighbors which are $(\epsilon,q)$-friends. An $(\epsilon,q)$-almost clique is a connected component of the subgraph induced by the $(\epsilon,q)$-dense vertices and their $(\epsilon,q)$-friend edges. We next observe that for the $\epsilon$-values used in the CLP algorithm, the converse is also true up to some constant. \begin{observation}\label{obs:nonep} For any $\epsilon \in [\Delta^{-1/10},K^{-1}]$, where $K$ is a large constant, and for $q=\Delta^{3/5}$, it holds that if $u,v$ are $(\epsilon,q)$-friends, then they are $(2\epsilon)$-friends. Also, if $v$ is $(\epsilon,q)$-dense, then it is $2\epsilon$-dense. \end{observation} \begin{proof} For any $\epsilon_1\leq \epsilon_2$, it holds that $(1-\epsilon_1)/\epsilon_1 \geq (1-\epsilon_2)/\epsilon_2$. Hence, for any $\epsilon \in [\Delta^{-1/10},K^{-1}]$, we have $(1-\epsilon)/\epsilon \leq \Delta^{1/10}\cdot (1-\Delta^{-1/10})\leq \Delta^{2/5},$ yielding that $(1-\epsilon)q \leq \epsilon \cdot \Delta$ for any $\epsilon \in [\Delta^{-1/10},K^{-1}]$.
Since $u,v$ are $(\epsilon,q)$-friends, we have $|N(u)\cap N(v)|\geq (1-\epsilon)\cdot \Delta-(1-\epsilon)\cdot q \geq (1-2\epsilon) \cdot \Delta.$ If $v$ is $(\epsilon,q)$-dense, then it has at least $(1-\epsilon)(\Delta-q)\geq (1-2\epsilon)\Delta$ neighbors which are $2\epsilon$-friends. \end{proof} In \Cref{sec:modclp}, we provide a detailed proof of \Cref{lem:modclp}. \def\APPENDUNBAL{ \paragraph{Hierarchy, Blocks and Strata.} The entire hierarchy of levels $V_1,\ldots, V_{\ell}$ is based on the definition of $(\epsilon,q)$-dense vertices (rather than $\epsilon$-dense vertices). Let $\bar{d}_{S,V'}(v)=|(N(v) \cap V')\setminus S|$ be the external degree of $v$ with respect to $S,V'$. Let $a_S(v)=|S \setminus (N(v)\cup \{v\})|$ be the anti-degree of $v$ with respect to $S$. Let $V_{\epsilon,q}^d, V_{\epsilon,q}^s$ be the sets of vertices which are $(\epsilon,q)$-dense (resp., $(\epsilon,q)$-sparse). By \Cref{obs:nonep}, for every $\epsilon \in [\Delta^{-1/10},1/K]$, it also holds: \begin{observation} For any $(\epsilon,q)$-almost clique $C$, there exist an $\epsilon$-almost clique $C_{\epsilon}$ and a $(2\epsilon)$-almost clique $C_{2\epsilon}$ such that $C_{\epsilon}\subseteq C \subseteq C_{2\epsilon}$. \end{observation} As a result, Lemma 1 of \cite{CHP18}, which describes the basic properties of cliques, holds up to small changes in the constants. \begin{lemma}[Adapted from Lemma 1 of \cite{CHP18}]\label{lem:epsqclique} The following holds for an $(\epsilon,q)$-almost clique $C$, $\epsilon \leq 1/10$: (i) the external degree of $v \in C$ w.r.t.\ $C$ is $\bar{d}_{C,V^d_{\epsilon,q}}(v)\leq 2\epsilon\Delta$, (ii) $a_C(v)\leq 6\epsilon \Delta$, (iii) $|C|\leq (1+6\epsilon)\cdot \Delta$, and (iv) $\mbox{\rm dist}(u,v,G)\leq 2$ for every $u,v \in C$, i.e., $C$ has \emph{weak} diameter $2$. \end{lemma} The notions of strata, blocks and super-blocks are all trivially extended using the definition of $(\epsilon,q)$-dense vertices.
Note that since each vertex has a palette of at least $\deg+1$ colors, the excess of colors is non-decreasing throughout the coloring algorithm. \paragraph{Initial Coloring Step.} In this step, Alg. $\mathsf{OneShotColoring}$ is applied for $O(1)$ iterations. We change the coloring probability of Alg. $\mathsf{OneShotColoring}$ to be $p \in (0,1/8)$ rather than $p \in (0,1/4)$. We then need to re-prove Lemma 5 of \cite{CHP18} and show that after applying $O(1)$ rounds of Alg. $\mathsf{OneShotColoring}$, every $(\epsilon,q)$-sparse vertex with $\deg(v)\geq 0.9\Delta$ satisfies: (i) it has at least $\Delta/2$ uncolored neighbors with high probability, and (ii) it has $\Omega(\epsilon^2 \cdot \Delta)$ excess colors with probability $1-\exp(-\Omega(\epsilon^2 \Delta))$. The proof is exactly the same up to small changes in the constants. In Lemma 11 of \cite{CHP18}, we then get that $Pr[E_c]\leq p|S|/(\Delta-q)\leq 4p\leq 1/2$ as $p \leq 1/8$. In Lemma 12 of \cite{CHP18}, the condition on $Q$ is that each $u_i \in S$ satisfies $|\Phi(u_i)\cap Q|\geq (1-\epsilon/2)\cdot (\Delta-q)$. This only changes a constant in the proof, where $Pr[X_i=1]\geq p/4$. In Lemma 14, since $v$ is an $(\epsilon,q)$-sparse vertex, it is also an $\epsilon$-sparse vertex, and thus the lemma again holds up to small changes in the constants. For each $(\epsilon,q)$ non-friend $u_i \in S$, we have $|S \setminus (S' \cup N(u_i))|\geq (\Delta-q)(1-\epsilon/5-\epsilon/100(1-\epsilon))\geq \epsilon \cdot \Delta/4$. Again, all of this affects only the constants in the probabilities. \paragraph{Main Coloring Step.} Consider the coloring of the small blocks, which is based on Lemma 2 and Lemma 3 of \cite{CHP18}. Both of these lemmas hold up to small changes in the constants. In particular, in Lemma 3, we are now given an $(\epsilon_i,q)$-almost clique $C$, and $C_1,\ldots, C_l$ are the $(\epsilon_{i-1},q)$-almost cliques contained in $C$.
Since an $(\epsilon,q)$-almost clique is contained in a $2\epsilon$-almost clique, by using \Cref{lem:epsqclique}, we have that either $l=1$ or $\sum_{j=1}^l |C_j|\leq 2(6\epsilon_i+2\epsilon_{i-1})\cdot \Delta\leq 14 \epsilon_i \cdot \Delta$. In the proof of Lemma 2, to bound $A_2$, using the bound on the external degree in \Cref{lem:epsqclique}, we get that $A_2\leq 4 \epsilon_{\ell}\Delta$. Coloring the large blocks and the sparse vertices follows the exact same analysis as in \cite{CHP18}. The reason is that the coloring of the large blocks uses only the bounds on the external degrees and the clique size (as shown in \Cref{lem:epsqclique}), and those properties hold up to insignificant changes in the constants (in any case, the algorithms for the sparse and dense components use the $O$-notation on these bounds). \Cref{lem:modclp} follows by combining the above with the implementation of the CLP algorithm in the congested clique model of \Cref{thm:colorsqrtn}. \section{$(\Delta+1)$-Coloring for $\Delta=\Omega(\sqrt{n})$} In this section, we describe a new recursive degree-reduction technique. As a warm-up, we start with $\Delta=O((n/\log n)^{3/4})$. We make use of the following fact. \begin{theorem} (\textbf{Simple Corollary of the Chernoff Bound})\label{thm:cher} Suppose $X_1$, $X_2$, \dots, $X_\ell \in [0,1]$ are independent random variables, and let $X=\sum_{i=1}^{\ell} X_i$ and $\mu = \mathbb{E}[X]$. If $\mu \geq 5 \log n$, then w.h.p. $X \in \mu \pm \sqrt{5\mu\log n}$, and if $\mu < 5 \log n$, then w.h.p. $X \leq \mu +5\log n$. \end{theorem} \subsection{An $O(\log^* \Delta)$-round algorithm for $\Delta=O((n/\log n)^{3/4})$}\label{sec:deltathreefour} The algorithm partitions $G$ into $O(\Delta^{1/3})$ subgraphs as follows. Let $\ell=\lceil \Delta^{1/3} \rceil$. We define $\ell+1$ subsets of vertices $V_1,\ldots, V_\ell, V^*$.
A vertex joins each $V_i$ with probability $$p_i=1/\ell-2\sqrt{5\log n}/(\Delta^{1/3}\cdot \ell),$$ for every $i \in \{1,\ldots, \ell\}$, and it joins $V^*$ with the remaining probability of $p^*=2\sqrt{5\log n}/\Delta^{1/3}$. Let $G_i=G[V_i]$ be the induced subgraph for every $i \in \{1,\ldots, \ell,*\}$. Using the Chernoff bound of \Cref{thm:cher}, the maximum degree $\Delta'$ in each subgraph $G_i$, $i \in \{1,\ldots,\ell\}$, satisfies w.h.p.: $$\Delta'\leq \Delta/\ell-2\Delta^{2/3}\sqrt{5\log n}/\ell+\sqrt{5\Delta \log n/\ell}\leq \Delta/\ell-1~.$$ In the first phase, all subgraphs $G_1, \ldots, G_{\ell}$ are colored independently and simultaneously. This is done by allocating a distinct set of $(\Delta'+1)$ colors to each of these subgraphs. Overall, we allocate $\ell\cdot (\Delta'+1)\leq \Delta$ colors. Since $\Delta'=O(\Delta^{2/3})=O(\sqrt{n})$, we can apply the $(\Delta'+1)$-coloring algorithm of \Cref{cor:colorsqrtnparallel} on all the graphs $G_1,\ldots, G_{\ell}$ simultaneously. Hence, all the subgraphs $G_1, \ldots, G_{\ell}$ are colored in $O(\log^* \Delta)$ rounds. \paragraph{Coloring the remaining left-over subgraph $G^*$.} The second phase of the algorithm completes the coloring for the graph $G^*$. This coloring should agree with the colors of $G\setminus G^*$ computed in the previous phase. Hence, we need to color $G^*$ using a \emph{list}-coloring algorithm. We first show that w.h.p. the maximum degree $\Delta^*$ in $G^*$ is $O(\sqrt{n})$. The probability of a vertex to be in $G^*$ is $p^*=2\sqrt{5\log n}/\Delta^{1/3}$. By the Chernoff bound of \Cref{thm:cher}, w.h.p., $\Delta^*\leq p^* \cdot \Delta+\sqrt{5p^* \cdot \Delta \cdot \log n}$. Since $\Delta\leq (n/\log n)^{3/4}$, $\Delta^*=O(\sqrt{n})$. To be able to apply the modified CLP algorithm of \Cref{lem:modclp}, we show: \begin{lemma}\label{lem:size} Every $v \in G^*$ has at least $\Delta^*-(\Delta^*)^{3/5}$ available colors in its palette after coloring all its neighbors in $G\setminus G^*$.
\end{lemma} \begin{proof} First, consider the case where $\deg(v,G)\leq \Delta-(\Delta^*-\sqrt{5\Delta^* \cdot \log n})$. In this case, even after coloring all neighbors of $v$, it still has an excess of $\Delta^*-\sqrt{5\Delta^* \cdot \log n}\geq \Delta^*-(\Delta^*)^{3/5}$ colors in its palette after coloring $G\setminus G^*$ in the first phase. Now, consider a vertex $v$ with $\deg(v,G)\geq \Delta-(\Delta^*-\sqrt{5\Delta^* \cdot \log n})$. Using the Chernoff bound, w.h.p., $\deg(v,G^*)>(\Delta-(\Delta^*-\sqrt{\Delta^* \cdot 5\log n}))\cdot p^*-\sqrt{5\log n \Delta p^*}\geq \Delta^*-(\Delta^*)^{3/5}.$ \end{proof} Also note that a vertex $v \in G^*$ has at least $\deg(v,G^*)+1$ available colors, since all its neighbors in $G^*$ are uncolored at the beginning of the second phase and initially it was given $(\Delta+1)$ colors. Even though $v \in G^*$ might have $\Omega(\Delta)$ neighbors not in $G^*$, to complete the coloring of $G^*$, by \Cref{lem:size}, after the first phase each $v$ can find in its palette $r \in [\Delta^*-(\Delta^*)^{3/5},\Delta^*+1]$ available colors, and this sub-palette is sufficient for its coloring in $G^*$. Since $\Delta^*=O(\sqrt{n})$, to color $G^*$ (using these small palettes), one can apply the $O(\log^* \Delta)$-round list-coloring algorithm of \Cref{lem:modclp}. \vspace{-5pt} \subsection{An $O(\log(1/\epsilon)\cdot\log^*\Delta)$-round algorithm for $\Delta=O((n/\log n)^{1-\epsilon})$}\label{sec:deltaeps} Let $N=n/(5\log n)$. First assume that $\Delta\leq N/2$, and partition the range of relevant degrees $[\sqrt{n},N/2]$ into $\ell=\Theta(\log\log \Delta)$ classes. The $y^{th}$ range contains all degrees in $[N^{1-1/2^y},N^{1-1/(2^{y+1})}]$ for every $y \in \{1,\ldots, \ell\}$. Given a graph $G$ with maximum degree $\Delta =O(N^{1-1/(2^{y+1})})$, Algorithm $\mathsf{RecursiveColoring}$ colors $G$ in $y \cdot O(\log^* \Delta)$ rounds, w.h.p.
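Before describing the steps of $\mathsf{RecursiveColoring}$, a quick numeric sanity check of its level-$i$ partition parameters may be helpful. The sketch below follows the formulas $q_i=\lceil \Delta_i^{1/(2x-1)}\rceil$ and $\delta_i=2\sqrt{5\log n}\cdot q_i^{3/2}/\sqrt{\Delta_i}$ with $x=2^{y-i}$; the concrete values of $n$, $y$, $i$, and the use of natural logarithms are our illustrative assumptions.

```python
import math

# Illustrative computation of the level-i partition parameters of
# RecursiveColoring: q subgraphs plus a left-over set V*. Note that
# q * p_j + p_star = 1 exactly, so the q+1 membership probabilities
# form a valid distribution whenever p_j > 0.
def partition_params(delta_i, n, y, i):
    x = 2 ** (y - i)
    q = math.ceil(delta_i ** (1 / (2 * x - 1)))
    d = 2 * math.sqrt(5 * math.log(n)) * q ** 1.5 / math.sqrt(delta_i)
    p_j = 1 / q - d / q ** 2    # probability of joining each V'_j
    p_star = d / q              # probability of joining the left-over V*
    return q, p_j, p_star
```

For instance, with $n=10^{12}$, $y=3$, $i=0$ and $\Delta_0 \approx N^{15/16}$, the left-over probability $p^*$ is a small positive constant and the $q_0+1$ probabilities sum to $1$.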
\paragraph{Step (I): Partitioning (Defective-Coloring).} For $i\in \{0,\ldots, y-1\}$, in every level $i$ of the recursion, we are given a graph $G'$ with maximum degree $\Delta_i=O(N^{1-1/2^{y-i+1}})$, and a palette $\mathsf{Pal}_i$ of $(\Delta_i+1)$ colors. For $i=0$, $\Delta_0=\Delta$ and the palette is $\mathsf{Pal}_0=\{1,\ldots, \Delta+1\}$. The algorithm partitions the vertices of $G'$ into $q_i+1$ subsets: $V'_1,\ldots, V'_{q_i}$ and a special left-over set $V^*$. The partitioning is based on the following parameters. Set $x=2^{y-i}$ and $$q_i=\lceil \Delta_i^{1/(2x-1)}\rceil \mbox{~~and~~} \delta_i=2\sqrt{5\log n}\cdot q_i^{3/2}/\sqrt{\Delta_i}.$$ Each vertex $v \in V(G')$ joins $V'_j$ with probability $p_j=1/q_i-\delta_i/(q_i)^2$ for every $j \in \{1,\ldots, q_i\}$, and it joins $V^*$ with probability $p^*=\delta_i/q_i$. Note that $p^* \in (0,1)$ as $x\geq 2$. For every $j \in \{1,\ldots, q_i\}$, let $G'_{j}=G'[V'_j]$, and let $G^*=G'[V^*]$. \paragraph{Step (II): Recursive coloring of $G'_1,\ldots, G'_{q_i}$.} Denote by $\widetilde{\Delta}_j$ the maximum degree in $G'_j$ for every $j \in \{1,\ldots, q_i\}$, and by $\Delta^*$ the maximum degree in $G^*$. The algorithm allocates a distinct subset of $(\widetilde{\Delta}_j+1)$ colors from $\mathsf{Pal}_i$ for every $j \in \{1,\ldots, q_i\}$. In the analysis, we show that w.h.p. $\mathsf{Pal}_i$ contains sufficiently many colors for this allocation. The subgraphs $G'_1,\ldots, G'_{q_i}$ are colored recursively and simultaneously, each using its own palette. It is easy to see that the maximum degree of each $G'_j$ is $O(N^{1-1/2^{y-i}})$ (which is indeed the desired degree bound for the subgraphs colored in level $i+1$ of the recursion). \paragraph{Step (III): Coloring the left-over graph $G^*$.} Since the algorithm already allocated at most $\Delta_i$ colors for coloring the $G'_j$ subgraphs, it might run out of colors to allocate for $G^*$.
This last subgraph is colored using a list-coloring algorithm only after all vertices of $G'_1,\ldots, G'_{q_i}$ are colored. Recall that $\Delta^*$ is the maximum degree of $G^*$. In the analysis, we show that w.h.p. $\Delta^*=O(\sqrt{n})$. For every $v \in G^*$, let $\mathsf{Pal}(v) \subseteq \mathsf{Pal}_i$ be the remaining set of available colors after coloring all the vertices in $V'_1,\ldots, V'_{q_i}$. Each vertex $v \in G^*$ computes a new palette $\mathsf{Pal}^*(v) \subseteq \mathsf{Pal}(v)$ such that: (i) $|\mathsf{Pal}^*(v)|\geq \deg(v, G^*)+1$, and (ii) $|\mathsf{Pal}^*(v)| \in [\Delta^*-(\Delta^*)^{3/5},\Delta^*+1]$. In the analysis section, we show that w.h.p. this is indeed possible for every $v \in G^*$. The algorithm then applies the modified CLP algorithm, and $G^*$ gets colored within $O(\log^* \Delta)$ rounds. \setlength{\columnsep}{19pt}% \begin{wrapfigure}{r}{7cm} \centering \vspace{-5pt} \includegraphics[width=0.30\textwidth]{coloringonedecomp.pdf} \vspace{-5pt} \caption{ \vspace{-15pt}} \end{wrapfigure} \paragraph{Example:} Assume that the input graph $G$ has maximum degree $\Delta_0=n^{15/16}$. The algorithm partitions $G$ into $k_0=\Delta_0^{1/15}=n^{1/16}$ subgraphs in the following manner. For the sake of clarity, we omit logarithmic factors in the explanation. With probability $1/(\Delta_0)^{1/2+o(1)}$, $v$ joins a left-over subgraph $G^*$, and with the remaining probability it picks a subgraph in $\{1,\ldots,k_0\}$ uniformly at random. It is easy to see that the maximum degree in each of these subgraphs is at most $\Delta_1=n^{7/8}+n^{7/16}$. A distinct set of $\Delta_1$ colors from $[1,\Delta_0+1]$ is allocated to each of the $k_0$ subgraphs. Each such subgraph is now partitioned into $k_1=n^{1/8}$ subgraphs plus a left-over subgraph. This continues until all subgraphs have their degrees sharply concentrated around $\sqrt{n}$. At that point, the modified CLP algorithm can be applied on all the subgraphs in the last level $\ell$.
Once these subgraphs are colored, the left-over subgraphs in level $\ell-1$ are colored; this continues until the final left-over subgraph of the first level is colored. We next provide a compact and high-level description of the algorithm. \begin{mdframed}[hidealllines=false,backgroundcolor=gray!30] \center \textbf{Algorithm $\mathsf{RecursiveColoring}(G',\mathsf{Pal}_i)$} \begin{flushleft} Input: Graph $G'$ with maximum degree $\Delta_i=O(N^{1-1/2^{y-i+1}})$. \\ A palette $\mathsf{Pal}_i$ of $(\Delta_i+1)$ colors (same for all nodes). \end{flushleft} \vspace{-10pt} \begin{itemize} \item Partition $G'$ into $q_i+1$ vertex-disjoint subgraphs: \begin{itemize} \item $q_i$ vertex-subgraphs $G'_1,\ldots, G'_{q_i}$ with maximum degree $\Delta_{i+1}=O(N^{1-1/2^{y-i}})$. \item Left-over subgraph $G^*$ with maximum degree $\Delta^*=O(\sqrt{n})$. \end{itemize} \item Allocate a distinct palette $\mathsf{Pal}_j \subset \mathsf{Pal}_i$ of $(\Delta_{i+1}+1)$ colors for each $j \leq q_i$. \item Apply $\mathsf{RecursiveColoring}(G'_j,\mathsf{Pal}_j)$ for every $j\leq q_i$ simultaneously. \item Apply a $(\Delta_i+1)$-list coloring restricted to $\mathsf{Pal}_i$, to complete the coloring of $G[V^*]$. \end{itemize} \end{mdframed} \paragraph{Analysis.} \begin{lemma} (i) For every $j \in \{1,\ldots, q_i\}$, w.h.p., $\widetilde{\Delta}_j=O(N^{1-1/2^{y-i}})$. (ii) One can allocate $(\widetilde{\Delta}_j+1)$ distinct colors from $\mathsf{Pal}_i$ for each $G'_j$, $j \in \{1,\ldots, q_i\}$. \end{lemma} \begin{proof} Using the Chernoff bound of \Cref{thm:cher}, w.h.p., for every $j \in \{1,\ldots, q_i\}$, the maximum degree $\widetilde{\Delta}_j$ in $G'_j$ is at most $\widetilde{\Delta}_j=O(\Delta_i/q_i)$. Since $\Delta_i=O(N^{1-1/2^{y-i+1}})$, claim (i) follows.
We now bound $\widetilde{\Delta}_j$ more tightly in order to bound the total number of colors allocated to these subgraphs: $$\widetilde{\Delta}_j\leq \Delta_i/q_i-(\Delta_i\cdot \delta_i)/(q_i)^2+\sqrt{5\log n \cdot \Delta_i/q_i}\leq \Delta_i/q_i-1~,$$ where the last inequality follows by the value of $\delta_i$. We get that $\sum_{j=1}^{q_i}(\widetilde{\Delta}_j+1)\leq \Delta_i$, and since $\mathsf{Pal}_i$ contains $\Delta_i+1$ colors, claim (ii) follows. \end{proof} We next analyze the final step of the algorithm and begin by showing that, w.h.p., the maximum degree in the left-over graph $G^*$ is $O(\sqrt{n})$. By the Chernoff bound of \Cref{thm:cher}, w.h.p., the maximum degree $\Delta^*\leq \Delta_i \cdot \delta_i/q_i+\sqrt{\log n\cdot \Delta_i \cdot \delta_i/q_i}$. Since $\Delta_i=O(N^{1-1/2^{y-i+1}})$, we get that $\Delta^*=O(\sqrt{n})$. We now claim: \begin{lemma} After coloring all the vertices in $G'_1,\ldots, G'_{q_i}$, each vertex $v \in G^*$ has a palette $\mathsf{Pal}^*(v)$ of free colors such that (i) $|\mathsf{Pal}^*(v)|\geq \deg(v,G^*)+1$, and (ii) $|\mathsf{Pal}^*(v)| \in [\Delta^*-(\Delta^*)^{3/5},\Delta^*+1]$. \end{lemma} \begin{proof} Since each vertex $v \in G'$ has a palette of size $(\Delta_i+1)\geq \deg(v,G')+1$, after coloring all its neighbors in $G'_1,\ldots, G'_{q_i}$, it has at least $\deg(v,G^*)+1$ free colors in its palette. Claim (ii) follows by the same argument as in \Cref{lem:size}. We show that the palette of $v$ has at least $\Delta^*-O(\sqrt{\Delta^*\cdot 5\log n})\geq \Delta^*-(\Delta^*)^{3/5}$ available colors after coloring all the vertices in $G\setminus G^*$. First, when $\deg(v,G')\leq \Delta_i-(\Delta^*-\sqrt{\Delta^* \cdot 5\log n})$, then even after coloring all neighbors of $v$ in $G'$, it still has an excess of $\Delta^*-\sqrt{\Delta^* \cdot 5\log n}$ colors in its palette. Consider a vertex $v$ with $\deg(v,G')\geq \Delta_i-(\Delta^*-\sqrt{\Delta^* \cdot 5\log n})$. By Chernoff, w.h.p.
it holds that: \begin{eqnarray*} \deg(v,G^*)&\geq &(\Delta_i-(\Delta^*-\sqrt{\Delta^* \cdot 5\log n}))\cdot p^*-\sqrt{5\log n\cdot \Delta_i \cdot p^*} \\&\geq& \Delta_i\cdot \delta_i/q_i-O(\sqrt{\Delta_i\cdot \delta_i \log n/q_i}) \geq \Delta^*-(\Delta^*)^{3/5}~. \end{eqnarray*} Hence, by combining with claim (i), the lemma follows. \end{proof} This completes the proof of \Cref{thm:coleps}(i). \paragraph{$(\Delta+\Delta^{1/2+\epsilon})$ Coloring in Log-Star Rounds} \begin{lemma}\label{lem:manycol} For any fixed $\epsilon \in (0,1)$, one can color, w.h.p., a graph with $(\Delta+\Delta^{1/2+\epsilon})$ colors in $O(\log(1/\epsilon) \cdot \log^* \Delta)$ rounds. \end{lemma} \begin{proof} Due to \Cref{thm:colorsqrtn}, it is sufficient to consider the case where $\Delta=\Omega(\sqrt{n})$. Partition the graph into $k=\lfloor \Delta^{\epsilon} \rfloor$ subgraphs $G_1,\ldots, G_k$ by letting each vertex independently pick a subgraph uniformly at random. By the Chernoff bound of \Cref{thm:cher}, the maximum degree $\Delta_i$ in each subgraph $G_i$ is at most $\Delta_i\leq \Delta^{1-\epsilon}+\Delta^{1/2-\epsilon/2}\cdot \sqrt{5\log n}$. Allocate a distinct set $\mathsf{Pal}_i$ of $\Delta_i+1$ colors to each subgraph $G_i$. Since $\Delta^{1-\epsilon}=O((n/\log n)^{1-\epsilon/2})$, we can apply Alg. $\mathsf{RecursiveColoring}$ on each of these subgraphs, which takes $O(\log(1/\epsilon)\cdot \log^* \Delta)$ rounds. It is easy to see that, since the subgraphs are vertex-disjoint, Alg. $\mathsf{RecursiveColoring}$ can be applied on all $k$ subgraphs simultaneously with the same round complexity. Overall, the algorithm uses $\Delta+\Delta^{1/2+\epsilon}$ colors. \end{proof} \vspace{-13pt} \subsection{$(\Delta+1)$ Coloring Algorithm for General Graphs}\label{sec:generalcol} For graphs $G$ with $\Delta\leq n/(10\log n)$, we simply apply Alg. $\mathsf{RecursiveColoring}$.
Plugging $\epsilon=1/\log n$ into \Cref{thm:coleps}, we get that this is done in $O(\log\log \Delta \cdot \log^* \Delta)$ rounds. It remains to handle graphs with $\Delta \in [n/(10\log n),n]$. We partition the graph into $\ell=\lceil 5 \log n \rceil$ subgraphs $G_1,G_2, \ldots, G_\ell$ and a left-over graph $G^*$ in the following manner. Each $v \in V$ joins $G_i$ with probability $p=1/\ell-2\sqrt{5\log n/(\Delta \cdot \ell)}$ for every $i \in \{1,\ldots, \ell\}$, and it joins $G^*$ with probability $p^*=1-\ell \cdot p=\Theta(\log n/\sqrt{\Delta})$. By the Chernoff bound, the maximum degree in $G_i$ for $i \in \{1,2,\ldots,\ell\}$ is $\Delta_i\leq \Delta/\ell-2\sqrt{(\Delta \cdot 5\log n)/\ell}+\sqrt{\Delta \cdot 5\log n/\ell}\leq \Delta/\ell-1~.$ Hence, we have the budget to allocate a distinct set $\mathsf{Pal}_i$ of $(\Delta_i+1)$ colors for each $G_i$. The first phase applies Algorithm $\mathsf{RecursiveColoring}$ on each $(G_i, \mathsf{Pal}_i)$ simultaneously for every $i$. Since $\Delta_i=O(n/\log n)$ and the subgraphs are vertex-disjoint, this can be done in $O(\log\log \Delta \cdot \log^* \Delta)$ rounds for all subgraphs simultaneously (see \Cref{thm:coleps}(i)). After all the vertices of $G\setminus G^*$ get colored, the second phase colors the left-over subgraph $G^*$. The probability that a vertex $v$ is in $G^*$ is $O(\log n /\sqrt{\Delta})=O(\log^2 n/\sqrt{n})$. Hence, $G^*$ contains $O(\log^2 n \cdot \sqrt{n})$ vertices with high probability. We color $G^*$ in two steps. First, we use the $\deg+1$ list coloring Algorithm $\mathsf{OneShotColoring}$ from \cite{barenboim2016locality} to reduce the uncolored degree of each vertex to $O(\sqrt{n}/\log^2 n)$ with high probability. This can be done in $O(\log\log n)$ rounds. In the second step, the entire uncolored subgraph $G'' \subset G^*$ has $O(n)$ edges and can be solved locally in $O(1)$ rounds.
Note that for each $v \in G''$, it is sufficient to consider a palette with $\deg(v,G'')+1$ colors, and hence sending all these palettes can be done in $O(1)$ rounds as well. We are now ready to complete the proof of \Cref{thm:maincol}. The correctness of the first phase follows by \Cref{thm:coleps}(i). Note that since the subgraphs $G_i$ are vertex-disjoint, they can be handled simultaneously by Alg. $\mathsf{RecursiveColoring}$. Hence the first coloring phase takes $O(\log\log \Delta\cdot \log^* \Delta)$ rounds. Finally, for the second phase, we use Lemma 5.4 of \cite{barenboim2016locality}. Since we only want to reduce the uncolored degree of each vertex in $G^*$ to $O(\sqrt{n}/\log^2 n)$, i.e., reduce it by a factor of at most $\log^4 n$, using Lemma 5.4, the uncolored degree of each (relevant) vertex $v$ is reduced by a constant factor w.h.p., and hence after $O(\log\log n)$ rounds, all degrees are $O(\sqrt{n}/\log^2 n)$, as desired, with high probability. \begin{mdframed}[hidealllines=false,backgroundcolor=gray!30] \center \textbf{Algorithm $\mathsf{FastColoring}(G)$} \begin{itemize} \item If $\Delta \leq n/(10\log n)$, call $\mathsf{RecursiveColoring}(G, [1,\Delta+1])$. \item Else, partition $G$ into vertex-subgraphs as follows: \begin{itemize} \item $G'_1,\ldots, G'_q$ with maximum degree $\Theta(\Delta/\log n)$, and \item a left-over subgraph $G^*$ with maximum degree $\Delta^*=O(\sqrt{n})$. \end{itemize} \item Allocate a distinct palette $\mathsf{Pal}_j \subset [1,\Delta+1]$ of $(\Delta(G'_{j})+1)$ colors for each $j \leq q$. \item Apply $\mathsf{RecursiveColoring}(G'_j,\mathsf{Pal}_j)$ for all $G'_1,\ldots, G'_q$ simultaneously. \item Apply a $(\deg+1)$-list coloring algorithm on $G^*$ for $O(\log\log n)$ rounds. \item Solve the remaining uncolored subgraph locally.
\end{itemize} \end{mdframed} \section{Deterministic Coloring Algorithms}\label{sec:detcol} \paragraph{$O(\Delta^2)$ Coloring in $O(1)$ Rounds.} As a warm-up, we start by showing a very simple algorithm for computing an $O(\Delta^2)$ coloring very fast. \begin{lemma} There exists an $O(1)$-round \emph{deterministic} algorithm for $O(\Delta^2)$ coloring. \end{lemma} \begin{proof} Consider the following single-round randomized algorithm: each node picks a random color in $[1,\ldots,\Delta^2]$. Nodes exchange their colors with their neighbors, and if their color is legal they halt. It is easy to see that a node remains uncolored with probability at most $1/\Delta$, and this holds even if the random choices are pairwise independent. Hence, if all nodes are given a random seed of length $O(\log n)$, in expectation the number of remaining uncolored vertices is at most $n/\Delta$. To derandomize this single randomized step, we use the method of conditional expectation, similarly to the algorithm of \Cref{thm:detcoldelta}. We split the seed into chunks of $\lfloor \log n\rfloor$ bits and describe how to compute the $i^{th}$ chunk in $O(1)$ rounds, given that the first $i-1$ chunks are already computed. We assign a special vertex to each of the $n$ possible assignments of an $\lfloor \log n\rfloor$-size (binary) chunk. Each vertex $u$ computes the probability $p_u(b)$ that it is legally colored given that the assignment to the $i^{th}$ chunk is $b$. Note that $u$ can compute this probability as it depends only on its neighbors, and by knowing the IDs of its neighbors, $u$ can simulate their choices (given a seed). Each node $u$ sends the outcome $p_u(b)$ to the node responsible for the assignment $b$. Finally, the assignment that yields the largest fraction of removed nodes is selected.
Thanks to the method of conditional expectation, when all nodes simulate their random decision using the computed seed, the fraction of removed nodes is at least the expected one, and hence at most $n/\Delta$ vertices remain uncolored. At that point, the remaining graph can be collected onto a single node and solved locally. \end{proof} We next turn to consider the more challenging task of computing a $(\Delta+1)$ coloring in $O(\log \Delta)$ rounds. We will first describe a simple algorithm for the case where $\Delta=O(\sqrt{n})$. In this regime, the degrees are small enough to allow each node to learn the palettes of its neighbors. Then, we will modify this basic algorithm to allow fast computation even for $\Delta=O(n^{3/4})$. Finally, using the partitioning technique described in the first part of the paper, we will handle the general case. \begin{theorem}\label{thm:detcoldelta} There is a deterministic $(\Delta+1)$ list coloring algorithm using $O(\log \Delta)$ rounds in the congested clique model. \end{theorem} In \cite{Censor-HillelPS17}, a deterministic $(\Delta+1)$ coloring was presented only for graphs with maximum degree $\Delta=O(n^{1/3})$. Here, we handle the case of $\Delta=\Omega(n^{1/3})$. \subsection{$(\Delta+1)$ Coloring for $\Delta=O(\sqrt{n})$}\label{sec:deg-very-small} We first let each node send its palette to all its neighbors. Since $\Delta=O(\sqrt{n})$, this can be done in $O(1)$ rounds. We will derandomize the following simple $(\Delta+1)$-coloring algorithm that runs in $O(\log n)$ rounds. \begin{mdframed}[hidealllines=false,backgroundcolor=gray!25] \textbf{Round $i$ of Algorithm $\mathsf{SimpleRandColor}$ (for node $v$ with palette $\mathsf{Pal}_v$)} \begin{itemize} \item Let $\mathsf{Pal}_{i,v}$ be the current palette of $v$, containing all its colors that are not yet taken by its neighbors. Let $F_{i,v}=|\mathsf{Pal}_{i,v}|$. \item With probability $1/2$, let $c_v=0$, and with probability $1/2$ let $c_v$ be chosen uniformly at random from $\mathsf{Pal}_{i,v}$.
\item Send $c_v$ to all neighbors; if $c_v \neq 0$ and $c_v$ is legal, halt. \end{itemize} \end{mdframed} \begin{observation}\label{obs:pwenough} The correctness of Algorithm $\mathsf{SimpleRandColor}$ is preserved even if the coin flips are \emph{pairwise}-independent. \end{observation} \begin{proof} We analyze the probability that some vertex $v \in V$ terminates in round $i$, conditioned on it not having terminated before round $i$, for any $i>0$. The probability that $v$ picked in round $i$ a color $c_v\neq 0$ is $1/2$. Suppose that $c_v \neq 0$ and consider some neighbor $u$ of $v$ that has not terminated before round $i$. The probability that $c_v=c_u$ is at most $1/(2F_{i,v})$. This is because the probability that $c_u\neq 0$ is $1/2$, and the probability that $v$ picked that particular color among its $F_{i,v}$ available colors is $1/F_{i,v}$. Note that since this argument involves only the two neighbors $u,v$, pairwise independence is enough. Let $\deg_i(v)$ be the number of uncolored neighbors of $v$ at the beginning of round $i$. Clearly, $\deg_i(v) \leq F_{i,v}$. By applying the union bound over all uncolored neighbors $u$ of $v$, we get that, conditioned on $c_v\neq 0$, $v$ is colored with some color in $\mathsf{Pal}_{i,v}$ with probability at least $1/2$. Hence, overall $v$ is colored with probability at least $1/4$. \end{proof} The goal of \emph{phase} $i$ in our algorithm is to compute a seed that is used to simulate the random color choices of \emph{round} $i$ of Alg. $\mathsf{SimpleRandColor}$. This seed will be shown to be good enough so that at least $1/4$ of the currently uncolored vertices get colored when picking their color using that seed. Let $V_i$ be the set of uncolored vertices at the beginning of phase $i$.
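As a sanity check, the round structure of Alg. $\mathsf{SimpleRandColor}$ can be simulated centrally. The following Python sketch is our own sequential stand-in for the distributed algorithm (function and variable names are hypothetical, and it uses fully independent coins rather than the pairwise-independent seed constructed below):

```python
import random

def simple_rand_color(adj, delta, seed=0, max_rounds=300):
    """Centralized sketch of Alg. SimpleRandColor: in each round every
    uncolored vertex, with probability 1/2, proposes a uniform color from
    its current palette; a proposal is kept only if no neighbor proposed
    the same color (colors taken by neighbors were already removed)."""
    rng = random.Random(seed)
    palette = {v: set(range(delta + 1)) for v in adj}  # (delta+1) colors each
    color = {}
    for _ in range(max_rounds):
        uncolored = [v for v in adj if v not in color]
        if not uncolored:
            break
        proposal = {}
        for v in uncolored:
            if rng.random() < 0.5 and palette[v]:      # c_v != 0 w.p. 1/2
                proposal[v] = rng.choice(sorted(palette[v]))
        for v, c in proposal.items():
            if all(proposal.get(u) != c for u in adj[v]):
                color[v] = c                            # legal color: v halts
                for u in adj[v]:
                    palette[u].discard(c)               # neighbors drop c
    return color
```

On a small cycle with palette $\{0,1,2\}$ the simulation typically terminates after a few rounds with a proper coloring, consistent with the constant per-round success probability established in the observation above.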
We need the following construction of bounded-independence hash functions: \begin{lemma}\label{lem:vad}\cite{Vadhan12} \label{lem: d-wise independent} For every $\gamma,\beta,d \in \mathbb{N}$, there is a family of $d$-wise independent functions $\mathcal{H}_{\gamma,\beta} = \set{h : \set{0,1}^\gamma \rightarrow \set{0,1}^\beta}$ such that choosing a random function from $\mathcal{H}_{\gamma,\beta}$ takes $d \cdot \max \set{\gamma,\beta}$ random bits, and evaluating a function from $\mathcal{H}_{\gamma,\beta}$ takes time $\mathrm{poly}(\gamma,\beta,d)$. \end{lemma} First, at the beginning of phase $i$, we let each node $u$ send its current palette $\mathsf{Pal}_{i,u}$ to all its neighbors. Since $\Delta=O(\sqrt{n})$, this can be done in $O(1)$ rounds. For our purposes, to derandomize a single round, we use \Cref{lem:vad} with $d=2, \gamma=\log n, \beta=\log \Delta$, and hence the size of the random seed is $\alpha \cdot \log n$ bits for some constant $\alpha$. Instead of revealing the seed bit by bit using the conditional expectation method, we reveal the assignment for a \emph{chunk} of $z=\lfloor \log n \rfloor$ variables at a time. To do so, consider the $i$'th chunk of the seed $Y'_i=(y'_1,\ldots, y'_z)$. For each of the $n$ possible assignments $(b'_1,\ldots, b'_z) \in \{0,1\}^{z}$ to the $z$ variables in $Y'_i$, we assign a leader $u$ that represents that assignment and receives the conditional expectation values from all the uncolored nodes $V_i$, where the conditional expectation is computed based on assigning $y'_1=b'_1,\ldots, y'_z=b'_z$. Unlike the MIS problem, here a vertex's success depends only on its neighbors (i.e., it does not depend on its second neighborhood). Using the partial seed and the IDs of its neighbors, every vertex $v$ can compute the probability that it gets colored based on the partial seed, its own palette and the palettes of its neighbors.
It then sends its probability of being colored under a particular assignment $y'_1=b'_1,\ldots, y'_z=b'_z$ to the leader $u$ responsible for that assignment. The leader node $u$ of each assignment $y'_1=b'_1,\ldots, y'_z=b'_z$ sums up all the values and obtains the expected number of colored nodes conditioned on that assignment. Finally, all leader nodes send their computed sums to a global leader, which selects the assignment $(b^*_1,\ldots, b^*_z) \in \{0,1\}^{z}$ of largest value. After $O(1)$ rounds, the entire assignment of the $O(\log n)$ bits of the seed is revealed. Every still-uncolored vertex $v \in V_i$ uses this seed to simulate the random choice of Alg. $\mathsf{SimpleRandColor}$, that is, selecting either $c_v=0$ or a color from its current palette $\mathsf{Pal}_{i,v}$, and broadcasts its decision to its neighbors. If the color $c_v \neq 0$ is legal, $v$ is finally colored and it notifies its neighbors. By the correctness of the conditional expectation approach, at least $1/4 \cdot |V_i|$ vertices get colored. Hence, after $O(\log n)=O(\log \Delta)$ rounds, all vertices are colored. \input{det.tex} \\ \\ \textbf{Acknowledgment:} I am very grateful to Hsin-Hao Su, Eylon Yogev, Seth Pettie, Yi-Jun Chang and the anonymous reviewers for helpful comments. I am also grateful to Philipp Bamberger, Fabian Kuhn and Yannic Maus for noting a missing part in the deterministic $(\Delta+1)$ coloring algorithm of the earlier manuscript. \bibliographystyle{alpha}
\section{Introduction} The liberation of the electron in the process of strong field ionization via tunneling \cite{keldysh1965ionization,corkum1993plasma,ivanov2005anatomy} does not necessarily lead to the electron leaving the atom for good \cite{nubbemeyer2008strong,shvetsov2009capture}. This effect, often referred to as `frustrated tunneling ionization' (FTI), is explained by the low kinetic energy of some electrons at the end of the laser pulse, which does not allow them to leave the Coulomb potential but results in their capture into a Rydberg state. This process is not only interesting because it produces neutral excited states, which have been found to be useful tools in the investigation of other strong field effects \cite{eichmann2009acceleration,eilzer2014steering}, but it also leads to a better understanding of post-ionization dynamics \cite{eichmann2009acceleration,eilzer2014steering,zimmermann2018limit}. Even though the detection of neutral excited states poses some difficulties \cite{nubbemeyer2008strong}, the fact that about 10\% of the liberated electrons end up in a Rydberg state for typical strong field parameters makes it a process that needs to be taken into account in the investigation of many strong field effects \cite{manschwetus2009strong,li2014rescattering,lv2016comparative,liu2012low}. The fraction of tunnel-ionized electrons which end up in a Rydberg state was found to depend significantly on the parameters of the laser field and the atomic potential, and its experimental and theoretical investigation has helped to better understand the underlying process of FTI \cite{nubbemeyer2008strong,shvetsov2009capture,li2014rydberg,Eichmann2016}. In the present work, we focus on the intensity dependence of the ratio of tunnel-ionized electrons which end up in a Rydberg state when using linearly polarized light. This observable has been previously measured by Nubbemeyer et al.\ in \cite{nubbemeyer2008strong}.
In \cite{shvetsov2009capture}, Shvetsov-Shilovski et al.\ presented analytical estimations and numerical calculations for this experimental data. Here, we build on this work by including non-adiabatic effects, as well as introducing further corrections and expansions of the theory. We find an analytical dependence of the Rydberg yield on intensity that agrees better with the experimental results in \cite{nubbemeyer2008strong}. Additionally, we describe wavelength-dependent effects which, to the best of our knowledge, have not been predicted so far and should be experimentally measurable. The insights gained in the present study are not restricted to Rydberg states but address the more general questions of which approximations are useful to describe (i) the initial conditions at the tunnel exit and (ii) the motion of the electron in the superposed potential of the laser and the parent ion. These approximations are the basis of many classical trajectory methods \cite{yudin2001nonadiabatic,shvetsov2016semiclassical}, and are fundamental to our interpretation of many high profile experiments, including recent attoclock measurements \cite{landsman14,camus2017}. The present work therefore demonstrates in what way Rydberg atoms can be used to answer these questions and thus to track the electron motion in a strong field ionization process. In particular, our results provide support for the importance of non-adiabatic effects in strong field ionization -- a much debated question that has previously been addressed by investigating photoelectron momentum distributions \cite{boge2013probing,hofmann2016non,arissian10}. These investigations, however, have proved to be inconclusive, with some experiments confirming adiabatic assumptions \cite{boge2013probing,arissian10}, while others point to the relevance of non-adiabatic effects under typical strong field ionization conditions \cite{hofmann2016non,nirit15}.
Since the Rydberg yield is measured under different experimental conditions and represents a different class of electrons (inaccessible in typical strong field experiments), its experimental measurement provides an independent test of the prominence of non-adiabatic effects in strong field ionization. Furthermore, this non-adiabaticity manifests itself in the power-law dependence as a function of intensity. Since the absolute value of the intensity is therefore not important, the results do not depend on the calibration procedure (something that has been a serious issue in prior studies \cite{boge2013probing,hofmann2016non,arissian10}). Even though there are some effects in FTI that can only be understood based on the time-dependent Schr\"odinger equation \cite{popruzhenko2017quantum,lv2016comparative}, it has been found that electrons that end up in a Rydberg state can be described very well in a semiclassical approximation \cite{shvetsov2009capture,huang2013survival,zhang2014generation,xiong2016correspondence,landsman2013rydberg}. One semiclassical method that is widely used, and that we will also use in this paper, is the Classical Trajectory Monte Carlo (CTMC) method \cite{rose1997ultrafast,cohen2001reexamination,landsman04,comtois2005observation}. In this framework, the electron is born at the tunnel exit at a time $t_0$ with an initial velocity $v_{\perp,0}$ perpendicular to the polarization direction, where $t_0$ and $v_{\perp,0}$ are sampled according to a probability distribution. Each electron is then propagated in the superposed laser and atomic field by solving Newton's equations. In order to determine which electrons are captured in a Rydberg state, we evaluate the total electron energy at a time $\tau$ when the pulse has passed. The final total energy $E$ has to be negative in the case of FTI: \begin{equation} E = \frac{v^2}{2}-\frac{1}{r} < 0\label{eq:Rydberg-condition}.
\end{equation} Atomic units are used throughout the paper, unless otherwise specified.\\ \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Moon_I=1,5e14_lam=800_N=8.jpg} \includegraphics[width=0.45\textwidth]{Moon_I=1e15_lam=800_N=8.jpg} \caption{The total energy (see colorbar) at the end of the pulse for electrons ionized at $t_0$ with initial transverse momentum $v_{\perp,0}$. All quantities are given in atomic units. The laser pulse here was chosen to have a wavelength of $\lambda=800 \, \mathrm{nm}$ and 8 cycles for two different intensities $I$ specified in the plots. Rydberg states have a negative total energy, and one can see how the Rydberg area shrinks for larger intensities.} \label{fig:Rydberg-ellipse} \end{figure} We define the Rydberg yield as the ratio of the number $N^*$ of electrons which are captured in a Rydberg state to the number $N$ of all electrons which tunneled through the potential barrier. As is the case in \cite{shvetsov2009capture}, we initially assume a constant distribution of ionization phases $\phi=\omega t_0$ and initial transverse velocities $v_{\perp,0}$ in the $\phi$-$v_{\perp,0}$ plane, meaning that the Rydberg yield is estimated to be proportional to the ratio $\Sigma^*/\Sigma$ of the areas $\Sigma^*$ and $\Sigma$, which are obtained by integrating in the $\phi$-$v_{\perp,0}$ plane over the regime of Rydberg or ionization events, respectively. Fig. \ref{fig:Rydberg-ellipse} displays the Rydberg area for two different intensities for ionization during the central half-cycle. The estimate for the area $\Sigma^*$ of Rydberg states in \cite{shvetsov2009capture} is derived for ionization in that central half-cycle, giving \begin{equation} \Sigma^* \propto \frac{\omega}{F_0 \tau^{3/2}} \left(1- 2 \frac{F_0}{(2 I_p)^2} \right)^{-1}\label{eq:Sigma*-Shvetsov}, \end{equation} where $F_0$ denotes the maximal field strength and $I_p$ the ionization potential.
Furthermore, in \cite{shvetsov2009capture} the area $\Sigma$ is assumed to be proportional to the width $\sigma_{v_\perp}$ of the distribution of the initial transverse velocity $v_{\perp,0}$ as described by \cite{delone1991energy, ammosov1986tunnel}, with \begin{equation} \Sigma \propto \sigma_{v_\perp} \propto \sqrt{F_0}, \end{equation} where the relation $\sigma_{v_\perp} \propto \sqrt{F_0}$ is not trivial and is discussed in more detail in Appendix \ref{app:AppendixA}. Thus, the Rydberg yield is estimated to be proportional to \begin{equation} N^*/N \propto \frac{\omega}{F_0^{3/2} \tau^{3/2}} \left(1- 2 \frac{F_0}{(2 I_p)^2} \right)^{-1} \label{eq:power-law} \\ \end{equation} where the last factor can be neglected for $2F_0 \ll (2I_p)^2 $. Holding all parameters except the intensity $I$ constant, we thus arrive at the power law $N^*/N \propto F_0^{-3/2} \propto I^{-0.75}$, which is the result presented in \cite{shvetsov2009capture}. However, the width $\sigma_{\phi}$ of the ionization-phase distribution also depends on the laser intensity, and we should take that into account. As shown in Appendix \ref{app:AppendixA} and as is often used \cite{popov2004tunnel,ortmann2018analysis}, the adiabatic ADK distribution of ionization phases $\phi=\omega t$ \cite{delone1991energy, ammosov1986tunnel} \begin{equation} P(\phi) \propto \exp \left(-\frac{2(2 I_p(\phi))^{3/2}}{3 F_0 \cdot |\cos(\phi)|}\right) \label{eq:ADK-probability} \end{equation} can be approximated by a Gaussian function with an intensity-dependent width $\sigma_{\phi}$ that can be estimated to be proportional to $\sqrt{F_0}$. Consequently, we should set $N \propto \sigma_{v_\perp} \cdot \sigma_{\phi} \propto \sqrt{F_0} \cdot \sqrt{F_0} = F_0 \propto \sqrt{I}$, obtaining $N^*/N \propto I^{-1}$. This conclusion enables a better understanding of the adiabatic CTMC simulation results displayed in Fig. \ref{New-exponent-for-Shvetsov}, where a power law fit to the data yields an exponent of $-1.02$.
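The bookkeeping behind this exponent can be checked with a short numerical sketch (our own illustration; proportionality constants are arbitrary, and $\omega$ and $\tau$ are held fixed): dividing the area $\Sigma^*\propto 1/F_0$ by $N\propto\sigma_{v_\perp}\sigma_\phi\propto F_0$ and fitting a line in log-log space recovers $b=-1$.

```python
import numpy as np

# Intensity scan over the experimental range (W/cm^2); F0 in atomic units.
I = np.logspace(np.log10(1.4e14), np.log10(1e15), 40)
F0 = np.sqrt(I / 3.509e16)              # peak field, F0 ∝ sqrt(I)

sigma_v = np.sqrt(F0)                   # ADK transverse-velocity width
sigma_phi = np.sqrt(F0)                 # Gaussian ionization-phase width
Sigma_star = 1.0 / F0                   # Rydberg area; omega, tau factors dropped

ratio = Sigma_star / (sigma_v * sigma_phi)   # N*/N up to a constant

b = np.polyfit(np.log(I), np.log(ratio), 1)[0]
print(round(b, 3))   # -1.0: the adiabatic power law N*/N ∝ I^{-1}
```

Since $\mathrm{ratio}\propto F_0^{-2}\propto I^{-1}$ exactly, the fitted slope is $-1$ to machine precision, matching the analytical estimate above.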
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig_Exp_vs_ADK_vs_Adiabatic.png} \caption{Rydberg yield for the parameters found in \cite{nubbemeyer2008strong}: $I=1.4\cdot 10^{14}-10^{15}\,\mathrm{W/cm^2}$, FWHM of pulse envelope $= 30$ fs, $\lambda=800$ nm, He atom with $I_p=0.9$ a.u. The experimental yield (blue dot) was extracted from \cite{nubbemeyer2008strong}. The adiabatic CTMC simulation (red diamond) was done using the ADK distribution \cite{delone1991energy, ammosov1986tunnel}, and the non-adiabatic simulation (green square) is based on \cite{mur2001energy}. The power law used for fitting is described by $N^*/N = a \cdot I^b$ with $b$ given in the legend. The fitting results are represented by lines. Note that the lower absolute values of the experimental yields are due to the decay of the excited states, which is not accounted for here (for details see \cite{nubbemeyer2008strong}). As we expect the decay rate to be the same over the depicted intensity regime, this should not affect the decline, though.} \label{New-exponent-for-Shvetsov} \end{figure} From the experiment reported in \cite{nubbemeyer2008strong}, the ratio $N^*/N$ can be extracted for various intensities. These values show an intensity dependence of \begin{equation} N^*/N \propto I^{-0.86}, \end{equation} displayed as the blue line in Fig. \ref{New-exponent-for-Shvetsov}. So, even though the analytical estimate including the intensity-dependent phase width, which shifts the power law exponent from $b=-0.75$ as obtained in \cite{shvetsov2009capture} to $b=-1$, is well captured by the adiabatic CTMC simulations giving $b=-1.02$, we still do not fully understand the experimental result of $b=-0.86$ in this framework. However, when looking at the adiabaticity parameter $\gamma=\omega \sqrt{2 I_p}/F$ \cite{keldysh1965ionization}, we find that, for the intensity regime of $I=1.4\cdot 10^{14}-10^{15} \, \mathrm{W/cm^2}$ at $\lambda=800 \, \mathrm{nm}$, $\gamma$ ranges from 0.5 to 1.2.
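The quoted range of $\gamma$ follows directly from the stated parameters. A minimal sketch in atomic units (using only the standard conversions $F=1\,\mathrm{a.u.}$ at $3.509\cdot 10^{16}\,\mathrm{W/cm^2}$ and $\hbar\omega = 45.563/\lambda[\mathrm{nm}]$ a.u.):

```python
import numpy as np

I_AU = 3.509e16               # intensity (W/cm^2) at which F = 1 a.u.
Ip = 0.9                      # He ionization potential in a.u.
omega = 45.563 / 800.0        # photon energy in a.u. at lambda = 800 nm

def keldysh(intensity):
    """Adiabaticity parameter gamma = omega*sqrt(2*Ip)/F for a peak
    intensity given in W/cm^2."""
    F = np.sqrt(intensity / I_AU)        # peak field strength in a.u.
    return omega * np.sqrt(2.0 * Ip) / F

print(round(keldysh(1.0e15), 2), round(keldysh(1.4e14), 2))  # 0.45 1.21
```

The highest intensity gives the smallest $\gamma$ and vice versa, reproducing the range of roughly 0.5 to 1.2 quoted in the text.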
This is the typical strong field ionization regime, where the relevance of non-adiabatic effects is under debate \cite{boge2013probing,hofmann2016non,arissian10}. We now show that non-adiabatic effects can be observed in Rydberg yield measurements from the power-law dependence alone. This eliminates the concerns about intensity calibration that have haunted prior experiments attempting to observe non-adiabatic effects by measuring electron momentum distributions \cite{boge2013probing,hofmann2016non}. In Fig. \ref{New-exponent-for-Shvetsov}, CTMC simulation results are depicted in green (squares), where the non-adiabatic PPT ionization probability described in \cite{mur2001energy} and \cite{perelomov1966ionization} was used to generate the initial conditions. For a detailed description of this simulation see \cite{hofmann2014interpreting}. A power law fit to this data yields $N^*/N \propto I^{-0.93}$, which improves the CTMC prediction and gives the closest quantitative agreement with the experimental value of $b=-0.86$ of all discussed models. These non-adiabatic effects on the intensity dependence of the Rydberg yield can be explained by the widths of the distributions of the starting velocity and the ionization phase, both of which increase more slowly with intensity in the non-adiabatic theory than in the adiabatic one. Since this affects the denominator of the Rydberg yield, we end up with a less negative exponent in the power law. In order to estimate the extent of this effect, we first look at the width $\sigma_{v_\perp} = \sqrt{\omega/(2 c_y)}$ of the transverse velocity distribution for the non-adiabatic case as given in \cite{mur2001energy}. It is given by \begin{equation} c_y = \tau_0 = \sinh^{-1}(\gamma) \label{eq:cy} \end{equation} which in the adiabatic limit $\gamma \ll 1$ can be approximated by \begin{equation} c_y = \tau_0 \approx \gamma \propto 1/F_0 \propto I^{-0.5}.
\label{eq:cy_adiabatic} \end{equation} For the non-adiabatic regime used in this paper, we fit a power law to eq. \eqref{eq:cy} (Fig. \ref{Non-adiabaticWidth}) and obtain $c_y \propto \gamma^{0.84}$, and thus $\sigma_\perp \propto \gamma^{-0.84/2} \propto F^{0.84/2} \propto I^{0.84/4}$. We proceed analogously with the phase width: in \cite{bondar2008instantaneous} the ionization rate is found to have the exponential dependence $\exp{(\frac{-2 I_p}{\omega}f(\gamma,v_{||},v_{\perp}))}$, so we use $\sigma_{\phi} \propto 1/\sqrt{f}$. In a power law fit to $f(\gamma)$, where we set $v_{||}=0$ and $v_{\perp}=0$, we obtain $f(\gamma) \propto \gamma^{0.89}$ and consequently $\sigma_{\phi} \propto 1/\sqrt{\gamma^{0.89}} \propto F_0^{0.89/2} \propto I^{0.89/4}$ (see Fig. \ref{Non-adiabaticWidth}). Consequently, including the non-adiabatic effect both in the velocity and in the phase width, we obtain: \begin{align} \begin{split} &N^*/N \propto \frac{1/\sqrt{I}}{\sigma_\perp \sigma_{\phi}} \\ &\propto \begin{cases} 1/I^{0.5+0.5}=1/I^{1.0} &\mbox{adiabatic} \\ 1/I^{0.5+0.84/4+0.89/4} = 1/I^{0.933} &\mbox{non-adiabatic}. \label{eq:adiabatic_nonadiatic_power_law} \end{cases} \end{split} \end{align} Although this estimate of $b=-0.93$ does not agree perfectly with the power law exponent $b=-0.86$ obtained from the experimental data, it comes much closer to it. This not only highlights the relevance of taking non-adiabatic effects into account, but also shows in what way FTI can be used to investigate the initial conditions at the tunnel exit. In particular, as the discussed effects concern the denominator of the Rydberg yield and thus the total number of tunneled electrons, they are relevant not only for Rydberg-related studies but for tunnel ionization in general. For example, the slower growth of the momentum width with intensity when applying non-adiabatic theories as compared to adiabatic theories can also be seen in the data presented in \cite{arissian10,hofmann2016non}.
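The fitted exponent of $c_y(\gamma)=\sinh^{-1}(\gamma)$ entering eq. \eqref{eq:adiabatic_nonadiatic_power_law} can be reproduced in a few lines. In this sketch, the fit window ($\gamma\in[0.45,1.21]$, from the intensity range discussed above) and the sampling grid are our own choices, so the fitted exponent comes out near, but not exactly at, the quoted value of $0.84$:

```python
import numpy as np

# Keldysh-parameter window spanned by I = 1.4e14 - 1e15 W/cm^2 at 800 nm.
gamma = np.linspace(0.45, 1.21, 200)
c_y = np.arcsinh(gamma)                  # eq. (cy): c_y = tau_0 = asinh(gamma)

# Effective power law c_y ∝ gamma^a from a least-squares fit in log-log space.
a = np.polyfit(np.log(gamma), np.log(c_y), 1)[0]

# Intensity exponent of the Rydberg yield, with f(gamma) ∝ gamma^0.89 taken
# from the fit quoted in the text: N*/N ∝ 1/I^(0.5 + a/4 + 0.89/4).
b = -(0.5 + a / 4.0 + 0.89 / 4.0)
```

With these choices the fit gives roughly $a\approx 0.86$ and $b\approx -0.94$, consistent with the non-adiabatic estimate of $b=-0.93$ in the text.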
\begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig_NonAdiabaticWidth.png} \caption{$f(\gamma)$ from \cite{bondar2008instantaneous} and $c_y(\gamma)$ as given in \cite{mur2001energy}, with the respective power-law fits in the $\gamma$-regime defined by the parameters listed in Fig. \ref{New-exponent-for-Shvetsov}.} \label{Non-adiabaticWidth} \end{figure} For infrared ($\lambda=800 \, \mathrm{nm}$) light the estimate of a power law with exponent $b=-1$ matches the adiabatic simulation results rather well (see Fig. \ref{New-exponent-for-Shvetsov}, $b = -1.02$ for the adiabatic CTMC simulations). Since the adiabatic theory is wavelength-independent, we would expect the same scaling to hold for larger wavelengths as well, or even better, since the system would be more adiabatic. However, the Rydberg yield from adiabatic simulations at $\lambda=1200 \, \mathrm{nm}$ shows a faster drop with intensity, which leads to an exponent of $b =-1.16$ in a power-law fit (red diamonds with orange line in Fig. \ref{lam-dependence}). For still larger wavelengths the yield drops even faster with increasing intensity. In the following we derive a theory which explains this effect and makes predictions for observing it in experimental data as well. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{Fig_lamDependence.png} \caption{Intensity-dependent Rydberg yield at $\lambda=1200 \, \mathrm{nm}$ (all other parameters are chosen as listed in Fig. \ref{New-exponent-for-Shvetsov}). Purple: the adiabatic power law with $b=-1$ (see eq. \eqref{eq:adiabatic_nonadiatic_power_law}). Red diamonds: adiabatic CTMC simulation data with a power-law fit (orange line). Blue: estimation by solving eq. \eqref{eq:VMaxForResult} and \eqref{eq:phiMaxForResultSimplified} exactly. Green: approximation given by eq.
\eqref{eq:Taylor-approx-final}.} \label{lam-dependence} \end{figure} As described in \cite{shvetsov2009capture}, we need the maximal initial transverse velocity $v_{\perp,0,max}$ and the range $\Delta \phi$ of ionization phases for estimating the area of initial events in the $v_{\perp,0}-\phi$ plane which end up in a Rydberg state. From \cite{shvetsov2009capture} it becomes clear that including Coulomb effects plays a minor role for the intensity dependence, as this effect cancels out in the derivation of $\Delta \phi = |\phi_{latest}-\phi_{earliest}|$ and only shifts the Rydberg area without affecting its size. Hence, we neglect the Coulomb potential in the propagation in the following and `turn on' this potential only at the end of the pulse for the evaluation of eq. \eqref{eq:Rydberg-condition}. We define the ionization phase $\phi=0$ to correspond to ionization at the central field maximum, and set the tunnel exit to $x_e=0$. According to the equations of motion in \cite{corkum1993plasma}, the position and velocity at a time $\tau$ just after the pulse has passed can be approximated by \begin{align} x(\tau)&\approx\frac{F_0}{\omega^2} \cos{(\phi)}- \frac{F_0}{\omega} \sin{(\phi)} \cdot \tau \label{eq:final_x} \\ y(\tau)&\approx v_{\perp,0} \cdot \tau \label{eq:final_y}\\ v_x&=-\frac{F_0}{\omega} \sin{(\phi)} \label{eq:final_vx}\\ v_\perp &= v_{\perp,0} \label{eq:final_vy}, \end{align} where $\tau=T/2$, with $T$ the time span between the zeros of the envelope, and the light is linearly polarized in the x-direction. Note that these equations of motion differ from the ones used in \cite{shvetsov2009capture} by the term $\frac{F_0}{\omega^2} \cos{(\phi)}$, and the wavelength effect in the intensity dependence that we derive arises from this term. This also explains why the effect is weakened for longer pulses, where the second term in $x(\tau)$ dominates. \\ For the calculation of $v_{\perp,0,max}$ we substitute eq.
\eqref{eq:final_x} and \eqref{eq:final_y} into eq. \eqref{eq:Rydberg-condition} with $E=0$ and set $\phi=0$: \begin{equation} E = \frac{v_{\perp,0,max}^2}{2}-\frac{1}{\sqrt{\frac{F_0^2}{\omega^4}+v_{\perp,0,max}^2\cdot \tau^2}}=0. \label{eq:VMaxForResult} \end{equation} Analogously, we set $v_{\perp,0}=0$ in the calculation of $\phi_{max}$ in eq. \eqref{eq:Rydberg-condition}, which leads to: \begin{align} \begin{split} \frac{1}{2} & \left(\frac{F_0}{\omega}\right)^2 \cdot \sin{(\phi_{max})}^2 \\&=\frac{1}{\frac{F_0}{\omega^2}\cos{(\phi_{max})}-\frac{F_0}{\omega}\sin{(\phi_{max})} \cdot \tau}. \label{eq:phiMaxForResult} \end{split} \end{align} This expression can be approximated by \begin{equation} \frac{1}{2} \left(\frac{F_0}{\omega}\right)^2 \cdot \phi_{max}^2-\frac{1}{\frac{F_0}{\omega^2}-\frac{F_0}{\omega}\phi_{max} \cdot \tau} = 0 \label{eq:phiMaxForResultSimplified} \end{equation} since $\phi_{max}<0.1$ for the parameters used in this work. Equations \eqref{eq:VMaxForResult} and \eqref{eq:phiMaxForResultSimplified} can be solved analytically for $v_{\perp,0,max}$ and $\phi_{max}$, respectively (see Appendix \ref{app:AppendixB} for details). The corresponding Rydberg yield is estimated as $N^*/N \propto \phi_{max}(F_0,\omega,\tau) \cdot v_{\perp,0,max}(F_0,\omega,\tau) /F_0$, and the intensity dependence at $\lambda=1200 \, \mathrm{nm}$ can be seen in Fig. \ref{lam-dependence} (blue line), a power-law fit to which gives an exponent of $b=-1.15$. This analytical derivation matches the simulation data (red diamonds) very well.
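Eq. \eqref{eq:VMaxForResult} can also be solved numerically; the sketch below (Python, with illustrative atomic-unit parameters of our own choosing, not the paper's exact values) finds $v_{\perp,0,max}$ by bisection:

```python
import math

# Illustrative parameters in atomic units (our assumption, not the paper's):
F0 = 0.05      # peak field strength
omega = 0.038  # laser frequency (roughly 1200 nm)
tau = 400.0    # half the time span between the zeros of the envelope

def energy(v):
    """Final energy at phi = 0, eq. (VMaxForResult): kinetic term minus
    the Coulomb term evaluated at the drift position after the pulse."""
    return 0.5 * v**2 - 1.0 / math.sqrt((F0 / omega**2)**2 + (v * tau)**2)

# energy(v) is negative as v -> 0 and positive for large v, so a root
# exists; bracket it and bisect.
lo, hi = 1e-9, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if energy(mid) < 0:
        lo = mid
    else:
        hi = mid
v_perp_max = 0.5 * (lo + hi)
print(v_perp_max)
```

The same bracketing works for eq. \eqref{eq:phiMaxForResultSimplified} after substituting $\phi_{max}$ for the unknown.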
As the lengthy, full analytical solution of \eqref{eq:VMaxForResult} and \eqref{eq:phiMaxForResultSimplified} (see Appendix \ref{app:AppendixB}) does not allow for a deeper understanding of which parameters dominate this wavelength dependence, we also derive an approximation for it in Appendix \ref{app:AppendixC}, which yields: \begin{equation} N^*/N \propto \frac{\omega}{F_0^{2} \tau^{2/3} (1+\frac{F_0}{2^{4/3} \cdot \omega^2 \cdot \tau^{2/3}}) }. \label{eq:Taylor-approx-final} \end{equation} For the case of $\lambda=1200 \, \mathrm{nm}$, the approximation is plotted in Fig. \ref{lam-dependence} (green line) and a power-law fit gives an exponent of $b=-1.12$. This approximation makes clear that for large wavelengths and short pulse durations the Rydberg yield as a function of intensity is less well described by a power law than for small wavelengths. In conclusion, we find that including non-adiabatic effects in the distributions of the ionization times and the initial velocity leads to a different power-law exponent in the intensity dependence of the relative Rydberg yield, resulting in better agreement with experimental data. As the two corrections affect the denominator of the Rydberg ratio and thus the total number of electrons that tunneled out of the atom, these insights and approximations can be applied beyond Rydberg studies, wherever the intensity dependence of tunnel ionization is of interest. Moreover, we find that the power-law intensity dependence observed for infrared light breaks down for longer wavelengths. This correction stems from, and highlights the importance of, including the offset term $F_0/\omega^2 \cos(\phi)$ in the approximation of the position of an electron driven by a laser field. All in all, these results show new ways to use Rydberg atoms for retrieving information about the tunneling and propagation steps in strong field ionization processes.
In particular, measuring the Rydberg yield provides an independent test for non-adiabatic effects in strong field ionization. An interesting new twist on Rydberg dynamics is provided by the spatial inhomogeneity of electric fields, such as the field in the vicinity of a nanostructure \cite{ortmann17}. Under certain conditions, this field inhomogeneity may even lead to chaotic orbits, which should have a significant impact on the fraction of electrons that end up in Rydberg states.
\section{Notation} Let $\mathcal{S}$ be a finite commutative semigroup. The operation on $\mathcal{S}$ will be denoted by $+$ instead of $*$. The identity element of $\mathcal{S}$, denoted $0_{\mathcal{S}}$ (if it exists), is the unique element $e$ of $\mathcal{S}$ such that $e+a=a$ for every $a\in \mathcal{S}$. If $\mathcal{S}$ has an identity element $0_{\mathcal{S}}$, let $${\rm U}(\mathcal{S})=\{a\in \mathcal{S}: a+a'=0_{\mathcal{S}} \mbox{ for some }a'\in \mathcal{S}\}$$ be the group of units of $\mathcal{S}$. A sequence $T$ of terms from the semigroup $\mathcal{S}$ is denoted by $$T=a_1a_2\cdot\ldots\cdot a_{\ell}=\coprod\limits_{a\in \mathcal{S}} a^{\ {\rm v}_a(T)},$$ where ${\rm v}_a(T)$ denotes the multiplicity of the element $a$ in the sequence $T$. By $\cdot$ we denote the operation of joining (concatenating) sequences. Let $T_1,T_2$ be two sequences of terms from the semigroup $\mathcal{S}$. We call $T_2$ a subsequence of $T_1$ if $${\rm v}_a(T_2)\leq {\rm v}_a(T_1)$$ for every element $a\in \mathcal{S}$, denoted by $$T_2\mid T_1.$$ In particular, if $T_2\neq T_1$, we call $T_2$ a {\sl proper} subsequence of $T_1$, and write $$T_3=T_1 T_2^{-1}$$ for the unique subsequence of $T_1$ with $T_2\cdot T_3=T_1$. Let $$\sigma(T)=a_1+a_2+\cdots+a_{\ell}$$ be the sum of all terms in the sequence $T$. Let $q$ be a prime power, and let ${\mathbb F}_q[x]$ be the ring of polynomials over the finite field ${\mathbb F}_q$. Let $R={\mathbb F}_q[x]\diagup K$ be the quotient ring of ${\mathbb F}_q[x]$ modulo the ideal $K$, and let $\mathcal{S}_R$ be the multiplicative semigroup of the ring $R$. Take an arbitrary element $a\in \mathcal{S}_{R}$. Let $\theta_a\in \mathbb{F}_q[x]$ be the unique polynomial of least degree corresponding to the element $a$; thus, $\overline{\theta_a}=\theta_a+K$ is the representative of $a$ in the quotient ring $R$.
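As a concrete illustration of this setup, the idempotents of $\mathcal{S}_R$ for small $q$ and $f$ can be enumerated by brute force; the sketch below (Python, with polynomials over $\mathbb{F}_2$ encoded as bit masks, an encoding of our own choosing) anticipates the structure described in Lemma \ref{Lemma idempotent form}:

```python
def pmul(a, b):
    """Carry-less product of two F_2[x] polynomials encoded as bit masks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def pmod(a, f):
    """Remainder of a modulo f in F_2[x] (f must be nonzero)."""
    df = f.bit_length() - 1
    while a and a.bit_length() - 1 >= df:
        a ^= f << (a.bit_length() - 1 - df)
    return a

def idempotents(f):
    """All a in F_2[x]/(f) with a*a = a, listed as bit masks."""
    n = 1 << (f.bit_length() - 1)   # 2**deg(f) residue classes
    return [a for a in range(n) if pmod(pmul(a, a), f) == a]

# f = x^2 + x = x(x+1): two distinct primes, hence 2^2 = 4 idempotents;
# f = x^2: a single prime power, hence only 0 and 1.
print(idempotents(0b110), idempotents(0b100))
```

The counts $2^r$ for $r$ distinct prime-power factors match the "0 or 1 modulo each $p_i^{n_i}$" characterization of idempotents.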
\noindent $\bullet$ In what follows, since we deal only with the multiplicative semigroup $\mathcal{S}_{R}$, which happens to be commutative, we shall use the terminology {\sl idempotent-sum} and {\sl idempotent-sum free} in place of {\sl idempotent-product} and {\sl idempotent-product free}, respectively. \section{Proof of Theorem \ref{Theorem main}} \begin{lemma} \label{Lemma idempotent form} Let $q$ be a prime power, and let ${\mathbb F}_q[x]$ be the ring of polynomials over the finite field ${\mathbb F}_q$. Let $f$ be a polynomial in ${\mathbb F}_q[x]$ and let $f=p_1^{n_1}p_2^{n_2}\cdots p_r^{n_r}$, where $r\geq 1$, $n_1,n_2,\ldots,n_r\geq 1$, and $p_1, p_2, \ldots,p_r$ are pairwise non-associate irreducible polynomials in $\mathbb{F}_q[x]$. Let $R=\mathbb{F}_q[x]\diagup (f)$ be the quotient ring of $\mathbb{F}_q[x]$ modulo the ideal $(f)$. Let $a$ be an element of the semigroup $\mathcal{S}_R$. Then $a$ is idempotent if and only if $\theta_a\equiv 0_{\mathbb{F}_q}\pmod {p_i^{n_i}}$ or $\theta_a\equiv 1_{\mathbb{F}_q}\pmod {p_i^{n_i}}$ for every $i\in [1,r]$. \end{lemma} \begin{proof} \ Suppose that $a$ is idempotent. Then $\theta_a\theta_a\equiv \theta_a\pmod f$, which implies that $\theta_a(\theta_a-1_{\mathbb{F}_q})\equiv 0_{\mathbb{F}_q} \pmod {p_i^{n_i}}$ for all $i \in [1,r]$. Since $\gcd(\theta_a, \theta_a-1_{\mathbb{F}_q})=1_{\mathbb{F}_q}$, it follows that for every $i\in [1,r]$, $p_i^{n_i}$ divides $\theta_a$ or $p_i^{n_i}$ divides $\theta_a-1_{\mathbb{F}_q}$, that is, $\theta_a\equiv 0_{\mathbb{F}_q}\pmod {p_i^{n_i}}$ or $\theta_a\equiv 1_{\mathbb{F}_q}\pmod {p_i^{n_i}}$. This proves the necessity. The sufficiency holds similarly.
\end{proof} We remark that in Theorem \ref{Theorem main}, if $K={\mathbb F}_q[x]$, then $R$ is a trivial zero ring and ${\rm I}(\mathcal{S}_R)={\rm D}(\mathcal{S}_R)=1$ and $\Omega(K)=\omega(K)=0$, and if $K$ is the zero ideal then $R={\mathbb F}_q[x]$ and ${\rm I}(\mathcal{S}_R)$ is infinite, since any sequence $T$ of any length such that $\theta_a$ is a nonconstant polynomial for all terms $a$ of $T$ is an idempotent-sum free sequence; thus, the conclusion holds trivially in both cases. Hence, we shall only consider the case that $K$ is a nonzero proper ideal of ${\mathbb F}_q[x]$ in what follows. \noindent {\sl Proof of Theorem \ref{Theorem main}.} \ Note that ${\mathbb F}_q[x]$ is a principal ideal domain. Say \begin{equation}\label{equation K=(f)} K=(f) \end{equation} is the principal ideal generated by a polynomial $f\in {\mathbb F}_q[x]$, where \begin{equation}\label{equation factorization of f(x)} f=p_1^{n_{1}}p_2^{n_{2}}\cdots p_r^{n_r}, \end{equation} where $p_1,p_2, \ldots,p_r$ are pairwise non-associate irreducible polynomials of $\mathbb{F}_q[x]$ and $n_i\geq 1$ for all $i\in [1,r]$; equivalently, $$K=P_1^{n_1}P_2^{n_2}\cdots P_r^{n_r}$$ is the factorization of the ideal $K$ into the product of powers of distinct prime ideals $P_1=(p_1),P_2=(p_2),\ldots,P_r=(p_r).$ Observe that \begin{equation}\label{equation bigomega(K)} \Omega(K)=\sum\limits_{i=1}^r n_i \end{equation} and \begin{equation}\label{equation smallomega(K)} \omega(K)=r. \end{equation} Take a zero-sum free sequence $V$ of terms from the group ${\rm U}(\mathcal{S}_R)$ of length ${\rm D}({\rm U}(\mathcal{S}_R))-1$. Take $b_i\in \mathcal{S}_R$ such that $\theta_{b_i}=p_i$ for each $i\in [1,r]$. Now we show that the sequence $V\cdot \coprod\limits_{i=1}^r b_i^{n_i-1}$ is an idempotent-sum free sequence in $\mathcal{S}_R$.
Suppose to the contrary that $V\cdot \coprod\limits_{i=1}^r b_i^{n_i-1}$ contains a {\bf nonempty} subsequence $W$, say $W=V'\cdot \coprod\limits_{i=1}^r b_i^{\beta_i}$, such that $\sigma(W)$ is idempotent, where $V'$ is a subsequence of $V$ and $$\beta_i\in [0,n_i-1]\mbox{ for all } i\in [1,r].$$ It follows that \begin{equation}\label{equation theta sigma(W)} \theta_{ \sigma(W)}=\theta_{ \sigma(V')}\theta_{ \sigma(\coprod\limits_{i=1}^r b_i^{\beta_i})}=\theta_{\sigma(V')}p_1^{\beta_1}\cdots p_r^{\beta_r}. \end{equation} If $\sum\limits_{i=1}^r\beta_i=0$, then $W=V'$ is a {\sl nonempty} subsequence of $V$. Since $V$ is zero-sum free in the group ${\rm U}(\mathcal{S}_R)$, we derive that $\sigma(W)$ is a nonidentity element of the group ${\rm U}(\mathcal{S}_R)$, and thus $\sigma(W)$ is not idempotent, a contradiction. Otherwise, $\beta_j>0$ for some $j\in [1,r]$, say \begin{equation}\label{equation beta1 in [1,n1-1]} \beta_1\in [1,n_1-1]. \end{equation} Since $\gcd(\theta_{\sigma(V')},p_1)=1_{\mathbb{F}_q}$, it follows from \eqref{equation theta sigma(W)} that $\gcd(\theta_{\sigma(W)},p_1^{n_1})=p_1^{\beta_1}$. Combined with \eqref{equation beta1 in [1,n1-1]}, we have that $\theta_{\sigma(W)}\not\equiv 0_{\mathbb{F}_q}\pmod {p_1^{n_1}}$ and $\theta_{\sigma(W)}\not\equiv 1_{\mathbb{F}_q}\pmod {p_1^{n_1}}$. By Lemma \ref{Lemma idempotent form}, we conclude that $\sigma(W)$ is not idempotent, a contradiction. This proves that the sequence $V\cdot \coprod\limits_{i=1}^r b_i^{n_i-1}$ is idempotent-sum free in $\mathcal{S}_R$. Combined with \eqref{equation bigomega(K)} and \eqref{equation smallomega(K)}, we have that \begin{equation}\label{equation I(S)geq in case prime power} {\rm I}(\mathcal{S}_R)\geq |V\cdot \coprod\limits_{i=1}^r b_i^{n_i-1}|+1=(|V|+1)+\sum\limits_{i=1}^r (n_i-1)= {\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K).
\end{equation} Now we assume that $K$ is factored into either a power of some prime ideal or a product of some pairwise distinct prime ideals in ${\mathbb F}_q[x]$, i.e., either $r=1$ or $n_1=\cdots =n_r=1$ in \eqref{equation factorization of f(x)}. It remains to show the equality ${\rm I}(\mathcal{S}_R)= {\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K)$ holds. We distinguish two cases. \noindent \textbf{Case 1.} \ $r=1$ in \eqref{equation factorization of f(x)}, i.e., $f=p_1^{n_1}$. Take an arbitrary sequence $T$ of length $|T|={\rm D}({\rm U}(\mathcal{S}_R))+n_1-1={\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K)$. Let $T_1=\coprod\limits_{\stackrel{a\mid T}{\theta_a\equiv 0 \pmod {p_1}}} a$ and $T_2=T T_1^{-1}$. Note that all terms of $T_2$ are from ${\rm U}(\mathcal{S}_R)$. By the Pigeonhole Principle, we see that either $|T_1|\geq n_1$ or $|T_2|\geq {\rm D}({\rm U}(\mathcal{S}_R))$. It follows that either $\theta_{\sigma(T_1)}\equiv 0_{\mathbb{F}_q}\pmod {p_1^{n_1}}$, or $T_2$ contains a nonempty subsequence $T_2'$ such that $\sigma(T_2')$ is the identity element of the group ${\rm U}(\mathcal{S}_R)$. By Lemma \ref{Lemma idempotent form}, the sequence $T$ is not idempotent-sum free, which implies that ${\rm I}(\mathcal{S}_R)\leq {\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K)$. Combined with \eqref{equation I(S)geq in case prime power}, we have that $${\rm I}(\mathcal{S}_R)={\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K).$$ \noindent \textbf{Case 2.} \ $n_1=\cdots =n_r=1$ in \eqref{equation factorization of f(x)}, i.e., $f=p_1p_2\cdots p_r$. Then \begin{equation}\label{equation bigomega=smallomega} \Omega(K)=\omega(K)=r. \end{equation} Take an arbitrary sequence $T$ of length $|T|={\rm D}({\rm U}(\mathcal{S}_R))$. 
For any term $a$ of $T$, let $\widetilde{a}\in \mathcal{S}_R$ be such that for each $i\in [1,r]$, \begin{equation}\label{equaiton tilde a} \theta_{\widetilde{a}}\equiv\left\{ \begin{array}{ll} 1_{\mathbb{F}_q} \pmod {p_i} & \textrm{if $\theta_{a}\equiv 0_{\mathbb{F}_q}\pmod {p_i}$;}\\ \theta_{a} \pmod {p_i} & \textrm{otherwise.}\\ \end{array} \right. \end{equation} Note that $$\widetilde{a}\in {\rm U}(\mathcal{S}_R).$$ Let $\widetilde{T}=\coprod\limits_{a\mid T}\tilde{a}$. Then $\widetilde{T}$ is a sequence of terms from the group ${\rm U}(\mathcal{S}_R)$ with length $|\widetilde{T}|=|T|= {\rm D}({\rm U}(\mathcal{S}_R))$. It follows that there exists a nonempty subsequence $W$ of $T$ such that $\sigma(\coprod\limits_{a\mid W}\tilde{a})$ is the identity element of the group ${\rm U}(\mathcal{S}_R)$, i.e., $\theta_{\sigma(\coprod\limits_{a\mid W}\tilde{a})}\equiv 1_{\mathbb{F}_q}\pmod {p_i}$ for each $i\in [1,r]$. By \eqref{equaiton tilde a}, we derive that $\theta_{\sigma(W)}\equiv 0_{\mathbb{F}_q}\pmod {p_i}$ or $\theta_{\sigma(W)}\equiv 1_{\mathbb{F}_q}\pmod {p_i}$ for each $i\in [1,r]$. By Lemma \ref{Lemma idempotent form}, we conclude that $\sigma(W)$ is idempotent. Combined with \eqref{equation bigomega=smallomega}, we have that ${\rm I}( \mathcal{S}_R)\leq {\rm D}({\rm U}(\mathcal{S}_R))={\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K)$. It follows from \eqref{equation I(S)geq in case prime power} that ${\rm I}(\mathcal{S}_R)={\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K)$, completing the proof. \qed We close this paper with the following conjecture. \begin{conj} \ Let $q>2$ be a prime power, and let ${\mathbb F}_q[x]$ be the ring of polynomials over the finite field ${\mathbb F}_q$. Let $R={\mathbb F}_q[x]\diagup K$ be a quotient ring of ${\mathbb F}_q[x]$ modulo any nonzero proper ideal $K$. 
Then ${\rm I}(\mathcal{S}_R)={\rm D}({\rm U}(\mathcal{S}_R))+\Omega(K)-\omega(K).$ \end{conj} \bigskip \noindent {\bf Acknowledgements} \noindent This work is supported by NSFC (grant nos. 11501561 and 61303023).
\section{Introduction} \label{Intro} Energetic events such as flares and coronal mass ejections (CMEs) generally occur in magnetically concentrated locations called active regions (ARs). It is believed that these events are physically related to the magnetic complexity and non-potentiality of ARs. However, the question of how magnetic complexity and non-potentiality trigger flares and CMEs remains open. For this reason, solar flare forecasting models mainly depend on statistical relationships between flare production and non-potential magnetic parameters characterizing the AR size, strength, morphology, magnetic topology, etc. Conventionally, the magnetic complexity and non-potentiality are described with parameters such as the magnetic shear \citep{Hagyard1986,Wang1994}, the horizontal gradient of the longitudinal magnetic field \citep{Zirin1993a, Tian2002}, the electric current \citep{Leka1993, WangT1994}, the twist parameter $\alpha$ \citep{Pevtsov1994,Hagino2004, Tiwari2009}, the magnetic free energy \citep{Metcalf2005a}, the current helicity \citep{Abramenko1996,ZhangHQ1999}, etc. Although individual cases have identified a direct role of these non-potential parameters in the observed activity \citep{Vemareddy2012}, their relationship is not strong enough in statistically significant samples of flares/CMEs, and hence for the prediction of space weather. Past observational studies have explored the connection between photospheric magnetic fields and solar flares, supporting the claim that solar flares are driven by the non-potentiality of magnetic fields in ARs \citep{Leka1993,WangT1994,WangJ1996,Tian2002,Abramenko2005}. Several observations have revealed that solar flares often occur near polarity inversion lines (PILs) with high gradients of the longitudinal magnetic field and/or strong shear of the transverse components \citep{Hagyard1990,Wang1994,Falconer1997,Kosovichev2001,Jing2006,Schrijver2007}.
Because of this fact, the magnetic shear, the magnetic gradient and the vertical electric currents in the photosphere are the most commonly used measures of magnetic non-potentiality. Both gradient and shear have been employed as parameters in solar activity forecast models. For example, in a sample of 17 vector magnetograms, \citet{Falconer2001} and \citet{Falconer2003} measured the lengths of strong-sheared and strong-gradient magnetic neutral line segments, respectively. Their study found that strong-sheared and strong-gradient PIL segments are strongly correlated with each other, suggesting them to be prospective predictors of the CME productivity of ARs. \citet{Leka2003a,Leka2003b} investigated the magnitudes and temporal variations of several photospheric magnetic parameters in three ARs. They demonstrated that individually these parameters have little ability to differentiate between flare-producing and flare-quiet regions, but that in certain combinations the two populations may be distinguished. \citet{Song2006} showed that the strong-gradient polarity inversion line length (SGPIL) is a viable tool to locate source regions of either CMEs or flares; the flare/CME prediction accuracy from measuring the SGPIL is about 75\% (55 out of 73 events). Based on a study of 298 ARs, \citet{Georgoulis2007} argued for the connected magnetic field as a robust metric to distinguish X- and M-flaring regions from non-flaring ones. A further study by \citet{Song2009} showed that the SGPIL is the most promising parameter for determining solar flares if only one parameter were to be chosen. In combination with the PIL length, total unsigned flux and the effective separation, \citet{Mason2010} introduced the gradient-weighted PIL length as a characteristic for solar flare forecasting; however, the skill score test indicates it is not a reliable parameter for real-time prediction of flares. Nevertheless, \citet{Sadykov2017} showed the importance of PIL characteristics in solar flare forecasts.
Knowledge of the amount of magnetic free energy and its temporal variation associated with flares/CMEs is important for understanding the energy storage and release processes in ARs. The deviation from the potential-field energy is taken as a proxy for the magnetic free energy. \citet{Leka2007} considered the total excess energy as one of the best performing parameters. \citet{Emslie2012} demonstrated for a sample of 38 flares that the total magnetic free energy was sufficient to explain the flare energy release including CMEs, energetic particles, and hot plasma emission and dynamics. Studies on the correlation between the magnetic free energy estimated from three-dimensional non-linear force-free fields (3D NLFFF) and the flare index confirm the physical link between them \citep{Jing2010}. \citet{Su2014} showed that in flaring ARs, the 3D NLFFF magnetic free energy and the magnetic free energy obtained from photospheric magnetic fields have almost equal predictability for flares. Motivated by the above studies, we further investigated the relationship between the non-potentiality of the magnetic field in a statistically significant sample of source ARs and the observed flare/CME productivity using vector magnetograms. The central question of this study is whether the degree of non-potentiality has any correspondence with the magnitude of the flares and the speed of the CMEs. Both flares and CMEs are magnetically driven phenomena powered by stored magnetic energy, and it is of great importance to distinguish the AR conditions that produce flares from those that lead a flare to become eruptive, successfully ejecting material as a CME. Note that coronal rain is also a kind of eruption but fails to eject material. Seeking such a link of AR non-potential parameters to flares in the beginning and then to CMEs involves a careful manual inspection of which CME comes from which AR and the availability of vector magnetic field measurements within $\pm40^\circ$ of disk center.
Therefore, our study is limited in sample size compared to the several studies based on line-of-sight magnetograms cited above. In section~\ref{ObsData}, we give a brief description of the observational data and the procedures used to calculate the several magnetic field parameters. The results of the non-potential measures and their relation to flares and CMEs are discussed in section~\ref{res}. The summary with discussion is presented in Section~\ref{summ}. \section{Observational Data and Analysis Procedure} \label{ObsData} The required vector magnetic field observations are obtained from the Helioseismic and Magnetic Imager (HMI; \citet{Schou2012}) aboard the Solar Dynamics Observatory (SDO). For ready use in AR analysis, HMI provides processed space-weather HMI active region patches (SHARPs) at 720 s cadence. This data product contains the field components $(B_r, B_\theta, B_\phi)$ in a spherical coordinate system after the heliographic CEA projection \citep{Calabretta2002} centered on the AR patch. These components are equivalent to $(B_z,-B_y, B_x)$ in a Cartesian system. More information on the HMI data pipeline and the field transformation is described in \citet{Hoeksema2014,Sun2013}. The data on CME linear speeds and flare initiation and end timings were obtained from the relevant websites ({\url{https://cdaw.gsfc.nasa.gov/CME\_list/}} \citep{Gopalswamy2009}, {\url{https://www.spaceweatherlive.com/en/archive, http://xrt.cfa.harvard.edu/flare\_catalog/all.html.}}). For a statistically significant inference, we considered 77 flare events during the period from February 2011 to July 2016 whose source ARs are located within 40$^{\circ}$ of the central meridian, to minimize projection effects on the calculation of magnetic properties; their locations are shown in Figure~\ref{Fig_eve_pos}. We also excluded flares from inter-AR regions \citep{Toriumi2017} and/or those not associated with a polarity inversion line. Thus, our sample constitutes 14 X-class, 42 M-class and 21 C-class flares.
Among the sample, 38 flares are associated with CMEs visible at least in the LASCO C2 field of view. The procedures for the derivation of the various magnetic parameters are described as follows. \subsection{Strong gradient and strong sheared PIL length} PILs mark the separation between positive and negative magnetic flux patches in the photosphere of ARs. Solar flares generally occur in strong magnetic field regions with strong-gradient polarity inversion lines, or in complex polarity patterns. In the past, several researchers measured the PIL length and studied its relationship with flare productivity \citep{Falconer2003,Bokenkamp2007,Schrijver2007,Jing2006,Song2006,Mason2010}. \citet{Bokenkamp2007} developed a three-step iterative algorithm to measure gradient-weighted polarity inversion line lengths, originally based on the \citet{Falconer2003} method. In the first step, zero-Gauss contours are identified in the strongly smoothed line-of-sight magnetic field ($B_{los}$) map and the vector magnetic field map is calculated from the smoothed $B_{los}$ using a linear force-free field model \citep{Alissandrakis1981}. From the output of this step, PIL segments are identified as zero-Gauss contours satisfying specific thresholds of the simulated potential transverse field and the horizontal gradient of $B_{los}$. In the second step, the above process is repeated for a less smoothed $B_{los}$ image and the output PIL segments are identified by comparing the PIL segments obtained in this step with those from the previous one. Finally, in the third step, the same process is repeated for the unsmoothed $B_{los}$ and the gradient-weighted PIL length is obtained by comparing the segment outputs from this step and the previous step. In our study, we employed a single-step algorithm to measure the strong-gradient polarity inversion line (SGPIL) and strong-shear polarity inversion line (SSPIL) lengths based on Bokenkamp's algorithm. These PILs are defined in the following paragraphs.
The non-potential nature of the magnetic field is characterized by the magnetic shear, defined as the difference between the directions of the observed transverse field ($\mathbf{B}_o$) and the potential transverse field ($\mathbf{B}_p$), and given by $\theta= \cos^{-1}(\mathbf{B}_o \cdot \mathbf{B}_p / |\mathbf{B}_o||\mathbf{B}_p|)$ \citep{Ambastha1993,Wang1994}. The PIL segment on which the observed transverse field has strength above the threshold value of 150 G and the magnetic shear angle exceeds 45$^{\circ}$ is marked as an SSPIL \citep{Falconer2003,Vemareddy2015}. We measured the SSPILs automatically ($SSPIL_A$) and also manually ($SSPIL_M$). Our algorithm (automatic) identifies multiple strong-shear PILs in an AR, and the sum of these PIL lengths gives $SSPIL_A$. In this procedure, we smoothed the vertical magnetic field ($B_r$) image with a smoothing factor of 8 pixels (4 arcsec) and identified the zero-Gauss contour. Then the potential magnetic field is calculated from the strongly smoothed magnetogram, and a shear map is generated from the computed potential transverse field and the observed transverse field. On applying the thresholds of strong observed transverse field, $B_t$ ($>300$ G), and strong shear angle ($>45^{\circ}$) to the contour segments, strong-field and strong-shear PILs are identified. The summation of these PIL lengths gives $SSPIL_A$. When vector magnetograms are not available, the shear information is alternatively obtained from horizontal gradients in line-of-sight magnetograms, where the SGPIL serves the purpose of the SSPIL \citep{Falconer2003}. In this procedure (automatic), we identified the zero-Gauss contours on the smoothed magnetogram. Then the potential magnetic field and vertical magnetic field gradient maps were generated from the strongly smoothed magnetogram. Applying thresholds of potential transverse field ($>300$ G) and vertical magnetic field gradient ($>50$ G/Mm) to the zero-Gauss contour segments, $SGPIL_A$ is determined.
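The pixel-level shear criterion defined above is straightforward to compute; a minimal sketch (Python; function and variable names are ours, not from the paper's pipeline) is:

```python
import math

def shear_angle_deg(bo, bp):
    """Angle in degrees between observed (bo) and potential (bp)
    transverse field vectors: theta = acos(bo.bp / (|bo||bp|))."""
    dot = bo[0] * bp[0] + bo[1] * bp[1]
    norm = math.hypot(*bo) * math.hypot(*bp)
    # Clamp to guard against floating-point overshoot beyond +/-1
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def on_sspil(bo, bp, bt_threshold=150.0, shear_threshold=45.0):
    """Pixel-level SSPIL criterion: strong observed transverse field
    (threshold in G) and shear angle above threshold (degrees)."""
    return math.hypot(*bo) > bt_threshold and shear_angle_deg(bo, bp) > shear_threshold
```

For example, a 200 G observed field perpendicular to the potential field (shear of $90^{\circ}$) satisfies the criterion, while a 100 G field does not, regardless of shear.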
To illustrate this procedure, we display AR NOAA 11429 in Figure~\ref{Fig_PIL_FL_rib}. Panel~\ref{Fig_PIL_FL_rib}(a) shows flare ribbons in the AIA 1600\AA~snapshot taken during the GOES soft X-ray peak time (2012-03-07 00:24 UT) of the X5.4 flare from AR 11429. The brightening region indicates the flare site. These flare ribbons generally trace the SGPIL/SSPIL with some departures, as displayed in the overplotted Bz map (Figure~\ref{Fig_PIL_FL_rib}); this difference slightly increases with the complexity of the AR. In panel~\ref{Fig_PIL_FL_rib}(c), we traced the \textbf{$SGPIL_A$} automatically, marked by the blue curve on the gradient map of Bz. It can be noticed that both $SGPIL_A$ and $SSPIL_A$ trace the same segments of the PIL and measure lengths with a slight difference. A similar trace is obtained with the \textbf{$SSPIL_A$}, as shown in the shear map of panel~\ref{Fig_PIL_FL_rib}(d). The $SSPIL_A$ also follows the flare brightening region to a large extent, with differences in the middle. In panel~\ref{Fig_PIL_FL_rib}(e), we overplotted the $SGPIL_A$ (blue curve) on the Bz map, and the $SSPIL_A$ (blue) is overlaid on the vector magnetogram in panel~\ref{Fig_PIL_FL_rib}(f). Owing to the differences between the automatically traced SGPIL/SSPILs and the actual extent of the flare ribbons during peak time, we undertook an experiment of following these PIL segments manually using the flare ribbon information. Different from the automated procedure, the manual tracing is guided by the flare brightening area that overlaps with the SSPIL/SGPIL. In this way, we extract only the strong-shear/gradient PIL segments which contribute most to the flare brightening. The underlying idea of invoking the flare brightening information is that the flux along the extent of the flare ribbon contributes to the strength of the flare, whereas not all PIL segments with strong shear/gradient contribute to the flare intensity. We refer to the total length of all these manually traced segments as $SSPIL_M$ and $SGPIL_M$.
In the future, this manual method could be automated (at least for some cases) by applying a brightness threshold to AIA 1600~\AA~images and using the result as a mask for the HMI magnetograms. However, we are constrained to trace these PILs manually in order to pick the peak phase of the flare and to verify that the ribbons extend along the PIL rather than on either side of it. This procedure is presently subject to human error and may not reproduce exactly the same values. \begin{figure} \centering \includegraphics[width=.49\textwidth,clip=]{Fig_AR_grid_locs} \caption{Heliographic locations of all 77 events used in the study. Red (blue) asterisk symbols represent eruptive (confined) flares.} \label{Fig_eve_pos} \end{figure} \begin{figure*} \centering \includegraphics[width=.9\textwidth,clip=]{Fig_VMgram_PIL} \caption{Illustration of tracing $SSPIL_A$ and $SGPIL_A$ in AR 11429 on March 7, 2012. (a) AIA 1600\AA~image with flare brightening corresponding to the peak time of the X5.4 flare, (b) HMI photospheric magnetogram $B_z$ with filled contours of the AIA 1600\AA~flare brightening (orange patch) over-plotted, (c) automatically traced $SGPIL_A$ (blue curve) over-plotted on the $B_z$-gradient map, (d) automatically traced $SSPIL_A$ over-plotted on the magnetic shear map, (e) $SGPIL_A$ over-plotted on the $B_z$ map, (f) vector magnetogram with horizontal magnetic field vectors (red, green arrows) over-plotted on the $B_z$ map; $SSPIL_A$ is shown with the blue curve. Note that the flare brightening generally traces the SGPIL/SSPIL, with differences (at the middle) that increase in more complex ARs.} \label{Fig_PIL_FL_rib} \end{figure*} \subsection{Magnetic flux} The total absolute flux in the AR is computed as $\Phi=\sum |B_z|\,dA$, where $dA$ is the area of an observation pixel. The emergence of electric current embedded in magnetic flux plays a main role in triggering flares \citep{Leka1996, Schrijver2005,Vemareddy2015}.
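The total unsigned flux defined above is straightforward to evaluate on a magnetogram array. A minimal sketch follows; the pixel area is an assumed HMI-like value (0.5 arcsec per pixel, roughly 0.366 Mm at disk center), not a number taken from the text.

```python
import numpy as np

# Assumed HMI-like pixel scale: ~0.366 Mm = 3.66e7 cm per pixel side,
# so dA ~ 1.34e15 cm^2. This value is an illustrative assumption.
PIXEL_AREA_CM2 = (3.66e7) ** 2

def total_unsigned_flux(bz_gauss, dA=PIXEL_AREA_CM2):
    """Phi = sum |Bz| dA over all pixels, in Maxwell (G cm^2)."""
    return np.sum(np.abs(bz_gauss)) * dA
```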
Since the SGPIL is a proxy for such emerging photospheric electric currents, the flux near the SGPILs, quantified by $R$, should bear a strong relation to the flares, as first proposed by \citet{Schrijver2007}. Following the procedure prescribed in \citet{Schrijver2007}, we constructed binary maps of positive and negative strong-field masks with threshold values of +300~G and -300~G respectively \citep{Guerra2018}; these maps were then dilated by a $6\times6$ square pixel kernel to create dilated positive and negative bitmaps. Regions of overlap of these two bitmaps were identified as PILs. This region of overlap is then convolved with a normalized Gaussian of FWHM 15~Mm to create the weighting map. Finally, this weighting map is multiplied by the absolute value of the magnetogram ($|B_z|$), and the weighted unsigned flux density integrated over all pixels gives the value of $R$. $R_{SG}$ is measured similarly, but along the $SGPIL_M$ segments. Note that $R$ and $R_{SG}$ differ by the flare brightening information. One such example is shown in Figure~\ref{Fig_Wmap}. \subsection{Magnetic energy} The total energy in the coronal field is estimated using the virial theorem, given the magnetic field observations at the photospheric surface \citep{Chandrasekhar1961,Molodensky1974,Low1982}: \begin{equation} E = \frac{1}{4\pi} \int_S (x B_x + y B_y) B_z \, dx\, dy \end{equation} The use of this equation at the solar photosphere is restricted because the photospheric field is not fully force-free and not precisely flux-balanced in a finite area surrounding the AR of interest. Therefore, the photospheric energy estimate serves only as a proxy for the energy content of the AR magnetic structure. The potential magnetic fields are derived from the $B_z$ component using the Fourier transform method \citep{Gary1989}.
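The construction of $R$ described above (threshold at $\pm$300 G, dilate each mask with a $6\times6$ kernel, take the overlap, smooth with a 15 Mm FWHM Gaussian, and integrate $|B_z|$) can be sketched with standard image-processing routines. The pixel scale and function layout below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import binary_dilation, gaussian_filter

def schrijver_R(bz, pixel_size_mm=0.366, b_thresh=300.0,
                dilation=6, fwhm_mm=15.0):
    """Sketch of Schrijver's R: weighted unsigned flux density near
    strong-field PILs. pixel_size_mm (Mm/pixel) is an assumed scale."""
    kernel = np.ones((dilation, dilation))
    pos = binary_dilation(bz > b_thresh, structure=kernel)
    neg = binary_dilation(bz < -b_thresh, structure=kernel)
    pil = (pos & neg).astype(float)          # overlap marks the PIL zone
    # Convert FWHM (Mm) to a Gaussian sigma in pixels.
    sigma = (fwhm_mm / pixel_size_mm) / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    weight = gaussian_filter(pil, sigma)     # ~15 Mm FWHM weighting map
    return np.sum(weight * np.abs(bz))       # weighted unsigned flux density
```

$R_{SG}$ would use the same weighting but restricted to the manually traced $SGPIL_M$ segments.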
Since the potential field state is a minimum-energy state, subtracting the potential energy ($E_p$) from the total energy ($E$) gives an upper limit on the free energy ($E_f$) available in the AR to power energetic events like flares and CMEs. \begin{figure} \centering \includegraphics[width=.49\textwidth,clip=]{Fig3_bz_20110215_wc1600_r_rsg} \caption{Top: The HMI magnetogram B$_{z}$ map of AR 11158 obtained on 15 Feb 2011 during the X2.2 flare. Contours of the AIA 1600~\AA~flare brightening at the flare peak time (01:45 UT) are over-plotted. The unsigned flux density of $B_z$ summed over all pixels gives $\Phi$. Middle: Magnetogram $B_z$ multiplied with the weighting map of the field near $SGPIL_A$ (green curves). Summing the absolute values of $B_z$ at these pixels yields $R$. Bottom: Magnetogram $B_z$ multiplied with the weighting map of the field near $SGPIL_M$ (red curve). Summing the absolute values of $B_z$ at these pixels gives $R_{SG}$.} \label{Fig_Wmap} \end{figure} \subsection{Decay index of coronal background field} The gradient of the coronal magnetic field is described by the decay index $n(z)$, considered an important parameter controlling the eruptiveness of an AR. In flux-rope-based CME models \citep{Torok2005,Olmedo2010,Cheng2011}, if the decay index of the overlying magnetic field reaches a critical value, torus instability sets in (the overlying field decays too rapidly with height to confine the flux rope), leading to CME eruption. The decay index is defined as $n(z) = -\frac{z}{B_h} \frac{\partial B_h}{\partial z}$, where $z$ is the geometrical height from the bottom boundary (the photosphere) and $B_h$ ($=\sqrt{B_x^2+B_y^2}$) is the horizontal field strength. We computed the background field in the entire volume of the AR magnetic structure in the potential field approximation \citep{Gary1989}, using the observed vertical component of the magnetic field at the photosphere.
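The decay-index definition above can be evaluated numerically from an extrapolated $B_h(z)$ profile. A minimal sketch, using the identity $n(z) = -\mathrm{d}\ln B_h/\mathrm{d}\ln z$; the sampling and critical-height logic are ours, not the authors' code.

```python
import numpy as np

def decay_index(z, bh):
    """n(z) = -(z/Bh) dBh/dz, computed as -d ln(Bh) / d ln(z)."""
    return -np.gradient(np.log(bh), np.log(z))

def critical_height(z, bh, n_crit=1.5):
    """First height where n(z) reaches n_crit (None if never reached)."""
    n = decay_index(z, bh)
    above = np.nonzero(n >= n_crit)[0]
    return z[above[0]] if above.size else None
```

For a power-law field $B_h \propto z^{-n_0}$ the routine recovers $n(z)=n_0$ exactly, which makes a convenient sanity check.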
From this extrapolated field, $B_h$ as a function of height is obtained at eight points along the main PIL, and an average $n(z)$ profile is then derived. \citet{Torok2005} proposed a constant value of 1.5 as the critical decay index ($n_{crit}$); this is a nominal value that can vary under different conditions. The height at which the $n$-curve reaches the $n_{crit}$ of 1.5 is considered the critical height. We estimated this critical height for all the events in our sample. \begin{figure*}[!ht] \centering \includegraphics[width=.49\textwidth,clip=]{Fig4a_SSPILa} \includegraphics[width=.49\textwidth,clip=]{Fig4b_SGPILa} \includegraphics[width=.49\textwidth,clip=]{Fig4c_SSPILm} \includegraphics[width=.49\textwidth,clip=]{Fig4d_SGPILm} \caption{The top two panels display scatter plots of the automatically detected $SSPIL_A$ and $SGPIL_A$ against flare strength, and the bottom two panels show scatter plots of the manually detected $SSPIL_M$ and $SGPIL_M$, respectively. The Spearman rank correlation coefficient ($r$), two-sided significance (p-value) and the equation of the solid fitted line are inserted in the respective panels. The cross, square and asterisk symbols represent C-class, M-class and X-class flares respectively. Red (blue) symbols correspond to eruptive (confined) flare cases. Note that the manual detection method yields a higher correlation with flare strength.} \label{Fig_PIL_len} \end{figure*} \section{Results} \label{res} \subsection{PIL Length Versus Flare Strength} High vertical field gradients and strong shear usually appear in the vicinity of PILs, where flares frequently occur \citep{Hagyard1986,Hagyard1988,Falconer2003,Sharykin2016}. Following the procedure described in Section 2.1, we estimated the lengths of both the SSPIL and the SGPIL for all the flaring ARs in the sample, automatically (denoted $SSPIL_A$ and $SGPIL_A$) and manually (denoted $SSPIL_M$ and $SGPIL_M$).
In this regard, we used vector magnetograms taken immediately before the initiation time of the flares. In Figures~\ref{Fig_PIL_len}(a \& b), we show the relationship of $SSPIL_A$ and $SGPIL_A$ with the flare strength. Our algorithm identifies multiple PIL segments of high gradient and strong shear in an AR; their summed lengths, $SGPIL_A$ and $SSPIL_A$ respectively, show weak correlations with the flare strength, with a Spearman rank correlation coefficient of $\sim$0.4. As we are keen to find the correlation between the flare strength and the PIL segments of high gradient and strong shear in the flare brightening region, we traced $SGPIL_M$ and $SSPIL_M$ manually as described in Section 2.1. \begin{figure*}[!ht] \centering \includegraphics[width=.99\textwidth,clip=]{Fig5_phi_R_Rsg} \caption{Scatter plots of $\Phi$, $R$ and $R_{SG}$ with flare strength in the left, middle, and right panels, respectively. The Spearman rank correlation coefficient, two-sided significance (p-value) and the equation of the solid fitted line are inserted in the respective plots. The cross, square and asterisk symbols represent C-class, M-class and X-class flares respectively. Red (blue) symbols correspond to eruptive (confined) flare cases.} \label{Fig_flux} \end{figure*} Here and in the following studies, linear regression analysis is done using the \texttt{FITEXY.PRO} routine available in the SolarSoftWare library, which uses a one-dimensional least-squares approximation to fit the best straight line to data with errors in both coordinates. We used the standard deviations of both coordinates as error inputs. The uncertainties in the two fitted coefficients are shown in the equations inserted in the respective figure panels. We estimated the Spearman rank correlation coefficients ($r$) in all our studies, and the standard error in $r$ is estimated by $ERR_r = \sqrt{(1-r^2)/(n-2)}$.
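The rank correlation and its standard error $ERR_r$ can be reproduced with standard statistical routines. A minimal sketch (not the authors' IDL code):

```python
import numpy as np
from scipy.stats import spearmanr

def spearman_with_error(x, y):
    """Spearman rank CC r, its two-sided p-value, and the standard
    error ERR_r = sqrt((1 - r^2) / (n - 2))."""
    r, p = spearmanr(x, y)
    n = len(x)
    err = np.sqrt((1.0 - r * r) / (n - 2))
    return r, p, err
```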
From here onwards, we refer to the Spearman rank correlation coefficient as the correlation coefficient (CC). \begin{figure*}[!ht] \centering \includegraphics[width=.49\textwidth,clip=]{Fig6a_te} \includegraphics[width=.49\textwidth,clip=]{Fig6b_pe} \includegraphics[width=.49\textwidth,clip=]{Fig6c_fe} \includegraphics[width=.49\textwidth,clip=]{Fig6d_diff_fe} \caption{Scatter plots of (a) total magnetic energy, (b) potential energy, (c) free energy, and (d) decrease in magnetic free energy, against the flare strength on a logarithmic scale. The Spearman rank correlation coefficient, two-sided significance (p-value) and the equation of the solid fitted line are inserted in the respective panels. The cross, square and asterisk symbols represent C-class, M-class and X-class flares respectively. Red (blue) symbols correspond to eruptive (confined) flare cases. Note the higher correlation of the decrease in free energy with flare strength.} \label{Fig_ene} \end{figure*} In Figures~\ref{Fig_PIL_len}(c \& d), $SGPIL_M$ and $SSPIL_M$ are plotted against the flare strength, respectively. It can be seen that the lengths of both $SGPIL_M$ and $SSPIL_M$ increase with flare strength. This indicates that more intense flares tend to occur in ARs having longer PILs weighted by strong vertical field gradients, a proxy for strong shear. ARs with long PILs are indicative of complex field structure, and PILs with large gradients are indicative of shearing. Therefore, the SGPIL and SSPIL both describe the global non-potentiality of ARs. $SGPIL_M$ correlates with the flare strength (CC = 0.87) somewhat better than $SSPIL_M$ does (CC = 0.78). These strong correlations with flare strength confirm that flare productivity depends on the non-potentiality of ARs. Importantly, it is noted that 35 out of 38 CME-associated cases (red symbols) have an SGPIL length larger than a threshold of $\log_{10}(SGPIL)=1.5$ (31~Mm).
This may indicate that a certain minimum SGPIL length is required for a flare to be eruptive rather than confined. \begin{table*} \centering \caption{Contingency table based on $SGPIL_M$ length for the forecast of CMEs} \begin{tabular}{|c|c|c|c|} \hline No. of samples & CME occurrences & Non-CME occurrences & Total \\ \hline Log($SGPIL_M$) $\ge$ 1.5 (Yes) & 35 (H) & 22 (FA) & 57 \\ Log($SGPIL_M$) $<$ 1.5 (No) & 3 (M) & 17 (CN) & 20 \\ \hline Total & 38 & 39 & 77 \\ \hline \end{tabular} \label{tab1} \end{table*} The above relation is tested for statistical significance. The contingency table for $Log(SGPIL_M) \ge 1.5$ and CME productivity from all the flaring ARs (both confined and eruptive) in our sample is shown in Table~\ref{tab1}. Fisher's exact test is applied to check the significance of the relationship between these two variables. The p-value is determined using the hypergeometric distribution and is found to be 2.9$\times$10$^{-4}$, which is statistically significant. This relation is not significant for $SGPIL_A$, for which there is no clear threshold value distinguishing confined from eruptive flares. Further, this threshold value of $SGPIL_M$ is used as a forecasting parameter for CME productivity. Forecast verification measures, namely the True Skill Statistic (TSS) and the Heidke Skill Score (HSS), were calculated from the forecast contingency table \citep{Bloomfield2012} shown in Table~\ref{tab1}. Both measures account for correct forecasts occurring by chance, and their skill scores range from -1 to +1. The TSS and HSS were found to be 0.357 and 0.355, respectively. Further, the estimated length of $SSPIL_M$ is plotted against that of $SGPIL_M$ (plot not shown); they are highly correlated, with a correlation coefficient of 0.92. Our result, obtained from a large sample, is consistent with \citet{WangHM2006}, who showed a strong correlation between magnetic gradient and magnetic shear on the primary PILs of five strongly flaring ARs.
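The skill scores quoted above follow directly from the contingency table entries (H = hits, FA = false alarms, M = misses, CN = correct negatives). A sketch using the standard TSS/HSS definitions; the Fisher test here uses scipy's generic implementation, whereas the paper quotes the one-sided hypergeometric tail.

```python
from scipy.stats import fisher_exact

def skill_scores(H, FA, M, CN):
    """True Skill Statistic and Heidke Skill Score from a 2x2
    forecast contingency table (hits, false alarms, misses,
    correct negatives)."""
    tss = H / (H + M) - FA / (FA + CN)
    hss = 2.0 * (H * CN - M * FA) / ((H + M) * (M + CN) + (H + FA) * (FA + CN))
    return tss, hss

# Table values: Log(SGPIL_M) >= 1.5 vs. CME occurrence.
tss, hss = skill_scores(35, 22, 3, 17)
_, p = fisher_exact([[35, 22], [3, 17]])  # two-sided by default
```

With these inputs the routine reproduces the quoted TSS = 0.357 and HSS = 0.355.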
This high correlation is not a surprise, as \citet{Falconer2003} already indicated that the SGPIL is a viable proxy for the SSPIL and is well correlated with the CME productivity of ARs. As the SGPIL can be measured from line-of-sight magnetograms, it serves as a substitute for the SSPIL in CME/flare forecasting studies. \subsection{Total unsigned flux, Magnetic energy versus Flare strength} In the past, many studies examined the association between flares and the flux content of ARs \citep{Leka2003a,Leka2003b,Schrijver2007,Bobra2015,Kazachenko2017}. They commonly found a weak correlation between the total unsigned flux of the whole AR and the flare strength. Recently, \citet{Kazachenko2017} found a strong correlation between the reconnection flux, i.e., the total unsigned flux of the flare ribbon area, and the flare strength. Along the same lines, we conducted three different studies to quantify the relationship between flares and the total unsigned flux: firstly, the total unsigned flux ($\Phi$) of the whole AR; secondly, Schrijver's $R$ value, which is the total unsigned flux within 15~Mm of all SGPILs in the AR; and thirdly, $R_{SG}$ (similar to the flare ribbon reconnection flux as defined in \citealt{Kazachenko2017}), which is the total unsigned flux within 15~Mm of $SGPIL_M$, restricted to the flare ribbon extent. \begin{figure*} \centering \includegraphics[width=.98\textwidth]{Fig7_np_fs_cme} \caption{Relation of CME kinematics to AR magnetic properties. Scatter plots of (a) $SGPIL_M$, (b) decrease in the amount of free energy and (c) flare strength against CME speed. The Spearman rank correlation coefficient, two-sided significance (p-value) and the equation of the solid fitted line are inserted in the respective plots.} \label{Fig_CMEkin} \end{figure*} In Figure~\ref{Fig_flux}a, the total unsigned flux of the whole AR is plotted against the flare strength and shows a weak correlation of CC $\sim$0.31.
In Figure~\ref{Fig_flux}(b), the $R$ value is plotted against the flare strength and also shows a weak positive correlation of CC $\sim$0.33. In contrast, in Figure~\ref{Fig_flux}c, $R_{SG}$, the flux from the flare eruption site, shows a strong association with flare strength (CC $\sim$0.70). These results indicate that the flux near $SGPIL_M$ has a strong connection to the flare strength. This suggests that the flare strength depends neither on the total unsigned flux of the whole AR, which merely represents the size of the AR, nor on the flux near all strong-gradient PILs in an AR; instead, there is a strong physical dependence on the flux from the region within 15~Mm of $SGPIL_M$ that overlaps the flare brightening. This result also confirms the finding of \citet{Kazachenko2017} and may be more robust given the better statistics. This refined relation points to the difficulty of predicting a flare and its strength based on pre-flare magnetograms. Further, we explored the relationship between magnetic energy and flare strength in Figure~\ref{Fig_ene}. The total magnetic energy for all flare events in our sample was estimated at the initial and end times of the flares (Section 2.3). The total magnetic energy and the potential energy at the initial times are weakly correlated with the flare strength, with a correlation coefficient of 0.27 each, as shown in panels~\ref{Fig_ene}(a \& b). In panel~\ref{Fig_ene}(c), the magnetic free energy is plotted against the flare strength. Although the points are scattered, there is a moderate positive correlation with a CC of 0.39. In our entire sample, the magnetic free energy estimated at the flare end times is found to be smaller than that estimated at the flare initial times. Panel~\ref{Fig_ene}(d) depicts the correlation between the decrease in free energy and the flare strength.
Unlike the energy estimates at initial times, this scatter plot shows a moderate positive correlation ($CC=0.55$), indicating a physical link between the amount of free magnetic energy released and the intensity of flares. It indicates that the total magnetic energy and potential energy possessed by an AR have very little to do with the intensity of the flares that the AR produces, whereas the difference in magnetic free energy before and after the flare is directly proportional to the strength of the flare, as one would expect. This result is corroborated by the strong correlation between flare strength and the amount of free energy decreased during flares \citep{Vemareddy2012}; hence, it is clearly evident that the larger the amount of energy released, the stronger the flare. Notably, the free energy decrease for the CME cases (35 of 38, red symbols) starts from $2\times10^{31}$~erg ($\log_{10}$ value of 31.3), which may indicate the threshold energy required for a flare to be eruptive. \begin{figure*} \centering \includegraphics[width=.97\textwidth]{Fig8_dind_eruptive_confined} \caption{Plots of the decay index of the background horizontal field strength as a function of height. \textit{Left column:} Typical cases of the eruptive flares M4.5, X2.2, X1.0 and X2.1, which originated from ARs 12158, 11158, 12017 and 12297 respectively. The critical heights in these events are 30.21~Mm, 42.4~Mm, 40.94~Mm and 10.96~Mm respectively. \textit{Right column:} Typical cases of the confined flares X2.0, M1.1, C5.4 and C6.9, which originated from ARs 12192, 12253, 11936 and 12472 respectively. The observed critical heights for these cases are 82.25~Mm, 57.76~Mm, 62.51~Mm and 103.1~Mm respectively. $B_h$ is also shown, with the y-axis scale on the right in each panel.
The dotted horizontal line marks $n_{crit}=1.5$.} \label{Fig_dc} \end{figure*} \subsection{Magnetic properties of Active region versus CME kinematics} Of the sample of 77 flares, 38 are CME-associated (eruptive flares) and have known source ARs. We used the online LASCO CME catalog to determine the CME/flare association and the linear speed (LS) of the CMEs. The relationships between these 38 source-AR properties and the associated CME speeds are examined in Figure~\ref{Fig_CMEkin}. The observed CME speed is positively correlated with $SSPIL_M$ and $SGPIL_M$, with correlation coefficients of 0.43 and 0.44 respectively. For reference, only the plot of $SGPIL_M$ versus CME linear speed is displayed. This implies that faster CMEs tend to initiate in ARs with longer PILs surrounded by greater magnetic complexity \citep{Falconer2003,Song2006}. There is a reasonably high positive correlation between the decrease in magnetic free energy after the flare eruption and the CME linear speed (panel~\ref{Fig_CMEkin}(b)), with a CC of 0.56. We also explored the relationship between the magnetic free energy (FE) before the flare eruption and the CME linear speed, but the correlation is weak (plot not shown). As the decrease in free energy involves the difference ($FE_{initial}-FE_{end}$), it correlates well with CME speed. Further, we also studied the relationship between the total unsigned flux and the CME linear speed but found only a weak correlation between them (plot not shown). These results suggest that there is a strong physical link between the released non-potential energy of the source AR and the CME speed, but there is no evidence that CME kinematics depend on the size of the AR. This result is in accordance with past studies done using line-of-sight magnetograms \citep{chen2011JGRA}.
\citet{chen2011JGRA} claimed that the size, strength and complexity of ARs have little to do with the kinematic properties of CMEs, but have significant effects on CME productivity. Further, flare strength and CME speed are also positively correlated, with a CC of 0.49, as shown in panel~\ref{Fig_CMEkin}(c). This suggests a general relation in which CME speeds are proportional to flare strength, which would reflect the action of impulsive reconnection on the expelled CME, in agreement with previous studies \citep{Guo2007,YumingWang2007} but with a few recently found exceptions (e.g., \citealt{Sun2015}). \subsection{Confined and Eruptive Flares} In this section, we investigate the role of the background coronal field in confined flares (without CMEs) and eruptive flares (associated with CMEs). Following the procedure described in Section 2.4, we estimated the critical decay-index heights for all 77 events in our sample. In Figure~\ref{Fig_dc}, $n(z)$ is plotted for typical cases of eruptive (left column panels) and confined flares (right column panels). $B_h$ (averaged over 8 points along the main PIL) as a function of height is also shown in the respective panels. For the eruptive cases, the extrapolated field reaches this critical value at a height below 45~Mm. Notably, at about 10~Mm the curve steepens, exhibiting a ``bump'' in the eruptive cases; this feature is missing in the confined cases, where the curve is almost smooth. The ``bump'' was also found in the study of \citet{Cheng2011} and is interpreted as a shape distinguishing eruptive from non-eruptive flare events. It was noticed that $B_h$ decreases faster in the low corona (up to about 10~Mm), and the appearance of a ``bump'' in the eruptive flare cases may indicate the ability of the twisted flux (flux rope) to undergo torus instability. From these panels, it is seen that the $n(z)$ curve is steeper for the eruptive cases, reaching $n_{crit}=1.5$ within 40~Mm. For the confined cases, $n(z)$ reaches $n_{crit}$ only well beyond a height of 50~Mm.
In Figure~\ref{Fig_dc_ch}, we plot the critical heights of all events versus flare strength. It can be seen that both the confined (blue circles) and eruptive (red circles) cases spread over the full range of critical heights and show no relation to flare strength. Further, \citet{Liu2008} proposed a typical eruption-onset height of 42$\pm$1~Mm, based on the average initial heights of 4 observed filament events consisting of two failed eruptions and two full eruptions. Meanwhile, many recent studies of full eruptions indicate critical heights well below 42~Mm (e.g., \citealt{Cheng2011,Vemareddy2014,Sun2015}). Based on this segregation of the events (vertical dashed line), about 90\% (34 of 38) of the eruptive flares have critical heights less than 42~Mm, and nearly 70\% (27 of 39) of the confined flares have critical heights beyond 42~Mm, as evident in Figure~\ref{Fig_dc_ch}. Though the critical height is a matter of individual cases, it generally depends on the strength of the background field confinement but not on the intensity of flares. This indicates that the background field in the confined cases provides more extended or stronger confinement than in the eruptive ones. Depending on the extent of this confining environment, the unstable core field (or flux rope) near the main PIL is either suppressed or becomes a CME, which is a matter of the individual case. From this statistical study, we propose that a CME is likely from an AR coronal background field in which $n(z)$ reaches $n_{crit}$ below 42~Mm. \begin{figure} \centering \includegraphics[width=.49\textwidth]{Fig9_fs_cdi} \caption{Scatter plot of the critical heights for all the flare events in the sample against the flare strength. Red (blue) circles represent the eruptive (confined) flares. The vertical dashed line marks the critical height of 42~Mm and divides the eruptive and confined flares.
\label{Fig_dc_ch}} \end{figure} \section{Summary and Discussion} \label{summ} Using the HMI vector magnetic field observations, we have studied the relation between the degree of magnetic non-potentiality and the observed flare/CME activity in ARs. Studying these properties and establishing statistically significant links with the observed activity is key to flare/CME forecasting models. In this connection, we made a systematic analysis of several non-potentiality proxies, including the decrease in free magnetic energy, during flares/CMEs of different magnitudes. The chosen flare cases originated from 40 ARs, of which 83\% (19 of 23) in the southern hemisphere and 70\% (12 of 17) in the northern hemisphere follow the dominant helicity sign rule \citep{Pevtsov1995,bao2002}. The automatically detected $SGPIL_A$ and $SSPIL_A$ lengths have a weaker positive correlation with the flare strength (CC = 0.40) than the manually detected ones ($SGPIL_M$ and $SSPIL_M$). The manual detection accounts for the AIA 1600\AA~flare ribbon extension along the PIL. Further, $SGPIL_M$ has a stronger positive correlation with flare strength (CC = 0.9) than $SSPIL_M$ does (CC = 0.8); therefore, the magnetic gradient seems to be better correlated with the intensity of solar flares than the magnetic shear. This is in quantitative agreement with \citet{WangHM2006}, who found this relation in a sample of five X-class flaring ARs. The total unsigned flux of the entire AR ($\Phi$) and Schrijver's $R$ value were found to be weakly correlated with the flare strength, but $R_{SG}$, the total unsigned flux within 15~Mm of $SGPIL_M$, has a statistically significant correlation with flare strength (CC = 0.70). This strong correlation signifies the physical link between the PIL flux and the flare intensity. The flux near $SGPIL_M$ must contribute to the flaring process, similar to the flare ribbon reconnection flux defined by \citet{Kazachenko2017}.
As the amount of flux involved in the flaring process increases, the intensity of the flares also increases, but this effect is camouflaged in $\Phi$ and the $R$ value and so is not reflected in the intensity of the flares. Both $SGPIL_M$ and $R_{SG}$ are related to the actual flux involved in the reconnection along the PIL. Therefore, it is difficult to predict the flare strength a priori from a pre-flare magnetogram, which points to a key missing aspect in flare prediction models \citep{Mason2010}. The total magnetic energy and potential energy of flaring ARs derived from the virial theorem were found to be weakly correlated with flare strength, whereas the magnetic free energy derived at the initial time of the flare events has a positive correlation with flare strength. Importantly, the magnetic free energy decreases after the flare eruption, and the amount of the decrease in free energy has a strong positive correlation with flare strength. These results suggest that there is a strong physical link between the released magnetic free energy and flare productivity, as well as the intensity of the flares produced. The total magnetic energy and potential energy possessed by an AR are not strongly related to the intensity of the flares that the AR produces; rather, the amount of free energy released is the major contributor to flare intensity. Moreover, we analysed the CME kinematics for the flares in our sample that are associated with CMEs. Both $SSPIL_M$ and $SGPIL_M$ are moderately correlated with CME speed. Also, the amount of magnetic free energy released during the flare eruptions has a relatively strong correlation with CME speed, with a correlation coefficient of 0.56. These findings imply a general relation: the stronger the non-potentiality measures of the source AR, the larger the CME speed \citep{YumingWang2007}. We also recovered the commonly reported relation between flare strength and CME speed (CC = 0.49), indicating that faster CMEs tend to be associated with more intense flares.
In addition, the background field appears to be a key factor in whether a flare is eruptive. In 90\% of the eruptive flares, the $n(z)$ curve is steep, reaching $n_{crit}$ within 42~Mm, whereas more than 70\% of the confined flares occur in ARs where $n_{crit}$ is reached beyond 42~Mm. A recent study by \citet{Vemareddy2017b} inferred successive sigmoid formation and eruption in AR 12371 under slowly evolving conditions of predominantly negative helicity flux injection from the SGPIL region. A minimum SGPIL length may be a signature of a twisted flux rope. In the confined AR 12192 (no CMEs but with X-flares), the flux-normalized helicity flux is smaller by a factor of 10, and there are no signatures of a twisted flux rope in coronal imaging observations. As the flux rope is a continuous bundle of twisted field structure, its existence implies a continuous SGPIL rather than small distributed segments. Therefore, together with the minimum required SGPIL length (31~Mm, Figure~\ref{Fig_PIL_len}), a weakly confining background field (critical height $<42$~Mm) is suggested to be a prime factor for a flare to be eruptive. The above inference is tested for statistical significance. The skill scores estimated in Section 3.1 suggest that $SGPIL_M$ alone does not have the ability to aptly predict CMEs from flaring ARs. However, owing to the moderate skill scores and correlation with CME productivity (Figure~\ref{Fig_CMEkin}(a)), $SGPIL_M$ combined with measures of the coronal magnetic field configuration would give better CME-predicting capabilities. These conclusions should be tested on many more events before the relationship can be considered robust enough to yield better CME prediction capabilities. An obvious next step would be to consider many more events and apply a machine learning algorithm, which can learn from the input data and improve with experience without human intervention.
For our type of study, we can use a non-linear classification machine learning algorithm, such as the support vector machine used by \citet{Bobra2015}, to obtain more quantitative rigour and to check the robustness of these results. \acknowledgments The data have been used here courtesy of NASA/SDO and the HMI science team. We thank the HMI science team for the open data policy of the processed vector magnetograms. N.V. is a CSIR-SRF and gratefully acknowledges funding from CSIR. P.V.R. is supported by an INSPIRE grant under the AORC scheme of the Department of Science and Technology. We thank both the referee and the statistician at ApJ for their encouraging comments and suggestions. \bibliographystyle{apj}
\section*{Appendix} The following simple fact is immediate from the definition of tree decompositions. \begin{lemma}\label{lem:subtree} Let $(\mathcal{F},\mathcal{T})$ be a tree decomposition of $H$ and let $v\in V(H)$. Denote by $\mathcal{F}(v)$ the subfamily $\{X\in \mathcal{F}: v\in X\}$ of $\mathcal{F}$. Then $\mathcal{F}(v)$ always induces a subtree of $\mathcal{T}$. \end{lemma} We also need the following folklore lemma. \begin{lemma}\label{lem:helly} Let $T$ be a tree and let $T_1, \dots, T_k$ be subtrees which pairwise intersect. Then $\cap_{i=1}^k T_i$ is non-empty. \end{lemma} Using these, we give a proof of Proposition~\ref{prop:subtree}. \begin{proof}[Proof of Proposition~\ref{prop:subtree}] By Lemma~\ref{lem:subtree}, $\mathcal{F}(u)=\{X\in\mathcal{F}:u\in X\}$ induces a subtree of $\mathcal{T}$. Let $U=\{u_1,u_2,\cdots,u_t\}$. We use induction on $t$. For brevity, we say that $\mathcal{F}'$ is \emph{good} for $U$ if $\cup_{F\in\mathcal{F}'}F$ contains $U$ and $\mathcal{F}'$ induces a subtree in $\mathcal{T}$. Suppose $t=2$ and $\mathcal{F}(u_1)$ and $\mathcal{F}(u_2)$ are disjoint. Then the vertex set of the shortest path between the two vertex-disjoint trees $\mathcal{T}[\mathcal{F}(u_1)]$ and $\mathcal{T}[\mathcal{F}(u_2)]$ is the unique minimal good subfamily $\mathcal{F}'$ for $U$. For $t>2$, suppose that $\cap_{i=1}^{t-1}\mathcal{F}(u_i)=\emptyset$. Then, by the induction hypothesis, there exists a minimum good subfamily $\mathcal{F}''\subseteq\mathcal{F}$ for $U\setminus\{u_t\}$. If $\mathcal{F}''$ intersects $\mathcal{F}(u_t)$ then let $\mathcal{F}':=\mathcal{F}''$. This $\mathcal{F}'$ is good for $U$ and, moreover, it is the minimum such family, since every good subfamily for $U$ is again good for $U\setminus\{u_t\}$. Otherwise, if $\mathcal{F}''$ and $\mathcal{F}(u_t)$ are disjoint, add the vertex set of the shortest path between $\mathcal{F}''$ and $\mathcal{F}(u_t)$ to $\mathcal{F}''$. 
Then the new subfamily $\mathcal{F}'$ induces a tree and hence is good for $U$. It is also minimal and unique, because every good subfamily $\mathcal{F}_0$ for $U$ must contain $\mathcal{F}''$ and, therefore, the shortest path from $\mathcal{F}''$ to $\mathcal{F}(u_t)$. Suppose now that $\cap_{i=1}^{t-1}\mathcal{F}(u_i)$ is non-empty. Let $\mathcal{F}'$ be the vertex set of the shortest path from $\cap_{i=1}^{t-1}\mathcal{F}(u_i)$ to $\mathcal{F}(u_t)$. We claim that $\mathcal{F}'$ is the desired minimum good subfamily for $U$. As each good subfamily $\mathcal{F}''$ for $U$ intersects every $\mathcal{F}(u_i)$, $1\leq i\leq t$, Lemma~\ref{lem:helly} implies that $\mathcal{F}''$ also intersects both $\cap_{i=1}^{t-1}\mathcal{F}(u_i)$ and $\mathcal{F}(u_t)$. Thus, every $\mathcal{F}''$ good for $U$ must contain $\mathcal{F}'$. It remains to prove that $U$ is contained in $\cup_{F\in\mathcal{F}'}F$, as $\mathcal{F}'$ already induces a path in $\mathcal{T}$. Firstly, the end vertex of the path $\mathcal{F}'$ lies in $\mathcal{F}(u_t)$ and hence contains $u_t$. On the other hand, since every element of $\cap_{i=1}^{t-1}\mathcal{F}(u_i)$ contains $U\setminus\{u_t\}$, the starting vertex of $\mathcal{F}'$ contains $U\setminus\{u_t\}$. \end{proof} \end{document}
{ "timestamp": "2018-05-08T02:12:12", "yymm": "1805", "arxiv_id": "1805.02238", "language": "en", "url": "https://arxiv.org/abs/1805.02238" }
\section{Introduction} Probabilistic topic models such as latent Dirichlet allocation (LDA) \citep{blei2003latent} have been utilized for analyzing a wide variety of datasets such as document collections, images, and genes. Although vanilla LDA has been favored partly due to its simplicity, one of its limitations is that the output is not necessarily easy to interpret because the priors on the topics are independent. Consequently, there has been a lot of research aimed at improving probabilistic topic models by utilizing the inherent \emph{structures} of datasets in their modeling (see, e.g., \citet{Blei2006DTM,Li2006PAM}; see Section~\ref{related} for other models). In this work, we aim to leverage the dynamic and static structures of topics for improving the modeling capability and the understandability of topic models. These two types of structures, which we instantiate below, are essential in many types of datasets, and in fact, each of them has been considered separately in several previous studies. In this paper, we propose a topic model that is aware of both of these structures, namely the \emph{dynamic and static topic model} (DSTM). The underlying motivation of DSTM is twofold. First, a collection of documents often has \emph{dynamic structures}; i.e., topics evolve over time, influencing each other. For example, topics in papers build on topics in past papers. We may want to extract such dynamic structures of topics from collections of scientific papers for summarizing research activities. Second, there are also \emph{static structures} of topics such as correlation and hierarchy. For instance, in a collection of news articles, the ``sports'' topic may have the ``baseball'' topic and the ``football'' topic as its subtopics. This kind of static structure of topics helps us understand the relationship among them. The remainder of this paper is organized as follows. In Section~\ref{related}, we briefly review related work.
In Section~\ref{main}, the generative model and the inference/learning procedures of DSTM are presented. In Section~\ref{exp}, the results of the experiments are shown. Section~\ref{concl} concludes the paper. \section{Related Work}\label{related} Researchers have proposed several variants of topic models that consider the dynamic or static structure. Approaches focusing on the dynamic structure include dynamic topic model (DTM) \citep{Blei2006DTM}, topic over time (TOT) \citep{Wang2006TOT}, multiscale dynamic topic model (MDTM) \citep{iwata2010online}, dependent Dirichlet processes mixture model (D-DPMM) \citep{lin2010construction}, and infinite dynamic topic model (iDTM) \citep{ahmed2010timeline}. These methods have been successfully applied to temporal collections of documents, but none of them take temporal dependencies between multiple topics into account; i.e., in these models, only a single topic contributes to a topic in the future. For the static structure, several models including correlated topic model (CTM) \citep{lafferty2006correlated}, pachinko allocation model (PAM) \citep{Li2006PAM}, and segmented topic model (STM) \citep{Du2010} have been proposed. CTM models the correlation between topics using the logistic normal distribution as the prior, PAM introduces a hierarchical structure over topics, and STM uses paragraphs or sentences as the hierarchical structure. These models can consider static structures such as correlation and hierarchy between topics. However, most of them lack the dynamic structure in their model; i.e., they do not presuppose temporal collections of documents. The existing method most closely related to the proposed model is the hierarchical topic evolution model (HTEM) \citep{song2016discovering}. HTEM captures the relation between evolving topics using a nested distance-dependent Chinese restaurant process.
It has been successfully applied to a temporal collection of documents for extracting structure but does not take multiple-topic dependencies into account either. In this work, we build a new model to overcome the limitations of the existing models, i.e., to examine both the dynamic and static structures simultaneously. We expect that the proposed model can be applied to various applications such as topic trend analysis and text summarization. \begin{table}[t] \centering {\fontsize{9.25pt}{9.25pt}\selectfont\begin{tabular}{lp{6cm}} \toprule $D^{t}$ & number of documents at epoch $t$\\ $n_d^{t}$ & number of words in the $d$-th doc. at epoch $t$\\ $w^{t}_{d,i}$ & the $i$-th word in the $d$-th doc. at epoch $t$\\ $K$ & total number of subtopics\\ $S$ & number of supertopics\\ $y_{d,i}^{t}$ & supertopic of $w^{t}_{d,i}$\\ $z_{d,i}^{t}$ & subtopic of $w^{t}_{d,i}$\\ ${}^{1}\theta_{d}^{t}$ & multinomial distribution over supertopics for the $d$-th doc. at epoch $t$\\ ${}^{2}\theta_{d,s}^{t}$ & multinomial distribution over subtopics for the $d$-th doc. in the $s$-th supertopic at epoch $t$\\ $\phi_{k}^{t}$ & multinomial distribution over words for the $k$-th subtopic at epoch $t$ \\ ${}^{1}\alpha^{t}$ & prior of ${}^{1}\theta_{d}^{t}$ \\ ${}^{2}\alpha_{s}^{t}$ & static structure weight (prior of ${}^{2}\theta_{d,s}^t$) \\ $\beta^{t}$ & dynamic structure weight between topics at epoch $t-1$ and those at epoch $t$\\ \bottomrule \end{tabular}}% \vspace{-0.5em} \caption{Notations in the proposed model.} \label{notation} \end{table} \section{Dynamic and Static Topic Model}\label{main} In this section, we state the generative model of the proposed method, DSTM. Afterward, the procedure for inference and learning is presented. Our notations are summarized in Table~\ref{notation}. \subsection{Generative Model} In the proposed model, DSTM, the dynamic and static structures are modeled as follows.
\begin{figure}[t] \centering \includegraphics[width=6.4cm]{graph_RERE.pdf} \vspace{-1em} \caption{Graphical model of the proposed model for epochs $t-1$ and $t$.} \label{graphical_model} \end{figure} \paragraph{Dynamic Structure} We model the temporal evolution of the topic-word distributions by making the Dirichlet prior of each distribution proportional to a weighted sum of the topic-word distributions at the previous epoch, i.e., \begin{equation} \phi^{t}_{k} \sim \mathrm{Dirichlet} \left( \sum_{k'=1}^K \beta^{t}_{k,k'} {\phi}^{t-1}_{k'} \right), \label{trans} \end{equation} where $\phi^t_k$ denotes the word distribution of the $k$-th topic at epoch $t$, and $\beta^t_{k,k'}$ is a weight that determines the dependency between the $k$-th topic at epoch $t$ and the $k'$-th topic at epoch $t-1$. \paragraph{Static Structure} We model the static structure as a hierarchy of topics at each epoch. We utilize the supertopic-subtopic structure as in PAM \citep{Li2006PAM}, where the priors of topics (subtopics) are determined by their supertopic. \paragraph{Generative Process} In summary, the generative process at epoch $t$ is as follows. \begin{enumerate}[topsep=1pt] \setlength{\parskip}{0cm} \setlength{\itemsep}{0cm} \item For each subtopic $k=1,...,K$, \begin{enumerate} \item Draw a topic-word distribution \\ $\phi_{k}^{t} \sim \mathrm{Dirichlet}(\sum_{k'}\beta^t_{k,k'}{\phi}^{t-1}_{k'})$. \end{enumerate} \item For each document $d=1,...,D^{t}$, \begin{enumerate} \item Draw a supertopic distribution \\ ${}^{1}\theta_{d}^{t}\sim \mathrm{Dirichlet}({}^{1}\alpha^{t})$. \item For each supertopic $s=1,...,S$, \begin{enumerate} \item Draw a subtopic distribution \\ ${}^{2}\theta_{d,s}^{t}\sim \mathrm{Dirichlet}({}^{2}\alpha_{s}^{t})$. \end{enumerate} \item For each word $i=1,...,n_d^{t}$, \begin{enumerate} \item Draw a supertopic-word assignment \\ $y_{d,i}^{t}\sim \mathrm{Multinomial}({}^{1}\theta_{d}^{t})$.
\item Draw a subtopic-word assignment \\ $z_{d,i}^{t}\sim \mathrm{Multinomial}({}^{2}\theta^{t}_{d,y^{t}_{d,i}})$. \item Draw a word observation \\ $w_{d,i}^{t}\sim \mathrm{Multinomial}(\phi_{{z}_{d,i}^{t}}^{t})$. \end{enumerate} \end{enumerate} \end{enumerate} Note that the above process should be repeated for every epoch $t$. The corresponding graphical model is presented in Figure~\ref{graphical_model}. \subsection{Inference and Learning} Since analytical inference for DSTM is intractable, we resort to a stochastic EM algorithm \citep{Andrieu2003} with collapsed Gibbs sampling \citep{Griffiths5228}. However, such a strategy is still computationally costly due to the temporal dependencies of $\phi$. Therefore, we introduce a further approximation; we replace $\phi_{k'}^{t-1}$ in Eq.~\eqref{trans} with its expectation $\hat{\phi}_{k'}^{t-1}=\mathbb{E}[\phi_{k'}^{t-1}]$. This compromise enables us to run the EM algorithm \emph{for each} epoch in sequence from $t=1$ to $t=T$ without any backward inference. \textcolor{black}{In fact, such an approximation technique is also utilized in the inference of MDTM \citep{iwata2010online}.} \textcolor{black}{Note that the proposed model has a moderate number of hyperparameters to be set manually, and that they can be tuned according to the existing know-how of topic modeling. This feature makes the proposed model appealing in terms of inference and learning.} \paragraph{E-step} In the E-step, the supertopic/subtopic assignments are sampled.
Given the current state of all variables except ${y}_{d,i}^{t}$ and ${z}_{d,i}^{t}$, new values for them should be sampled according to \begingroup\makeatletter\def\f@size{9.5}\check@mathfonts \begin{equation}\begin{aligned} &p({y}_{d,i}^{t}=s,{z}_{d,i}^{t}=k \mid w^t,y^{t},z^{t},\Phi^{t-1},{}^{1}\alpha^{t},{}^{2}\alpha^{t},\beta^{t})\\ &\quad\propto \frac{n^{t}_{d,s\backslash i}+{}^{1}\alpha_{s}^{t}}{n^{t}_{d\backslash i}+\sum_{s=1}^{S}{}^{1}\alpha_{s}^{t}} \cdot \frac{n^{t}_{d,s,k\backslash i}+{}^{2}\alpha_{s,k}^{t}}{n^{t}_{d,s\backslash i}+\sum_{k=1}^{K}{}^{2}\alpha_{s,k}^{t}} \\ &\qquad\cdot \frac{n^{t}_{k,v\backslash i}+\sum_{k'=1}^{K}\beta_{k,k'}^{t} \hat{\phi}^{t-1}_{k',v}}{n^{t}_{k\backslash i}+\sum_{k'=1}^{K}\beta_{k,k'}^{t}}, \end{aligned}\end{equation} \endgroup where $n_{k,v}^{t}$ denotes the number of tokens assigned to topic $k$ for word $v$ at epoch $t$, $n_{k}^{t}\mathalpha{=}\sum_{v}n_{k,v}^{t}$, and $n_{d,s}^{t}$ and $n_{d,s,k}^{t}$ denote the numbers of tokens in document $d$ assigned to supertopic $s$ and to subtopic $k$ (via $s$) at epoch $t$, respectively. Moreover, $n_{\cdot \backslash i}^{t}$ denotes the corresponding count computed excluding the $i$-th token. \paragraph{M-step} In the M-step, ${}^{2}\alpha^{t}$ and $\beta^{t}$ are updated using the fixed-point iteration \citep{minka2000estimating}. \begingroup\makeatletter\def\f@size{8.5}\check@mathfonts \begin{align} ({}^{2}\alpha_{s,k}^{t})^* &= {}^{2}\alpha_{s,k}^{t} \frac{\sum_{d=1}^{D^{t}} \Psi(n_{d,s,k}^{t} + {}^{2}\alpha_{s,k}^{t}) - \Psi({}^{2}\alpha_{s,k}^{t})} {\sum_{d=1}^{D^{t}} \Psi(n_{d,s}^{t}+{}^{2}\alpha_{s}^{t}) - \Psi({}^{2}\alpha_{s}^{t})},\\ (\beta^{t}_{k,k'})^* &= \beta^{t}_{k,k'} \frac{\sum_{v}\hat{\phi}^{t-1}_{k', v} B^{t}_{k', v}} {\Psi(n^{t}_{k} + \sum_{k'}\beta^{t}_{k,k'}) - \Psi(\sum_{k'}\beta^{t}_{k,k'})}.
\end{align} \endgroup Here, $\Psi$ is the digamma function, ${}^{2}\alpha_{s}^{t} \mathalpha{=} \sum_{k}{}^{2}\alpha_{s,k}^{t}$, and \begingroup\makeatletter\def\f@size{9.2}\check@mathfonts \begin{equation*} B^{t}_{k', v} = \Psi \Bigl( n^{t}_{k, v}+\sum_{k'}\beta^{t}_{k,k'}\hat{\phi}^{t-1}_{k', v} \Bigr) -\Psi \Bigl( \sum_{k'}\beta^{t}_{k,k'}\hat{\phi}^{t-1}_{k', v} \Bigr). \end{equation*} \endgroup \paragraph{Overall Procedure} The EM algorithm is run for each epoch in sequence; at epoch $t$, after running the EM until convergence, $\hat{\phi}^{t}_{k,v}$ is computed by \begin{equation*} \hat{\phi}^{t}_{k,v}=\frac{n^{t}_{k,v}+\sum_{k'}\beta^{t}_{k,k'}\hat{\phi}^{t-1}_{k',v}}{n^{t}_{k}+\sum_{k'}\beta^{t}_{k,k'}}, \end{equation*} and then this value is used for the EM at the next epoch $t+1$. Moreover, see Supplementary~\ref{para_est} for the computation of the statistics of the other variables. \section{Experiments}\label{exp} \subsection{Datasets} We used two datasets comprising technical papers: \textbf{NIPS} \citep{dataset} and \textbf{Drone} \citep{dataset_drone}. \textbf{NIPS} is a collection of the papers that appeared in NIPS conferences. \textbf{Drone} is a collection of abstracts of papers on unmanned aerial vehicles (UAVs) and was collected from related conferences and journals for surveying recent developments in UAVs. The characteristics of those datasets are summarized in Table~\ref{nips_drones}. See Supplementary~\ref{prepro} for the details of data preprocessing. 
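To make the overall procedure concrete, the dynamic Dirichlet prior of Eq.~\eqref{trans} and the epoch-wise $\hat{\phi}$ recursion can be sketched in plain Python. This is a toy illustration with made-up dimensions, weights, and counts; the variable names are ours and do not come from the authors' implementation:

```python
import random

random.seed(0)

K, V = 3, 5  # toy numbers of subtopics and vocabulary words (illustrative)

def dirichlet(alpha):
    """Draw from Dirichlet(alpha) via normalized Gamma variates."""
    g = [random.gammavariate(a, 1.0) for a in alpha]
    s = sum(g)
    return [x / s for x in g]

# beta[k][kp] plays the role of the dynamic weights beta^t_{k,k'};
# phi_hat_prev[kp] plays the role of E[phi^{t-1}_{k'}].
beta = [[random.uniform(0.5, 2.0) for _ in range(K)] for _ in range(K)]
phi_hat_prev = [dirichlet([1.0] * V) for _ in range(K)]

# Dynamic structure: the Dirichlet prior of phi^t_k is the beta-weighted
# mixture of the previous epoch's (expected) topic-word distributions.
prior = [[sum(beta[k][kp] * phi_hat_prev[kp][v] for kp in range(K))
          for v in range(V)] for k in range(K)]
phi_t = [dirichlet(prior[k]) for k in range(K)]

# After the EM at epoch t, the smoothed estimate passed on to epoch t+1:
#   phi_hat[k][v] = (n[k][v] + prior[k][v]) / (n[k] + sum_{k'} beta[k][k'])
n_kv = [[random.randint(0, 20) for _ in range(V)] for _ in range(K)]
phi_hat_t = [[(n_kv[k][v] + prior[k][v]) / (sum(n_kv[k]) + sum(beta[k]))
              for v in range(V)] for k in range(K)]
```

Each row of \texttt{phi\_hat\_t} sums to one, so it can be used directly as the mixture components of the Dirichlet prior at the next epoch.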
\begin{table}[t] \centering \begin{tabular}{ccc} \toprule & \textbf{NIPS} & \textbf{Drone}\\ \midrule Date&1987--1999&2009--2016\\ \# Documents&1,740&1,035\\ \# Vocabulary&11,443&3,442\\ \# Tokens&2,271,087&68,305\\ \bottomrule \end{tabular} \vspace{-0.5em} \caption{Summary of the datasets.} \label{nips_drones} \end{table} \begin{table*}[t] \centering {\small\scalebox{0.92}[1.0]{\begin{tabular}{ccccccccc} \toprule &&&& \textbf{NIPS} &&& \textbf{Drone} &\\ \cmidrule(lr){4-6} \cmidrule(lr){7-9} &static&dynamic&K30 (S15)&K40 (S20)&K50 (S25)&K15 (S3)&K20 (S3)&K25 (S3)\\ \midrule LDA&-&- & 1455.6 (16.7) &1407.3 (15.9)&1374.6 (16.8)&1624.3 (191.1)&1634.8 (189.1)&1644.7 (193.0)\\ PAM&\checkmark&-& 1455.1 (18.2) &1407.0 (17.5)&1376.9 (16.7)&1587.4 (185.1)&1589.9 (191.4)&1590.8 (186.8)\\ DRTM&-&\checkmark& 1380.7 (18.5) &1308.6 (17.5)&1253.9 (17.9)&1212.5 (153.2)&1206.1 (148.0)&1201.2 (143.5)\\ DSTM&\checkmark&\checkmark &\bf{1378.7 (16.5)}&\bf{1301.0 (17.9)}&\bf{1247.3 (17.2)}&\bf{1194.2 (148.2)}&\bf{1180.0 (147.0)}&\bf{1171.6 (141.4)}\\ \bottomrule \end{tabular}}}% \vspace{-0.5em} \caption{Means (and standard deviations) of PPLs averaged over all epochs for each dataset with different values of $K$ and $S$. The proposed method, DSTM, achieved the smallest PPL.} \label{compared ppl} \end{table*} \begin{figure*}[t] \centering \includegraphics[width=15cm]{topic_drone_trans.pdf} \vspace{-1em} \caption{Part of the topic structure extracted from \textbf{Drone} dataset using the proposed method. The solid arrows denote the temporal evolution of ``planning'' topics. 
The dotted arrows mean that ``planning'' topics are related to ``hardware'', ``control'', and ``mapping'' topics via some supertopics (filled circles).} \label{topic_trans} \end{figure*} \subsection{Evaluation by Perplexity} First, we evaluate the performance of the proposed method quantitatively using perplexity (PPL): \begingroup\makeatletter\def\f@size{9.5}\check@mathfonts \begin{equation*} \mathrm{PPL}=\exp\left(-\frac{\sum_{d=1}^{D} \sum_{i\in w_d^{\text{test}}}\log p(w_{d,i}\mid\mathcal{M})}{\sum_{d=1}^{D}n_{d}^{\text{test}}}\right), \end{equation*} \endgroup where $\mathcal{M}$ denotes the trained model and $w_d^{\text{test}}$ the held-out tokens of the $d$-th document. For each epoch, we used 90\% of tokens in each document for training and calculated the PPL using the remaining 10\% of tokens. We randomly created 10 train-test pairs and evaluated the means of the PPLs over those random trials. We compared the performance of DSTM to three baselines: LDA \citep{blei2003latent}, PAM \citep{Li2006PAM}, and the proposed model without the static structure, which we term DRTM. See Supplementary~\ref{hypara} for their hyperparameter settings. The means of the PPLs averaged over all epochs for each dataset with different values of $K$ and $S$ are shown in Table~\ref{compared ppl}. In both datasets with every setting of $K$, the proposed model, DSTM, achieved the smallest PPL, which implies its effectiveness for modeling a collection of technical papers. \textcolor{black}{To assess significance, we conducted paired t-tests between the perplexities of the proposed method and those of the baselines. For the differences between DSTM and DRTM, the p-values were $4.2\times10^{-2}$ ($K=30$), $7.9\times10^{-5}$ ($K=40$), and $6.4\times10^{-7}$ ($K=50$) for the \textbf{NIPS} dataset, and $1.3\times10^{-4}$ ($K=15$), $8.8\times10^{-5}$ ($K=20$), and $4.9\times10^{-6}$ ($K=25$) for the \textbf{Drone} dataset, respectively.} It is also noteworthy that DRTM shows a larger improvement over LDA than PAM does.
This suggests that the dynamic structure with multiple-topic dependencies is essential for datasets of this kind. \subsection{Analysis of Extracted Structure} We examined the topic structures extracted from the \textbf{Drone} dataset using DSTM. In Figure~\ref{topic_trans}, we show a part of the extracted structure regarding planning of the UAV's path and/or movement. We identified ``planning'' topics by looking for keywords such as ``trajectory'' and ``motion.'' In Figure~\ref{topic_trans}, each node is labeled with its eight most probable keywords. Moreover, solid arrows (dynamic relations) are drawn if the corresponding $\beta^{t}_{k,k'}$ is larger than 200, and dotted arrows (static relations) are drawn between a supertopic and the subtopics with the two or three largest values of ${}^{2}\alpha^{t}_{s,k}$. Looking at the dynamic structure, we can see how research interest regarding planning has changed. For example, the word ``online'' first emerges in the ``planning'' topic in 2016. This is possibly due to the increasing interest in real-time planning problems, which are becoming feasible owing to the recent development of on-board computers. In regard to the static structures, for example, the ``planning'' topic is related to the ``hardware'' and ``control'' topics in 2013 and 2014, whereas it is also related to the ``mapping'' topic in 2015 and 2016. Looking at these static structures, we can see how research areas relate to each other in each year. In this case, we can infer that planning problems have become closely coupled with mapping problems in recent years. Note that we cannot obtain these results unless the dynamic and static structures are considered simultaneously. \section{Conclusion}\label{concl} In this work, we developed a topic model with dynamic and static structures. We confirmed the superiority of the proposed model over conventional topic models in terms of perplexity and analyzed the topic structures of a collection of papers.
Possible future directions of research include automatic inference of the number of topics and application to topic trend analysis in various domains. \bibliographystyle{acl_natbib}
{ "timestamp": "2018-05-08T02:11:22", "yymm": "1805", "arxiv_id": "1805.02203", "language": "en", "url": "https://arxiv.org/abs/1805.02203" }
\section{Introduction} Multi-dimensional Feller processes have been useful for modeling the evolution of dynamical systems that are spatially inhomogeneous. These processes have been important models in finance and physics \cite{Bottcher2010}. Of particular interest is the study of the dependence between the marginal processes. Some different notions of positive dependence include association (A), positive supermodular association (PSA), positive supermodular dependence (PSD), and positive orthant dependence (POD). If a process exhibits a certain notion of positive dependence between the marginals, then one can better study the evolution of the process. \par It is known that L\'{e}vy\text{ } processes in $\mathbb{R}^d$ can be characterized by their characteristic triplet $(b, \Sigma, \nu)$, where $b\in\mathbb{R}^d$ is the non-random linear drift, $\Sigma$ is the covariance matrix of the (continuous) Brownian motion, and $\nu$ is the L\'{e}vy\text{ } measure which characterizes the jump behavior of the process. Feller processes have behavior that is ``locally L\'{e}vy,'' i.e. for a Feller process $(X_t^x)_{t\geq0}$ that starts at point $x$ ($X_0^x = x$ a.s.), there exists a L\'{e}vy\text{ } process $(Y_t)_{t\geq0}$ such that, in short time, the behavior of $(X_t^x)_{t\geq0}$ can be approximated by the behavior of $(Y_t + x)_{t\geq0}$ \cite[p.46]{Schilling2013}. This idea is related to the notion that, if the domain $\mathcal{D}(\mathcal{A})$ is ``rich'', i.e. contains $C_c^\infty(\mathbb{R}^d)$, the space of smooth functions with compact support, then the Feller process can be described by a characteristic triplet $(b(x), \Sigma(x), \nu(x,dy))$, where the function $b:\mathbb{R}^d\rightarrow\mathbb{R}^d$ represents the non-random drift component, $\Sigma:\mathbb{R}^d\rightarrow\mathbb{R}^{d\times d}$ represents the continuous diffusion-like behavior, and $x\mapsto\nu(x,dy)$ is a measurable kernel representing the jump behavior of the process.
Unlike the L\'{e}vy\text{ } process, the Feller process' triplet has dependence on $x$, the state variable of the process, representing its spatial inhomogeneity. It is these triplets through which we will characterize the different notions of positive dependence. \par Association, the strongest form of positive dependence that we will examine, has been well-studied for infinitely divisible distributions. Infinitely divisible random vectors $X$ also have a characteristic triplet $(b, \Sigma, \nu)$ by the famous L\'{e}vy-Khintchine formula, where $b$ represents the non-random component, $\Sigma$ is the covariance of the Gaussian component, and $\nu$ is the L\'{e}vy\text{ } measure of the Poissonian component. Pitt (1982) characterized association for Gaussian distributions $(b,\Sigma,0)$ under the condition that the entries $\Sigma_{ij}$ of $\Sigma$ are non-negative \cite{Pitt1982}. Resnick (1988) proved that a sufficient condition for association of Poissonian distributions $(0,0,\nu)$ is that $\nu$ be concentrated on the positive and negative orthants $\mathbb{R}_+^d$ and $\mathbb{R}_-^d$ \cite{Resnick1988}, i.e. \begin{equation} \label{resnick} \nu((\mathbb{R}_+^d \cup \mathbb{R}_-^d)^c) = 0. \end{equation} These results lead to the characterization of association between the marginal processes of a L\'{e}vy\text{ } process, since, for a L\'{e}vy\text{ } process $Y = (Y_t)_{t\geq0}$, $Y_t$ is infinitely divisible for each $t\geq0$, and the process can be described by its characteristic triplet $(b,\Sigma,\nu)$. Herbst and Pitt (1991) extended Pitt's result in \cite{Pitt1982} to Brownian motion with covariance matrix $\Sigma$ \cite{Herbst1991}. For jump-L\'{e}vy\text{ } processes $Y\sim (0,0,\nu)$, Samorodnitsky (1995) showed that condition (\ref{resnick}) is a sufficient and necessary condition for the association of each $Y_t$ \cite{Samorodnitsky1995}. This result was also proven by Houdr\'{e} et al. (1998) using a covariance identity \cite{Houdre1998}. B\"{a}uerle et al.
(2008) extended Samorodnitsky's results for jump-L\'{e}vy\text{ } processes to association in time, and showed that condition (\ref{resnick}) is also equivalent to PSD and POD \cite{Bauerle2008}. Liggett (1985) proved a necessary and sufficient condition for association of stochastically monotone Markov processes on compact state spaces based on the generator of the process \cite{Liggett1985}. Szekli (1995) and R\"{u}schendorf (2008) extended this result to more general state spaces \cite[Ch.3.7]{Szekli1995}, \cite[Cor.3.1]{Ruschendorf2008}. R\"{u}schendorf also extended the Liggett condition for PSA of the Markov process \cite[Cor.3.4]{Ruschendorf2008}. \par In this paper, we want to characterize various forms of positive dependence for stochastically monotone Feller processes. These forms of dependence include association, weak association (WA), PSA, PSD, POD, positive upper orthant dependence (PUOD), and positive lower orthant dependence (PLOD). The association of diffusion processes, i.e. $(b(x), \Sigma(x),0)$, has been characterized by Chen \cite{Chen1993}, so we will focus only on jump-Feller processes $(b(x),0,\nu(x,dy))$. Association of jump-Feller processes was established by Wang (2009) \cite[Thm.1.4]{Wang2009}, but only under certain continuity and integrability conditions on the characteristic triplet (see Remark \ref{rem:jmwang}). Here, we will relax those conditions, allowing us to consider a larger class of Feller processes. Additionally, we characterize WA, PSA, PSD, POD, PUOD, and PLOD for jump-Feller processes. Our techniques extend the ideas of Liggett, Szekli, and R\"{u}schendorf to the extended generator of the process, an integro-differential operator. We use ideas of the probabilistic symbol $p(x,\xi)$ of the process developed by Jacob and Schilling \cite[p.57-58]{Schilling2013}.
Furthermore, for proving the necessary conditions for association, WA, PSA, PSD, POD, PUOD, and PLOD, we use the technique of small-time asymptotics of the Feller process \cite{KuhnLaplace2016}, which allows us to bypass the (extended) generator and work solely with the state-space-dependent L\'{e}vy\text{ } measure $\nu(x,dy)$. Finally, we provide examples of Feller processes satisfying the conditions of our main results. \par In a concurrent paper of ours, titled ``Association and other forms of positive dependence for Feller evolution systems'' \cite{Tu2018b}, we characterize dependence structures for Feller evolution processes (FEPs), which are time-inhomogeneous Markov processes having strongly continuous Markov evolutions and L\'{e}vy-type behavior. These FEPs are more general than the (time-homogeneous) Feller processes in this paper, but we need the results of this paper in order to characterize dependence structures of FEPs. We utilize B\"{o}ttcher's transformation of time-inhomogeneous FEPs into time-homogeneous Feller processes (see \cite{Bottcher2013}) and, in a non-trivial way, apply our results in this paper to prove characterizations of positive dependence for FEPs. This yields positive dependence characterizations for interesting time-inhomogeneous processes, like additive processes. For a more comprehensive overview of time-inhomogeneous Markov processes, we recommend the reader explore the paper by R\"{u}schendorf et al. \cite{Ruschendorf2016}, which also discusses comparison theorems for such processes. \par The present paper is organized in the following way. In Section \ref{sec:background}, we give some background on the positive dependence structures, association, WA, PSA, PSD, POD, PUOD, and PLOD, along with definitions of various stochastic orderings. We also provide background on L\'{e}vy\text{ } processes, Feller processes, and the different tools we use to analyze them.
In Section \ref{sec:mainresults}, we state and prove our main results about the positive dependence structures of jump-Feller processes. Finally, in Section \ref{sec:examples}, we give a collection of interesting examples of multi-dimensional Feller processes to which we can apply these results. \section{Background} \label{sec:background} \subsection{Dependence and stochastic orderings} Let $X = (X_1,...,X_d)$ be a random vector in $\mathbb{R}^d$. We say $X$ is \textbf{positively correlated (PC)} if $\mathrm{Cov}(X_i,X_j)\geq0$ for all $i,j\in\{1,...,d\}$. This is one of the weakest forms of positive dependence, and we are interested in stronger forms of positive dependence, which will be of greater use in our study of stochastic processes. \textit{Association} is the strongest form of positive dependence that we will study. \begin{definition} \label{def:assoc} {\rm $X=(X_1,...,X_d)$ is \textbf{associated (A)} if we have $$\mathrm{Cov}(f(X), g(X)) \geq0,$$ for all $f,g:\mathbb{R}^d\rightarrow\mathbb{R}$ non-decreasing in each component, such that $\mathrm{Cov}(f(X),g(X))$ exists. } \end{definition} We will also study other forms of positive dependence that are weaker than association, but stronger than positive correlation. We list them below. \begin{definition} \label{def:WA} {\rm A random vector $X=(X_1,...,X_d)$ is \textbf{weakly associated (WA)} if, for any pair of disjoint subsets $I,J\subseteq\{1,...,d\}$, with $|I| = k$, $|J|=n$, \begin{equation*} \label{WA} \mathrm{Cov}(f(X_I), g(X_J))\geq0, \end{equation*} where $X_I := (X_i:i\in I)$, $X_J := (X_j:j\in J)$, for any $f:\mathbb{R}^k\rightarrow\mathbb{R}$, $g:\mathbb{R}^{n}\rightarrow\mathbb{R}$ non-decreasing, such that $\mathrm{Cov}(f(X_I), g(X_J))$ exists.
} \end{definition} \begin{definition} \label{def:psa} {\rm $X$ is \textbf{positive supermodular associated (PSA)} if $\mathrm{Cov}(f(X), g(X))\geq0$ for all $f,g\in \mathcal{F}_{ism} := \{h:\mathbb{R}^d\rightarrow\mathbb{R}, \text{ non-decreasing, supermodular}\}$. $f$ \textbf{supermodular} means, for all $x,y\in\mathbb{R}^d$, $f(x\wedge y) + f(x\vee y) \geq f(x) + f(y),$ where $x\wedge y$ is the component-wise minimum, and $x\vee y$ is the component-wise maximum. } \end{definition} Now let $\hat{X} = (\hat{X}_1,...,\hat{X}_d)$ be a random vector such that for all $i$, $\hat{X}_i \stackrel{d}= X_i$ and $\hat{X}_i$'s are mutually independent. \begin{definition} \label{def:psd} {\rm $X$ is \textbf{positive supermodular dependent (PSD)} if, for all $f:\mathbb{R}^d\rightarrow\mathbb{R}$ supermodular, $\mathbbm{E} f(\hat{X}) \leq \mathbbm{E} f(X)$. } \end{definition} \begin{definition} \label{def:puod} {\rm $X$ is \textbf{positive upper orthant dependent (PUOD)} if for all $t_1,...,t_d\in\mathbb{R}$, \begin{align*} \mathbbm{P}(X_1 >t_1,...,X_d >t_d) \geq \mathbbm{P}(X_1>t_1)...\mathbbm{P}(X_d>t_d). \end{align*} } \end{definition} \begin{definition} \label{def:plod} {\rm $X$ is \textbf{positive lower orthant dependent (PLOD)} if for all $t_1,...,t_d\in\mathbb{R}$, \begin{align*} \mathbbm{P}(X_1 \leq t_1,...,X_d \leq t_d) \geq \mathbbm{P}(X_1\leq t_1)...\mathbbm{P}(X_d\leq t_d). \end{align*} } \end{definition} \begin{definition} \label{def:pod} {\rm $X$ is \textbf{positive orthant dependent (POD)} if $X$ is PUOD and PLOD. \par One can also state another equivalent definition to PUOD (PLOD). For $i=1,...,d$, let $f_i:\mathbb{R}\rightarrow\mathbb{R}_+$ be non-decreasing (non-increasing) functions. 
Then $X=(X_1,...,X_d)$ is \textbf{PUOD} (\textbf{PLOD}) if and only if \begin{center} $\mathbbm{E} \left(\prod_{i=1}^d f_i (X_i) \right) \geq \prod_{i=1}^d \mathbbm{E} f_i (X_i).$ \end{center} } \end{definition} \noindent \underline{Note}: Definition \ref{def:assoc} first appeared in Esary et al. \cite{Esary1967}, Definition \ref{def:WA} in Burton et al. \cite{Burton1986}, Definition \ref{def:psa} in R\"{u}schendorf \cite[p.284]{Ruschendorf2008}, Definition \ref{def:psd} in Hu \cite{Hu2000}, and Definitions \ref{def:puod}-\ref{def:pod} in Lehmann \cite{Lehmann1966}. Definitions \ref{def:psd}-\ref{def:pod} can also be stated in terms of stochastic orderings. For more on this, we refer the reader to M\"{u}ller and Stoyan's book \cite[Ch.3]{Mueller2002}. It is useful to see the relationship between these different forms of positive dependence. We state the relationships in the following proposition. \begin{proposition} \label{propdepmap} The implications in Figure \ref{fig:prop2_1} hold. \begin{center} \begin{figure}[h] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, scale=.75]{prop2_1.png}\\ \caption{Implication map of various positive dependence structures}\label{fig:prop2_1} \end{figure} \end{center} \vspace{-.6cm} \end{proposition} \begin{proof} Proofs for these implications can be found in M\"{u}ller and Stoyan's book \cite[Ch.3]{Mueller2002}, and implications involving PSD can be found in \cite{Christofides2004}. \end{proof} These notions of dependence can be extended from random vectors to stochastic processes. Let $X = (X_t)_{t\geq0}$ be a stochastic process in $\mathbb{R}^d$. \begin{definition} \label{def:assocspacetime} {\rm (a) Process $X$ is \textbf{associated in space} or \textbf{spatially associated} if, for every $t\geq0$, the random vector $X_t = (X_t^{(1)},...,X_t^{(d)})$ is associated. \par (b) Process $X$ is \textbf{associated in time} or \textbf{temporally associated} if, for all \\$0\leq t_1< ...
<t_n$, the random vector $(X_{t_1},...,X_{t_n})$ in $\mathbb{R}^{dn}$ is associated.} \end{definition} \begin{remark} {\rm \begin{enumerate}[noitemsep, label = (\roman*)] \item Clearly, (b) is stronger than (a) in the above definition. \item We can define other forms of positive dependence in stochastic processes if we replace ``associated" in Definitions \ref{def:assocspacetime} (a) and (b) with ``WA," ``PSA," ``PSD," ``POD," ``PUOD," ``PLOD." \item Definition (a) is equivalent to the statement that the ``process preserves positive correlations," as given in \cite[p.80]{Liggett1985} and \cite{Chen1993}. \end{enumerate} } \end{remark} \subsection{Feller processes, extended generators, small-time asymptotics} \subsubsection{Feller process} Consider a time-homogeneous Markov process $X = (X_t)_{t\geq0}$ on the space $(\Omega, \mathcal{G}, (\mathcal{G}_t)_{t\geq0}, \mathbbm{P}^x)_{x\in\mathbb{R}^d}$ with state space $\mathbb{R}^d$. Here $(\mathcal{G}_t)_{t\geq0}$ is the filtration, and the index ``$x$" indicates the starting point of the process: $\mathbbm{P}^x(X_0 = x) = 1$. We associate with a Markov process $X$ a positivity-preserving, contraction semigroup of bounded operators $(T_t)_{t\geq0}$ defined by \begin{center} $T_t f(x) := \mathbbm{E}^x f(X_t), \hspace{.3cm} x\in\mathbb{R}^d,$ \end{center} where $f\in B_b(\mathbb{R}^d)$, the space of bounded measurable functions on $\mathbb{R}^d$. Let $(C_0(\mathbb{R}^d), ||\cdot||_\infty)$ be the Banach space of continuous functions that vanish at infinity, i.e. $\lim_{|x|\rightarrow\infty}f(x) = 0$, where $||\cdot||_\infty$ is the sup-norm. Define $\mathcal{F}_i:=\{f:\mathbb{R}^d \rightarrow\mathbb{R}, \text{ non-decreasing in each component}\}$. A Markov process is \textbf{stochastically monotone} if $T_t f\in\mathcal{F}_i$ for all $f\in \mathcal{F}_i$. 
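\par As a simple illustration of stochastic monotonicity, consider the deterministic drift $X_t = x+tv$ for a fixed $v\in\mathbb{R}^d$, whose semigroup is $T_t f(x) = f(x+tv)$. If $f\in\mathcal{F}_i$ and $x\leq y$ componentwise, then \begin{equation*} T_t f(x) = f(x+tv)\leq f(y+tv) = T_t f(y), \end{equation*} so $T_t f\in\mathcal{F}_i$. The same one-line computation applies to any process with a translation-invariant semigroup, as we will see in Section \ref{sec:examples}. 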
We define the \textbf{generator} $\mathcal{A}$ of the process $X$ to be \begin{equation} \label{generator} \mathcal{A} f := \lim_{t\searrow0}\frac{T_t f - f}{t}, \end{equation} for all $f\in\mathcal{D}(\mathcal{A})$, where $\mathcal{D}(\mathcal{A})$ is the \textbf{domain of the generator} defined to be $$\mathcal{D}(\mathcal{A}) = \{u\in C_0(\mathbb{R}^d): \text{ limit on RHS of (\ref{generator}) exists uniformly}\}.$$ The Markov process is a \textbf{Feller process} if the semigroup $(T_t)_{t\geq0}$ satisfies the following properties: \begin{center} (i) $T_t:C_0(\mathbb{R}^d)\rightarrow C_0(\mathbb{R}^d)$, \hspace{1cm} (ii) $\lim_{t\rightarrow0}||T_tu-u||_{\infty}=0$. \end{center} If, additionally, the domain of the generator contains the smooth functions with compact support, i.e. $\mathcal{D}(\mathcal{A}) \supset C_c^\infty(\mathbb{R}^d)$, we call the process $X$ a \textbf{rich Feller} process. It follows from Courr\`{e}ge's Theorem \cite{Courrege1965} that $-\mathcal{A}$, restricted to $C_c^\infty(\mathbb{R}^d)$, is a pseudo-differential operator $p(x,D)$: $\mathcal{A}|_{C_c^\infty(\mathbb{R}^d)} = -p(x,D)$, where $p(x,D)$ is defined by \begin{equation} \label{pseudo} \mathcal{A} f(x) = -p(x,D) f (x) = (2\pi)^{-d/2} \int_{\mathbb{R}^d} e^{i \xi\cdot x} p(x,\xi) \hat{f}(\xi) d\xi, \hspace{.3cm} f\in C_c^\infty(\mathbb{R}^d). \end{equation} The function $-p(x,\cdot)$ is a continuous negative definite function, in the sense of Schoenberg, for all $x\in\mathbb{R}^d$, which yields a L\'{e}vy-Khintchine representation for each $x$: \begin{equation} \label{symbol} -p(x,\xi) = - i b(x) \cdot \xi + \frac{1}{2} \xi \cdot \Sigma(x) \xi - \int_{\mathbb{R}^d\setminus\{0\}} (e^{i \xi\cdot y} - 1 - i \xi \cdot y\chi(y)) \nu(x,dy), \end{equation} \noindent where $\chi:\mathbb{R}^d\rightarrow\mathbb{R}$ is a \textbf{cut-off function}. In this paper, unless otherwise mentioned, we will assume $\chi(y)=\mathbbm{1}_{(0,1)}(|y|)$. 
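\par We note (a standard observation) that the particular choice of cut-off affects only the drift: if $\tilde{\chi}$ is another cut-off function, bounded with compact support and equal to $1$ in a neighborhood of the origin, then (\ref{symbol}) holds with the same $\Sigma(x)$ and $\nu(x,dy)$, and with drift \begin{equation*} b_{\tilde{\chi}}(x) = b(x) + \int_{\mathbb{R}^d\setminus\{0\}} y\,\big(\tilde{\chi}(y)-\chi(y)\big)\, \nu(x,dy), \end{equation*} the integral converging since $\tilde{\chi}-\chi$ vanishes near the origin and outside a compact set. 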
For each $x$, $(b(x),\Sigma(x),\nu(x,dy))$ is the \textbf{(L\'{e}vy) characteristic triplet}, where $b(x)\in\mathbb{R}^d$, $\Sigma(x) \in \mathbb{R}^{d\times d}$ is a symmetric positive semidefinite matrix, and $\nu(x,dy)$, the \textbf{L\'{e}vy\text{ } measure}, is a $\sigma$-finite measure on $\mathbb{R}^d\setminus\{0\}$ satisfying $\int_{\mathbb{R}^d\setminus\{0\}} (1\wedge |y|^2) \nu(x,dy)<\infty$. We call the function $p(x,\xi)$ the \textbf{symbol} of the process. We also write $X\sim (b(x),\Sigma(x), \nu(x,dy))$ to signify that $X$ is a Feller process with that characteristic triplet. \par When the symbol, and the corresponding triplet, are constant in $x$, i.e. $p(x,\xi) = p(\xi)$ and $(b(x),\Sigma(x),\nu(x,dy)) = (b,\Sigma,\nu)$, the process $X$ is a \textbf{L\'{e}vy\text{ } process}, i.e. a stochastically continuous Markov process with stationary and independent increments. The symbol $p(\xi)$ is then the L\'{e}vy\text{ } symbol of the process, with characteristic function $\phi_{X_t} (\xi) = e^{tp(\xi)}$. In the L\'{e}vy\text{ } case, $b$ is the non-random linear drift, $\Sigma$ is the covariance matrix of the Brownian component, and $\nu$ is a measure representing the jumps of the process. \par Continuous negative definite functions $p(x,\xi)$ which are associated with a Feller process have a form of local boundedness in the first argument: we say the symbol $p(x,\xi)$ is \textbf{locally bounded} if for all compact $K\subset\mathbb{R}^d$, there exists $c_K>0$ such that \begin{equation} \label{locbdd} \sup_{x\in K} |p(x,\xi)| \leq c_K (1 + |\xi|^2). \end{equation} We say the \textbf{symbol is bounded} if (\ref{locbdd}) holds for $K=\mathbb{R}^d$. The local boundedness (or boundedness) of the symbol corresponds to the local boundedness (boundedness) of the characteristics $(b(x), \Sigma(x),\nu(x,dy))$ (see \cite[Lem.2.1]{Schilling1998}). 
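\par For concreteness, two standard examples in the above notation: Brownian motion with drift, $X\sim(b,\Sigma,0)$, has symbol $p(\xi) = ib\cdot\xi - \frac{1}{2}\xi\cdot\Sigma\xi$, while a compound Poisson process with jump intensity $\lambda>0$ and jump distribution $\mu$, i.e. $\nu = \lambda\mu$ and $b = \lambda\int_{0<|y|<1} y\,\mu(dy)$, has symbol \begin{equation*} p(\xi) = \lambda\int_{\mathbb{R}^d\setminus\{0\}} (e^{i\xi\cdot y}-1)\,\mu(dy). \end{equation*} The latter symbol is bounded in the sense of (\ref{locbdd}) with $K=\mathbb{R}^d$, since $|p(\xi)|\leq 2\lambda$; in fact, since a L\'{e}vy\text{ } symbol has no $x$-dependence and every continuous negative definite function grows at most quadratically, every L\'{e}vy\text{ } symbol is bounded in this sense. 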
\subsubsection{Integro-differential operator} For a general rich Feller process, the triplet $(b(x),\Sigma(x), \nu(x,dy))$ characterizes the behavior of the process, with $b(x)$ representing non-random continuous behavior, $\Sigma(x)$ representing the diffusion-like continuous behavior, and $\nu(x,dy)$ representing the jump behavior. One of the crucial tools we will use to analyze the process is the extended generator. For rich Feller processes, substituting (\ref{symbol}) into the right-hand side of (\ref{pseudo}) yields, by elementary Fourier analysis, an \textbf{integro-differential operator} $I(p)$, \begin{equation} \label{integro} I(p) f(x) = b(x) \hspace{-.05cm} \cdot \hspace{-.075cm} \nabla f(x) + \frac{1}{2} \nabla\hspace{-.05cm} \cdot \hspace{-.075cm} \Sigma(x)\nabla f(x) +\int_{y\neq0} \left(f(x+y)-f(x) - y\hspace{-.05cm} \cdot \hspace{-.075cm} \nabla f(x) \chi(y) \right)\nu(x,dy) \end{equation} where $\nabla\cdot \Sigma(x)\nabla f(x) = \sum_{j,k=1}^d \Sigma_{jk}(x)\partial_j\partial_k f(x)$. The operator $I(p)$ is well defined on $C_b^2(\mathbb{R}^d)$, the space of bounded, twice continuously differentiable functions with bounded derivatives. When the symbol $p(x,\xi)$ is bounded, $I(p)$ is an extension of $-p(x,D)$: \begin{center} $I(p)|_{C_c^\infty(\mathbb{R}^d)}= -p(x,D) = \mathcal{A}|_{C_c^\infty(\mathbb{R}^d)}$ \end{center} and an extension of the generator $\mathcal{A}$: $I(p)|_{\mathcal{D}(\mathcal{A})}= \mathcal{A}$, as shown by Schilling \cite[Lem.2.3]{Schilling1998}. Our interest in the integro-differential operator $I(p)$ stems from Liggett's characterization of association via the generator. \begin{theorem}[Liggett (1985) \cite{Liggett1985}, p.80] \label{liggettthm} Let $X = (X_t)_{t\geq0}$ be a Feller process on state space $E$ with generator $(\mathcal{A},\mathcal{D}(\mathcal{A}))$ and semigroup $(T_t)_{t\geq0}$. 
If $X$ is stochastically monotone, then \begin{equation} \label{liggett} \mathcal{A} fg \geq g\mathcal{A} f + f\mathcal{A} g, \hspace{.5cm} \forall f,g\in\mathcal{F}_i\cap \mathcal{D}(\mathcal{A}) \end{equation} if and only if $X_t \text{ is associated for all } t\geq0 \text{ wrt } \mathbbm{P}^x \text{ for all } x\in E.$ \end{theorem} Liggett proved this for $E$ compact and $\mathcal{A}$ bounded. This was extended by Szekli and R\"{u}schendorf to more general Polish spaces $E$ and unbounded $\mathcal{A}$ \cite[Ch.3.7]{Szekli1995}, \cite[Cor.3.1]{Ruschendorf2008}. For the Feller processes we consider in the above setting, particularly those of the jump variety, the domain $\mathcal{D}(\mathcal{A})$ is often a dense subspace of $C_0(\mathbb{R}^d)$, and thus $\mathcal{D}(\mathcal{A})\cap \mathcal{F}_i = \{f\equiv 0\}$. Hence, in that case, inequality (\ref{liggett}) holds trivially. Thus, we would like to extend Theorem \ref{liggettthm} to the extended generator $I(p)$. \subsubsection{Small-time asymptotics} The (extended) generator gives us a connection between the notion of association and the L\'{e}vy\text{ } characteristics $(b(x), \Sigma(x), \nu(x,dy))$ due to the integro-differential representation (\ref{integro}). Thus, to characterize association for Feller processes using the L\'{e}vy\text{ } characteristics, an extension of Theorem \ref{liggettthm} becomes quite useful. However, under weaker conditions on the symbol $p(x,\xi)$, such as local boundedness, it is useful to bypass the generator (as we will show in Section 3) and establish a more direct connection between the L\'{e}vy\text{ } characteristics and the notion of association. We will establish such a connection by looking at small-time asymptotics of a Feller process. Additionally, this notion will allow us to characterize weaker forms of positive dependence in terms of the L\'{e}vy\text{ } characteristics. 
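\par To see concretely what small-time behavior encodes, consider (as a standard warm-up) a compound Poisson process $L_t = \sum_{i=1}^{N_t} Y_i$ with intensity $\lambda$, jump distribution $\mu$, and L\'{e}vy\text{ } measure $\nu = \lambda\mu$. For $A\in\mathcal{B}(\mathbb{R}^d)$ with $0\notin\overline{A}$, \begin{equation*} \frac{1}{t}\,\mathbbm{P}^0(L_t\in A) = \frac{1}{t}\sum_{k=1}^{\infty} e^{-\lambda t}\frac{(\lambda t)^k}{k!}\,\mu^{*k}(A) = e^{-\lambda t}\lambda\,\mu(A) + O(t) \xrightarrow[t\searrow0]{} \lambda\mu(A) = \nu(A), \end{equation*} where the $k=0$ term vanishes because $0\notin A$ and the terms with $k\geq2$ are $O(t)$. To first order in $t$, the process makes at most one jump, distributed according to $\nu$. 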
\par The classical results on small-time asymptotics have been established primarily for L\'{e}vy\text{ } processes. For a given L\'{e}vy\text{ } process $L = (L_t)_{t\geq0}$ it is known that for all $f\in C_c(\mathbb{R}^d\setminus\{0\})$, \begin{equation} \label{levysmalltime} \lim_{t\searrow0} \frac{1}{t} \mathbbm{E}^0 f(L_t) = \int_{\mathbb{R}^d\setminus\{0\}} f(y) \nu(dy). \end{equation} (See \cite[p.2]{KuhnLaplace2016} for reference.) Thus, by the Portmanteau theorem, (\ref{levysmalltime}) implies \begin{center} $\displaystyle\lim_{t\searrow0}\frac{1}{t} \mathbbm{P}^0 (L_t \in A)=\nu(A)$ \end{center} for all $A\in\mathcal{B}(\mathbb{R}^d\setminus\{0\})$ with $0\notin \overline{A}$ and $\nu(\partial A)=0$. This result naturally extends to a general starting point $x$: for every $x\in\mathbb{R}^d$, $\lim_{t\searrow0}\frac{1}{t} \mathbbm{P}^x (L_t -x\in A)=\nu(A)$ by the translation invariance of a L\'{e}vy\text{ } process. Until recently, an analogous statement for Feller processes was not known. However, K\"{u}hn and Schilling (2016) \cite{KuhnLaplace2016} proved such a statement for rich Feller processes. \begin{theorem}[K\"{u}hn, Schilling (2016) \cite{KuhnLaplace2016}, Cor.3.3] \label{kuhn} Let $X = (X_t)_{t\geq0}$ be a rich Feller process with symbol $p(x,\xi)$ and characteristics $(b(x), \Sigma(x), \nu(x,dy))$. If $f\in C_0(\mathbb{R}^d)$ and $f|_{B(0,\delta)} = 0$ for some $\delta>0$, then $$\lim_{t\searrow0} \frac{1}{t} \mathbbm{E}^x f(X_t -x) = \int_{\mathbb{R}^d\setminus\{0\}} f(y) \nu(x,dy).$$ Additionally, by the Portmanteau theorem, \begin{equation*} \label{fellersmalltime} \lim_{t\searrow0}\frac{1}{t} \mathbbm{P}^x(X_t - x \in A) = \nu(x,A) \end{equation*} for all $A\in\mathcal{B}(\mathbb{R}^d\setminus\{0\})$ such that $0\notin \overline{A}$ and $\nu(x,\partial A)=0$. 
\end{theorem} The small-time asymptotics given by Theorem \ref{kuhn} give us a direct connection between the L\'{e}vy\text{ } measure and the Feller process, bypassing the representation of the generator. Also, notice that the result holds for more general, locally bounded symbols. \par Our interest focuses on jump-Feller processes, i.e. $X\sim (b(x), 0, \nu(x,dy))$, since the association of diffusion processes $X\sim (b(x), \Sigma(x), 0)$ was settled by Mu-Fa Chen \cite{Chen1993}. In the following section, we will prove a necessary and sufficient condition for a jump-Feller process to be associated, WA, PSA, PSD, POD, PUOD, and PLOD in space, namely \begin{equation} \label{resnickx} \nu(x, (\mathbb{R}_+^d \cup \mathbb{R}_-^d)^c) = 0, \hspace{1cm} \forall x\in\mathbb{R}^d. \end{equation} \begin{remark} \label{rem:jmwang} {\rm We do note that Jie Ming Wang \cite[Thm.1.4]{Wang2009} proved that spatial association is equivalent to \eqref{resnickx} under certain continuity and integrability conditions (a result unknown to the author at the time). These assumptions include \begin{itemize}[itemsep=0em] \item $b_i,\Sigma_{ij}\in C(\mathbb{R}^d)$, for all $i,j$. \item $\int h_i(z) (\nu(\cdot,dz) - \nu(\cdot, d(-z)))\in C(\mathbb{R}^d)$, where $h:\mathbb{R}^d\rightarrow\mathbb{R}^d$ is defined by $h_i(z) = \text{sgn}(z_i)(1\wedge |z_i|)$. \item $\int_A |h(z)|^2 \nu(\cdot, dz)\in C(\mathbb{R}^d)$ for all $A\in\mathcal{B}(\mathbb{R}^d)$. \item $\int g(z) \nu(\cdot, dz) \in C(\mathbb{R}^d)$ for all $g\in C_b(\mathbb{R}^d)$ that vanish near the origin. \end{itemize} \noindent We relax these conditions, and furthermore our work includes characterizations of the other dependence structures mentioned in Definitions \ref{def:assoc}-\ref{def:pod}. 
} \end{remark} \section{Main results} \label{sec:mainresults} Consider a rich Feller process $X = (X_t)_{t\geq0}$ on the space $(\Omega, \mathcal{G}, (\mathcal{G}_t)_{t\geq0}, \mathbbm{P}^x)_{x\in\mathbb{R}^d}$ with L\'{e}vy\text{ } characteristics $(b(x), 0, \nu(x,dy))$. If we assume that $X$ is stochastically monotone, then condition (\ref{resnickx}) is necessary and sufficient for the process $X$ to be associated, WA, PSA, PSD, POD, PUOD, and PLOD in space. These equivalences are illustrated in the implication map in Figure \ref{fig:impmapres2}. The dashed arrows are the implications we will prove. \begin{figure}[h] \centering \includegraphics[trim = {0cm 0cm 0cm 0cm}, scale=.65]{dep_map_new3.png}\\ \caption{Equivalence of dependencies under condition \eqref{resnickx} for Feller processes}\label{fig:impmapres2} \end{figure} \par To show these equivalences, we first give a proof that, under stochastic monotonicity, condition (\ref{resnickx}) is equivalent to association in space. We show this in Section 3.1. Then in Section 3.2, we show that PUOD in space (and, similarly, PLOD) implies condition (\ref{resnickx}). \subsection{Association is equivalent to condition (\ref{resnickx})} \begin{theorem} \label{resnickxthm} Let $X=(X_t)_{t\geq0}$ be a rich Feller process with stochastically monotone transition semigroup $(T_t)_{t\geq0}$, generator $(\mathcal{A},\mathcal{D}(\mathcal{A}))$, bounded symbol $p(x,\xi)$, and characteristics $(b(x), 0, \nu(x,dy))$. Then $X_t$ is associated for all $t\geq0$ if and only if condition (\ref{resnickx}): $\nu(x, (\mathbb{R}_+^d \cup \mathbb{R}_-^d)^c)=0$ is satisfied. \end{theorem} We prove this by first showing that association of the $X_t$'s is equivalent to a Liggett-type inequality for the extended generator, the statement of which is in the following theorem. 
\begin{theorem} \label{extliggett} Let $X=(X_t)_{t\geq0}$ be a rich Feller process with stochastically monotone transition semigroup $(T_t)_{t\geq0}$, generator $(\mathcal{A},\mathcal{D}(\mathcal{A}))$, bounded symbol $p(x,\xi)$, and (extended) integro-differential operator $I(p)$. Assume $x\mapsto p(x,0)$ is continuous. Then \begin{equation} \label{eq:extliggett} I(p) fg \geq f I(p) g + g I(p) f, \hspace{.5cm} \forall f,g\in C_b^2(\mathbb{R}^d)\cap \mathcal{F}_i \end{equation} \noindent if and only if \begin{equation} \label{semiass} \forall t\geq0, \hspace{.5cm} T_t fg \geq T_t f \cdot T_t g, \hspace{.5cm} \forall f,g\in C_b(\mathbb{R}^d)\cap\mathcal{F}_i. \end{equation} \end{theorem} Inequality (\ref{semiass}) in Theorem \ref{extliggett} is another way to formulate that $X_t$ is associated for all $t\geq0$: it says that, for all $x\in\mathbb{R}^d$, $\mathbbm{E}^x f(X_t) g(X_t) \geq \mathbbm{E}^x f(X_t)\, \mathbbm{E}^x g(X_t)$, i.e. $X_t$ is associated with respect to $\mathbbm{P}^x$. Inequality \eqref{eq:extliggett} intuitively means that the process jumps either up or down: in multidimensional Euclidean space, if the process is currently at a point $x$, then it can only move to another point $y$ with $y\geq x$ or $y\leq x$ componentwise. \par Notice that in Theorem \ref{extliggett}, we use the extended generator $I(p)$. In previous statements of Liggett's characterization, the generator $\mathcal{A}$ is used, but we need to use $I(p)$ for the reasons given in the comments after Theorem \ref{liggettthm}. Hence, it is necessary to establish the Liggett-type inequality as a characterization of association for rich Feller processes. To the author's knowledge, such an extension has not previously appeared in the literature. We first need the following lemmas to prove Theorem \ref{extliggett}. We will often assume Setting \ref{set1} throughout this section. 
\par \begin{setting} \label{set1} {\rm Let $X=(X_t)_{t\geq0}$ be a rich Feller process with semigroup $(T_t)_{t\geq0}$, generator $(\mathcal{A},\mathcal{D}(\mathcal{A}))$, symbol $p(x,\xi)$, (extended) integro-differential operator $I(p)$, and characteristics $(a(x), b(x), \Sigma(x), \nu(x,dy))$, where $b,\Sigma,\nu$ are the same as before, except we have an additional characteristic $a:\mathbb{R}^d \rightarrow \mathbb{R}_+$ which represents the ``killing rate." } \end{setting} \begin{remark} {\rm With the additional characteristic $a(x)$, the function $-p(x,\xi)$ would be $a(x)+\text{ RHS of equation } (\ref{symbol})$. Also, $I(p)f(x)$ would be $- a(x) f(x)+ \text{RHS of equation }(\ref{integro})$. Unless stated otherwise, we will assume that $a(x)\equiv0$. For more on the case when $a(x)\not\equiv0$, see the paper by Schnurr \cite{Schnurr2017}, which discusses processes with $a(x)\not\equiv0$ and their connection to the symbol.} \end{remark} \par \begin{lemma} \label{integrogenerates} Assume Setting \ref{set1} and that $p(x,\xi)$ is bounded. Then $I(p)$ generates the semigroup $(T_t)_{t\geq0}$ locally uniformly, i.e. \begin{equation} \label{generates} I(p) f = \lim_{t\searrow0}\frac{1}{t}(T_t f - f), \hspace{.5cm} f\in C_b^2(\mathbb{R}^d), \end{equation} where the convergence is locally uniform. \end{lemma} For a detailed proof, see the Appendix. \begin{lemma} \label{integroderivative} Assume Setting \ref{set1} and that the symbol $p(x,\xi)$ is bounded. Then for all $f\in C_b^2(\mathbb{R}^d)$, $$\frac{d}{dt} T_t f = I(p)T_t f = T_t I(p) f,$$ where the derivative is defined based on locally uniform convergence. \end{lemma} For a detailed proof, see the Appendix. Finally, we can extend Liggett's solution to a Cauchy problem \cite[Thm.2.15, p.19]{Liggett1985} to integro-differential operators that generate a semigroup locally uniformly. 
\begin{lemma}[Cauchy problem] \label{extcauchy} Let $(\mathcal{A},\mathcal{D}(\mathcal{A}))$ be a (rich) Feller generator of a semigroup $(T_t)_{t\geq0}$ with bounded L\'{e}vy\text{ } characteristics and symbol $p(x,\xi)$. Let $I(p)$ be the extended generator on $C_b^2(\mathbb{R}^d)$. Suppose $F,G:[0,\infty)\rightarrow C_b(\mathbb{R}^d)$ are such that \par\text{ } \noindent (a) $F(t)\in\mathcal{D}(I(p))$ for all $t\geq0$ \\ (b) $G(t)$ is continuous on $[0,\infty)$ (locally uniformly) \\ (c) $F'(t) = I(p) F(t) + G(t)$ for all $t\geq0$. \par\text{ } \noindent Then $\displaystyle F(t) = T_t F(0) + \int_0^t T_{t-s} G(s) ds.$ \end{lemma} For a detailed proof, see the Appendix. We are now ready to prove the main theorems of this section. \\ \\ \textbf{Proof of Theorem \ref{extliggett}} \begin{proof} $(\Leftarrow)$ Assume $T_t fg \geq T_t f \hspace{.1cm}T_t g$ for all $f,g\in C_b^2(\mathbb{R}^d)\cap \mathcal{F}_i$. This implies \begin{align*} T_t fg -fg &\geq T_tf \hspace{.1cm} T_tg -fg =T_tf \hspace{.1cm}T_tg -fg + g \hspace{.1cm}T_t f - g \hspace{.1cm}T_t f= T_t f[T_t g- g] + g[T_t f-f]. \end{align*} Hence, for all $t>0$, $\displaystyle\frac{1}{t} (T_t fg - fg) \geq T_t f \hspace{.1cm}\frac{T_t g- g}{t} + g\hspace{.1cm} \frac{T_t f-f}{t}$. Therefore, \begin{align*} I(p) fg = \lim_{t\searrow0} \frac{1}{t} (T_t fg - fg) & \geq \lim_{t\searrow0} \left(T_t f \hspace{.1cm}\frac{T_t g- g}{t} + g\hspace{.1cm} \frac{T_t f-f}{t}\right) \\ & = \left(\lim_{t\searrow0} T_t f \right) \hspace{.1cm} \left(\lim_{t\searrow0} \frac{T_t g- g}{t}\right) + g\hspace{.1cm} \left(\lim_{t\searrow0} \frac{T_t f-f}{t}\right) \\ & = f I(p) g + g I(p) f, \end{align*} where the convergence is locally uniform. \\ \\ $(\Rightarrow)$ Assume $I(p) fg \geq f I(p) g + gI(p) f$ for all $f,g\in C_b^2(\mathbb{R}^d)\cap \mathcal{F}_i$. 
By monotonicity, $T_t f, T_t g \in C_b^2(\mathbb{R}^d)\cap \mathcal{F}_i$, which implies \begin{equation} \label{eq:liggettwithsemi} I(p) (T_t f) (T_t g) \geq T_tf [I(p)T_t g] + T_t g [I(p) T_tf]. \end{equation} Define $F(t) := T_t fg - T_t f \hspace{.1cm}T_t g$. Then by Lemma \ref{integroderivative}, we have \begin{align*} F'(t) = I(p) T_t fg - (T_t f [I(p) T_t g] + T_t g [I(p) T_t f]) &\geq I(p) T_t fg - (I(p) T_t f \hspace{.1cm} T_tg) \\ & = I(p) (T_t fg - T_t f\hspace{.1cm} T_tg) \\ & = I(p) F(t) \end{align*} where the inequality comes from \eqref{eq:liggettwithsemi}. Define $G(t) := F'(t) - I(p) F(t)\geq0$. Then by Lemma \ref{extcauchy}, the solution to the Cauchy problem $F'(t) = G(t) + I(p) F(t)$ is given by $$F(t) = T_t F(0) + \int_0^t T_{t-s} G(s) ds = \int_0^t T_{t-s} G(s) ds$$ since $F(0)=0$. Since $G(s)\geq0$ for all $s$, and $T_{t-s}$ is a positivity-preserving linear operator, $F(t)\geq0$ for all $t\geq0$. Thus, $T_t fg \geq T_t f \cdot T_t g$ for all $f,g\in C_b^2(\mathbb{R}^d)\cap\mathcal{F}_i$. This inequality also holds for all $f,g\in C_b(\mathbb{R}^d)\cap \mathcal{F}_i$, since we can approximate non-decreasing, continuous, bounded functions by non-decreasing smooth, bounded functions, and then use a dominated convergence argument. \end{proof} \begin{remark} {\rm For the necessary condition, we did not need stochastic monotonicity.} \end{remark} \noindent \textbf{Proof of Theorem \ref{resnickxthm}} \begin{proof} $(\Leftarrow)$. Fix $x\in\mathbb{R}^d$. Assume $\nu(x,(\mathbb{R}_+^d\cup\mathbb{R}_-^d)^c)=0$. 
Then, for all $f,g\in C_b^2(\mathbb{R}^d)\cap\mathcal{F}_i$, \begin{align*} & I(p) fg(x) - g(x) I(p) f(x) - f(x)I(p) g(x) \\ & = b(x)\cdot \nabla fg(x)+ \int_{y\neq0} \left(f(x+y) g(x+y)- f(x)g(x) - y\cdot \nabla fg(x)\mathbbm{1}_{(0,1)}(|y|)\right)\nu(x,dy) \\ & \hspace{.2cm} -b(x)\cdot g(x)\nabla f(x) - \int_{y\neq0} \left(f(x+y) g(x)- f(x)g(x) - y\cdot g(x) \nabla f(x)\mathbbm{1}_{(0,1)}(|y|) \right)\nu(x,dy) \\ & \hspace{.2cm} -b(x)\cdot f(x) \nabla g(x)- \int_{y\neq0} \left(f(x) g(x+y)- f(x)g(x) -y\cdot f(x)\nabla g(x) \mathbbm{1}_{(0,1)}(|y|) \right)\nu(x,dy) \\ & = \int_{y\neq0} \left(f(x+y)g(x+y) - f(x+y) g(x) - f(x)g(x+y) +f(x)g(x) \right)\nu(x,dy) \\ & = \int_{y\neq0} \left(f(x+y)- f(x)\right)\left(g(x+y) - g(x)\right)\nu(x,dy) \\ & = \int_{\mathbb{R}_+^d} \left(f(x+y)- f(x)\right)\left(g(x+y) - g(x)\right)\nu(x,dy) \\ & \hspace{.2cm} + \int_{\mathbb{R}_-^d} \left(f(x+y)- f(x)\right)\left(g(x+y) - g(x)\right)\nu(x,dy) \\ & \geq0, \end{align*} where the drift terms and the cut-off terms in the integrands vanish because $\nabla fg(x) = f(x) \nabla g(x) + g(x) \nabla f(x)$. The final inequality holds because for all $y\in\mathbb{R}_+^d$, $f(x+y) - f(x)\geq0$ and $g(x+y)-g(x)\geq0$, so $(f(x+y) - f(x))(g(x+y) - g(x))\geq0$ on $\mathbb{R}_+^d$; a similar argument applies on $\mathbb{R}_-^d$. By Theorem \ref{extliggett}, this implies $T_t fg(x) \geq T_t f(x) T_tg(x)$ for $f,g\in C_b^2(\mathbb{R}^d)\cap \mathcal{F}_i$. Now, to obtain association of $X_t$, this inequality needs to hold for all $f,g\in C_b(\mathbb{R}^d)\cap \mathcal{F}_i$. But we can approximate a function $f\in C_b(\mathbb{R}^d)\cap \mathcal{F}_i$ by $f_n\in C_b^\infty(\mathbb{R}^d)\cap \mathcal{F}_i$, which gives us the desired result. \par $(\Rightarrow)$. Assume $X_t$ is associated for all $t\geq0$. This means $T_t fg(x) \geq T_tf(x) T_tg(x)$ for all $x\in\mathbb{R}^d$, for all $f,g\in C_b(\mathbb{R}^d)\cap \mathcal{F}_i$. 
In particular, this inequality holds for $f,g\in C_b^2(\mathbb{R}^d)\cap \mathcal{F}_i$, which yields $I(p)fg \geq g I(p) f+ f I(p) g$ for such $f,g$ by Theorem \ref{extliggett}. This implies, by a calculation similar to that in the $(\Leftarrow)$ direction, that $$\int_{y\neq0} (f(x+y)-f(x))(g(x+y)-g(x))\nu(x,dy)\geq0.$$ For simplicity, assume $d=2$; the result generalizes easily to higher dimensions using correction functions. Fix $x=(x_1,x_2)\in\mathbb{R}^2$. Assume for contradiction that Resnick's condition is not satisfied. WLOG, say $\nu(x,(0,\infty)\times (-\infty,0))>0$. By continuity of measure, $\exists a>0$ such that $\nu(x,(a,\infty)\times (-\infty,-a))>0$. Let $\epsilon\in(0,1)$, and define $f,g\in C_b^\infty(\mathbb{R}^2)\cap \mathcal{F}_i$ by \begin{equation*} f(y_1,y_2)= \begin{cases} 0 & \textrm{ if \hspace{.1cm} $y_1\leq x_1+\epsilon a$} \\ 1 & \textrm{ if \hspace{.1cm} $y_1\geq x_1+ a,$} \\ \end{cases} \quad \quad \quad g(y_1,y_2)= \begin{cases} 0 & \textrm{ if \hspace{.1cm} $y_2\geq x_2-\epsilon a$} \\ -1 & \textrm{ if \hspace{.1cm} $y_2\leq x_2-a.$} \\ \end{cases} \end{equation*} \noindent This implies $f(x)=g(x)=0$. Hence, \begin{align*} 0 & \leq \int_{y\neq0} (f(x+y)- f(x))(g(x+y)-g(x)) \nu(x,dy) \\ & = \int_{y\neq0} f(x+y)g(x+y) \nu(x,dy) \\& = \int_{(a,\infty)\times (-\infty,-a)} f(x+y)g(x+y) \nu(x,dy) + \int_{ (a,\infty)\times [-a,-\epsilon a]} f(x+y)g(x+y) \nu(x,dy) \\ & \hspace{.2cm} + \int_{ [\epsilon a,a] \times (-\infty,-a)} f(x+y)g(x+y) \nu(x,dy) + \int_{ [\epsilon a,a] \times [-a,-\epsilon a]} f(x+y)g(x+y) \nu(x,dy) \\ & = -\nu(x, (a,\infty)\times (-\infty,-a)) + \int_{(a,\infty)\times [-a,-\epsilon a]} g(x+y) \nu(x,dy) \\ & \hspace{.2cm} -\int_{ [\epsilon a,a] \times (-\infty,-a)} f(x+y)\nu(x,dy) + \int_{[\epsilon a,a] \times [-a,-\epsilon a]} f(x+y)g(x+y) \nu(x,dy) \\ &\leq -\nu(x, (a,\infty)\times (-\infty,-a)), \end{align*} where the last inequality holds because $0\leq f\leq1$ and $-1\leq g\leq0$, so the last three integrals are non-positive; this implies $\nu(x,(a,\infty)\times (-\infty,-a)) \leq0$. 
Hence, $\nu(x,(a,\infty)\times (-\infty,-a)) =0$, a contradiction. \end{proof} \subsection{PUOD implies condition (\ref{resnickx})} \begin{lemma} \label{PODsub} If $Y= (Y_1,...,Y_d)$ is PUOD, then $(Y_{k_1},...,Y_{k_n})$ is PUOD for all multi-indices $\{k_j\}_{j=1}^n\subset \{1,...,d\}$. \end{lemma} \begin{proof} If $Y$ is PUOD, then we know $\mathbbm{E} \left(\prod_{i=1}^d f_i (Y_i) \right) \geq \prod_{i=1}^d \mathbbm{E} f_i (Y_i)$ for all non-decreasing $f_i:\mathbb{R}\rightarrow\mathbb{R}_+$. So for all $i\in\{1,...,d\} \setminus \{k_j\}_{j=1}^n$, set $f_i = \mathbbm{1}_{\mathbb{R}}$. Then the above inequality becomes \begin{center} $\mathbbm{E} \left(\prod_{j=1}^n f_j (Y_{k_j}) \right) \geq \prod_{j=1}^n \mathbbm{E} f_j (Y_{k_j}).$ \end{center} Thus, $(Y_{k_1},...,Y_{k_n})$ is PUOD. \end{proof} \begin{theorem} \label{PODnec} Let $X = (X_t)_{t\geq0}$ be a rich Feller process with symbol $p(x,\xi)$ and triplet $(b(x), 0, \nu(x,dy))$. Then, if $X_t$ is PUOD for each $t\geq0$, condition (\ref{resnickx}) holds: \begin{center} $\nu(x,(\mathbb{R}_+^d\cup \mathbb{R}_-^d)^c)=0.$ \end{center} \end{theorem} \begin{proof} Assume $X_t$ is PUOD (wrt $\mathbbm{P}^x$) for each $t\geq0$. Fix $x=(x_1,...,x_d)\in\mathbb{R}^d$. Since $X_t$ is PUOD, $X_t-x$ is PUOD for all $t\geq0$. Assume for contradiction that $\nu$ is not concentrated on $\mathbb{R}_+^d\cup \mathbb{R}_-^d$. WLOG, say $\nu(x,(0,\infty)^{d-1} \times (-\infty,0))>0$. 
By continuity of measure, there exists $a>0$ such that \begin{center} $\nu(x,(a,\infty)^{d-1}\times (-\infty,-a))>0$ \end{center} and \begin{center} $\nu(x, \partial[(a,\infty)^{d-1}\times (-\infty,-a)])= \nu(x,\partial[(a,\infty)\times \mathbb{R}^{d-1}])=0.$ \end{center} Then by Theorem \ref{kuhn}, \begin{center}$\lim_{t\rightarrow0} \frac{1}{t} \mathbbm{P}^x( X_t - x\in (a,\infty)^{d-1} \times (-\infty,-a)) = \nu(x, (a,\infty)^{d-1} \times (-\infty,-a)).$\end{center} Hence, \begin{align*} 0&<\nu(x, (a,\infty)^{d-1}\times (-\infty,-a)) \\ & = \lim_{t\rightarrow0}\frac{1}{t} \mathbbm{P}^x(X_t - x \in (a,\infty)^{d-1}\times (-\infty,-a)) \\ & = \lim_{t\rightarrow0}\frac{1}{t} \mathbbm{P}^x(X_t^{(1)} - x_1>a ,...,X_t^{(d-1)} - x_{d-1}>a, X_t^{(d)} - x_d <-a) \\ & \leq \lim_{t\rightarrow0}\frac{1}{t} \mathbbm{P}^x(X_t^{(1)} - x_1>a ,...,X_t^{(d-1)} - x_{d-1}>a, X_t^{(d)} - x_d \leq-a) \\ & = \lim_{t\rightarrow0}\frac{1}{t} \mathbbm{P}^x( \{X_t^{(1)} - x_1>a\}\setminus [\{X_t^{(1)} - x_1>a\} \cap \{X_t^{(2)} - x_2 >a,..., X_t^{(d)} - x_d \leq-a\}^c]) \\ & = \lim_{t\rightarrow0}\frac{1}{t} \left[ \mathbbm{P}^x( X_t^{(1)} - x_1>a) \right. \\ & \hspace{1cm} \left. - \mathbbm{P}^x (\{X_t^{(1)} - x_1>a\} \cap \{X_t^{(2)} - x_2 >a,..., X_t^{(d)} - x_d \leq-a\}^c ) \right] \\ & = \lim_{t\rightarrow0}\frac{1}{t} \left[\mathbbm{P}^x( X_t^{(1)} - x_1>a) \right. \\ & \hspace{1cm} - \left. \mathbbm{P}^x(\{X_t^{(1)} - x_1>a\} \cap [ \{X_t^{(2)} - x_2 \leq a\}\cup...\cup \{X_t^{(d)} - x_d >-a\}]) \right] \\ & = \lim_{t\rightarrow0}\frac{1}{t} \left[ \mathbbm{P}^x(X_t^{(1)} - x_1>a) \right. \\ & \hspace{1cm} - \left. \mathbbm{P}^x( \{X_t^{(1)} - x_1>a, X_t^{(2)} - x_2 \leq a\}\cup...\cup \{X_t^{(1)} - x_1>a, X_t^{(d)} - x_d >-a\}) \right] \\ & \leq \lim_{t\rightarrow0}\frac{1}{t} \left[ \mathbbm{P}^x(X_t^{(1)} - x_1>a) - \mathbbm{P}^x( X_t^{(1)} - x_1>a, X_t^{(d)} - x_d >-a) \right] \\ & \leq \lim_{t\rightarrow0}\frac{1}{t} \left[ \mathbbm{P}^x(X_t^{(1)} - x_1>a) - \mathbbm{P}^x( X_t^{(1)} - x_1>a)\mathbbm{P}^x(X_t^{(d)} - x_d >-a) \right] \\ & = \lim_{t\rightarrow0}\frac{1}{t} \left[ \mathbbm{P}^x(X_t^{(1)} - x_1>a)(1 - \mathbbm{P}^x(X_t^{(d)} - x_d >-a))\right] \\ & = \lim_{t\rightarrow0}\frac{1}{t} \mathbbm{P}^x(X_t^{(1)} - x_1>a)\mathbbm{P}^x(X_t^{(d)} - x_d \leq -a) \\ & = \left[\lim_{t\rightarrow0}\frac{1}{t} \mathbbm{P}^x(X_t^{(1)} - x_1>a)\right] \left[\lim_{t\rightarrow0}\mathbbm{P}^x(X_t^{(d)} - x_d \leq -a)\right] \\ & = \nu(x, (a,\infty)\times \mathbb{R}^{d-1}) \mathbbm{P}^x( X_0^{(d)} - x_d \leq -a) \\ & = 0. \end{align*} We obtain lines 4 and 9 by set containment, line 5 by the identity $A\cap B = A\setminus (A\cap B^c)$, line 10 by Lemma \ref{PODsub}, and line 14 by Theorem \ref{kuhn}. This contradiction gives us the desired result. \end{proof} \begin{remark} {\rm \begin{enumerate}[noitemsep, label=(\roman*)] \item We could also have shown that PLOD implies condition \eqref{resnickx} using similar techniques to those above. \item The symbol $p(x,\xi)$ in the above theorem need not be bounded, only locally bounded. \end{enumerate} } \end{remark} \begin{corollary} Let $X\sim (b(x), 0, \nu(x,dy))$ be a stochastically monotone jump-Feller process with bounded symbol $p(x,\xi)$. Then condition (\ref{resnickx}), $\nu(x,(\mathbb{R}_+^d\cup\mathbb{R}_-^d)^c)=0$, is equivalent to $X$ being associated, WA, PSA, PSD, POD, PUOD, and PLOD in space. \end{corollary} \begin{proof} True by Theorems \ref{resnickxthm} and \ref{PODnec}. \end{proof} \subsection{Association in time} Our results can also be applied to study the temporal association of Feller processes. 
We first examine the case of L\'{e}vy\text{ } processes, a sub-class of Feller processes with constant characteristic triplet $(b,\Sigma,\nu)$. For L\'{e}vy\text{ } processes, spatial association is equivalent to temporal association. \begin{theorem} \label{levytime} Let $X=(X_t)_{t\geq0}$ be a stochastic process in $\mathbb{R}^d$ with independent and stationary increments, i.e. $X_t - X_s \independent X_s - X_r$, for all $0\leq r<s<t$, and $X_t-X_s \stackrel{d}=X_{t-s}$ for all $0\leq s<t$. Then $X$ is associated in time if and only if $X$ is associated in space. \end{theorem} \begin{proof} The forward direction is trivial by definition. We only need to prove the backward direction. Assume $X_t$ is associated in $\mathbb{R}^d$ for every $t\geq0$. Choose $0\leq t_1<...<t_n$. Then \begin{align*} (X_{t_1},...,X_{t_n}) & = (X_{t_1}, X_{t_1} + (X_{t_2} - X_{t_1}), ..., X_{t_1} + (X_{t_2} - X_{t_1}) +...+(X_{t_n} - X_{t_{n-1}})) \\ & = (X_{t_1},...,X_{t_1}) + (0, X_{t_2} - X_{t_1},...,X_{t_2}-X_{t_1}) + ... + (0,...,0,X_{t_n} - X_{t_{n-1}}). \end{align*} Now observe that by stationary increments, $X_{t_{k+1}} - X_{t_{k}} \stackrel{d}= X_{t_{k+1} - t_{k}}$ and $X_{t_{k+1} - t_{k}}$ is associated, which makes $X_{t_{k+1}} - X_{t_{k}}$ associated (association is preserved under equality in distribution), for all $k\in\{1,...,n-1\}$. Next, observe that if $\hat{X}$ is associated in $\mathbb{R}^d$, then each block $(0,...,0,\hat{X},...,\hat{X})$ is associated in $\mathbb{R}^{dn}$, where there are $k$ zero vectors and $n-k$ copies of $\hat{X}$. Therefore, each block $(0,...,0, X_{t_{k+1}} - X_{t_k},...,X_{t_{k+1}} - X_{t_k})$ is associated, for each $k\in\{1,...,n-1\}$. By independent increments, the blocks are independent. Therefore, since a sum of independent random vectors, each of which is associated, is associated, $(X_{t_1},...,X_{t_n})$ is associated. 
\end{proof} \begin{corollary} Any L\'{e}vy\text{ } process $X$ that is associated in space is also associated in time. Additionally, if $X$ has triplet $(b,0,\nu)$, then $X$ is associated in time if and only if \\ $\nu( (\mathbb{R}_+^d\cup\mathbb{R}_-^d)^c)=0$. \end{corollary} \begin{proof} Any L\'{e}vy\text{ } process has independent and stationary increments, thus the result holds by Theorem \ref{levytime}. \end{proof} We would also like to consider conditions for temporal association of general Feller processes. Early work on this has been done by Harris \cite[Cor.1.2]{Harris1977} and Liggett \cite[p.82]{Liggett1985} for Feller processes with a countable state space. This can be extended to more general state spaces, as given in the following theorem. \begin{theorem} \label{thm:liggetttime} Let $X= (X_t)_{t\geq0}$ be a time-homogeneous, stochastically monotone Feller process on $\mathbb{R}^d$. If $X$ is spatially associated, and $X_0\sim \mu$, where $\mu$ satisfies \begin{equation*} \label{assocmeasure} \int fg \hspace{.1cm} d\mu \geq \int f \hspace{.1cm} d\mu \int g \hspace{.1cm} d\mu, \hspace{.5cm} f,g\in B_b(\mathbb{R}^d) \cap \mathcal{F}_i, \end{equation*} then $X$ is temporally associated. \end{theorem} The proof is similar to Liggett's proof found in \cite[p.82]{Liggett1985}. For details on the proof, we refer the reader to the author's dissertation \cite[p.59]{TuThesis}. Theorem \ref{thm:liggetttime} yields the following corollary about jump-Feller processes. \begin{corollary} \label{cor:assoctimeFeller} Let $X=(X_t)_{t\geq0}$ be a stochastically monotone Feller process with characteristics $(b(x),0,\nu(x,dy))$. Assume $X_0\sim \mu\in\mathcal{M}_a$. Then $\nu(x,(\mathbb{R}_+^d \cup \mathbb{R}_-^d)^c)=0$ if and only if $X$ is associated in time. \end{corollary} \begin{proof} The proof follows from Theorems \ref{resnickxthm} and \ref{thm:liggetttime}. 
\end{proof} \section{Examples} \label{sec:examples} We give a collection of interesting Feller processes that satisfy stochastic monotonicity. \subsection{L\'{e}vy\text{ } processes} Any L\'{e}vy\text{ } process satisfies stochastic monotonicity. Let $(T_t)_{t\geq0}$ be the semigroup of a L\'{e}vy\text{ } process. Then, for $f\in\mathcal{F}_i$, we have $$T_t f(x) = \mathbbm{E}^x f(X_t) = \mathbbm{E}^0 f(X_t+x).$$ Thus the monotonicity of $f$ and of the expectation $\mathbbm{E}^0$ gives us that $T_t f\in\mathcal{F}_i$. \par Let $X = (X_t)_{t\geq0}$ be a jump-L\'{e}vy\text{ } process with L\'{e}vy\text{ } characteristics $(b, 0, \nu)$, which have no state-space dependence. Then $\nu((\mathbb{R}_+^d \cup \mathbb{R}_-^d)^c)=0$ is equivalent to $X_t$ being associated, WA, PSA, PSD, POD, PUOD, and PLOD, since all L\'{e}vy\text{ } processes are stochastically monotone. This was proven in B\"{a}uerle (2008) \cite{Bauerle2008} for association, PSD, and POD, but not for the other dependence structures. Furthermore, the technique used in \cite{Bauerle2008} to prove that condition (\ref{resnick}) is equivalent to PSD and POD required L\'{e}vy\text{ } copulas. Our method of short-time asymptotics avoids L\'{e}vy\text{ } copulas altogether, and uses only the L\'{e}vy\text{ } measure. Additionally, condition \eqref{resnick} is equivalent to temporal association of $X$, by Corollary \ref{cor:assoctimeFeller}. \subsection{Ornstein-Uhlenbeck process} An Ornstein-Uhlenbeck (OU) process $X=(X_t)_{t\geq0}$ in $\mathbb{R}^d$ is the solution to the general \textit{Langevin equation}: \begin{align*} dX_t &= -\lambda X_t dt + dL_t \\ X_0 &=x \text{ a.s.} \end{align*} \noindent where $\lambda>0$, $L = (L_t)_{t\geq0}\sim(b_L,\Sigma_L,\nu_L)$ is a L\'{e}vy\text{ } process in $\mathbb{R}^d$, and $x\in\mathbb{R}^d$.
The OU process is then given by $$X_t = e^{-\lambda t}x + \int_0^t e^{-\lambda(t-s)} dL_s.$$ The semigroup $(T_t)_{t\geq0}$ of this process is called the Mehler semigroup and is given by $$T_t f(x) = \int_{\mathbb{R}^d} f( e^{-\lambda t} x + y) \mu_t(dy), \hspace{.5cm} L_t \sim \mu_t.$$ \begin{claim} The OU process is stochastically monotone. \end{claim} \begin{proof} Let $f\in B_b(\mathbb{R}^d)$ be an increasing function. Assume $x<y$, and fix some $t\geq0$. Then $e^{-\lambda t} x < e^{-\lambda t}y$. This implies $f(e^{-\lambda t} x + z) \leq f(e^{-\lambda t} y + z)$ for all $z\in \mathbb{R}^d$. Hence, \begin{center} $T_t f(x) = \int_{\mathbb{R}^d} f(e^{-\lambda t} x+ z) \mu_t(dz) \leq \int_{\mathbb{R}^d} f(e^{-\lambda t} y+ z) \mu_t(dz) = T_t f(y).$ \end{center} Thus, $T_t f$ is an increasing function on $\mathbb{R}^d$. \end{proof} Process $X$ has characteristic triplet $(b_L-\lambda x, \Sigma_L, \nu_L)$ \cite{Applebaum2007}. Thus, the characterization of positive dependence (association, WA, PSA, PSD, POD, PUOD, PLOD) is equivalent to $\nu_L((\mathbb{R}_+^d \cup \mathbb{R}_-^d)^c)=0$ when $\Sigma_L=0$. \subsection{Feller's pseudo-Poisson process} Here we construct a stochastically monotone pseudo-Poisson process. Let $S=(S(n))_{n\in\mathbbm{N}}$ be a discrete-time homogeneous Markov process taking values in $\mathbb{R}^d$. Let $(q^{(n)})_{n\in\mathbbm{N}}$ denote the $n$-step transition probabilities: $$q^{(n)} (x,B) = \mathbbm{P}(S(n)\in B| S(0)=x)$$ for all $B\in \mathcal{B}(\mathbb{R}^d)$. Let $Q$ be the \textit{transition operator} of $S$, defined by $$(Q f)(x) = \int_{\mathbb{R}^d} f(y) q(x,dy)$$ for all $f\in B_b(\mathbb{R}^d)$, $x\in\mathbb{R}^d$. Note that $Q^n f(x) = \int_{\mathbb{R}^d} f(y) q^{(n)} (x,dy)$. Let $N = (N_t)_{t\geq0}$ be a Poisson process with rate $\lambda$ that is independent of $S$. Define $X = (X_t)_{t\geq0}$ by subordination: $$X_t := S(N_t) \text{ for all } t\geq0.$$ Process $X$, called \textit{Feller's pseudo-Poisson process}, is a Feller process.
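On a finite state space the subordination $X_t=S(N_t)$ can be sketched concretely: the semigroup is then the matrix exponential $e^{t\lambda(Q-I)}$ applied to $f$ (a standard specialization), which can be compared against direct simulation of the subordinated chain. The 3-state chain, rate, and test function below are illustrative choices of ours, not from the text:

```python
import numpy as np

# Illustrative sketch: Feller's pseudo-Poisson process X_t = S(N_t) on a
# finite state space.  The chain Q below is an arbitrary choice (it happens
# to be stochastically monotone); f is a non-decreasing test function.
rng = np.random.default_rng(1)
Q = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.5, 0.3],
              [0.0, 0.3, 0.7]])   # one-step transition matrix of S
f = np.array([0.0, 1.0, 2.0])    # non-decreasing f on states {0, 1, 2}
lam, t = 1.5, 2.0

# Semigroup via the truncated series  e^{-lam t} sum_n (lam t)^n / n!  Q^n f.
Tf = np.zeros(3)
coef, Qn_f = np.exp(-lam * t), f.copy()
for n in range(60):                 # Poisson(lam*t) tail beyond 60 is negligible
    Tf += coef * Qn_f
    Qn_f = Q @ Qn_f
    coef *= lam * t / (n + 1)

# Monte Carlo: run the chain for a Poisson(lam*t) number of steps.
x0, n_sim = 0, 20_000
total = 0.0
for _ in range(n_sim):
    state = x0
    for _ in range(rng.poisson(lam * t)):
        state = rng.choice(3, p=Q[state])
    total += f[state]
print(Tf[x0], total / n_sim)        # the two estimates agree
```

Since this particular `Q` is stochastically monotone, `Tf` also comes out non-decreasing across the states, matching the monotonicity claim for subordinated chains.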
The semigroup $(T_t)_{t\geq0}$ and generator $\mathcal{A}$ of $X$ are given by: $$T_t f(x) = e^{t[\lambda(Q-I)]} f(x) = e^{-\lambda t}\sum_{n=0}^\infty \frac{(\lambda t)^n}{n!} Q^n f(x),$$ $$\mathcal{A} f(x) = \lambda (Q- I) f(x) = \int_{\mathbb{R}^d} [f(y) - f(x)] \lambda q(x,dy)$$ \begin{claim} If $S$ is a stochastically monotone Markov process, then $X$ is stochastically monotone. \end{claim} \begin{proof} We will show that for $f\in\mathcal{F}_i$, we have $T_t f\in\mathcal{F}_i$. Since $S$ is stochastically monotone, $q(x,B)$ is a monotone function of $x$ for every monotone set $B\in\mathcal{B}(\mathbb{R}^d)$. Additionally, for $f\in B_b(\mathbb{R}^d)\cap \mathcal{F}_i$, $Q f(x) = \int_{\mathbb{R}^d} f(y) q(x,dy)$ is a monotone function. We show, by induction, that for all $n$, $G_n: = e^{-\lambda t} \frac{(\lambda t)^n}{n!} Q^n f$ is a non-decreasing function. \\ \\ \underline{Base Case}: $\underline{n=0}$: $G_0(x) = e^{-\lambda t} f(x)$ is non-decreasing. $\underline{n=1}$: $G_1(x) = e^{-\lambda t} \lambda t \hspace{.1cm} Q f(x) =e^{-\lambda t} \lambda t \int_{\mathbb{R}^d} f(z) q(x,dz) $ is non-decreasing.\\ \underline{Induction Hypothesis}: Assume $G_n(x) = e^{-\lambda t} \frac{(\lambda t)^n}{n!} Q^n f(x) =e^{-\lambda t} \frac{(\lambda t)^n}{n!} \int_{\mathbb{R}^d} f(z) q^{(n)}(x,dz) $ is a non-decreasing function.
\\ \underline{Inductive Step}: \begin{align*} G_{n+1}(x) = e^{-\lambda t} \frac{(\lambda t)^{n+1}}{(n+1)!} Q^{n+1} f(x) & = e^{-\lambda t} \frac{(\lambda t)^{n+1}}{(n+1)!} \int_{\mathbb{R}^d} f(z) \hspace{.1cm}q^{(n+1)}(x,dz) \\ & = e^{-\lambda t} \frac{(\lambda t)^{n+1}}{(n+1)!} \int_{\mathbb{R}^d}\left( \int_{\mathbb{R}^d} f(z) \hspace{.1cm} q^{(n)}(y,dz) \right) q(x,dy) \\ & =:e^{-\lambda t} \frac{(\lambda t)^{n+1}}{(n+1)!} \int_{\mathbb{R}^d}H(y) q(x,dy) \end{align*} where $H(y) = \int_{\mathbb{R}^d} f(z) q^{(n)}(y,dz)$ is a non-decreasing function in $y$ by Induction Hypothesis, and line 2 is obtained by Chapman-Kolmogorov equations. Thus, by Base Case, the integral $\int_{\mathbb{R}^d} H(y) q(x,dy)$ is non-decreasing in $x$. Hence we get $G_n$ is a non-decreasing function for all $n$. Hence, $T_tf$ is non-decreasing, giving us our desired result. \end{proof} \noindent Now to find the characteristic triplet $(b(x), \Sigma(x), \nu(x,dy))$, we consider the generator: \begin{align*} \mathcal{A} f(x) & = \int_{\mathbb{R}^d} (f(z) - f(x)) \lambda q(x,dz)= \int_{\mathbb{R}^d} (f(x+z) - f(x)) \lambda q(x,dz+x) \\ & = \int_{\mathbb{R}^d} (f(x+z) - f(x)) \lambda \hat{q}(x, dz), \hspace{.1cm} \text{ where } \hat{q}(x,B) := q(x, B+x) \\ & = \int_{\mathbb{R}^d} (f(x+z) - f(x) - \nabla f(x) \cdot z \chi(z)) \lambda \hat{q}(x, dz) + \int_{\mathbb{R}^d} \nabla f(x) \cdot z \chi(z) \lambda \hat{q}(x, dz) \\ & = \int_{\mathbb{R}^d} (f(x+z) - f(x) - \nabla f(x) \cdot z \chi(z)) \lambda \hat{q}(x, dz) + \nabla f(x) \cdot \left(\int_{\mathbb{R}^d} z \chi(z) \lambda \hat{q}(x, dz) \right). 
\end{align*} \noindent Thus, the L\'{e}vy\text{ } triplet will be $(b(x), \Sigma(x), \nu(x,dy))$, where \begin{center} $b(x) = \int_{\mathbb{R}^d} z \chi(z) \lambda \hat{q}(x, dz), \hspace{1cm} \Sigma(x) = 0, \hspace{1cm} \nu(x, A) = \lambda \hat{q}(x,A) = \lambda q(x, A+x).$ \end{center} \subsection{Bochner's subordination of a Feller process} Consider a continuous-time Feller process $Y = (Y(t))_{t\geq0}$ with semigroup $(T_t)_{t\geq0}$ and generator $(\mathcal{A},\mathcal{D}(\mathcal{A}))$. Let $N = (N_t)_{t\geq0}$ be a subordinator independent of $Y$ with L\'{e}vy\text{ } characteristics $(b,\lambda)$, i.e. it has L\'{e}vy\text{ } symbol $\eta(u) = ibu +\int_0^\infty (e^{iuy} - 1)\lambda(dy)$, where $\mathbbm{E} e^{iuN_t} = e^{t\eta(u)}$. Additionally, we obtain the Laplace transform of the subordinator, $\mathbbm{E} e^{-uN_t} = e^{-t\psi(u)}$, where \begin{center} $\psi(u) := - \eta(iu) = bu +\int_0^\infty (1-e^{-uy}) \lambda(dy).$ \end{center} The function $\psi$ is called the \textbf{Laplace symbol} or \textbf{Bernstein function} of the subordinator. The following is a theorem of Phillips. \begin{theorem}[Phillips (1952) \cite{Phillips1952}] Let $X = (X_t)_{t\geq0}$ be given by the prescription $X_t = Y(N_t)$. Then $X$ is a Feller process with semigroup $(T_t^X)_{t\geq0}$ and generator $(\mathcal{A}^X, \mathcal{D}(\mathcal{A}^X))$, given by $$T_t^X f = \int_0^\infty (T_s f) \hspace{.1cm}\mu_{N_t}(ds), \hspace{1cm} \mathcal{A}^X f = b\mathcal{A} f + \int_0^\infty (T_s f - f) \lambda (ds).$$ \end{theorem} \begin{claim} If $Y$ is a stochastically monotone Feller process with semigroup $(T_t)_{t\geq0}$, i.e. $T_t f \in \mathcal{F}_i$ for $f\in\mathcal{F}_i$, and $N = (N_t)_{t\geq0}$ is a subordinator, then $X=(X_t)_{t\geq0}$ given by $X_t = Y(N_t)$ is a stochastically monotone Feller process. \end{claim} \begin{proof} We already know that $X$ is Feller with semigroup $(T_t^X)_{t\geq0}$. So choose $f\in\mathcal{F}_i\cap C_b(\mathbb{R}^d)$.
Then $T_s f \in\mathcal{F}_i \cap C_b(\mathbb{R}^d)$ for all $s\geq0$. Choose $x<y$. Then $T_s f(x) \leq T_s f(y)$ for all $s\geq0$. Hence, \begin{center} $T_t^X f(x) = \int_0^\infty (T_s f)(x) \hspace{.1cm} \mu_{N_t}(ds) \leq \int_0^\infty (T_s f)(y) \hspace{.1cm} \mu_{N_t}(ds) = T_t^X f(y).$ \end{center} Thus, $T_t^X f\in \mathcal{F}_i$. \end{proof} Let $Y$ have symbol $p(x,\xi)$. Then $X = Y(N)$ is a Feller process with symbol $p_X(x,\xi)$ given by $$p_X(x,\xi) = \psi(p(x,\xi)) + \text{ lower order perturbation}.$$ This ``perturbation" is ``measured in a suitable scale of anisotropic function spaces" \cite[p.104]{Schilling2013}. \par Particularly interesting examples arise when $N$ is an $\alpha$-stable subordinator, an inverse Gaussian subordinator, or a Gamma subordinator, and $Y$ is a diffusion process $Y\sim (b(x), Q(x), 0)$. \begin{example} Let $Y$ be a stochastically monotone diffusion process in $\mathbb{R}^d$. This means $Y$ has L\'{e}vy\text{ } characteristics $(b(x), Q(x),0)$. Mu-Fa Chen and Feng-yu Wang \cite{Chen1993} proved that such a process is stochastically monotone if and only if $q_{ij}(x)$ only depends on $x_i$ and $x_j$, and $b_i(x) \leq b_i(y)$ whenever $x\leq y$ with $x_i=y_i$. The generator of $Y$ is given by: \begin{center} $\mathcal{A}^Y f(x) = b(x)\cdot \nabla f(x) + \frac{1}{2} \nabla \cdot Q(x) \nabla f(x)$ \end{center} Let $N$ be an $\alpha$-stable subordinator with L\'{e}vy\text{ } characteristics $(0,\lambda)$, where $\lambda(dy) = \frac{\alpha}{\Gamma(1-\alpha)} \frac{1}{y^{1+\alpha}} dy.$ The generator $\mathcal{A}^X$ of the process $X = Y(N)$ is given by \begin{align*} \mathcal{A}^X f(x) & = \int_0^\infty (T_s f(x) - f(x)) \lambda (ds) = \int_0^\infty (T_s f(x) - f(x))\frac{\alpha}{\Gamma(1-\alpha)} \frac{1}{s^{1+\alpha}} ds. \end{align*} \end{example} \noindent \textbf{Acknowledgements}: The author would like to thank Dr. Jan Rosinski for his helpful advice and guidance regarding the ideas of this paper.
\section*{Appendix} This appendix contains proofs of some lemmas from Section \ref{sec:mainresults}. Throughout this appendix, we assume Setting \ref{set1}. \begin{lemmaA}[Schilling (1998) \cite{SchillingCb}, Thm.4.3] \label{cbfeller} Assume $p(x,\xi)$ is bounded. If $x\mapsto p(x,0)$ is continuous, then $(T_t)_{t\geq0}$ extends to a \textbf{$C_b$-Feller semigroup}, i.e. satisfies \begin{enumerate}[noitemsep, label = (\alph*)] \item $T_t: C_b(\mathbb{R}^d) \rightarrow C_b(\mathbb{R}^d)$, \item $\lim_{h\searrow0} ||T_{t+h} u - T_t u||_{\infty,K} = 0$ for all $K\subset\mathbb{R}^d$ compact, $u\in C_b(\mathbb{R}^d)$, $t\geq0$, where $||u||_{\infty,K} := \sup_{y\in K}| u(y) |$, i.e. locally uniformly continuous. \end{enumerate} \end{lemmaA} \begin{proof} For the proof, see \cite[p.247]{SchillingCb}. \end{proof} \subsection*{Proof of Lemma \ref{integrogenerates}} \begin{proof} The process \begin{equation*} M_t^f := f(X_t) - f(X_0) - \int_0^t I(p) f(X_{s-}) ds \end{equation*} is, for every $f\in C_b^2(\mathbb{R}^d)$, a martingale with respect to $\mathbbm{P}^x$, for all $x$ (see Schilling \cite[Lemma 3.2, p.579]{Schilling1998}). This implies \begin{align*} 0 & = \mathbbm{E}^x f(X_t) - \mathbbm{E}^x f(X_0) - \mathbbm{E}^x \int_0^t I(p) f(X_{s-}) ds \\ & = T_t f(x) - f(x) - \int_0^t \mathbbm{E}^x I(p) f(X_{s-}) ds \\ & = T_t f(x) - f(x) - \int_0^t T_s I(p) f(x) ds \end{align*} for every $x\in\mathbb{R}^d$, $t\geq0$. Note that we can switch integrals in line 2 because $I(p)f\in C_b(\mathbb{R}^d)$ by Remark 4.5(ii) in Schilling \cite{SchillingCb}. This implies \begin{equation*} \label{boch1} \frac{1}{t}(T_t f - f) = \frac{1}{t} \int_0^t T_s I(p) f \hspace{.1cm} ds. \end{equation*} We argue that when taking the limit as $t\searrow0$, the right-hand side converges locally uniformly to $I(p) f$. Note that since $I(p)f\in C_b(\mathbb{R}^d)$, then $(T_s I(p) f) \mathbbm{1}_K$ is continuous in $s$ for every compact set $K$ by the $C_b$-Feller property, i.e.
\begin{align*} ||(T_{s+h} I(p) f)\mathbbm{1}_K - (T_{s} I(p) f)\mathbbm{1}_K||_\infty & = \sup_{x\in K} |T_{s+h} I(p) f(x) - T_{s} I(p) f(x)| \rightarrow0 \end{align*} So, the function $T_{(\raisebox{-0.25ex}{\scalebox{1.2}{$\cdot$}})} I(p) f \mathbbm{1}_K$ is the integrand of a Bochner-type integral that is continuous in $s$ and integrable on any closed interval $[a,b]$. Therefore, by Fundamental Theorem of Calculus for Bochner integrals \cite[p.21-22]{Dynkin1965}, \begin{align*} \lim_{t\searrow0}\frac{1}{t}(T_t f - f)\mathbbm{1}_K & = \lim_{t\searrow0} \frac{1}{t} \int_0^t (T_s I(p) f) \mathbbm{1}_K\hspace{.1cm} ds = (I(p) f)\mathbbm{1}_K \end{align*} for all $K\subset\mathbb{R}^d$ compact. Hence, $I(p)f = \lim_{t\searrow0}\frac{1}{t}(T_t f - f)$, where convergence is locally uniform. \end{proof} \subsection*{Proof of Lemma \ref{integroderivative}} \begin{proof} By Lemma A.\ref{cbfeller}, our semigroup $(T_t)_{t\geq0}$ satisfies the $C_b$-Feller property. Choose $f\in C_b^2(\mathbb{R}^d)$. Observe that for all $x\in\mathbb{R}^d$, \begin{align*} T_{t+h}f(x) -T_tf(x) & = T_t (T_hf(x) - f(x)) = T_t \int_0^h T_s I(p) f(x) \,ds \\ & = \mathbbm{E}^x \int_0^h T_s I(p) f(X_t) \,ds = \int_0^h \mathbbm{E}^x T_s I(p) f(X_t) \,ds, \text{ by Fubini's Theorem}, \\ & = \int_0^h T_t T_s I(p) f(x) \,ds = \int_0^h T_s T_t I(p) f(x)\,ds. \end{align*} Thus, \begin{align*} \lim_{h\rightarrow0}\frac{1}{h}(T_{t+h}f -T_tf) & = \lim_{h\rightarrow0}\frac{1}{h} \int_0^h T_s T_t I(p) f \hspace{.1cm}ds = T_t I(p) f \end{align*} because $T_t I(p) f\in C_b(\mathbb{R}^d)$ by $C_b$-Feller property, thus making $T_s T_t I(p) f \mathbbm{1}_K$ continuous in $s$ for every compact $K$. Once again, by Fundamental Theorem of Calculus for Bochner integrals (see \cite[p.21-22]{Dynkin1965}), we get the convergence shown above. \par Finally, we want to show $I(p) T_t f = T_t I(p) f$. 
Choose $(\phi_n)_{n\in \mathbbm{N}}\subset C_c^\infty(\mathbb{R}^d)$ such that $\mathbbm{1}_{B(0,n)} \leq \phi_n \leq 1$ for all $n$. Then $f\phi_n \in C_c^2(\mathbb{R}^d) \subset \mathcal{D}(\mathcal{A})$, the domain of the generator $\mathcal{A}$, and we have $I(p) T_t f \phi_n = T_t I(p) f\phi_n$. By an approximation argument, we get our desired result. \end{proof} \subsection*{Proof of Lemma \ref{extcauchy}} \begin{proof} Observe that all limits (and corresponding derivatives) we take here are with respect to locally uniform convergence. Note that by the assumption that $x\mapsto p(x,0)$ is continuous, our semigroup $(T_t)_{t\geq0}$ satisfies the $C_b$-Feller property by Lemma A.\ref{cbfeller}. Also, by Lemma \ref{integrogenerates}, we have $\lim_{t\searrow0} \frac{1}{t}(T_t u - u) = I(p)u$ for all $u\in C_b^2(\mathbb{R}^d)$. We define the derivative $F'(s)$ by $F'(s) = \lim_{h\rightarrow0} \frac{F(s+h)-F(s)}{h}$, where the limit is under locally uniform convergence. Also, our statement of (b) is different from Liggett's. \begin{center} Liggett's: if $t_n\rightarrow t$, then $||G(t_n) - G(t)||_\infty \rightarrow0$ as $n\rightarrow\infty$. \vskip.5cm Ours: if $t_n\rightarrow t$, then $||G(t_n) - G(t)||_{\infty,K} \rightarrow0$ as $n\rightarrow\infty$ for all $K$ compact. \end{center} Though Liggett's assumption would be sufficient, we do not need something that strong in our setting, and our $G$ will satisfy locally uniform continuity. Choose a compact set $K\subset\mathbb{R}^d$.
\begin{align*} & \frac{T_{t-s-h}F(s+h) - T_{t-s}F(s)}{h} \cdot \mathbbm{1}_K \\ & = \frac{T_{t-s-h}F(s+h)}{h} \cdot \mathbbm{1}_K - \frac{T_{t-s}F(s)}{h} \cdot \mathbbm{1}_K \\ & + [T_{t-s-h} - T_{t-s}]F'(s) \cdot \mathbbm{1}_K - [T_{t-s-h} - T_{t-s}]F'(s) \cdot \mathbbm{1}_K \\ & + \frac{T_{t-s-h}F(s)}{h} \cdot \mathbbm{1}_K- \frac{T_{t-s-h}F(s)}{h} \cdot \mathbbm{1}_K \\ & + \frac{T_{t-s}F(s+h)}{h} \cdot \mathbbm{1}_K - \frac{T_{t-s}F(s+h)}{h} \cdot \mathbbm{1}_K \\ & + \frac{T_{t-s}F(s)}{h} \cdot \mathbbm{1}_K - \frac{T_{t-s}F(s)}{h} \cdot \mathbbm{1}_K \\ & =: (1) + (2) + (3) + (4) + (5) + (6) + (7) + (8) + (9) + (10) \\ & = \textcolor{red}{[(2)+(7)]} + \textcolor{blue}{[(5)+(10)]} + \textcolor{ForestGreen}{[(3)]} + \textcolor{Plum}{[ (4) + (1) + (9) + (8) + (6)]} \\ & = \textcolor{red}{T_{t-s}\left[ \frac{F(s+h) - F(s)}{h}\right] \cdot \mathbbm{1}_K }+ \textcolor{blue}{\left[ \frac{T_{t-s-h}-T_{t-s}}{h} \right]F(s) \cdot \mathbbm{1}_K} \\ & + \textcolor{ForestGreen}{[T_{t-s-h} - T_{t-s}]F'(s) \cdot \mathbbm{1}_K} + \textcolor{Plum}{[T_{t-s-h}-T_{t-s}]\left[ \frac{F(s+h) - F(s)}{h} - F'(s)\right] \cdot \mathbbm{1}_K} \\ & =: \textcolor{red}{(I)} + \textcolor{blue}{(II)} + \textcolor{ForestGreen}{(III)} + \textcolor{Plum}{(IV)} \end{align*} Now we consider the limits as $h$ goes to $0$ for each of these four terms. \begin{align*} (I): \lim_{h\searrow0} T_{t-s} \left[\frac{F(s+h)- F(s)}{h} \right] \cdot \mathbbm{1}_K& = T_{t-s} \lim_{h\searrow0} \left[\frac{F(s+h)- F(s)}{h} \right] \cdot \mathbbm{1}_K = T_{t-s} F'(s) \cdot \mathbbm{1}_K \end{align*} because $T_{t-s}$ is a bounded operator, which means it is a continuous operator. \par\text{ } \noindent (II): Let $u = t-s$. Then $s = t-u$ and $ds = -du$. 
For a function $f\in C_b(\mathbb{R}^d)$, \begin{align*} \lim_{h\searrow0} \left[ \frac{T_{t-s-h}-T_{t-s}}{h} \right]f \cdot \mathbbm{1}_K = \frac{d}{ds} T_{t-s} f \cdot \mathbbm{1}_K = -\frac{d}{du} T_{u} f \cdot \mathbbm{1}_K &= -I(p)T_{u} f \cdot \mathbbm{1}_K \\ & = -I(p) T_{t-s} f \cdot \mathbbm{1}_K. \end{align*} Therefore, $\displaystyle\lim_{h\searrow0} \left[ \frac{T_{t-s-h}-T_{t-s}}{h} \right]F(s) \cdot \mathbbm{1}_K= -I(p) T_{t-s} F(s) \cdot \mathbbm{1}_K = - T_{t-s} I(p) F(s) \cdot \mathbbm{1}_K.$ \\ \\ (III): By the $C_b$-Feller property, since $F'(s)\in C_b(\mathbb{R}^d)$, $\displaystyle\lim_{h\searrow0} [T_{t-s-h} -T_{t-s}]F'(s) \cdot \mathbbm{1}_K = 0$ uniformly. \\ \\ (IV): Observe that $T_{t-s-h}$ and $T_{t-s}$ are both contractions. Hence, \begin{align*} & \left| \left| [T_{t-s-h}-T_{t-s}]\left[ \frac{F(s+h) - F(s)}{h} - F'(s)\right] \right| \right|_{\infty,K} \\ & \leq \left| \left| T_{t-s-h}-T_{t-s} \right| \right|\cdot \left| \left| \left[ \frac{F(s+h) - F(s)}{h} - F'(s)\right] \right| \right|_{\infty,K} \\ & \leq 2\left| \left| \left[ \frac{F(s+h) - F(s)}{h} - F'(s)\right] \right| \right|_{\infty,K}\longrightarrow0 \end{align*} as $h\rightarrow0$. Thus, we have for $0<s<t$, \begin{align*} \frac{d}{ds} & T_{t-s} F(s) \cdot \mathbbm{1}_K = \lim_{h\searrow0} \frac{T_{t-(s+h)} F(s+h) - T_{t-s}F(s)}{h} \cdot \mathbbm{1}_K \\ & = \lim_{h\searrow0} [(I)+(II)+(III)+(IV)] = T_{t-s} F'(s) \cdot \mathbbm{1}_K - T_{t-s} I(p) F(s) \cdot \mathbbm{1}_K \\ & = T_{t-s}[F'(s) - I(p) F(s)] \cdot \mathbbm{1}_K \stackrel{(c)} = T_{t-s} G(s) \cdot \mathbbm{1}_K. \end{align*} The right-hand side is a continuous function of $s$ because $G$ is a continuous function of $s$ and the semigroup is uniformly continuous on $K$ by the $C_b$-Feller property. Let us justify this: \\ \\ \underline{Aside}: Let $\epsilon>0$. Then $\exists N$ large s.t. $||G(s_n) - G(s)||_{\infty,K}<\epsilon/2$ for all $n\geq N$. Also, $\exists N'$ large s.t.
$||T_{t-{s_n}} G(s_{N}) - T_{t-s} G(s_{N})||_{\infty,K} = ||(T_{t-s_n}-T_{t-s}) G(s_{N}) ||_{\infty,K} <\epsilon/2$ for all $n\geq N'$, since the semigroup operator is uniformly continuous on compact sets. Let $M=\max(N,N')$. \begin{align*} ||T_{t-s_M} G(s_M) - T_{t-s} G(s)||_{\infty,K} & = ||T_{t-s_M} G(s_M) - T_{t-s} G(s_M) +T_{t-s} G(s_M) - T_{t-s} G(s)||_{\infty,K} \\ & \leq ||T_{t-s_M} G(s_M) - T_{t-s} G(s_M)||_{\infty,K} + ||T_{t-s} G(s_M) - T_{t-s} G(s)||_{\infty,K} \\ & \leq ||T_{t-s_M} G(s_M) - T_{t-s} G(s_M)||_{\infty,K} + || G(s_M) - G(s)||_{\infty,K} \\ & < \epsilon/2 + \epsilon/2 = \epsilon. \end{align*} Therefore we can integrate these functions with respect to $s$ from $0$ to $t$, and by the Fundamental Theorem of Calculus for Bochner integrals (see \cite[p.21-22]{Dynkin1965}), \begin{align*} \int_0^t T_{t-s} G(s) ds \hspace{-.05cm} \cdot \hspace{-.075cm} \mathbbm{1}_K = \int_0^t \frac{d}{ds} T_{t-s} F(s) ds \hspace{-.05cm} \cdot \hspace{-.075cm} \mathbbm{1}_K = (T_{t-t}F(t) - T_t F(0)) \mathbbm{1}_K =(F(t) - T_t F(0)) \mathbbm{1}_K. \end{align*} \noindent Since the compact set $K$ is arbitrary, we have our desired result: $F(t) = T_t F(0) + \int_0^t T_{t-s} G(s)ds$. \end{proof} \newpage \bibliographystyle{acm}
\section{INTRODUCTION} \label{sec:intro} The binary system $\eta$ Carinae is composed of a very massive star at late stages of its evolution, the primary, and a hotter and less luminous evolved main sequence star, the secondary (\citealt{Damineli1996, DavidsonHumphreys1997,DavidsonHumphreys2012}). The binary system has a highly eccentric orbit (e.g., \citealt{Daminelietal1997, Smithetal2004, Davidsonetal2017}) and strong winds (\citealt{PittardCorcoran2002, Akashietal2006}), resulting in a period of strong interaction every 5.54 years during periastron passage, known as the spectroscopic event. During the event many spectral lines, and the emission in essentially all wavelengths, show rapid variability (e.g. \citealt{Zanellaetal1984, Davidsonetal2000, Smithetal2000, DuncanWhite2003, Whitelocketal2004, Martinetal2006, Martinetal2010, Stahletal2005, Hamaguchietal2007, Hamaguchietal2016, Nielsenetal2007, Daminelietal2008a, Daminelietal2008b, Mehneretal2010, Mehneretal2011, Mehneretal2015, Davidson2012}). The X-ray intensity, which also serves as an indicator of the intensity of the wind interaction, drops for a duration of a few weeks, changing from one spectroscopic event to the other (\citealt{Corcoran2005, Corcoranetal2010, Corcoranetal2015} and references therein). \cite{Soker2005b} developed a model that interprets the line variations during the spectroscopic event as a result of the accretion of clumps of gas onto the secondary near periastron passages, disabling its wind. The suggestion was later developed into a detailed model accounting for different observations within the accretion-model framework (\citealt{Akashietal2006}; \citealt{KashiSoker2009a}). The last three spectroscopic events, 2003.5, 2009, and 2014.6, were not similar, and reflected a trend in the intensities of various lines \citep{Mehneretal2015}. Observations of spectral lines across the 2014.6 event can be interpreted as weaker accretion onto the secondary close to periastron passage compared to previous events.
This may indicate a decrease in the mass-loss rate of the primary star, claimed to be a `change of state', already identified by \cite{Davidsonetal2005} and theoretically explained by \cite{Kashietal2016}. Further indications of the change of state were recently found from a comparison of UV line emission at similar orbital phases separated by two orbital revolutions, at positions far from periastron passage \citep{Davidsonetal2018}. \cite{KashiSoker2009b} performed a more detailed calculation, integrating the density over time and over the volume within the Bondi-Hoyle-Lyttleton accretion radius around the secondary, and found that accretion should take place close to periastron and that the secondary should accrete $\sim~\rm{few}~\times~10^{-6}~~\rm{M_{\sun}}$ each cycle. Older grid-based simulations \citep{Parkinetal2011} and SPH simulations \citep{Okazakietal2008, Maduraetal2013} of the colliding winds did not obtain accretion onto the secondary. \cite{Teodoroetal2012} and \cite{Maduraetal2013} argued against the need for accretion in explaining the spectroscopic event. \cite{Akashietal2013} performed 3D numerical simulations using the \texttt{VH-1} hydrodynamical code \citep{Blondin1994,Hawleyetal2012} to study the accretion, and found that a few days before periastron passage clumps of gas are formed due to instabilities in the colliding winds structure, and that some of these clumps flow towards the secondary, implying that accretion should occur. The final theoretical evidence for accretion came from simulations in \citet[hereafter \citetalias{Kashi2017}]{Kashi2017}. These simulations showed the destruction of the colliding winds structure into filaments and clumps that later flowed onto the secondary. \citetalias{Kashi2017} demonstrated that dense clumps are crucial to the onset of the accretion process. The clumps formed as the smooth colliding stellar winds developed instabilities that grew into clumps (no artificial clumps were seeded).
This confirmed earlier theoretical arguments by \cite{Soker2005a,Soker2005b} that suggested the accretion of clumps. The amount of accreted mass was not derived from the simulations in \citetalias{Kashi2017}, as it requires further modeling of the secondary star's response to the accreted mass. It is expected that accretion will cause the secondary star to stop, or partially stop, blowing its wind (\citealt{KashiSoker2009b} and references therein). However, quantifying the effect is a complicated task. One should consider the acceleration mechanism of the wind (line driving in the case of the secondary), and how gas settling on the envelope will reduce it. As gas arrives both from filaments and directly from clumps, the wind of the secondary is expected to be affected directionally rather than isotropically. In this work we take a step forward and quantify the accretion process and its dependence on the different parameters. In section \ref{sec:simulation} we describe the numerical simulations. Our results, showing accretion quantitatively, are presented in section \ref{sec:results}. A summary and discussion are given in section \ref{sec:summary}. \section{THE NUMERICAL SIMULATIONS} \label{sec:simulation} We use version 4.5 of the hydrodynamic code \texttt{FLASH}, originally described by \cite{Fryxell2000}. Our 3D Cartesian grid extends over $(x,y,z) = \pm 8 ~\rm{au}$, centered around the secondary. Our initial conditions are set $50$ days before periastron, which is enough time for the colliding winds structure (also known as the wind-wind collision zone, or WWC) to form. We place the secondary in the center of the grid and send the primary on an eccentric $e\simeq0.85$--$0.9$ Keplerian orbit. As the mass loss and mass transfer during present-day $\eta$~Car~ are small (in contrast to their values during the GE and LE), the deviation from a Keplerian orbit is very small.
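The orbital placement described above can be sketched by solving Kepler's equation; the code below is our illustration, with an orientation convention of our choosing and $e=0.9$ taken at the upper end of the quoted range (values for the conventional mass model):

```python
import math

# Illustrative sketch (not the paper's code): place the primary on an
# eccentric Keplerian orbit in the secondary rest frame.  With this
# convention, periastron lies on the x-axis at r = a(1 - e) = 1.664 au,
# matching the periastron separation quoted for the conventional model.
a, e, P = 16.64, 0.9, 2023.0   # semi-major axis [au], eccentricity, period [days]

def primary_position(t):
    """Orbital-plane position (x, y) in au at time t [days]; t = 0 at periastron."""
    M = 2.0 * math.pi * t / P          # mean anomaly
    E = M
    for _ in range(50):                # Newton iteration for Kepler's equation
        E -= (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
    r = a * (1.0 - e * math.cos(E))    # instantaneous separation
    nu = 2.0 * math.atan2(math.sqrt(1 + e) * math.sin(E / 2),
                          math.sqrt(1 - e) * math.cos(E / 2))  # true anomaly
    return r * math.cos(nu), r * math.sin(nu)

print(primary_position(0.0))           # periastron: separation a(1-e) = 1.664 au
```

With these assumptions, at $t=-50$ days the stellar separation comes out near 6 au, comfortably inside the $\pm 8$ au grid, consistent with starting the simulations 50 days before periastron.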
As the simulations are performed in the secondary rest frame, the wind is ejected isotropically around the secondary and non-isotropically around the primary, as the primary's orbital velocity around the secondary is subtracted from the wind velocity. To solve the hydrodynamic equations we use the \texttt{FLASH} version of the split piecewise parabolic method (PPM) solver \citep{ColellaWoodward1984}. We use the code's radiation-transfer multigroup diffusion approximation, with one energy group (similar to the Eddington gray approximation). We use five levels of refinement, with better resolution closer to the center. The length of the smallest cell is $1.18 \times 10^{11} \rm{cm}$ ($\simeq 1.7~\rm{R_{\sun}}$). This finest resolution extends over a sphere of radius $\simeq 82 ~\rm{R_{\sun}}$ centered at $(0,0,0)$. The second finest level of resolution, resolving twice the spatial scale of the finest level, continues up to a radius of $\simeq 320 ~\rm{R_{\sun}}$. This level of resolution covers the apex of the colliding winds from $\simeq 20 ~\rm{days}$ before periastron onwards. As shown below and discussed in \citetalias{Kashi2017}, the instabilities that lead to accretion start only a few days before periastron, namely within this level of resolution. The highest resolution allows us to follow in great detail the gas as it reaches the injection zone of the secondary wind and is accreted onto the secondary. Figure~\ref{fig:resolution_comparison} shows the same simulation at two different resolutions, demonstrating the finer detail revealed by the higher resolution. The right panel shows a simulation with resolution lower by 2 levels of refinement, namely the resolved spatial scale is 4 times larger.
It can be seen that the high resolution simulation: \noindent(1) prevents unwanted grid effects that cause deviations from spherical symmetry, as can be seen in the right panel of Figure~\ref{fig:resolution_comparison}, which shows that the density of the secondary wind is not perfectly isotropic at the lower resolution. \noindent(2) much better resolves the secondary as a sphere, which is important for resolving directional accretion. \noindent(3) much better resolves the colliding winds structure, with the two shocks and a contact discontinuity. \noindent(4) allows small-scale instabilities to form, which consequently create filaments and clumps, some of which are later accreted by the secondary. We conclude that the high resolution is essential for obtaining meaningful results from the simulation, and we therefore ran all our simulations at the high resolution described above. \begin{figure} \centering \includegraphics[trim= -0.5cm -0.5cm 0.0cm 0.0cm,clip=true,width=0.99\columnwidth]{dens2d_resolution_comparison_reduced.eps} \caption{ Density maps showing the density sliced in the orbital plane ($z=0$), for the conventional mass model ($M_1=120 ~\rm{M_{\sun}}$ and $M_2=30 ~\rm{M_{\sun}}$) at two resolutions. The secondary is at the center, and the primary orbits it from the upper part of the figure to the bottom-left until periastron, and then down-right. Periastron occurs at $(x,y,z)=(-1.664 ~\rm{au}, 0, 0)$ and $t=0$. The two panels show the stars and the colliding wind structure 7.5 days before periastron. The left panel shows the simulation results with the high resolution described in section \ref{sec:simulation}, while the right panel shows a simulation with resolution lower by 2 levels of refinement, namely the resolved spatial scale is 4 times larger. } \label{fig:resolution_comparison} \end{figure} In \citetalias{Kashi2017} we took into account self-gravity.
However, we found that the formation of filaments and clumps that are later accreted onto the secondary can occur as a result of instabilities that do not involve self-gravity. The free-fall (collapse) time of each clump as a result of self-gravity is much longer than the duration of the clump formation, indicating that self-gravity does not have a significant role in the formation of the clumps. We therefore here disable self-gravity and consider only the gravity of the two stars, modeled as point masses. As there are different arguments in the literature regarding the masses of the two stars, we use two sets of stellar masses, similar to the sets we used in \citetalias{Kashi2017}: \begin{enumerate} \item \emph{Conventional mass model}, where the primary and secondary masses are $M_1=120 ~\rm{M_{\sun}}$ and $M_2=30 ~\rm{M_{\sun}}$, respectively \citep{Hillieretal2001}. \item \emph{High mass model} with $M_1=170 ~\rm{M_{\sun}}$ and $M_2=80 ~\rm{M_{\sun}}$ (\citealt{KashiSoker2010}, where the model was referred to as the `MTz model'; \citealt{KashiSoker2015}). \end{enumerate} The orbital period is $P=2023$ days, implying a semi-major axis of $a=16.64 ~\rm{au}$ for the conventional mass model (e.g., \citealt{Ishibashietal1999, Daminelietal2000, Whitelocketal2004, Davidsonetal2005} and references therein), and $a=19.73 ~\rm{au}$ for the high mass model. The stellar radii are taken to be $R_1=180 ~\rm{R_{\sun}}$ and $R_2=20 ~\rm{R_{\sun}}$ for both the conventional and the high mass models (see \citealt{KashiSoker2008} for a discussion on how the radii are derived). A larger secondary radius would probably make accretion somewhat easier.
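The quoted semi-major axes follow from Kepler's third law for the stated total masses and the 2023-day period; a quick consistency check (the function name is ours):

```python
# Kepler's third law in units of au, M_sun, and yr:
# a [au] = (M_total [M_sun] * P [yr]^2)^(1/3).
def semi_major_axis_au(m_total_msun, period_days):
    p_yr = period_days / 365.25
    return (m_total_msun * p_yr ** 2) ** (1.0 / 3.0)

print(semi_major_axis_au(120 + 30, 2023))  # conventional mass model, ~16.6 au
print(semi_major_axis_au(170 + 80, 2023))  # high mass model, ~19.7 au
```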
The mass loss rates and wind velocities are $\dot{M}_1=3$--$10 \times 10^{-4} ~\rm{M_{\odot}}~\rm{yr^{-1}}$, $v_1=500 ~\rm{km~s^{-1}}$ and $\dot{M}_2=10^{-5} ~\rm{M_{\odot}}~\rm{yr^{-1}}$, $v_2=3\,000 ~\rm{km~s^{-1}}$, for the primary and the secondary, respectively \citep{PittardCorcoran2002, Hamaguchietal2007, Grohetal2012, Maduraetal2013, Corcoranetal2015}. The range for $\dot{M}_1$ reflects both the uncertainty in the value itself and a possible decrease it has undergone over the last $\approx15$ years, as mentioned in section \ref{sec:intro}. Each wind is injected radially at its terminal speed from a narrow sphere around its star (see \citetalias{Kashi2017} for further details); the acceleration of the winds is thus neglected. In the process of injecting the winds we neglect the spins of the stars, but the orbital motion of the primary relative to the fixed grid is taken into account. For the winds the adiabatic index is set to $\gamma=5/3$. Our initial conditions at $t=-50 ~\rm{days}$ set the entire grid (except the stars themselves) filled with the smooth undisturbed primary wind. We include radiative cooling based on solar composition from \cite{SutherlandDopita1993} according to the implementation described and tested in \citetalias{Kashi2017}. \begin{table} \centering \caption{List of stellar and orbital parameters.
} \begin{tabular}{lll} \hline Parameter & Conventional & High Mass \\ & model & model \\ \hline $M_1$ & $120~\rm{M_{\sun}}$ & $170~\rm{M_{\sun}}$ \\ $M_2$ & $30~\rm{M_{\sun}}$ & $80~\rm{M_{\sun}}$ \\ $R_1$ & \multicolumn{2}{l}{\qquad $180~\rm{R_{\sun}}$} \\ $R_2$ & \multicolumn{2}{l}{\qquad $20~\rm{R_{\sun}}$} \\ $v_1$ & \multicolumn{2}{l}{\qquad $500 ~\rm{km~s^{-1}}$} \\ $v_2$ & \multicolumn{2}{l}{\qquad $3\,000 ~\rm{km~s^{-1}}$} \\ $\dot{M}_1$ & \multicolumn{2}{l}{\qquad $3$--$10 \times 10^{-4} ~\rm{M_{\odot}}~\rm{yr^{-1}}$} \\ $\dot{M}_2$ & \multicolumn{2}{l}{\qquad $10^{-5} ~\rm{M_{\odot}}~\rm{yr^{-1}}$} \\ $P$ & \multicolumn{2}{l}{\qquad $2023$ days} \\ $a$ & \multicolumn{2}{l}{\qquad $16.64 ~\rm{au}$} \\ $e$ & \multicolumn{2}{l}{\qquad $0.85$--$0.9$} \\ \hline &&\\ \end{tabular} \label{table:stellarandorbitalparameters} \end{table} As for the response of the secondary star to the accreted gas, we take four approaches: \noindent(1) Approaching gas removal: In the first approach we remove dense gas that reaches the secondary wind injection region, and replace it with fresh secondary wind at the regular mass loss rate and velocity. Namely, we do not make any changes to the secondary wind and let it continue to blow as if the accreted gas did not cause any disturbance. \noindent(2) Exponentially reduced mass loss: In the second approach we reduce the mass loss rate of the secondary as it approaches periastron passage according to \begin{equation} \dot{M}_{2, \rm{eff}}= \left\{ \begin{array}{lc} \dot{M}_2 & t\leq -5 ~\rm{d} \\ \dot{M}_2 \exp[{-(t+5~\rm{d})\ln{10}/5~\rm{d}}] & -5 ~\rm{d} < t\leq 0 ~\rm{d}\\ 0.1\dot{M}_2 & 0 ~\rm{d} < t \\ \end{array} \right. , \label{eq:mdot_approach2} \end{equation} where $t=0$ is the time at which the system is at periastron passage. This is an artificial approach that does not relate to the actual accretion situation in the simulation. Also, note that $\dot{M}_2$ is kept at the low value for the remainder of the simulation.
Namely, for this approach we do not apply recovery from accretion. \noindent(3) Accretion dependent mass loss: In the third approach we dynamically change the mass loss rate of the secondary wind in response to the mass that has been accreted. We lower $\dot{M}_2$ by reducing the density of the ejected wind by the \emph{extra} density of the accreted gas, namely \begin{equation} \begin{split} \frac{d\dot{M}_{2, \rm{eff}}}{d\Omega} &= \frac{d\dot{M}_2}{d\Omega} \frac{\rho_u(\Omega)-[\rho(\Omega)-\rho_u(\Omega)]}{\rho_u(\Omega)} \\ &= \frac{d\dot{M}_2}{d\Omega} \left(2-\frac{\rho(\Omega)}{\rho_u(\Omega)} \right). \end{split} \label{eq:mdot_approach3} \end{equation} In the above equation $\Omega$ is a solid angle, $d\dot{M}_2/d\Omega$ is the differential mass loss of the secondary, $\rho_u(\Omega)$ is the undisturbed density of the secondary wind as if it blows without the interruption of accreted gas, and $\rho(\Omega)$ is the actual density of the gas (if no accreted gas arrives at the secondary wind ejection region into a solid angle $\Omega$, then practically $\rho(\Omega)=\rho_u(\Omega)$ for that solid angle). Note that this approach gives a mass loss rate that is not isotropic but rather depends on the directions from which parcels of accreted gas arrive. This approach can therefore only be implemented if the secondary and its immediate vicinity (from where its wind is being ejected and to where mass from the primary wind is being accreted) are simulated with high resolution. \noindent(4) No intervention: In the fourth approach we do not remove any accreted gas from the simulation. Cells in the secondary wind injection zone where dense blobs arrive are not replaced by fresh secondary wind but rather kept as is, i.e., with the density, velocity, and temperature of the blob. If the blob reaches the innermost injection zone, then the mass-loss rate over the solid angle of that cell for that timestep is zero; otherwise, the mass-loss rate per solid angle is unchanged.
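The two analytic prescriptions, equations (\ref{eq:mdot_approach2}) and (\ref{eq:mdot_approach3}), can be transcribed directly. In this sketch the function names are ours, and flooring the approach-3 rate at zero (for cells where the accreted density exceeds twice the undisturbed value) is our assumption, not stated in the text.

```python
import math

def mdot2_eff(t_days, mdot2=1.0e-5):
    """Approach 2: exponentially reduced mass loss rate (M_sun/yr).

    t_days is the time in days, with t = 0 at periastron passage; the rate
    drops by a factor of 10 over the 5 days before periastron and is then
    kept at the low value (no recovery).
    """
    if t_days <= -5.0:
        return mdot2
    if t_days <= 0.0:
        return mdot2 * math.exp(-(t_days + 5.0) * math.log(10.0) / 5.0)
    return 0.1 * mdot2

def mdot3_eff_per_omega(dmdot2_domega, rho, rho_u):
    """Approach 3: reduce the differential mass loss rate in a solid angle
    by the extra density of the accreted gas arriving there.
    Flooring at zero is our assumption."""
    return max(dmdot2_domega * (2.0 - rho / rho_u), 0.0)
```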
Table \ref{table:parameters} summarizes the simulations we ran, with different stellar masses and different approaches for treating the response of the secondary wind to the accreted gas. We also test a denser primary wind and another value of the orbital eccentricity. \begin{table*} \centering \caption{List of simulations. Run naming code: C$=$ Conventional; M$=$ Massive; WA$=$ Wind Acceleration. } \begin{tabular}{lcccclcc} \hline Run & Stellar masses &Semi-major &Orbital &$\dot{M}_1 (10^{-4} $ & Approach for secondary & Accretion & Accreted mass \\ & $M_1,M_2(~\rm{M_{\sun}})$ &axis ($~\rm{au}$) &eccentricity &$~\rm{M_{\odot}}~\rm{yr^{-1}}$) & response to accretion & duration (days) & $(10^{-6} ~\rm{M_{\sun}})$ \\ \hline C1 & 120,30 &16.64 & 0.9 &6 & (1) Approaching gas & 2 & 0.01 \\ & & & & & \quad \enskip removal & & \\ C2 & 120,30 &16.64 & 0.9 &6 & (2) Exponentially reduced & 65 & 3.8 \\ & & & & & \quad \enskip mass loss & & \\ C3 & 120,30 &16.64 & 0.9 &6 & (3) Accretion dependent & 2$^{a}$ & 0.04 \\ & & & & & \quad \enskip mass loss & & \\ C4 & 120,30 &16.64 & 0.9 &6 & (4) No intervention & 3$^{b}$ & 0.2 \\ &&&&&&\\ &&&&&&\\ M1 & 170,80 &19.73 & 0.9 &6 & (1) Approaching gas & 16 & 0.04 \\ & & & & & \quad \enskip removal & & \\ M2 & 170,80 &19.73 & 0.9 &6 & (2) Exponentially reduced & 65 & 4.2 \\ & & & & & \quad \enskip mass loss & & \\ M3 & 170,80 &19.73 & 0.9 &6 & (3) Accretion dependent & 29 & 0.06 \\ & & & & & \quad \enskip mass loss & & \\ M4 & 170,80 &19.73 & 0.9 &6 & (4) No intervention & 38 & 1 \\ &&&&&&\\ &&&&&&\\ C5 & 120,30 &16.64 & 0.9 &10 & (4) No intervention & 3$^{c}$ & 0.2 \\ M5 & 170,80 &19.73 & 0.9 &10 & (4) No intervention & 45 & 3.1 \\ &&&&&&\\ &&&&&&\\ C6 & 120,30 &16.64 & 0.85 &6 & (4) No intervention & 3 & 0.4 \\ M6 & 170,80 &19.73 & 0.85 &6 & (4) No intervention & 64 & 1.1 \\ &&&&&&\\ &&&&&&\\ M4WA$^{d}$& 170,80 &19.73 & 0.9 &6 & (4) No intervention & 48 & 1.6 \\ \hline &&\\ \end{tabular} \begin{flushleft} $^{a}$ Run C3 also showed long-lasting
very weak accretion, but the main accretion phase lasts $\simeq2$ days. \newline $^{b}$ Run C4 has 2 accretion episodes of clumps separated by many days. \newline $^{c}$ Run C5 has 3 accretion episodes of clumps separated by many days. \newline $^{d}$ Run M4WA is similar to run M4 in all parameters but includes wind acceleration for the secondary star. \end{flushleft} \label{table:parameters} \end{table*} Let us elaborate on how we calculate the accreted mass. With no accretion, each cell in the wind injection zone around the secondary is expected to have a density given by \begin{equation} \rho_u(r)=\frac{\dot{M}_2}{4 \pi r^2 v_2} . \label{eq:rhou} \end{equation} As the simulation runs, high density clumps and filaments approach the injection zone of the secondary wind and even reach the cells of the secondary itself. Whenever the actual density $\rho_{a, \rm{cell}}$ in a cell in the injection zone increases above the expected undisturbed value $\rho_{u, \rm{cell}}$, we count the extra mass as accreted, \begin{equation} \Delta M_{\rm acc} = (\rho_{a, \rm{cell}} - \rho_{u, \rm{cell}}) V_{\rm{cell}}, \label{eq:Macc} \end{equation} where $V_{\rm{cell}}$ is the volume of the cell. We then sum the contributions from all cells in the injection zone to obtain the total mass accreted in that time step. \section{RESULTS} \label{sec:results} We post-processed every simulation to measure the mass accreted onto the secondary and to derive other quantities that we discuss below. As the simulation is of very high resolution, both the running time and the post-processing are long. We therefore derive a post-processing output every $\simeq1/2$ day, even though our data is calculated in time steps of $\simeq 1$--$3$ minutes (a necessarily short time step determined by the Courant condition). This interval is, however, sufficient to produce the accretion rate and other quantities with good accuracy.
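The per-cell bookkeeping of equations (\ref{eq:rhou}) and (\ref{eq:Macc}) used in this post-processing can be sketched as follows; the cell representation and function names are ours, and cgs units are assumed.

```python
import math

def rho_undisturbed(r_cm, mdot2_g_s, v2_cm_s):
    """Expected undisturbed secondary wind density (g/cm^3) at radius r."""
    return mdot2_g_s / (4.0 * math.pi * r_cm ** 2 * v2_cm_s)

def accreted_mass(cells, mdot2_g_s, v2_cm_s):
    """Sum the excess mass over the undisturbed density in the injection zone.

    Each cell is a tuple (r_cm, rho_actual_g_cm3, volume_cm3); only the
    density excess above the undisturbed value counts as accreted mass.
    """
    total = 0.0
    for r, rho_a, vol in cells:
        rho_u = rho_undisturbed(r, mdot2_g_s, v2_cm_s)
        if rho_a > rho_u:
            total += (rho_a - rho_u) * vol
    return total
```

Summing this quantity over the injection-zone cells at each output gives the accreted mass per time step.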
In \citetalias{Kashi2017} we presented the results of a run with parameters similar to those of run C1 shown here. In both runs we used the conventional mass model ($M_1=120 ~\rm{M_{\sun}}$ and $M_2=30 ~\rm{M_{\sun}}$). The differences are, as explained in section \ref{sec:simulation}, that here we model the flow without self-gravity, using two point masses for the two stars, and that the accreted mass is removed from the simulation. Figure~\ref{fig:density_slices} shows density maps for run C1 in the orbital plane ($z=0$), at different times of the simulation. Times are given with respect to periastron. The secondary is at the center of the grid, and the primary orbits it from the upper part of the figure to the bottom-left until periastron, and then bottom-right. At periastron the primary (light gray circle) is exactly to the left of the secondary (dark gray circle). The secondary wind is injected at its terminal velocity between the secondary radius and the black circle. Comparing run C1 in Figure~\ref{fig:density_slices} with figure~1 in \citetalias{Kashi2017}, it can be seen that accretion starts at the same time ($t \simeq -5$ days), but here the accretion time is shorter, lasting only $\simeq 2$ days. The reason for the difference is the removal of the accreted gas that we enforce as part of approach 1. Taking the gas away allows the secondary wind to continue to blow without interruption. In the run we presented in \citetalias{Kashi2017} such an interruption was caused by the untreated accreted gas, which was left to accumulate around the secondary and blocked some of the secondary wind, in turn allowing more gas to be accreted.
\begin{figure} \centering \includegraphics[trim= 0.cm 0.cm 0.cm 0.cm,clip=true,width=0.99\columnwidth]{6panel_120_30_approach1_reduced_tiny.eps} \caption{ Density maps with velocity vectors showing the density sliced in the orbital plane ($z=0$), for run C1, where we use the conventional mass model ($M_1=120 ~\rm{M_{\sun}}$ and $M_2=30 ~\rm{M_{\sun}}$). The bottom right panel shows a temperature map. The secondary is at the center, marked with a dark-gray circle, while the primary, marked with a light-gray circle, orbits it counter-clockwise starting from the upper part of the figure at $t = -50$ days. The simulation is performed in the secondary rest frame, and therefore the orbital velocity of the primary is subtracted from its wind velocity. Note that this effect is hardly seen in the figure due to the short arrows depicting the primary wind velocity. Periastron occurs at $(x,y,z)=(-1.664 ~\rm{au}, 0, 0)$ and $t=0$. Times are given with respect to periastron. The secondary wind is injected, at terminal velocity, between the secondary and the black circle around it. We model gravity by two point masses at the locations of the stars. Accretion starts $\simeq 5$ days before periastron, when the dense clumps that formed in the post-shocked primary wind enter the injection region of the secondary wind, and lasts for only $\simeq 2$ days. } \label{fig:density_slices} \end{figure} Figure~\ref{fig:density_slices_run2} shows the same time series of density maps for run C2, where we also use the conventional mass model, but this time with the second approach for the secondary mass loss response to accretion. The second approach is very artificial, in the sense that the mass loss rate of the secondary is reduced regardless of the details of the interaction with the primary wind and of the accretion. It assumes that the accreted gas shuts down the secondary wind (more accurately, reduces its mass loss rate by a factor of 10).
We use this approach to obtain an upper limit on the accretion rate. It can be seen that, indeed, as a result of reducing the mass loss rate of the secondary, much more mass can reach the secondary and be accreted. Over a duration of 70 days the secondary accreted $M_{\rm acc} \simeq 3.8 \times 10^{-6} ~\rm{M_{\sun}}$. We do not reinstate the original mass loss rate of the secondary wind during this time interval. Had we done so, the accretion would most probably have stopped at the time of reinstatement. \begin{figure} \centering \includegraphics[trim= 0.cm 0.cm 0.cm 0.cm,clip=true,width=0.99\columnwidth]{6panel_120_30_approach2_reduced_tiny.eps} \caption{ Like Figure~\ref{fig:density_slices} but for run C2. At times prior to 5 days before periastron there is (by definition) no difference, but then we lower the density of the secondary wind according to equation (\ref{eq:mdot_approach2}), resulting in the accretion of more mass onto the secondary. It can be seen that as of day $-4$ the colliding winds structure ceases to exist. Accretion then occurs directly from the primary wind onto the secondary. } \label{fig:density_slices_run2} \end{figure} Figure~\ref{fig:density_slices_run3} shows the same time series of density maps for run C3, where we also used the conventional mass model, but this time with the third approach for the secondary mass loss response to accretion. The secondary wind does not respond much to the accreted mass, and there is no significant prolongation of the accretion phase compared to run C1. \begin{figure} \centering \includegraphics[trim= 0.cm 0.cm 0.cm 0.cm,clip=true,width=0.99\columnwidth]{6panel_120_30_approach3_reduced_tiny.eps} \caption{ Like Figure~\ref{fig:density_slices} but for run C3: conventional mass model with approach (3) for the secondary wind response to accretion. We find that there is not a very large difference from the results of run C1.
} \label{fig:density_slices_run3} \end{figure} The simulation in which we adopted the last of our four approaches for the conventional mass model is shown in Figure~\ref{fig:density_slices_run4}. We find a considerably higher accretion rate compared to approaches 1 and 3. As the gas that reaches the wind ejection region is not removed from the simulation, it is able to penetrate deeper into the wind ejection region and spread over wider solid angles. The result is regions in the secondary atmosphere that stop pushing wind as a result of accretion. Consequently, the mass accretion rate is higher and the accreted mass accumulates to $M_{\rm acc} \simeq 2 \times 10^{-7} ~\rm{M_{\sun}}$ over the duration of accretion. In this case accretion lasts for 2.5 days and then stops for about a month, after which there is a minor accretion of a clump lasting about half a day. We consider this a natural result of blob accretion. \begin{figure} \centering \includegraphics[trim= 0.cm 0.cm 0.cm 0.cm,clip=true,width=0.99\columnwidth]{6panel_120_30_approach4_reduced_tiny.eps} \caption{ Like Figure~\ref{fig:density_slices} but for run C4: conventional mass model with approach (4) for the secondary wind response to accretion. Mass accretion is larger than in approaches 1 and 3, but much smaller than in approach 2. } \label{fig:density_slices_run4} \end{figure} We compare the mass accretion rates of runs C1--C4 in Figure~\ref{fig:accretion12030}. We can see that using approaches 1 and 3 gives a very small accreted mass, while approach 4 yields a larger amount. Approach 2, as discussed above, gives an indication of the upper limit, had the secondary wind been reduced to 10\% of its mass loss rate for the duration of the event. In all the conventional mass model runs (except run C2) the accreted mass is not large enough to account for observations according to the estimates in \cite{KashiSoker2009b}.
\begin{figure} \centering \includegraphics[trim= 0.0cm 0.0cm 0.5cm 0.5cm,clip=true,width=0.99\columnwidth]{M_accd_and_dM_accd_dt_c_120_30.eps} \caption{ The accreted mass (upper panel) and the accretion rate (lower panel) for runs C1--C4 (Table \ref{table:parameters}), where we simulated the conventional mass model ($M_1=120 ~\rm{M_{\sun}}$ and $M_2=30 ~\rm{M_{\sun}}$) with the four different approaches for the secondary wind response to accretion. Note that for clarity the values in both panels for approaches 1 and 3 have been multiplied by 10. } \label{fig:accretion12030} \end{figure} We repeated the simulations for the high mass model (i.e., $M_1=170 ~\rm{M_{\sun}}$ and $M_2=80 ~\rm{M_{\sun}}$), keeping the orbital period and wind properties the same as in the previous simulations, and taking the same four approaches for the secondary wind response to accretion. Figure \ref{fig:density_slices_run_M4} shows the time series of run M4, in which approach 4 for the response of the secondary to the accreted gas was used. Already at $t=0$ the difference between runs M4 and C4 is evident. In run M4 vigorous accretion is taking place, while in run C4 there is no trace of the weak accretion episode that took place only a few days earlier. \begin{figure} \centering \includegraphics[trim= 0.cm 0.cm 0.cm 0.cm,clip=true,width=0.99\columnwidth]{6panel_170_80_approach4_reduced_tiny.eps} \caption{ Like Figure~\ref{fig:density_slices} but for run M4. Here we use the high mass model, $M_1=170 ~\rm{M_{\sun}}$ and $M_2=80 ~\rm{M_{\sun}}$, while the orbital eccentricity and the mass loss rate of the primary are the same as in runs C1--C4. } \label{fig:density_slices_run_M4} \end{figure} The mass accretion rates and accumulated accreted mass for runs M1--M4 (high mass model) are shown in Figure~\ref{fig:accretion17080}. The orbit for the high mass model has the same period and eccentricity, but the semi-major axis is larger, and consequently so is the periastron distance.
As the gravitational well of the secondary is deeper for the high mass model, the secondary can more easily accrete the filaments and clumps formed in the colliding winds structure, and therefore accretion starts $\simeq 7.5$ days before periastron ($\simeq 2.5$ days earlier than for the conventional mass model). \begin{figure} \centering \includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\columnwidth]{M_accd_and_dM_accd_dt_c_170_80.eps} \caption{ Comparison of the accretion rates and accumulated accretion for runs M1--M4, where we simulated the high mass model ($M_1=170 ~\rm{M_{\sun}}$ and $M_2=80 ~\rm{M_{\sun}}$) with the four different approaches for the secondary wind response to accretion. Note that for clarity the values in both panels for approaches 1 and 3 have been multiplied by 10. } \label{fig:accretion17080} \end{figure} We find that the accreted masses for approaches 1 and 3 (runs M1 and M3, respectively) are still low. For approach 2 (run M2) we see that the accretion rate reaches saturation at about $t\simeq 0$ days. The saturation in the mass accretion rate describes a situation where the secondary wind does not succeed in overcoming the momentum of the gas accreted from the primary wind. The primary wind engulfs the star from almost all directions and the secondary wind is almost trapped. The secondary then accretes as much as it can. The wind of the secondary escapes through gaps in the primary wind, creating bubbles of thin gas within the dense primary wind. The rest of the primary wind gas, which cannot be accreted but is still gravitationally bound to the secondary, accumulates around the secondary, waiting to be accreted. Note that we do not get saturation for the conventional mass model (run C2), as in this run the secondary is able to push back up to 25$\%$ of the accreted wind. Namely, accretion in run C2 is weaker than in run M2.
Approach 4 (run M4) presents a different picture, with a much larger mass accretion rate and an accreted mass of $\simeq 1.0 \times 10^{-6} ~\rm{M_{\sun}}$ over the duration of the simulation. The stronger gravity of the secondary in the high mass model makes a significant difference, allowing much more mass to be accreted compared to the conventional mass model (run C4). As noted above, the accreted gas is expected to reduce the effective temperature of the secondary, resulting in emission lines of lower ionization states. We post-process the simulation results to obtain the effective temperature of the secondary as a result of the obscuring gas. For that, we first calculate the optical depth towards the star. The optical depth is calculated from an outer radius $R_{\rm{out}}$ towards the center, stopping at $R_2$, \begin{equation} \tau(t) = \int\limits_{R_{\rm{out}}}^{R_2} \! -\kappa(r,t) \rho(r,t) \,\rm{d}r, \label{eq:tau} \end{equation} where $\kappa(r,t)$ is the Rosseland opacity, which depends on time since the density and temperature at each position along the calculated path are time dependent. The density $\rho(r,t)$ is determined by the simulation results, taking into account the accreted mass and the mass loss of the secondary. We then obtain the effective temperature assuming a grey photosphere, averaged over all directions, \begin{equation} T^4_{\rm{eff}}(t) = \frac{4}{3} \left(\tau(t) + \frac{2}{3} \right)^{-1} T^4(\tau=2/3), \label{eq:Teff} \end{equation} where we take $T(\tau=2/3)=40\,000 ~\rm{K}$ to be the isotropic effective temperature of the undisturbed secondary, though we note that recent analysis of UV lines may indicate a higher temperature \citep{Davidsonetal2018}. Figure~\ref{fig:Tavg} presents the effective temperature, showing the decrease as a result of accretion.
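The grey-photosphere prescription of equation (\ref{eq:Teff}) can be transcribed directly; in this sketch the function name and the sample optical depths are ours.

```python
def t_eff(tau, t_phot=40_000.0):
    """Effective temperature (K) for an extra optical depth tau, following
    eq. (Teff) with T(tau=2/3) = 40,000 K for the undisturbed secondary."""
    return ((4.0 / 3.0) / (tau + 2.0 / 3.0)) ** 0.25 * t_phot

for tau in (2.0 / 3.0, 2.0, 5.0, 20.0):
    print(f"tau = {tau:5.2f}  ->  T_eff = {t_eff(tau):8.0f} K")
```

The prescription decreases monotonically with the obscuring optical depth, as seen in Figure~\ref{fig:Tavg}.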
\begin{figure} \centering \includegraphics[trim= 0.8cm 0.1cm 1.5cm 1.2cm,clip=true,width=1.0\columnwidth]{T_avg.eps} \caption{ The reduction in the effective temperature of the secondary, averaged over all directions ($4\pi$). } \label{fig:Tavg} \end{figure} Observations of lines during the spectroscopic event indicate ionizing radiation from the secondary that is equivalent to that of a star with an effective temperature of $\lesssim 25\,000 ~\rm{K}$. From the decrease in effective temperature we conclude that the approach that best fits observations of the spectroscopic event is approach (4). Namely, we find that our simulations start and end accretion consistently with the observations, without needing a prescription that would intervene in the natural process. Still, the temperature we obtained for run M4 is somewhat higher than indicated by the observed lines during the spectroscopic event, suggesting that the amount of accreted gas should be higher than we obtained for the parameters we used in this run. We therefore add more runs to see the effect of varying some of the parameters of the problem. Since we are dealing with very expensive computations we cannot go through all parameters and all their ranges, and therefore we restrict ourselves to the important ones and to key simulations that will show trends in the amount of accreted gas and the reduction of the effective temperature of the secondary. Runs C5 and M5 are similar to C4 and M4, respectively, with the only change being an increase of the primary mass loss rate to $\dot{M}_1= 10^{-3} ~\rm{M_{\odot}}~\rm{yr^{-1}}$. This value is intended to represent the earlier state of the primary mass loss rate, before its change of state (see section \ref{sec:intro} for details). For run M5 we find that the change relative to run M4, a factor of $5/3$ in $\dot{M}_1$, resulted in an increase by a factor of $3.1$ in $M_{\rm acc}$, such that overall the accreted mass is $M_{\rm acc} \simeq 3.1 \times 10^{-6} ~\rm{M_{\sun}}$.
The duration of accretion is about $20\%$ longer than in run M4. It starts $\simeq 18$ days before periastron passage and lasts for $\simeq 45$ days. This demonstrates the nonlinear dependence of accretion on the primary mass loss rate. Figure~\ref{fig:Tavg} also shows the effective temperature for run M5, showing a drop in the effective temperature that is somewhat too strong, and of longer duration than indicated by observations of the 2003.5 spectroscopic event. We can therefore conclude that a mass loss rate larger than the one in run M4, but lower than the one taken in run M5, about $\dot{M}_1 \approx 8 \times 10^{-4} ~\rm{M_{\odot}}~\rm{yr^{-1}}$, would best fit the properties inferred from observations. Observations of later spectroscopic events, which were shorter and had weaker variation in lines as a result of the secondary UV radiation, better fit a primary mass loss rate closer to the lower value of run M4. It is interesting to note that \cite{Grohetal2012} found a similar value, $8.5 \times 10^{-4} ~\rm{M_{\odot}}~\rm{yr^{-1}}$, from 2D radiative transfer modeling of UV and optical spectra taken when the binary system was near apastron. \begin{figure} \centering \includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\columnwidth]{M_accd_and_dM_accd_dt_c_C456M456.eps} \caption{ The accreted mass (upper panel) and the accretion rate (lower panel) for runs C4--C6 and M4--M6 (see Table \ref{table:parameters}). It can be seen that for the high mass model (M-runs) much more mass is accreted onto the secondary. The main reason is the stronger gravity of the secondary. It is also very clear that a stronger mass loss rate of the primary (runs C5 and M5) causes a large increase in the accreted mass. The dependence on eccentricity is more complicated, as a lower eccentricity (runs C6 and M6) means a larger periastron distance but also a longer periastron passage. These two effects can combine in different ways, as described in the text.
} \label{fig:accretionC456M456} \end{figure} Another parameter we vary is the eccentricity, for which we also test the value $e=0.85$. This value was favored by \citet{Davidsonetal2017}, who mentioned that it gives the smallest possible separation distance at the critical time when the spectroscopic event begins. It would therefore be expected that $e=0.85$ would produce earlier accretion compared to $e=0.9$, even though the periastron distance is $50\%$ larger for the smaller eccentricity. Runs C6 and M6 (Figure~\ref{fig:accretionC456M456}) show that for $e=0.85$ the accretion duration is longer. This confirms the claims of \citet{Davidsonetal2017}, since the time it takes the secondary to undergo periastron passage is longer. Run M6 also shows early accretion, exactly as expected by \citet{Davidsonetal2017}. In run C6 we have not seen this behaviour; the reason is that the larger periastron distance and the smaller secondary mass combine to reduce the gravitational attraction of the secondary, and therefore early accretion could not occur. Both runs C6 and M6 produced larger mass accretion than their counterparts with $e=0.9$, runs C4 and M4, respectively. The last effect we test, in a preliminary way, is the acceleration of the secondary wind. In run M4WA we take the parameters as in run M4, but instead of ejecting the wind at terminal speed, we accelerate the secondary wind using a $\beta$ profile motivated by the traditional CAK model \citep{Castoretal1975}. We take $\beta=1/2$, a value considered to be appropriate for O-stars (e.g., \citealt{Vinketal2011}), for which the wind is accelerated according to an inverse-$r^2$ law, with radial acceleration \begin{equation} a_2(r_2)=\frac{v_{2,\rm{inf}}^2}{2R_2} \left(\frac{R_2}{r_2}\right)^2, \label{eq:beta_half_acc} \end{equation} where $r_2$ is the distance from the center of the secondary and $v_{2,\rm{inf}}=3\,000 ~\rm{km~s^{-1}}$ is the terminal velocity of the secondary wind.
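Equation (\ref{eq:beta_half_acc}) integrates to the familiar $\beta=1/2$ velocity law, $v(r_2)=v_{2,\rm{inf}}\sqrt{1-R_2/r_2}$; a short numerical check (the integrator, step count, and outer radius are ours):

```python
# Integrate v dv/dr = a_2(r) outward from the stellar surface and compare
# with the analytic beta = 1/2 law v(r) = v_inf * sqrt(1 - R2/r).
V_INF = 3.0e8         # 3000 km/s in cm/s
R2 = 20.0 * 6.957e10  # 20 R_sun in cm

def accel(r):
    return V_INF ** 2 / (2.0 * R2) * (R2 / r) ** 2

# d(v^2)/dr = 2 a(r): integrating v^2 avoids the v = 0 start-up singularity.
n_steps, r_max = 100_000, 100.0 * R2
dr = (r_max - R2) / n_steps
v2, r = 0.0, R2
for _ in range(n_steps):
    v2 += 2.0 * accel(r + 0.5 * dr) * dr  # midpoint rule
    r += dr

v_numeric = v2 ** 0.5
v_analytic = V_INF * (1.0 - R2 / r_max) ** 0.5
print(v_numeric / 1.0e5, v_analytic / 1.0e5)  # both in km/s
```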
We apply the acceleration only to material that has a velocity vector in the radial direction. If a cell has a velocity vector deviating from the radial direction, we treat it as affected by incoming gas, and stop accelerating it. Figure \ref{fig:accretionM4M4WA} compares the results of runs M4 and M4WA. We can see that the accreted mass obtained when taking the wind acceleration into account is somewhat larger, $M_{\rm acc} \simeq 1.6 \times 10^{-6} ~\rm{M_{\sun}}$ instead of $\simeq 1.0 \times 10^{-6} ~\rm{M_{\sun}}$. This result is expected, since the secondary wind opposing accretion is less energetic when accelerated rather than launched at terminal velocity. \begin{figure} \centering \includegraphics[trim= 0.0cm 0.0cm 0.0cm 0.0cm,clip=true,width=0.99\columnwidth]{M_accd_and_dM_accd_dt_c_M4_M4WA.eps} \caption{ The accreted mass (upper panel) and the accretion rate (lower panel) for run M4, for which the wind of the secondary is ejected at terminal velocity, and run M4WA, for which the wind of the secondary is accelerated according to a $\beta=1/2$ profile. Accretion for the accelerated wind case is larger by $\simeq 60 \%$, and the duration is longer by $\simeq 26 \%$. } \label{fig:accretionM4M4WA} \end{figure} \section{SUMMARY AND DISCUSSION} \label{sec:summary} We perform detailed 3D numerical simulations of the $\eta$~Car colliding winds system close to periastron passage and derive the accretion rates onto the secondary star. The colliding wind region is prone to instabilities that lead to a non-linear formation of clumps and filaments that are accreted onto the secondary star. The accreted mass disturbs the secondary wind and weakens the mass loss, enabling more mass to be accreted. The accretion finally stops in the post-periastron phase, when the stars get further from each other and consequently the density of the primary wind in the vicinity of the secondary decreases. Simulating accretion onto stars is a complicated task.
The code used for this task needs to handle both the hydrodynamics of the flow and the interaction of the photons emitted from the star with the gas. The \texttt{FLASH} code we use incorporates a radiation transfer unit which treats the photon-gas interaction, so the momentum of the accreted gas is changed appropriately along its trajectory. In our simulations we included only the energy from the hot gas itself, which has a small effect on gas that already has a relatively high velocity. The acceleration by the photons emitted from the stars was not included directly, but rather implied by the initial velocities given to the winds (and, in run M4WA, implied by the acceleration through the beta-profile). The \texttt{FLASH} code can only treat the solution of the radiative transfer equation in the absence of scattering, namely it offers only a formal solution of the radiative transfer equation and not the full self-consistent scattering solution. The effect of scattering may be addressed using different methods, such as dedicated radiation transfer codes that use a Monte Carlo approach. We here focus on the way the secondary wind would respond to the high accretion rate, and for that purpose suggest the four approaches discussed in section \ref{sec:simulation}: approaching gas removal, exponentially reduced mass loss, accretion dependent mass loss, and no intervention. We find that accretion is obtained for both the conventional mass model $(M_1,M_2)=(120 ~\rm{M_{\sun}}, 30 ~\rm{M_{\sun}})$ and the high mass model $(M_1,M_2)=(170 ~\rm{M_{\sun}}, 80 ~\rm{M_{\sun}})$. For the high mass model the stronger secondary gravity attracts the clumps, and we get higher accretion rates and a longer accretion period. Obviously our simulations are not full radiation-transfer simulations, and as such do not provide complete details regarding the ionization structure.
However, they are sufficient to show the reduction in the secondary effective temperature under the assumption of high optical depth. We show that for the runs where accretion is substantial, $M_{\rm acc} \gtrsim 10^{-6} ~\rm{M_{\odot}}$, the effective temperature of the secondary drops as a result of the ambient gas. Consequently fewer ionizing photons are emitted from the secondary, which is the major ionizing source of the binary system despite its lower luminosity. Therefore the ionization structure changes for the duration of the event. This confirms the basic idea of the accretion model, that the obscuration of the ionizing photons of the secondary is the cause of the variations in lines during the spectroscopic event \citep{Soker2001,Soker2005a,Soker2005b,Soker2007,Akashietal2006}. One important parameter we studied is the mass loss rate of the primary. We used values within the range of values explored in the literature (as discussed in \citetalias{Kashi2017}). Our simulations demonstrated that the mass loss rate of the primary affects the accretion rate of the secondary in a non-linear way. Our results suggest that at least for the 2003.5 and 2009 periastron passages the mass loss rate of the primary was $\dot{M}_1 \approx 8 \times 10^{-4} ~\rm{M_{\odot}}~\rm{yr^{-1}}$, similar to the value obtained by \cite{Grohetal2012} from observations. The 2014.6 spectroscopic event may indicate a further decrease in the primary mass loss rate, which was claimed to be an ongoing trend \citep{Mehneretal2015}. A similar decrease in the primary mass loss rate was obtained in simulations of recovery from giant eruptions \citep{Kashietal2016}. Our simulations partially support the conclusions of \cite{Mehneretal2015}, who suggested that if the mass loss rate of the primary continues to decrease, future spectroscopic events will be very weak or absent.
We indeed find a strong dependence between the accreted mass and the mass loss rate of the primary, and it is clear that if the mass loss rate of the primary is lowered by a factor of a few, the accretion can stop. Moreover, \cite{Mehneretal2015} claimed that in the 2014.6 event the primary mass loss rate had fallen low enough that full accretion did not occur, as opposed to previous events. Our simulations are unable to confirm or refute this conclusion due to the uncertainty in many parameters. In our study we counted material as accreted only if it actually reached the secondary. It is worth mentioning that other accretion criteria exist in the literature, such as reaching the outer edge of the wind ejection zone (e.g. \citealt{Akashietal2013}). A very lenient criterion is that of \cite{deVal-Borroetal2009}, who performed simulations of accretion in symbiotic systems. When modeling Bondi-Hoyle-Lyttleton accretion \citep{HoyleLyttleton1939,BondiHoyle1944} they removed some of the gas in the vicinity of the star and added it to the accreting star. For that they took the accretion radius to be $R_{\rm acc} = 0.1 R_H = 0.1 r (M_2/3M_1)^{1/3}$, where $R_H$ is the Hill radius of the accreting star and $r$ is the binary separation. In our case, adopting such an expression for the conventional mass model (the high mass model) would give $R_{\rm acc} \simeq 15.6 ~\rm{R_{\sun}}$ ($\simeq 22.9 ~\rm{R_{\sun}}$) at periastron, and larger accretion radii before and after periastron time. While for the conventional mass model $R_{\rm acc} < R_2$, adopting the prescription of \cite{deVal-Borroetal2009} for the accretion radius in the case of the high mass model of $\eta$~Car~ would give a significantly wider accretion radius, and would consequently increase the accretion rate considerably. As expected, we can see that the accretion rate obtained when taking the secondary wind acceleration into account was higher.
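For reference, the two accretion radii quoted above can be reproduced from Kepler's third law. The sketch below assumes an orbital period of $P=5.54 ~\rm{yr}$ and $e=0.9$ (values not restated in this section), together with the $R_{\rm acc} = 0.1\, r\,(M_2/3M_1)^{1/3}$ prescription quoted from \cite{deVal-Borroetal2009}:

```python
# Sketch: accretion radius R_acc = 0.1 * r * (M2 / 3 M1)^(1/3) at periastron,
# following the prescription of deVal-Borro et al. (2009) quoted in the text.
# Assumed orbital parameters: P = 5.54 yr, e = 0.9 (not stated in this section).
AU_IN_RSUN = 215.032  # solar radii per astronomical unit

def racc_at_periastron(m1, m2, period_yr=5.54, e=0.9):
    """Masses in solar masses; returns the accretion radius in solar radii."""
    a_au = ((m1 + m2) * period_yr**2) ** (1.0 / 3.0)  # Kepler III, solar units
    r_peri = a_au * (1.0 - e) * AU_IN_RSUN            # periastron separation
    return 0.1 * r_peri * (m2 / (3.0 * m1)) ** (1.0 / 3.0)

print(round(racc_at_periastron(120.0, 30.0), 1))  # conventional mass model, ~15.6 R_sun
print(round(racc_at_periastron(170.0, 80.0), 1))  # high mass model, ~22.9 R_sun
```

The high mass model gives the larger radius both because of the larger mass ratio and because the larger total mass implies a wider orbit at fixed period.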
It is not very difficult to come up with improved approaches for how the secondary wind would react to accretion. For example, our third approach neglects possible rotation of the secondary, which would make the change in the mass loss rate latitude dependent rather than direction dependent. Even though more sophisticated approaches could be used, we find that the ones we use here give accretion rates that match earlier estimates \citep{KashiSoker2009b} based on observations of the duration of the spectroscopic event. We therefore leave the development of higher order approaches to a further study. These will also take into account the full effects of wind acceleration, for which we show here preliminary results. Accelerating the stellar winds will lead to two competing effects that might affect the accretion onto the secondary. The pre-shock velocity will be lower, which will reduce the penetration of the clumps and their ability to reach the secondary's surface, while the pre-shock density will be higher, so the shocked gas undergoes more radiative cooling and forms denser clumps with a higher chance of reaching the secondary star. As such, the incorporation of the wind acceleration of both stars will be a key component of future work. The lower accretion rates compared to \cite{KashiSoker2009b}, where the acceleration was taken into account, may suggest that the acceleration of the wind plays a role in the details of accretion and in the response of the secondary to accretion. An additional aspect is the way the accreting star would respond to the accreted gas which settles onto its envelope, with momentum in the direction opposite to its blowing wind, angular momentum, and different composition. A complete treatment requires a stellar evolution code, adding the accreted mass to the secondary star and obtaining the resulting properties of the wind.
It may also be required to iterate between the hydrodynamical and stellar evolution codes in order to obtain a consistent solution. However, even the most modern stellar evolution codes only use formulated (or semi-empirical) prescriptions for mass loss rates (e.g. \citealt{KudritzkiPuls2000,Vinketal2001,Vinketal2011,Pulsetal2008,Vink2015}). It is therefore not clear that the exercise suggested above would produce better results than the treatment we incorporate here. As the parameter space is large and not tightly constrained, there is no point at this time in fine-tuning the parameters in our simulations. The main point is that some of the secondary wind response approaches we explored match the properties inferred from observations, and better support the high mass model for $\eta$~Car~. In a future work we intend to explore in more detail the directional effects of the accreted gas, and to quantitatively study the angular momentum of the accreted gas and how it affects the binary system at times of spectroscopic events. \section*{Acknowledgements} I appreciate very helpful comments from Noam Soker and an anonymous referee. This work used the Extreme Science and Engineering Discovery Environment (XSEDE) TACC/Stampede2 through allocation TG-AST150018. This work was supported by the Cy-Tera Project, which is co-funded by the European Regional Development Fund and the Republic of Cyprus through the Research Promotion Foundation. This research was enabled in part by support provided by Compute Canada (\url{www.computecanada.ca}), thanks to the sponsorship of A. Skorek.
\section{Introduction} The Narayana sequence was introduced by the Indian mathematician Narayana in the 14th century, while studying the following problem about a herd of cows and calves: A cow produces one calf every year. Beginning in its fourth year, each calf produces one calf at the beginning of each year. How many calves are there altogether after 20 years? (cf. \cite{Al-Jo}). This problem can be solved in the same way that Fibonacci solved his problem about rabbits (cf. \cite{Ko}). If $r$ is the year, then the Narayana problem can be modelled by the recurrence \begin{equation} N_{r+1}=N_{r}+N_{r-2},\ N_{0}=0,\ N_{1}=N_{2}=1\ \ (r\geq2). \end{equation} The first few terms are $\{N_{r}\}_{r\geq0}=\{0,1,1,1,2,3,4,6,9,13,...\}$. This sequence is called the Narayana sequence. The sequence $\{N_{r}\}$ can be defined for negative values of $r$ by means of $N_{-(s+1)}=-N_{-(s-1)}+N_{-(s-2)}$ ($s\geq2$) with initial conditions $N_{0}=N_{-1}=0$ and $N_{-2}=1$. A number of properties of the Narayana sequence were studied in \cite{Ra-Si} using matrix methods, where a generalization called the $k$-Narayana sequence was also investigated. In \cite{Bi}, Bilgici defined a new recurrence, called the generalized order-$k$ Narayana's cows sequence, and by using this generalization and some matrix properties he gave some identities related to the Narayana's cows numbers. In this work we generalize identities about the Narayana numbers $N_{ar}$, whose subscripts are multiples of $a$, to arbitrary $N_{ar+b}$ ($1\leq b<a$). One of our main theorems expresses $N_{ar+b}$ in terms of $N_{2a+b}$, $N_{a+b}$ and $N_{b}$, which are terms lying $a$ steps apart. \section{Narayana table} For $a\in {\mathbb N}$, the $a$-column Narayana table is the rectangular array with $a$ columns that lists all the Narayana numbers from $N_{1}$ onwards in order. So, \begin{equation} \left[ \begin{array}{cccc} N_{1} & N_{2} & ... & N_{a} \\ N_{a+1} & N_{a+2} & ... & N_{2a} \\ N_{2a+1} & N_{2a+2} & ... & N_{3a} \\ ... & ...
& ... & ... \end{array} \right] . \end{equation} We shall investigate a third order linear recurrence $N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$ for Narayana numbers, with some $p_{a},q_{a}\in {\mathbb Z}$. \begin{lemma}\label{lem1} $N_{m}=N_{m-2}+2N_{m-4}+N_{m-6}$, $N_{m}=4N_{m-3}-3N_{m-6}+N_{m-9}$ and $N_{m}=5N_{m-4}-2N_{m-8}+N_{m-12}$ for any $m\in {\mathbb Z}$. \end{lemma} \begin{proof} Observe that $N_{6}=4=2+2\cdot 1=N_{4}+2N_{2}+N_{0}$. Assume $N_{t}=N_{t-2}+2N_{t-4}+N_{t-6}$ for all $t<m$. Then, \begin{align*} N_{m}&=N_{m-1}+N_{m-3}\\ &=(N_{m-3}+2N_{m-5}+N_{m-7})+(N_{m-5}+2N_{m-7}+N_{m-9})\\ &=(N_{m-3}+N_{m-5})+2(N_{m-5}+N_{m-7})+(N_{m-7}+N_{m-9})\\ &= N_{m-2}+2N_{m-4}+N_{m-6}. \end{align*} Similarly, we notice $N_{9}=13=4\cdot 4-3\cdot 1=4N_{6}-3N_{3}+N_{0}$. If we assume $N_{t}=4N_{t-3}-3N_{t-6}+N_{t-9}$ for all $t<m$, then the induction hypothesis proves $N_{m}=4N_{m-3}-3N_{m-6}+N_{m-9}$. Analogously, since $N_{12}=41=5\cdot 9-2\cdot2=5N_{8}-2N_{4}+N_{0}$, the identity $N_{m}=5N_{m-4}-2N_{m-8}+N_{m-12}$ can be proved immediately by induction. \end{proof} \begin{remark} Note that the identity $N_{4r}=5N_{4(r-1)}-2N_{4(r-2)}+N_{4(r-3)}$ is the special case of $N_{m}=5N_{m-4}-2N_{m-8}+N_{m-12}$ in Lemma \ref{lem1} above in which $m$ is divisible by 4 ($m=4r$, with $r\in {\mathbb Z}$). We extend Lemma \ref{lem1} to any integer $1\leq a\leq 8$. \end{remark} \begin{theorem}\label{teo1} Let $1\leq a\leq 8$. Then the third order recurrence $N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$ holds with the following $(p_{a},q_{a})$.
$$ \begin{tabular}{|l|l|l|} \hline $a$ & $(p_{a},q_{a})$ & $N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$ \\ \hline $1$ & $(1,0)$ & $N_{m}=N_{m-1}+N_{m-3}$ \\ $2$ & $(1,2)$ & $N_{m}=N_{m-2}+2N_{m-4}+N_{m-6}$ \\ $3$ & $(4,-3)$ & $N_{m}=4N_{m-3}-3N_{m-6}+N_{m-9}$ \\ $4$ & $(5,-2)$ & $N_{m}=5N_{m-4}-2N_{m-8}+N_{m-12}$ \\ $5$ & $(6,5)$ & $N_{m}=6N_{m-5}+5N_{m-10}+N_{m-15}$ \\ $6$ & $(10,-1)$ & $N_{m}=10N_{m-6}-N_{m-12}+N_{m-18}$ \\ $7$ & $(15,-7)$ & $N_{m}=15N_{m-7}-7N_{m-14}+N_{m-21}$ \\ $8$ & $(21,6)$ & $N_{m}=21N_{m-8}+6N_{m-16}+N_{m-24}$ \\ \hline \end{tabular} $$ \end{theorem} \begin{proof} Clearly $N_{m}=N_{m-1}+N_{m-3}$ shows $(p_{1},q_{1})=(1,0)$, and Lemma \ref{lem1} shows $(p_{a},q_{a})=(1,2)$, $(4,-3)$ and $(5,-2)$ for $a=2,3,4$, respectively. Let $m=ar+b$ ($1\leq b <a$) and $5\leq a\leq 8$. In order to express $N_{ar+b}$ as $p_{a}N_{a(r-1)+b}+q_{a}N_{a(r-2)+b}+N_{a(r-3)+b}$, we shall consider $a$-column Narayana tables. Let us begin with $a=5$. $$ \left[ \begin{array}{ccccc} 1 & 1 & 1 & 2 & 3 \\ 4 & 6 & 9 & 13 & 19 \\ 28 & 41 & 60 & 88 & 129 \\ 189 & 277 & 406 & 595 & ... \end{array} \right] . $$ Then it can be observed that, for instance, $$ \left\{ \begin{array}{c} N_{16}=189=6\cdot 28+5\cdot 4+1=6N_{11}+5N_{6}+N_{1} \\ N_{17}=277=6\cdot 41+5\cdot 6+1=6N_{12}+5N_{7}+N_{2} \\ N_{18}=406=6\cdot 60+5\cdot 9+2=6N_{13}+5N_{8}+N_{3} \end{array} \right. .
$$ Thus, by assuming $N_{t}=6N_{t-5}+5N_{t-10}+N_{t-15}$ for all $t<m$, the induction hypothesis gives rise to \begin{align*} N_{m}&=N_{m-1}+N_{m-3}\\ &=(6N_{m-6}+5N_{m-11}+N_{m-16})+(6N_{m-8}+5N_{m-13}+N_{m-18})\\ &=6(N_{m-6}+N_{m-8})+5(N_{m-11}+N_{m-13})+(N_{m-16}+N_{m-18}) \\ &=6N_{m-5}+5N_{m-10}+N_{m-15}, \end{align*} so $(p_{5},q_{5})=(6,5)$. Moreover, from the $6$-column Narayana table $$ \left[ \begin{array}{cccccc} 1 & 1 & 1 & 2 & 3 & 4 \\ 6 & 9 & 13 & 19 & 28 & 41 \\ 60 & 88 & 129 & 189 & 277 & ... \\ \end{array} \right] $$ we can observe that, for instance, $$ \left\{ \begin{array}{c} N_{19}=595=10\cdot 60-6+1=10N_{13}-N_{7}+N_{1} \\ N_{20}=872=10\cdot 88-9+1=10N_{14}-N_{8}+N_{2} \\ N_{21}=1278=10\cdot 129-13+1=10N_{15}-N_{9}+N_{3} \end{array} \right. . $$ By assuming $N_{t}=10N_{t-6}-N_{t-12}+N_{t-18}$ for all $t<m$, we have \begin{align*} N_{m}&=N_{m-1}+N_{m-3}\\ &=(10N_{m-7}-N_{m-13}+N_{m-19})+(10N_{m-9}-N_{m-15}+N_{m-21})\\ &=10(N_{m-7}+N_{m-9})-(N_{m-13}+N_{m-15})+(N_{m-19}+N_{m-21}) \\ &=10N_{m-6}-N_{m-12}+N_{m-18}, \end{align*} so $(p_{6},q_{6})=(10,-1)$. In the same manner, observation and mathematical induction show that the coefficients $(p_{a},q_{a})$ for $a=7,8$ satisfying $$N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$$ equal $(15,-7)$ and $(21,6)$, respectively. \end{proof} \begin{remark} We note that the subscript $m$ of $N_{m}$ could be negative; for example, in the $6$-column Narayana table, $N_{15}=129=10N_{9}-N_{3}+N_{-3}$. \end{remark} \begin{definition} Let $r\in \mathbb{Z}$. A sequence $n_{r}$ is called Narayana type if it satisfies $n_{r}+n_{r+2}-n_{r+3}=0$ with arbitrary initial values $n_{1},$ $n_{2}$ and $n_{3}$. \end{definition} \begin{theorem}\label{th:3} For $1\leq a\leq 8$, let $(p_{a},q_{a})$ be the coefficients of the third order recurrence $N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$.
Then, \begin{enumerate} \item For $1\leq s\leq 5$, $\{p_{s}\}$ is a Narayana type sequence, $p_{s+3}=p_{s+2}+p_{s}$, with initial values $p_{1}=p_{2}=1$ and $p_{3}=4$, while $\{q_{s}\}$ satisfies $q_{s+3}=q_{s}-q_{s+1}$ with $q_{1}=0$, $q_{2}=2$ and $q_{3}=-3$. \item Moreover, $p_{s}=N_{s}+3N_{s-2}$ and $q_{s}=-p_{-s}$ for $1\leq s\leq 8$, where $N_{s}$ is the $s$-th Narayana number. \end{enumerate} \end{theorem} \begin{proof} By Theorem \ref{teo1}, $$\{p_{s}\}_{s=1}^{8}=\{1,1,4,5,6,10,15,21\}$$ and $$\{q_{s}\}_{s=1}^{8}=\{0,2,-3,-2,5,-1,-7,6\}.$$ So it is easy to see that $p_{s+3}=p_{s+2}+p_{s}$ and $q_{s+3}=q_{s}-q_{s+1}$ for $1\leq s\leq 5$. Moreover, by means of the Narayana numbers $N_{m}$, we notice $$ p_{1}=1=N_{1}+3N_{-1},\ p_{2}=1=N_{2}+3N_{0},\ p_{3}=4=N_{3}+3N_{1}, $$ and $p_{4}=p_{3}+p_{1}=5=N_{4}+3N_{2}$, etc. So $p_{s}=N_{s}+3N_{s-2}$ for $1\leq s\leq 8.$ Now, by considering $N_{m}$ with negative $m$, the Narayana type sequence $\{p_{s}\}$ can be extended to any $s\in \mathbb{Z}$, as follows. $$ \begin{tabular}{l|llllllllllllll} \hline $s$ & $...$ & $-8$ & $-7$ & $-6$ & $-5$ & $-4$ & $-3$ & $-2$ & $-1$ & $0$ & $1$ & $2$ & $3$ & $...$ \\ \hline $p_{s}$ & $...$ & $-6$ & $7$ & $1$ & $-5$ & $2$ & $3$ & $-2$ & $0$ & $3$ & $1$ & $1$ & $4$ & $...$ \\ \hline \end{tabular} $$ Then by comparing $\{p_{s}\}_{s=-1}^{-8}=\{0,-2,3,2,-5,1,7,-6\}$ with $\{q_{s}\}_{s=1}^{8}$, we find that $q_{s}=-p_{-s}$ for $1\leq s\leq 8$. \end{proof} \section{The third order linear recurrence of $N_{m}$} We shall generalize the findings of the above section for $1\leq a\leq 8$ to any positive integer $a$. \begin{theorem}\label{teo2} Let $p_{a}=N_{a}+3N_{a-2}$ and $q_{a}=-p_{-a}$ for any $a\in \mathbb{Z}^{+}$. Then, the $m$-th Narayana number satisfies $N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$ for every $a<m$. \end{theorem} \begin{proof} It follows from Theorem \ref{th:3} if $1\leq a\leq 8$.
Since $p_{a}=N_{a}+3N_{a-2}$ for all $a\in \mathbb{Z}^{+}$, the sequence $\{p_{a}\}$ is Narayana type because \begin{align*} p_{a}+p_{a+2} &=(N_{a}+3N_{a-2})+(N_{a+2}+3N_{a}) \\ &=(N_{a}+N_{a+2})+3(N_{a-2}+N_{a}) \\ &=N_{a+3}+3N_{a+1}=p_{a+3}. \end{align*} Similarly, since $q_{a}=-p_{-a}$ for all $a$, $\{q_{a}\}$ satisfies \begin{align*} q_{a}-q_{a+1} &=-p_{-a}+p_{-(a+1)} \\ &=-(p_{-a-1}+p_{-a-3})+p_{-(a+1)} \\ &=-p_{-(a+3)}=q_{a+3}. \end{align*} We now suppose that the third order recurrence $N_{m}=p_{t}N_{m-t}+q_{t}N_{m-2t}+N_{m-3t}$ holds for all $t\leq a$. Note that \begin{align*} N_{m-(a-2)}&=N_{m-(a-1)}+N_{m-(a+1)},\\ N_{m-2(a-2)}&=N_{m-2(a-1)}+2N_{m-2a}+N_{m-2(a+1)},\\ N_{m-3(a-2)}&=4N_{m-3(a-1)}-3N_{m-3a}+N_{m-3(a+1)}. \end{align*} Then, by Lemma \ref{lem1}, mathematical induction together with a long calculation proves that \begin{align*} p_{a+1}N_{m-(a+1)}&+q_{a+1}N_{m-2(a+1)}+N_{m-3(a+1)} \\ &=(p_{a}+p_{a-2})N_{m-(a+1)}+(q_{a-2}-q_{a-1})N_{m-2(a+1)}+N_{m-3(a+1)} \\ &=(p_{a}+p_{a-2})(N_{m-(a-2)}-N_{m-(a-1)}) \\ & \ \ +(q_{a-2}-q_{a-1})(N_{m-2(a-2)}-N_{m-2(a-1)}-2N_{m-2a}) \\ & \ \ +(N_{m-3(a-2)}-4N_{m-3(a-1)}+3N_{m-3a}) \\ &= N_{m}. \end{align*} \end{proof} Theorem \ref{teo2} provides a good way to compute large Narayana numbers. For instance, for the $40$-th Narayana number $N_{40}$, we may choose any $a$, say $a=10$; then $p_{10}=N_{10}+3N_{8}=46$ and $q_{10}=-p_{-10}=-13$, thus \begin{align*} N_{40}&=p_{10}N_{40-10}+q_{10}N_{40-20}+N_{40-30}\\ &=46\cdot 39865-13\cdot 872+19 \\ &=1822473, \end{align*} a $7$ digit integer. On the other hand, if we take $a=8$ then $p_{8}=N_{8}+3N_{6}=21$ and $q_{8}=-p_{-8}=6$, so $N_{40}$ can be obtained from $N_{40}=p_{8}N_{32}+q_{8}N_{24}+N_{16}$. More identities for $p_{a}$ can be developed in terms of three successive Narayana numbers. Now, for each $m \in {\mathbb Z}^{+}$, we define two sequences \begin{equation} P_{N,m}=N_{m}+N_{-m}\ \textrm{and}\ Q_{N,m}=N_{m}-N_{-m}.
\end{equation} Then it is easy to compute the following table. $$ \begin{tabular}{l|llllllllllllll} \hline $m$ & $1$ & $2$ & $3$ & $4$ & $5$ & $6$ & $7$ & $8$ & $9$ & $10$ & $11$ & $12$ & $...$ \\ \hline\hline $N_{m}$ & $1$ & $1$ & $1$ & $2$ & $3$ & $4$ & $6$ & $9$ & $13$ & $19$ & $28$ & $41$ & $...$ \\ $N_{-m}$ & $0$ & $1$ & $0$ & $-1$ & $1$ & $1$ & $-2$ & $0$ & $3$ & $-2$ & $-3$ & $5$ & $...$ \\ \hline $P_{N,m}$ & $1$ & $2$ & $1$ & $1$ & $4$ & $5$ & $4$ & $9$ & $16$ & $17$ & $25$ & $46$ & $...$ \\ $Q_{N,m}$ & $1$ & $0$ & $1$ & $3$ & $2$ & $3$ & $8$ & $9$ & $10$ & $21$ & $31$ & $36$ & $...$ \\ \hline \end{tabular} $$ From the table, we notice $N_{8}=9=46-21-16$, that is, $N_{8}=P_{N,12}-Q_{N,10}-P_{N,9}$. \begin{theorem} Let $m\in\mathbb{Z}^{+}$. Then the sequence $\{N_{m}\}$ satisfies the relation $$N_{m}=P_{N,m+4}-Q_{N,m+2}-P_{N,m+1}.$$ Furthermore, $N_{m}=\frac{1}{2}\left( P_{N,m}+Q_{N,m}\right) $ and $N_{-m}=\frac{1}{2}\left( P_{N,m}-Q_{N,m}\right)$. \end{theorem} \begin{proof} It is easy to see that \begin{eqnarray*} P_{N,m} &=&N_{m}+N_{-m} \\ &=&\left( N_{m-1}+N_{m-3}\right) +\left( -N_{-(m-2)}+N_{-(m-3)}\right) \\ &=&P_{N,m-3}+N_{m-1}-N_{-(m-2)} \\ &=&P_{N,m-3}+(N_{m-2}+N_{m-4})-N_{-(m-2)} \\ &=&P_{N,m-3}+Q_{N,m-2}+N_{m-4}. \end{eqnarray*} Hence $N_{m}=P_{N,m+4}-Q_{N,m+2}-P_{N,m+1}$. \end{proof} \begin{theorem}\label{teo4} Let $m=ar+b$ with $1\leq b\leq a<m$, and let $(p_{a},q_{a})$ be the coefficients of the third order recurrence $N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$. Then $N_{m}$ is a linear combination of any three consecutive entries of the $b$-th column in the $a$-column Narayana table. Furthermore, $N_{m}$ is expressed by the first three terms $N_{2a+b}$, $N_{a+b}$ and $N_{b}$ of the $b$-th column. \end{theorem} \begin{proof} Let $N_{m}=p_{a}N_{m-a}+q_{a}N_{m-2a}+N_{m-3a}$ as in Theorem \ref{teo2}.
Then, \begin{eqnarray*} N_{ar+b} &=&p_{a}N_{a(r-1)+b}+q_{a}N_{a(r-2)+b}+N_{a(r-3)+b} \\ &=&p_{a}\left( p_{a}N_{a(r-2)+b}+q_{a}N_{a(r-3)+b}+N_{a(r-4)+b}\right) \\ &&\ \ +q_{a}N_{a(r-2)+b}+N_{a(r-3)+b} \\ &=&\left( p_{a}^{2}+q_{a}\right) N_{a(r-2)+b}+\left(p_{a}q_{a}+1\right) N_{a(r-3)+b}+p_{a}N_{a(r-4)+b}. \end{eqnarray*} Hence, after $s$ steps (with $0<s<r$), we may write $$N_{ar+b}=\alpha N_{a(s+2)+b}+\beta N_{a(s+1)+b}+\gamma N_{as+b},$$ with $\alpha, \beta,\gamma \in \mathbb{Z}$. Then, in the next step, we have $N_{ar+b}=(p_{a}\alpha +\beta)N_{a(s+1)+b}+(q_{a}\alpha+\gamma) N_{as+b}+ \alpha N_{a(s-1)+b}$. Continuing this process until $s=1$, it follows that $N_{m}$ is a linear combination of $N_{2a+b}$, $N_{a+b}$ and $N_{b}$. \end{proof} For example, for $N_{38}$ we may take any $a<38$, say $a=7$. Since $(p_{7},q_{7})=(15,-7)$ by Theorem \ref{teo1}, $N_{38}$ can be obtained easily by Theorem \ref{teo4}: \begin{eqnarray*} N_{38} &=&15N_{31}-7N_{24}+N_{17} \\ &=&(15^{2}-7)N_{24}+(15\cdot (-7)+1)N_{17}+15N_{10} \\ &=&218N_{24}-104N_{17}+15N_{10} \\ &=&218\left( 15N_{17}-7N_{10}+N_{3}\right) -104N_{17}+15N_{10} \\ &=&3166N_{17}-1511N_{10}+218N_{3} \\ &=&3166\cdot 277-1511\cdot 19+218\cdot 1 \\ &=&848491. \end{eqnarray*} However, since $N_{m}$ is composed of $N_{m-a}$, $N_{m-2a}$ and $N_{m-3a}$, it may be better to choose $a\approx \frac{m}{3}$. Indeed, if we take $a=12\approx \frac{38}{3}$, then $N_{38}=p_{12}N_{26}-p_{-12}N_{14}+N_{2}$, and the last term $N_{2}=1$ is known immediately. \begin{remark} Assume the same context $(p_{a},q_{a})$ as before. If $m=3a$, then $N_{m}=p_{\frac{m}{3}}N_{\frac{2m}{3}}+q_{\frac{m}{3}}N_{\frac{m}{3}}$ since $N_{0}=0$.
On the other hand, if $m=3a+1$ or $m=3a+2$, then since $N_{1}=N_{2}=1$ it follows that \begin{equation*} N_{m}=p_{\left\lfloor \frac{m}{3}\right\rfloor }N_{\left\lfloor \frac{2m}{3}\right\rfloor +1}+q_{\left\lfloor \frac{m}{3}\right\rfloor }N_{\left\lfloor \frac{m}{3}+\frac{1}{2}\right\rfloor +1}+1, \end{equation*} where $\left\lfloor x \right\rfloor$ is the floor function of $x$. \end{remark} For example, if $m=26,$ we have $N_{26}=p_{8}N_{18}+q_{8}N_{10}+1=8641=21\cdot 406+6\cdot 19+1$. If $m=36,$ we have $N_{36}=p_{12}N_{24}+q_{12}N_{12}$. \section{Partial sum of Narayana numbers in a row} Consider $N_{ar+b}$ ($r\geq 0$ and $1\leq b\leq a$) as the entry placed at the $(r+1)$-th row and $b$-th column of the table, and let \begin{equation*} S_{N,r}^{(a,b)}=\sum_{k=0}^{r}N_{ak+b}=N_{b}+N_{a+b}+N_{2a+b}+\cdots +N_{ar+b} \end{equation*} be the partial sum of the $r+1$ entries of the $b$-th column. \begin{theorem}\label{teo5} For $r\geq 0$, we have \begin{equation}\label{s2} S_{N,r}^{(4,0)}=\sum_{k=0}^{r}N_{4k}=\frac{1}{3}\left(N_{4(r+1)}-N_{4r}+N_{4(r-1)}-1\right). \end{equation} \end{theorem} \begin{proof} For $r=3$, Lemma \ref{lem1} shows $N_{4(4)}=5N_{4(3)}-2N_{4(2)}+N_{4}$, so we have \begin{eqnarray*} 3\sum_{k=0}^{3}N_{4k}&=&3N_{4(3)}+3N_{4(2)}+3N_{4}+3N_{0} \\ &=&(5N_{4(3)}-2N_{4(2)}+N_{4})-2N_{4(3)}+5N_{4(2)}+2N_{4}+3N_{0} \\ &=&N_{4(4)}-2N_{4(3)}+5N_{4(2)}+2N_{4}\\ &=&N_{4(4)}-2N_{4(3)}+(N_{4(3)}+2N_{4})+2N_{4}\\ &=&N_{4(4)}-N_{4(3)}+N_{4(2)}-1, \end{eqnarray*} since $4N_{4}=N_{8}-1$. Assume $3\sum_{k=0}^{r}N_{4k}=N_{4(r+1)}-N_{4r}+N_{4(r-1)}-1$ is true. Then it follows that \begin{align*} N_{4(r+2)}-&N_{4(r+1)}+N_{4r}-1\\ &=\left(5N_{4(r+1)}-2N_{4r}+N_{4(r-1)}\right) -N_{4(r+1)}+N_{4r}-1 \\ &=4N_{4(r+1)}-N_{4r}+N_{4(r-1)}-1\\ &=3N_{4(r+1)}+N_{4(r+1)}-N_{4r}+N_{4(r-1)}-1\\ &=3N_{4(r+1)}+3\sum_{k=0}^{r}N_{4k}=3\sum_{k=0}^{r+1}N_{4k}. \end{align*} \end{proof} \begin{remark} Theorem \ref{teo5} sums the Narayana numbers whose subscripts are multiples of $4$. But in our context, Eq.
(\ref{s2}) can be read as a sum of entries of the $4$-th column of the $4$-column Narayana table. We shall now study the sum of entries of any $b$-th column of the $4$-column Narayana table. \end{remark} \begin{theorem} Consider $S_{N,r}^{(4,b)}$ with $1\leq b\leq 4$. Then for $r\geq 3$, \begin{equation}\label{s5} S_{N,r}^{(4,b)}=\left\{ \begin{array}{ccc} 5S_{N,r-1}^{(4,b)}-2S_{N,r-2}^{(4,b)}+S_{N,r-3}^{(4,b)}-1 & if & b=1, \\ 5S_{N,r-1}^{(4,b)}-2S_{N,r-2}^{(4,b)}+S_{N,r-3}^{(4,b)}+1 & if & b=2,4, \\ 5S_{N,r-1}^{(4,b)}-2S_{N,r-2}^{(4,b)}+S_{N,r-3}^{(4,b)}+2& if & b=3. \end{array} \right. \end{equation} \end{theorem} \begin{proof} The $4$-column Narayana table yields the table of $S_{N,r}^{(4,b)}$ as follows. \begin{equation*} \left[ \begin{tabular}{llll} $1$ & $1$ & $1$ & $2$ \\ $3$ & $4$ & $6$ & $9$ \\ $13$ & $19$ & $28$ & $41$ \\ $60$ & $88$ & $129$ & $189$ \\ $277$ & $406$ & $595$ & $\cdots $ \end{tabular} \right] \text{ and } \begin{tabular}{l|llll} \hline $r$ & $S_{N,r}^{(4,1)}$ & $S_{N,r}^{(4,2)}$ & $S_{N,r}^{(4,3)}$ & $S_{N,r}^{(4,4)}$ \\ \hline $0$ & $1$ & $1$ & $1$ & $2$ \\ $1$ & $4$ & $5$ & $7$ & $11$ \\ $2$ & $17$ & $24$ & $35$ & $52$ \\ $3$ & $77$ & $112$ & $164$ & $241$ \\ $4$ & $354$ & $518$ & $759$ & $\cdots $ \\ \hline \end{tabular}. \end{equation*} When $r=3$ and $b=1$, we notice $77=\left( 5\cdot 17-2\cdot 4+1\right)-1$, which can be written as \begin{equation*} S_{N,3}^{(4,1)}=5S_{N,2}^{(4,1)}-2S_{N,1}^{(4,1)}+S_{N,0}^{(4,1)}-1. \end{equation*} Similarly, we observe that \begin{equation*} \left\{ \begin{array}{c} S_{N,3}^{(4,2)}=5S_{N,2}^{(4,2)}-2S_{N,1}^{(4,2)}+S_{N,0}^{(4,2)}+1,\\ S_{N,3}^{(4,3)}=5S_{N,2}^{(4,3)}-2S_{N,1}^{(4,3)}+S_{N,0}^{(4,3)}+2,\\ S_{N,3}^{(4,4)}=5S_{N,2}^{(4,4)}-2S_{N,1}^{(4,4)}+S_{N,0}^{(4,4)}+1. \end{array} \right. \end{equation*} Furthermore, assume that $S_{N,r}^{(4,b)}=5S_{N,r-1}^{(4,b)}-2S_{N,r-2}^{(4,b)}+S_{N,r-3}^{(4,b)}+1$ if $b=2,4$.
Then Theorem \ref{teo1} together with the induction hypothesis yields \begin{align*} S_{N,r+1}^{(4,b)}&=\sum_{k=0}^{r+1}N_{4k+b}=S_{N,r}^{(4,b)}+N_{4(r+1)+b} \\ &=(5S_{N,r-1}^{(4,b)}-2S_{N,r-2}^{(4,b)}+S_{N,r-3}^{(4,b)}+1)+(5N_{4r+b}-2N_{4(r-1)+b}+N_{4(r-2)+b}) \\ &=5S_{N,r}^{(4,b)}-2S_{N,r-1}^{(4,b)}+S_{N,r-2}^{(4,b)}+1, \end{align*} which proves Eq. (\ref{s5}) for $b=2,4$. The other relations follow similarly. \end{proof}
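The identities above are straightforward to verify by machine. The following sketch (with arbitrary index ranges) checks the recurrence of Theorem \ref{teo2}, the worked example $N_{40}=1822473$, and the partial sum formula of Theorem \ref{teo5}:

```python
# Numerical check of the Narayana identities (a sketch; ranges are arbitrary).
def narayana_table(lo, hi):
    """Narayana numbers N_r for lo <= r <= hi, with N_0 = 0, N_1 = N_2 = 1,
    N_{r+1} = N_r + N_{r-2}, extended backwards via N_r = N_{r+3} - N_{r+2}."""
    N = {0: 0, 1: 1, 2: 1}
    for r in range(3, hi + 1):
        N[r] = N[r - 1] + N[r - 3]
    for r in range(-1, lo - 1, -1):
        N[r] = N[r + 3] - N[r + 2]
    return N

N = narayana_table(-40, 130)
p = lambda a: N[a] + 3 * N[a - 2]        # p_a = N_a + 3 N_{a-2}
q = lambda a: -(N[-a] + 3 * N[-a - 2])   # q_a = -p_{-a}

# Theorem teo2: N_m = p_a N_{m-a} + q_a N_{m-2a} + N_{m-3a} for a < m.
for a in range(1, 13):
    for m in range(a + 1, 100):
        assert N[m] == p(a) * N[m - a] + q(a) * N[m - 2 * a] + N[m - 3 * a]

# Worked example: N_40 via a = 10, with (p_10, q_10) = (46, -13).
assert (p(10), q(10)) == (46, -13)
assert N[40] == 46 * N[30] - 13 * N[20] + N[10] == 1822473

# Theorem teo5: 3 * sum_{k=0}^{r} N_{4k} = N_{4(r+1)} - N_{4r} + N_{4(r-1)} - 1.
for r in range(0, 25):
    assert 3 * sum(N[4 * k] for k in range(r + 1)) == N[4 * (r + 1)] - N[4 * r] + N[4 * (r - 1)] - 1
```

Note that the recurrence of Theorem \ref{teo2} also holds when $m-2a$ or $m-3a$ is negative, using the extension of $N_{r}$ to negative subscripts.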
\section{Introduction} The goal of this work is to facilitate algorithmic discovery in the scientific literature. Despite notable advances in scientific search engines, data mining and digital libraries \citep[e.g.,][]{wu:14}, researchers remain unable to answer simple questions such as: \begin{itemizesquish} \item What is the percentage of female subjects in depression clinical trials? \item Which of my co-authors published one or more papers on coreference resolution? \item Which papers discuss the effects of Ranibizumab on the retina? \end{itemizesquish} \begin{figure}[t] \centering { \includegraphics[width=0.8\linewidth]{graphlet.png} } \caption{Part of the literature graph. \label{fig:graphlet}} \end{figure} In this paper, we focus on the problem of extracting structured data from scientific documents, which can later be used in natural language interfaces \cite[e.g.,][]{iyer:17} or to improve ranking of results in academic search \cite[e.g.,][]{xiong:17}. We describe methods used in a scalable deployed production system for extracting structured information from scientific documents into \emph{the literature graph} (see Fig. \ref{fig:graphlet}). The literature graph is a directed property graph which summarizes key information in the literature and can be used to answer the queries mentioned earlier as well as more complex queries. For example, in order to compute the Erd\H{o}s number of an author X, the graph can be queried to find the number of nodes on the shortest undirected path between author X and Paul Erd\H{o}s such that all edges on the path are labeled ``authored''. We reduce literature graph construction to familiar NLP tasks such as sequence labeling, entity linking and relation extraction, and address some of the impractical assumptions commonly made in the standard formulations of these tasks.
For example, most research on named entity recognition tasks reports results on large labeled datasets such as CoNLL-2003 and ACE-2005 \cite[e.g.,][]{lample:16}, and assumes that entity types in the test set match those labeled in the training set \cite[including work on domain adaptation, e.g.,][]{daume:07}. These assumptions, while useful for developing and benchmarking new methods, are unrealistic for many domains and applications. The paper also serves as an overview of the approach we adopt at \url{www.semanticscholar.org} in a step towards more intelligent academic search engines \cite{etzioni:11}. In the next section, we start by describing our symbolic representation of the literature. Then, we discuss how we extract metadata associated with a paper, such as authors and references, and then how we extract the entities mentioned in the paper text. Before we conclude, we briefly describe other research challenges we are actively working on in order to improve the quality of the literature graph. \section{Structure of The Literature Graph}\label{sec:structure} The literature graph is a \textit{property graph} with directed edges. Unlike Resource Description Framework (RDF) graphs, nodes and edges in property graphs have an internal structure which is more suitable for representing complex data types such as papers and entities. In this section, we describe the attributes associated with nodes and edges of different types in the literature graph. \subsection{Node Types} \paragraph{Papers.} We obtain metadata and PDF files of papers via partnerships with publishers (e.g., Springer, Nature), catalogs (e.g., DBLP, MEDLINE), pre-publishing services (e.g., arXiv, bioRxiv), as well as web-crawling. Paper nodes are associated with a set of attributes such as `title', `abstract', `full text', `venues' and `publication year'.
While some of the paper sources provide these attributes as metadata, it is often necessary to extract them from the paper PDF (details in \S\ref{sec:science_parse}). We deterministically remove duplicate papers based on string similarity of their metadata, resulting in 37M unique paper nodes. Papers in the literature graph cover a variety of scientific disciplines, including computer science, molecular biology, microbiology and neuroscience. \paragraph{Authors.} Each node of this type represents a unique author, with attributes such as `first name' and `last name'. The literature graph has 12M nodes of this type. \paragraph{Entities.} Each node of this type represents a unique scientific concept discussed in the literature, with attributes such as `canonical name', `aliases' and `description'. Our literature graph has 0.4M nodes of this type. We describe how we populate entity nodes in \S\ref{sec:kbs}. \paragraph{Entity mentions.} Each node of this type represents a textual reference of an entity in one of the papers, with attributes such as `mention text', `context', and `confidence'. We describe how we populate the 237M mentions in the literature graph in \S\ref{sec:entities_approaches}. \subsection{Edge Types} \paragraph{Citations.} We instantiate a directed citation edge from paper nodes $p_1 \longrightarrow p_2$ for each $p_2$ referenced in $p_1$. Citation edges have attributes such as `from paper id', `to paper id' and `contexts' (the textual contexts where $p_2$ is referenced in $p_1$). While some of the paper sources provide these attributes as metadata, it is often necessary to extract them from the paper PDF as detailed in \S\ref{sec:science_parse}. \paragraph{Authorship.} We instantiate a directed authorship edge between an author node and a paper node $a \longrightarrow p$ for each author of that paper. \paragraph{Entity linking edges.} We instantiate a directed edge from an extracted entity mention node to the entity it refers to. 
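These node and edge types can be illustrated with a minimal in-memory property graph. The following sketch is illustrative only: the class, method and attribute names are assumptions for exposition, not the production schema.

```python
# Minimal in-memory property graph sketch (illustrative; not the production schema).
from collections import defaultdict

class PropertyGraph:
    def __init__(self):
        self.nodes = {}                 # node_id -> {"type": ..., plus attributes}
        self.edges = defaultdict(list)  # src_id -> [(label, dst_id, attrs)]

    def add_node(self, node_id, node_type, **attrs):
        self.nodes[node_id] = {"type": node_type, **attrs}

    def add_edge(self, src, label, dst, **attrs):
        self.edges[src].append((label, dst, attrs))

    def neighbors(self, src, label):
        return [dst for (lbl, dst, _) in self.edges[src] if lbl == label]

g = PropertyGraph()
g.add_node("p1", "paper", title="A study of X")
g.add_node("p2", "paper", title="A survey of Y")
g.add_node("a1", "author", first="Ada", last="Lovelace")
g.add_node("m1", "mention", text="coreference resolution")
g.add_node("e1", "entity", canonical_name="Coreference resolution")
g.add_edge("a1", "authored", "p1")
g.add_edge("p1", "cites", "p2", contexts=["as shown in [2]"])
g.add_edge("m1", "links_to", "e1")

print(g.neighbors("a1", "authored"))  # papers authored by a1
```

Queries such as the co-authorship examples in the introduction then reduce to traversals over typed, attributed edges of this kind.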
\paragraph{Mention--mention relations.} We instantiate a directed edge between a pair of mentions in the same sentential context if the textual relation extraction model predicts one of a predefined list of relation types between them.\footnote{Due to space constraints, we opted not to discuss our relation extraction models in this draft.} We encode a symmetric relation between $m_1$ and $m_2$ as two directed edges $m_1 \longrightarrow m_2$ and $m_2 \longrightarrow m_1$. \paragraph{Entity--entity relations.} While mention--mention edges represent relations between mentions in a particular context, entity--entity edges represent relations between abstract entities. These relations may be imported from an existing knowledge base (KB) or inferred from other edges in the graph. \section{Extracting Metadata}\label{sec:science_parse} In the previous section, we described the overall structure of the literature graph. Next, we discuss how we populate paper nodes, author nodes, authorship edges, and citation edges. Although some publishers provide sufficient metadata about their papers, many papers are provided with incomplete metadata. Also, papers obtained via web-crawling are not associated with any metadata. To fill this gap, we built the \textsc{ScienceParse} system to predict structured data from the raw PDFs using recurrent neural networks (RNNs).\footnote{The \textsc{ScienceParse} libraries can be found at \url{http://allenai.org/software/}.} For each paper, the system extracts the paper title, list of authors, and list of references; each reference consists of a title, a list of authors, a venue, and a year. \paragraph{Preparing the input layer.} We split each PDF into individual pages, and feed each page to Apache's PDFBox library\footnote{\url{https://pdfbox.apache.org}} to convert it into a sequence of tokens, where each token has features, e.g., `text', `font size', `space width', `position on the page'.
We normalize the token-level features before feeding them as inputs to the model. For each of the `font size' and `space width' features, we compute three normalized values (with respect to the current page, the current document, and the whole training corpus), each value ranging from $-0.5$ to $+0.5$. The token's `position on the page' is given in XY coordinate points. We scale the values linearly to range from $(-0.5,-0.5)$ at the top-left corner of the page to $(0.5,0.5)$ at the bottom-right corner. In order to capture case information, we add seven numeric features to the input representation of each token: whether the first/second letter is uppercase/lowercase, the fraction of uppercase/lowercase letters and the fraction of digits. To help the model make correct predictions for metadata which tend to appear at the beginning (e.g., titles and authors) or at the end of papers (e.g., references), we provide the current page number as two discrete variables (relative to the beginning and end of the PDF file) with values 0, 1 and 2+. These features are repeated for each token on the same page. For the $k$-th token in the sequence, we compute the input representation $\mathbf{i}_k$ by concatenating the numeric features, an embedding of the `font size', and the word embedding of the lowercased token. Word embeddings are initialized with GloVe \cite{pennington:14}.
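The normalization steps above can be sketched as follows. The exact reference statistics and field names used by the deployed system are not specified in the text, so this is an assumption-laden illustration.

```python
# Sketch of token-feature normalization (illustrative constants and names).

def scale_to_unit_interval(value, pool):
    """Map a raw value into [-0.5, 0.5] relative to the min/max of `pool`."""
    lo, hi = min(pool), max(pool)
    if hi == lo:
        return 0.0
    return (value - lo) / (hi - lo) - 0.5

def normalize_font_size(value, page_sizes, doc_sizes, corpus_sizes):
    """Three normalized values: relative to page, document, and training corpus."""
    return [scale_to_unit_interval(value, page_sizes),
            scale_to_unit_interval(value, doc_sizes),
            scale_to_unit_interval(value, corpus_sizes)]

def normalize_xy(x, y, page_width, page_height):
    """Top-left corner maps to (-0.5, -0.5), bottom-right to (0.5, 0.5)."""
    return (x / page_width - 0.5, y / page_height - 0.5)

print(normalize_xy(0, 0, 612, 792))      # (-0.5, -0.5)
print(normalize_xy(612, 792, 612, 792))  # (0.5, 0.5)
```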
\paragraph{Model.} The input token representations are passed through one fully-connected layer and then fed into a two-layer bidirectional LSTM \cite[Long Short-Term Memory,][]{hochreiter:97}, i.e., \begin{align*} \mathbf{g}_k^\rightarrow &= \text{LSTM}(\mathbf{Wi}_k, \mathbf{g}_{k-1}^\rightarrow), \mathbf{g}_k = [\mathbf{g}_k^\rightarrow; \mathbf{g}_k^\leftarrow], \nonumber \\ \mathbf{h}_k^\rightarrow &= \text{LSTM}(\mathbf{g}_k, \mathbf{h}_{k-1}^\rightarrow), \mathbf{h}_k = [\mathbf{h}_k^\rightarrow; \mathbf{h}_k^\leftarrow], \nonumber \end{align*} where $\mathbf{W}$ is a weight matrix, $\mathbf{g}_k^\leftarrow$ and $\mathbf{h}_k^\leftarrow$ are defined similarly to $\mathbf{g}_k^\rightarrow$ and $\mathbf{h}_k^\rightarrow$ but process token sequences in the opposite direction. Following \newcite{collobert:11}, we feed the output of the second layer $\mathbf{h}_k$ into a dense layer to predict unnormalized label weights for each token and learn label bigram feature weights (often described as a conditional random field layer when used in neural architectures) to account for dependencies between labels. \paragraph{Training.} The \textsc{ScienceParse} system is trained on a snapshot of the data at PubMed Central. It consists of 1.4M PDFs and their associated metadata, which specify the correct titles, authors, and bibliographies. We use a heuristic labeling process that finds the strings from the metadata in the tokenized PDFs to produce labeled tokens. This labeling process succeeds for 76\% of the documents. The remaining documents are not used in the training process. During training, we only use pages which have at least one token with a label that is not ``none''. \paragraph{Decoding.} At test time, we use Viterbi decoding to find the most likely global sequence, with no further constraints. To get the title, we use the longest continuous sequence of tokens with the ``title'' label.
Since there can be multiple authors, we use all continuous sequences of tokens with the ``author'' label as authors, but require that all authors of a paper are mentioned on the same page. If the author labels are predicted in multiple pages, we use the one with the largest number of authors. \paragraph{Results.} \begin{table} \centering \begin{tabular}{@{}r|ccc@{}} \toprule Field & Precision & Recall & F1 \\ \midrule title & 85.5 & 85.5 & 85.5 \\ authors & 92.1 & 92.1 & 92.1 \\ bibliography titles & 89.3 & 89.4 & 89.3 \\ bibliography authors & 97.1 & 97.0 & 97.0 \\ bibliography venues & 91.7 & 89.7 & 90.7 \\ bibliography years & 98.0 & 98.0 & 98.0 \\ \bottomrule \end{tabular} \caption{Results of the \textsc{ScienceParse} system.} \label{tab:spresults} \end{table} We run our final tests on a held-out set from PubMed Central, consisting of about 54K documents. The results are detailed in Table \ref{tab:spresults}. We use a conservative evaluation where an instance is correct if it exactly matches the gold annotation, with no credit for partial matching. To give an example for the type of errors our model makes, consider the paper \cite{wang:13} titled ``Clinical review: Efficacy of antimicrobial-impregnated catheters in external ventricular drainage - a systematic review and meta-analysis.'' The title we extract for this paper omits the first part ``Clinical review:''. This is likely to be a result of the pattern ``Foo: Bar Baz'' appearing in many training examples with only ``Bar Baz'' labeled as the title. \section{Entity Extraction and Linking}\label{sec:entities} In the previous section, we described how we populate the backbone of the literature graph, i.e., paper nodes, author nodes and citation edges. Next, we discuss how we populate mentions and entities in the literature graph using entity extraction and linking on the paper text. In order to focus on more salient entities in a given paper, we only use the title and abstract. 
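The \textsc{ScienceParse} decoding heuristics described in \S\ref{sec:science_parse} (the longest contiguous run of ``title'' tokens; author runs taken from the page containing the most of them) can be sketched as follows. Function and label names are illustrative assumptions.

```python
# Sketch of post-decoding span selection: title = longest contiguous "title" run;
# authors = all "author" runs on the page with the most such runs.
from itertools import groupby

def runs(labels, target):
    """Return (start, length) for each maximal run of `target` in `labels`."""
    out, i = [], 0
    for lbl, grp in groupby(labels):
        n = len(list(grp))
        if lbl == target:
            out.append((i, n))
        i += n
    return out

def pick_title(tokens, labels):
    r = runs(labels, "title")
    if not r:
        return ""
    start, length = max(r, key=lambda sl: sl[1])
    return " ".join(tokens[start:start + length])

def pick_authors(pages):
    """pages: list of (tokens, labels) per page; keep the page with most author runs."""
    best = max(pages, key=lambda tl: len(runs(tl[1], "author")), default=None)
    if best is None:
        return []
    tokens, labels = best
    return [" ".join(tokens[s:s + n]) for s, n in runs(labels, "author")]

tokens = ["Deep", "Parsing", "of", "PDFs", "Jane", "Doe"]
labels = ["title", "title", "none", "none", "author", "author"]
print(pick_title(tokens, labels))        # "Deep Parsing"
print(pick_authors([(tokens, labels)]))  # ["Jane Doe"]
```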
\subsection{Approaches}\label{sec:entities_approaches} We experiment with three approaches for entity extraction and linking: \vspace{1.5mm} {\noindent \textbf{I. Statistical:} uses one or more statistical models for predicting mention spans, then uses another statistical model to link mentions to candidate entities in a KB. } \vspace{1.5mm} {\noindent \textbf{II. Hybrid:} defines a small number of hand-engineered, deterministic rules for string-based matching of the input text to candidate entities in the KB, then uses a statistical model to disambiguate the mentions.\footnote{We also experimented with a ``pure'' rules-based approach which disambiguates deterministically but the hybrid approach consistently gave better results.} } \vspace{1.5mm} {\noindent \textbf{III. Off-the-shelf:} uses existing libraries, namely \cite[][TagMe]{ferragina:10}\footnote{The TagMe APIs are described at \url{https://sobigdata.d4science.org/web/tagme/tagme-help}} and \cite[][MetaMap Lite]{demnerfushman:17}\footnote{We use v3.4 (L0) of MetaMap Lite, available at \url{https://metamap.nlm.nih.gov/MetaMapLite.shtml}}, with minimal post-processing to extract and link entities to the KB. } \vspace{1.5mm} We evaluate the performance of each approach in two broad scientific areas: computer science (CS) and biomedical research (Bio). For each unique (paper ID, entity ID) pair predicted by one of the approaches, we ask human annotators to label each mention extracted for this entity in the paper. We use CrowdFlower to manage human annotations and only include instances where three or more annotators agree on the label. If one or more of the entity mentions in that paper is judged to be correct, the pair (paper ID, entity ID) counts as one correct instance. Otherwise, it counts as an incorrect instance. We report `yield' in lieu of `recall' due to the difficulty of doing a scalable comprehensive annotation. 
\begin{table}[t] \centering \begin{tabular}{@{}r|cr|cr@{}} \toprule Approach & \multicolumn{2}{c}{CS} & \multicolumn{2}{c}{Bio} \\ & prec. & yield & prec. & yield \\ \midrule Statistical & 98.4 & 712 & 94.4 & 928 \\ Hybrid & 91.5 & 1990 & 92.1 & 3126 \\ Off-the-shelf & 97.4 & 873 & 77.5 & 1206 \\ \bottomrule \end{tabular} \caption{Document-level evaluation of three approaches in two scientific areas: computer science (CS) and biomedical (Bio). } \label{tab:analyzers} \end{table} Table \ref{tab:analyzers} shows the results based on 500 papers using v1.1.2 of our entity extraction and linking components. In both domains, the statistical approach gives the highest precision and the lowest yield. The hybrid approach consistently gives the highest yield, but sacrifices precision. The TagMe off-the-shelf library used for the CS domain gives surprisingly good results, with precision within 1 point of the statistical models. However, the MetaMap Lite off-the-shelf library we used for the biomedical domain suffered a huge loss in precision. Our error analysis showed that each of the approaches is able to predict entities not predicted by the other approaches, so we decided to pool their outputs in our deployed system, which gives significantly higher yield than any individual approach while maintaining reasonably high precision. \subsection{Entity Extraction Models} \label{extraction_model} Given the token sequence $t_1, \ldots, t_N$ in a sentence, we need to identify spans which correspond to entity mentions. We use the BILOU scheme to encode labels at the token level. Unlike most formulations of named entity recognition (NER), we do not identify the entity type (e.g., protein, drug, chemical, disease) for each mention since the output mentions are further grounded in a KB with additional information about the entity (including its type), using an entity linking module.
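Decoding BILOU token labels into mention spans can be sketched as follows (B = begin, I = inside, L = last, O = outside, U = unit-length mention); the permissive handling of malformed label sequences is an assumption.

```python
# Sketch: convert a BILOU label sequence into (start, end) mention spans
# (end exclusive). Malformed sequences (e.g., B with no closing L) are dropped.

def bilou_to_spans(labels):
    spans, start = [], None
    for i, lbl in enumerate(labels):
        if lbl == "U":                      # single-token mention
            spans.append((i, i + 1))
            start = None
        elif lbl == "B":                    # open a multi-token mention
            start = i
        elif lbl == "L" and start is not None:  # close the open mention
            spans.append((start, i + 1))
            start = None
        elif lbl == "O":                    # outside any mention
            start = None
    return spans

print(bilou_to_spans(["O", "B", "I", "L", "O", "U"]))  # [(1, 4), (5, 6)]
```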
\paragraph{Model.} First, we construct the token embedding $\mathbf{x}_k = [\mathbf{c}_k; \mathbf{w}_k]$ for each token $t_k$ in the input sequence, where $\mathbf{c}_k$ is a character-based representation computed using a convolutional neural network (CNN) with a filter of size 3 characters, and $\mathbf{w}_k$ are learned word embeddings initialized with the GloVe embeddings \cite{pennington:14}. We also compute context-sensitive word embeddings, denoted as $\mathbf{lm}_k = [\mathbf{lm}_k^{\rightarrow};\mathbf{lm}_k^{\leftarrow}]$, by concatenating the projected outputs of forward and backward recurrent neural network language models (RNN-LM) at position $k$. The language model (LM) for each direction is trained independently and consists of a single-layer long short-term memory (LSTM) network followed by a linear projection layer. While training the LM parameters, $\mathbf{lm}^{\rightarrow}_k$ is used to predict $t_{k+1}$ and $\mathbf{lm}^{\leftarrow}_k$ is used to predict $t_{k-1}$. We fix the LM parameters during training of the entity extraction model. See \newcite{peters:17} and \newcite{ammar:17} for more details. Given the $\mathbf{x}_k$ and $\mathbf{lm}_k$ embeddings for each token $k \in \{1, \ldots, N \}$, we use a two-layer bidirectional LSTM to encode the sequence with $\mathbf{x}_k$ and $\mathbf{lm}_k$ feeding into the first and second layer, respectively. That is, \begin{align*} \mathbf{g}_k^\rightarrow &= \text{LSTM}(\mathbf{x}_k, \mathbf{g}_{k-1}^\rightarrow), \mathbf{g}_k = [\mathbf{g}_k^\rightarrow ; \mathbf{g}_k^\leftarrow], \nonumber \\ \mathbf{h}_k^\rightarrow &= \text{LSTM}([\mathbf{g}_k ; \mathbf{lm}_k], \mathbf{h}_{k-1}^\rightarrow), \mathbf{h}_k = [\mathbf{h}_k^\rightarrow; \mathbf{h}_k^\leftarrow], \nonumber \end{align*} where $\mathbf{g}_k^\leftarrow$ and $\mathbf{h}_k^\leftarrow$ are defined similarly to $\mathbf{g}_k^\rightarrow$ and $\mathbf{h}_k^\rightarrow$ but process token sequences in the opposite direction.
Similar to the model described in \S\ref{sec:science_parse}, we feed the output of the second LSTM into a dense layer to predict unnormalized label weights for each token and learn label bigram feature weights to account for dependencies between labels. \paragraph{Results.} We use the standard data splits of the SemEval-2017 Task 10 on entity (and relation) extraction from scientific papers \cite{augenstein:17}. Table \ref{tab:sciencie_entities} compares three variants of our entity extraction model. The first line omits the LM embeddings $\mathbf{lm}_k$, while the second line is the full model (including LM embeddings) showing a large improvement of 4.2 F1 points. The third line shows that creating an ensemble of 15 models further improves the results by 1.1 F1 points. \begin{table}[t] \centering \begin{tabular}{@{}r|c@{}} \toprule Description & F1 \\ \midrule Without LM & 49.9 \\ With LM & 54.1 \\ Avg. of 15 models with LM & 55.2 \\ \bottomrule \end{tabular} \caption{Results of the entity extraction model on the development set of SemEval-2017 task 10.} \label{tab:sciencie_entities} \end{table} \paragraph{Model instances.} In the deployed system, we use three instances of the entity extraction model with a similar architecture, but trained on different datasets. Two instances are trained on the BC5CDR \cite{li:16} and the CHEMDNER datasets \cite{krallinger:15} to extract key entity mentions in the biomedical domain such as diseases, drugs and chemical compounds. The third instance is trained on mention labels induced from Wikipedia articles in the computer science domain. The outputs of all model instances are pooled together and combined with the rule-based entity extraction module, then fed into the entity linking model (described below). \subsection{Knowledge Bases}\label{sec:kbs} In this section, we describe the construction of entity nodes and entity-entity edges.
Unlike other knowledge extraction systems such as the Never-Ending Language Learner (NELL)\footnote{\url{http://rtw.ml.cmu.edu/rtw/}} and OpenIE 4,\footnote{\url{https://github.com/allenai/openie-standalone}} we use existing knowledge bases (KBs) of entities to reduce the burden of identifying coherent concepts. Grounding the entity mentions in a manually-curated KB also increases user confidence in automated predictions. We use two KBs: {\noindent \textbf{UMLS:} The UMLS metathesaurus integrates information about concepts in specialized ontologies in several biomedical domains, and is funded by the U.S. National Library of Medicine. } {\noindent \textbf{DBpedia:} DBpedia provides access to structured information in Wikipedia. Rather than including all Wikipedia pages, we used a short list of Wikipedia categories about CS and included all pages up to depth four in their trees in order to exclude irrelevant entities, e.g., ``Lord of the Rings'' in DBpedia. } \subsection{Entity Linking Models} Given a text span $s$ identified by the entity extraction model in \S\ref{extraction_model} (or with heuristics) and a reference KB, the goal of the entity linking model is to associate the span with the entity it refers to. A span and its surrounding words are collectively referred to as a mention. We first identify a set of candidate entities that a given mention may refer to. Then, we rank the candidate entities based on a score computed using a neural model trained on labeled data. For example, given the string ``\ldots{} \textit{database of facts, an ILP system will} \ldots{}'', the entity extraction model identifies the span ``ILP'' as a possible entity and the entity linking model associates it with ``Inductive\_Logic\_Programming'' as the referent entity (from among other candidates like ``Integer\_Linear\_Programming'' or ``Instruction-level\_Parallelism''). 
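The two-stage procedure just described, generating candidate entities and then ranking them, can be sketched with a toy token-to-entity index. The data and function names are illustrative assumptions; the deployed ranker is the neural model described in the text.

```python
# Sketch of candidate generation from a token -> entity frequency index
# (toy data; illustrative only, not the production index).
from collections import Counter, defaultdict

def build_index(entity_names):
    """entity_names: dict entity_id -> list of known names/aliases."""
    index = defaultdict(Counter)
    for eid, names in entity_names.items():
        for name in names:
            for tok in name.lower().split():
                index[tok][eid] += 1
    return index

def candidates(index, mention):
    """Accumulate entity counts over the mention's tokens."""
    counts = Counter()
    for tok in mention.lower().split():
        counts.update(index.get(tok, Counter()))
    return counts  # counts.most_common(1) gives a frequency baseline

index = build_index({
    "Inductive_Logic_Programming": ["inductive logic programming", "ILP"],
    "Integer_Linear_Programming": ["integer linear programming", "ILP"],
})
print(sorted(candidates(index, "ILP system")))  # both ILP entities are candidates
```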
\paragraph{Datasets.} We used two datasets: i) a biomedical dataset formed by combining MSH \cite{jimeno:2011} and BC5CDR \cite{li:16} with UMLS as the reference KB, and ii) a CS dataset we curated using Wikipedia articles about CS concepts with DBpedia as the reference KB. \paragraph{Candidate selection.} In a preprocessing step, we build an index which maps any token used in a labeled mention or an entity name in the KB to associated entity IDs, along with the frequency with which this token is associated with that entity. This is similar to the index used in previous entity linking systems \cite[e.g.,][]{bhagavatula:15} to estimate the probability that a given mention refers to an entity. At train and test time, we use this index to find candidate entities for a given mention by looking up the tokens in the mention. This method also serves as our baseline in Table \ref{tab:el_results} by selecting the entity with the highest frequency for a given mention. \paragraph{Scoring candidates.} Given a mention (m) and a candidate entity (e), the neural model constructs a vector encoding of the mention and the entity. We encode the mention and entity using the functions $\mathbf{f}$ and $\mathbf{g}$, respectively, as follows: \begin{align*} \mathbf{f}(\text{m}) &= [\mathbf{v}_{\text{m.surface}};\text{avg}(\mathbf{v}_{\text{m.lc}}, \mathbf{v}_{\text{m.rc}})], \nonumber \\ \mathbf{g}(\text{e}) &= [\mathbf{v}_{\text{e.name}};\mathbf{v}_{\text{e.def}}], \end{align*} where m.surface, m.lc and m.rc are the mention's surface form, left and right contexts, and e.name and e.def are the candidate entity's name and definition, respectively. $\mathbf{v}_\text{text}$ is a bag-of-words sum encoder for text. We use the same encoder for the mention surface form and the candidate name, and another encoder for the mention contexts and entity definition.
Additionally, we include numerical features to estimate the confidence of a candidate entity based on the statistics collected in the index described earlier. We compute two scores based on the word overlap between (i) the mention's context and the candidate's definition, and (ii) the mention's surface span and the candidate entity's name. Finally, we feed the concatenation of the cosine similarity between $\mathbf{f}(\text{m})$ and $\mathbf{g}(\text{e})$ and the intersection-based scores into an affine transformation followed by a sigmoid non-linearity to compute the final score for the pair (m, e). \paragraph{Results.} We use the Bag of Concepts F1 metric \cite{ling:15} for comparison. Table \ref{tab:el_results} compares the performance of the most-frequent-entity baseline and our neural model described above. \begin{table}[t] \centering \begin{tabular}{l|c|c} \toprule & CS & Bio \\ \midrule Baseline & 84.2 & 54.2 \\ Neural & 84.6 & 85.8\\ \bottomrule \end{tabular} \caption{The Bag of Concepts F1 score of the baseline and neural model on the two curated datasets.} \label{tab:el_results} \end{table} \section{Other Research Problems} \label{sec:author_disambiguation} \label{sec:others} In the previous sections, we discussed how we construct the main components of the literature graph. In this section, we briefly describe several other related challenges we are actively working on. \paragraph{Author disambiguation.} Despite initiatives to establish global author IDs such as ORCID and ResearcherID, most publishers provide author information as names (e.g., arXiv). However, author names cannot be used as unique identifiers since several people often share the same name. Moreover, different venues and sources use different conventions in reporting the author names, e.g., ``first initial, last name'' vs.~``last name, first name''. Inspired by \newcite{culotta:07}, we train a supervised binary classifier for merging pairs of author instances and use it to incrementally create author clusters.
We only consider merging two author instances if they have the same last name and share the first initial. If the first name is spelled out (rather than abbreviated) in both author instances, we also require that the first name matches. \paragraph{Ontology matching.} Popular concepts are often represented in multiple KBs. For example, the concept of ``artificial neural networks'' is represented as entity ID D016571 in the MESH ontology, and represented as page ID `21523' in DBpedia. Ontology matching is the problem of identifying semantically-equivalent entities across KBs or ontologies.\footnote{Variants of this problem are also known as deduplication or record linkage.} \paragraph{Limited KB coverage.} The convenience of grounding entities in a hand-curated KB comes at the cost of limited coverage. Introduction of new concepts and relations in the scientific literature occurs at a faster pace than KB curation, resulting in a large gap in KB coverage of scientific concepts. In order to close this gap, we need to develop models which can predict textual relations as well as detailed concept descriptions in scientific papers. For the same reasons, we also need to augment the relations imported from the KB with relations extracted from text. Our approach to address both entity and relation coverage is based on distant supervision \cite{mintz:09}. In short, we train two models for identifying entity definitions and relations expressed in natural language in scientific documents, and automatically generate labeled data for training these models using known definitions and relations in the KB. We note that the literature graph currently lacks coverage for important entity types (e.g., affiliations) and domains (e.g., physics). Covering affiliations requires small modifications to the metadata extraction model followed by an algorithm for matching author names with their affiliations. 
In order to cover additional scientific domains, more agreements need to be signed with publishers. \paragraph{Figure and table extraction.} Non-textual components such as charts, diagrams and tables provide key information in many scientific documents, but the lack of large labeled datasets has impeded the development of data-driven methods for scientific figure extraction. In \newcite{siegel:18}, we induced high-quality training labels for the task of figure extraction in a large number of scientific documents, with no human intervention. To accomplish this, we leveraged the auxiliary data provided in two large web collections of scientific documents (arXiv and PubMed) to locate figures and their associated captions in the rasterized PDF. We use the resulting dataset to train a deep neural network for end-to-end figure detection, yielding a model that can be more easily extended to new domains compared to previous work. \paragraph{Understanding and predicting citations.} The citation edges in the literature graph provide a wealth of information (e.g., at what rate a paper is being cited and whether it is accelerating), and open the door for further research to better understand and predict citations. For example, in order to allow users to better understand what impact a paper had and effectively navigate its citations, we experimented with methods for classifying a citation as important or incidental, as well as more fine-grained classes \cite{valenzuela:15}. The citation information also enables us to develop models for estimating the potential of a paper or an author. In \newcite{weihs:17}, we predict citation-based metrics such as an author's h-index and the citation rate of a paper in the future.
Also related is the problem of predicting which papers should be cited in a given draft \cite{bhagavatula:18}, which can help improve the quality of a paper draft before it is submitted for peer review, or used to supplement the list of references after a paper is published. \section{Conclusion and Future Work}\label{sec:conclusion} In this paper, we discuss the construction of a graph, providing a symbolic representation of the scientific literature. We describe deployed models for identifying authors, references and entities in the paper text, and provide experimental results to evaluate the performance of each model. Three research directions follow from this work and other similar projects, e.g., \newcite{hahnpowell:17,wu:14}: i) improving quality and enriching content of the literature graph (e.g., ontology matching and knowledge base population). ii) aggregating domain-specific extractions across many papers to enable a better understanding of the literature as a whole (e.g., identifying demographic biases in clinical trial participants and summarizing empirical results on important tasks). iii) exploring the literature via natural language interfaces. In order to help future research efforts, we make the following resources publicly available: metadata for over 20 million papers,\footnote{\url{http://labs.semanticscholar.org/corpus/}} meaningful citations dataset,\footnote{\url{http://allenai.org/data.html}} models for figure and table extraction,\footnote{\url{https://github.com/allenai/deepfigures-open}} models for predicting citations in a paper draft \footnote{\url{https://github.com/allenai/citeomatic}} and models for extracting paper metadata,\footnote{\url{https://github.com/allenai/science-parse}} among other resources.\footnote{\url{http://allenai.org/software/}}
\section{Introduction} The root distribution of polynomials in a sequence reveals substantial information about the interrelations of the polynomials, especially when the sequence satisfies a recurrence. Stanley \cite{StaW} provides figures for the root distribution of some polynomial sequences arising in combinatorics. In the study of the root distribution of sequences of polynomials, both the real-rootedness and the limiting distribution of zeros receive much attention. Some evidence for the significance of the real-rootedness of polynomials can be found in Stanley~\cite[\S 4]{Sta00}. Bleher and Mallison~\cite{BM06} consider the zeros of Taylor polynomials and the asymptotics of the zeros of linear combinations of exponentials. Studies of the ``zero attractors'' of particular sequences of polynomials can be found in~\cite{BG07,GHR09}. The exploration of zero attractors of Appell polynomials has been regarded as ``gems in experimental mathematics'' in \cite{BG08}. The limiting distribution of zeros has been used to study the four-color theorem via chromatic polynomials, an approach initiated by Birkhoff \cite{Bir12}; the theorem amounts to the nonexistence of a chromatic polynomial of a planar graph with a zero at the point $4$. Beraha and Kahane~\cite{BK79} examine the limits of zeros for the sequence of chromatic polynomials of a special family of $3$-regular graphs, each graph consisting of an inner and an outer square separated by $n$ $4$-rings. It turns out that the number $4$ is a limit of zeros of polynomials in this family. Motivated by the LCGD conjecture from topological graph theory, Gross, Mansour, Tucker and the first author~\cite{GMTW16-01,GMTW16-10} study the root distribution of polynomials satisfying the recurrence \begin{equation}\label[rec]{rec:AB} W_n(z) = A(z)W_{n-1}(z)+B(z)W_{n-2}(z), \end{equation} where $A(z)$ and $B(z)$ are polynomials such that one of them is linear and the other is constant.
They established the real-rootedness subject to certain sign conditions on the coefficients of $A(z)$ and $B(z)$. Since real-rootedness implies log-concavity, they confirmed the LCGD conjecture for many graph families whose genus polynomials satisfy \cref{rec:AB} with these sign conditions. Orthogonal polynomials and quasi-orthogonal polynomials are closely related to \cref{rec:AB}; see Andrews, Richard and Ranjan~\cite{ARR99B} and Brezinski, Driver and Redivo-Zaglia~\cite{BDR04}. Jin and Wang~\cite{JW17X} characterized the common zeros of the polynomials $W_n(z)$ for general $A(z)$ and $B(z)$. Following Gross et al.~\cite{GMTW16-01}, a sequence $\{W_n(z)\}_n$ of polynomials satisfying \cref{rec:AB} is said to be of type $(\deg A(z),\,\deg B(z))$. It is normalized if $W_0(z)=1$ and $W_1(z)=z$. When $A(z)=az+b$ and $B(z)=cz+d$ are linear, \cref{rec:AB} reduces to \begin{equation}\label[rec]{rec2:linear} W_n(z) = (az+b)W_{n-1}(z)+(cz+d)W_{n-2}(z). \end{equation} Concentrating on the root distribution, and considering the polynomials defined by $(-1)^nW_n(-z)$, one may suppose without loss of generality that $c\ge 0$. We use a quadruple $(\sgn(a),\sgn(b),\sgn(c),\sgn(d))$, each coordinate of which is either $+$, $-$ or $0$, to denote the combination of signs of the numbers $a,b,c,d$. Gross et al.~\cite{GMTW16-01,GMTW16-10} establish the real-rootedness for Cases $(+,*,0,-)$, $(0,+,+,+)$ and $(0,+,+,-)$, where the symbol~$*$ indicates that the number $b$ may be of any sign. In Case $(-,-,+,-)$, Wang and Zhang~\cite{WZ17X--+-} establish the real-rootedness of all polynomials $W_n(z)$ when $\Delta_g>0$, where $\Delta_g=(b+c)^2+4d(1-a)$. In Case $(+,+,+,+)$, they \cite{WZ17X++++} show that every polynomial $W_n(z)$ is real-rooted if and only if $ad\le bc$.
According to Beraha, Kahane, and Weiss's results~\cite{BKW75,BKW78} on limits of zeros of polynomials satisfying \cref{rec:AB}, polynomials satisfying \cref{rec2:linear} have at most two isolated limits of zeros. In this paper, we show that the set of non-isolated limits of zeros of polynomials satisfying \cref{rec2:linear} is either an arc, or a circle, or a ``lollipop'', or an interval. As an application, we show that in Case $(+,-,+,-)$, every polynomial is real-rooted if and only if $ad\le bc$. Moreover, when the isolated limits are real, the zeros approach them in an oscillating manner in Cases $(0,+,+,+)$ and $(+,+,+,+)$, that is, from both the left and the right of the isolated limits, while in Case $(+,-,+,-)$ the zeros converge from one side only; see \cref{thm:rr:<}. We should mention that the generating function of the normalized polynomials satisfying \cref{rec:AB} is \[ \sum_{n\ge0}W_n(z)t^n=\frac{1+(z-A(z))t}{1-A(z)t-B(z)t^2}. \] In comparison, the root distribution of the polynomials generated by the function \[ \sum_{n\ge0}W_n(z)t^n=\frac{1}{1-A(z)t-B(z)t^2} \] has been investigated in \cite{Tra14}, in which Tran found an algebraic curve containing the zeros of all polynomials $W_n(z)$ with large subscript $n$. This paper is organised as follows. After reviewing the necessary notions and notation, we interpret Beraha et al.'s characterization for polynomials satisfying \cref{rec2:linear} in \cref{thm:lz}. In \S\ref{sec:rr}, we provide a necessary and sufficient condition for real-rootedness in Case $(+,-,+,-)$, and describe the root distribution when the polynomials are real-rooted, as an application of \cref{thm:lz}. \section{Geometry of the limits of zeros} Throughout this paper, we let $a,b,c,d\in\mathbb{R}$, $ac\ne0$, and let $\{W_n(z)\}_{n\ge 0}$ be a sequence of polynomials satisfying \cref{rec2:linear}. Then the polynomial $W_n(z)$ has leading term $a^{n-1}z^n$.
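The degree and leading-term claim is easy to confirm numerically; the following throwaway sketch (assuming \texttt{numpy}; the parameter values are arbitrary) iterates \cref{rec2:linear} and checks that $W_n(z)$ has leading term $a^{n-1}z^n$:

```python
import numpy as np

P = np.polynomial.polynomial

# arbitrary parameters with ac != 0 (illustrative values only)
a, b, c, d = 1.5, -2.0, 2.0, -1.0
w_prev, w_cur = np.array([1.0]), np.array([0.0, 1.0])   # W_0 = 1, W_1 = z
for n in range(2, 9):
    w_prev, w_cur = w_cur, P.polyadd(P.polymul([b, a], w_cur),
                                     P.polymul([d, c], w_prev))
    assert len(w_cur) == n + 1                   # deg W_n = n
    assert abs(w_cur[-1] - a**(n - 1)) < 1e-9    # leading coefficient a^{n-1}
```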
For any complex number $z=re^{i\theta}$ with $\theta\in(-\pi,\pi]$, we use the square root notation $\sqrt{z}$ to denote the number $\sqrt{r}e^{i\theta/2}$, which lies in the right half-plane, with argument in $(-\pi/2,\,\pi/2]$. The general formula in \cref{lem:00} is the basis of our study; it can be found in~\cite{GMTW16-01,GMTW16-10}. \begin{lem}\label{lem:00} Let $A,B\in\mathbb{C}$. Suppose that $W_0=1$ and $W_n=AW_{n-1}+BW_{n-2}$ for $n\ge 2$. Then \[ W_n=\begin{cases} {\displaystyle {\alpha_+\lambda_+^n+\alpha_-\lambda_-^n}},&\textrm{ if $\Delta\neq0$},\\[5pt] {\displaystyle {A+nh\over 2}\cdot\bgg{{A\over 2}}^{n-1}},&\textrm{ if $\Delta=0$}, \end{cases} \] for $n\ge 0$, where $h=2W_1-A$ and \[ \lambda_\pm=\frac{A\pm\sqrt{\Delta}}{2},\qquad \alpha_\pm=\frac{\sqrt{\Delta}\pm h}{2\sqrt{\Delta}}, \qquad\text{with $\Delta=A^2+4B$}. \] \end{lem} Accordingly, we employ the notations \begin{align*} \Delta(z) &=A(z)^2+4B(z) =a^2z^2+(2ab+4c)z+(b^2+4d),\\[4pt] h(z)&=2W_1(z)-A(z)=(2-a)z-b,\\[4pt] \lambda_\pm(z)&=\frac{A(z)\pm\sqrt{\Delta(z)}}{2},\\ \alpha_\pm(z)&=\frac{\sqrt{\Delta(z)}\pm h(z)}{2\sqrt{\Delta(z)}},\\ g(z)&=-\alpha_+(z)\alpha_-(z)\Delta(z) =\frac{h^2(z)-\Delta(z)}{4} =(1-a)z^2-(b+c)z-d. \end{align*} Denote by $x_A=-b/a$ and $x_B=-d/c$ the zeros of $A(z)$ and~$B(z)$, respectively. The function $\Delta(z)$ has two zeros \[ x_\Delta^{\pm}=x_A+\frac{-2c\pm2\sqrt{\Delta_\Delta}}{a^2}, \] where $\Delta_\Delta=c^2-a^2B(x_A)$ is the discriminant of $\Delta(z)$. A number $z^*\in\mathbb{C}$ is a {\em limit of zeros} of the sequence $\{W_n(z)\}_n$ of polynomials if there is a zero $z_n$ of $W_n(z)$ for each $n$ such that $\lim_{n\to\infty}z_n=z^*$.
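As a sanity check of the closed form in \cref{lem:00} and of the notation above, one can compare $\alpha_+\lambda_+^n+\alpha_-\lambda_-^n$ against the recurrence at a sample point (a numerical sketch only; the parameters and the point $z$ are arbitrary choices with $\Delta(z)\ne0$):

```python
import cmath

a, b, c, d = 1.0, 2.0, 1.0, 1.0
z = 0.3 + 0.4j
A, B = a*z + b, c*z + d

# values W_0, W_1, W_2, ... by the recurrence, with W_0 = 1, W_1 = z
w_prev, w_cur = 1.0 + 0.0j, z
vals = [w_prev, w_cur]
for _ in range(10):
    w_prev, w_cur = w_cur, A*w_cur + B*w_prev
    vals.append(w_cur)

# closed form of Lemma lem:00 (branch Delta != 0); cmath.sqrt takes the
# principal square root, matching the paper's right-half-plane convention
D = A*A + 4*B                      # Delta(z)
sq = cmath.sqrt(D)
h = 2*z - A                        # h(z) = 2 W_1(z) - A(z)
lam_p, lam_m = (A + sq)/2, (A - sq)/2
al_p, al_m = (sq + h)/(2*sq), (sq - h)/(2*sq)
for n, v in enumerate(vals):
    assert abs(al_p*lam_p**n + al_m*lam_m**n - v) < 1e-9 * max(1.0, abs(v))
```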
\begin{lem}[Beraha et al.~\cite{BKW75}]\label{lem:BKW} Under the non-degeneracy conditions \begin{enumerate}[label=\emph{(N-\roman*)}] \item\label[icond]{cond:rec2} the sequence $\{W_n(z)\}_n$ does not satisfy a recurrence of order less than two, \item\label[icond]{cond:f<>0} for each constant $\omega$ with $|\omega|=1$, there is some $z\in\mathbb{C}$ such that $\lambda_+(z)\ne\omega\lambda_-(z)$, \end{enumerate} a number $z$ is a limit of zeros if and only if it satisfies one of the following conditions: \begin{enumerate}[label=\emph{(C-\roman*)}] \item\label[icond]{cond:-} $\alpha_-(z)=0$ and $|\lambda_+(z)|<|\lambda_-(z)|$; \item\label[icond]{cond:+} $\alpha_+(z)=0$ and $|\lambda_+(z)|>|\lambda_-(z)|$; \item\label[icond]{cond:=} $|\lambda_+(z)|=|\lambda_-(z)|$. \end{enumerate} \end{lem} A limit $z$ of zeros is said to be {\em non-isolated} if it satisfies \cref{cond:=}, and to be {\em isolated} if it satisfies \cref{cond:-} or \cref{cond:+}. We denote the set of non-isolated limits of zeros of the polynomials $W_n(z)$ by $\clubsuit$, and denote the set of isolated limits of zeros by $\spadesuit$. The clover symbol $\clubsuit$ is adopted because the leaflets of a clover are not alone, while the spade symbol $\spadesuit$, appearing as a single leaflet, represents isolation by comparison. \begin{thm}\label{thm:lz} Let $a,b,c,d\in\mathbb{R}$ and $ac\ne0$. Let $\{W_n(z)\}_n$ be a sequence of polynomials satisfying \cref{rec2:linear} with $W_0(z)=1$ and $W_1(z)=z$.
Then the sets of isolated and non-isolated limits of zeros of $\{W_n(z)\}_n$ are respectively \begin{align*} \spadesuit&=\{z\in\mathbb{C}\colon g(z)=0,\,\Re\bg{A(z)\overline{h(z)}}<0\}\quad\hbox{ and }\quad\\ \clubsuit&=\begin{cases} \wideparen{x_\Delta^-x_Ax_\Delta^+},&\text{if $\Delta_\Delta<0$};\\[5pt] C_0,&\text{if $\Delta_\Delta=0$};\\[5pt] J_\Delta\cup C_0,&\text{if $\Delta_\Delta>0$ and $B(x_A)>0$};\\[5pt] J_\Delta,&\text{if $\Delta_\Delta>0$ and $B(x_A)\le0$}; \end{cases} \end{align*} where $\overline{z}$ denotes the complex conjugate of $z$, $\wideparen{x_\Delta^-x_Ax_\Delta^+}$ stands for the circular arc connecting the points $x_\Delta^-$ and $x_\Delta^+$ through the point $x_A$, \[ C_0=\{z\in\mathbb{C}\colon\abs{z-x_B}=\abs{x_A-x_B}\} \] is the circle with center $x_B$ and radius $\abs{x_A-x_B}$, and \[ J_\Delta=\{x\in\mathbb{R}\colon x_\Delta^-\le x\le x_\Delta^+\} \] is an interval. \end{thm} \begin{proof} \Cref{cond:rec2} is satisfied since otherwise one would have $W_n(z)=z^n$ for each $n$, contradicting the fact that $W_2(z)=az^2+(b+c)z+d$. \Cref{cond:f<>0} holds true since $|\lambda_-(x)|\ne|\lambda_+(x)|$ for every sufficiently large real number $x$. Suppose that $z\in\spadesuit$. By definition, we have $\alpha_-(z)\alpha_+(z)=0$, which implies \[ 0=g(z)=\frac{h^2(z)-\Delta(z)}{4}. \] Thus $\sqrt{\Delta(z)}\in\{\pm h(z)\}$. If $\sqrt{\Delta(z)}=h(z)$, then $\alpha_-(z)=0$ by definition. By \cref{lem:BKW}, we have $|\lambda_+(z)|<|\lambda_-(z)|$, i.e., $\Re\bg{A(z)\overline{h(z)}}<0$. The other case $\sqrt{\Delta(z)}=-h(z)$ can be handled along the same lines. It is clear that $\{x_A,\,x_\Delta^-,\,x_\Delta^+\}\subseteq\clubsuit$. Let $z=x+yi\in\clubsuit$ such that $A(z)\Delta(z)\ne0$, where $x,y\in\mathbb{R}$. If $y=0$, then $z,\,A(z),\,\Delta(z)\in\mathbb{R}$. In this case, we can infer that \[ |\lambda_-(z)|=|\lambda_+(z)| \quad\iff\quad \Delta(z)<0 \quad\iff\quad \Delta_\Delta>0 \;\text{and}\; x\in (x_\Delta^-,\,x_\Delta^+). \] Otherwise $y\ne0$.
We can infer that \begin{align*} |\lambda_-(z)|=|\lambda_+(z)| &\iff \text{the vectors $A(z)$ and $\sqrt{\Delta(z)}$ are orthogonal}\\ &\iff \text{the vectors $A^2(z)$ and $\Delta(z)$ have opposite directions}\\ &\iff \text{$A^2(z)$ and $B(z)$ have opposite directions, $\abs{A^2(z)}<\abs{4B(z)}$}\\ &\iff \begin{cases} \Re A^2(z)\cdot \Im B(z)=\Re B(z)\cdot \Im A^2(z)\\[5pt] \Im A^2(z)\cdot \Im B(z)<0\\[5pt] \abs{\Im A^2(z)}<4\abs{\Im B(z)} \end{cases}\\ &\iff \begin{cases} \,(x-x_B)^2+y^2=(x_A-x_B)^2\\ (x-x_A)(x-x_A+2c/a^2)<0 \end{cases}\\ &\iff z\in C_0\cap S_0\setminus \{x_A,\,x_\Delta^-,\,x_\Delta^+\}, \end{align*} where $S_0=\{z\in\mathbb{C}\colon \abs{\Re z-x_A}\le\abs{2c/a^2},\,c\cdot(\Re z-x_A)\le 0\}$ is the vertical strip with boundaries $\Re z=x_A$ and $\Re z=x_A-2c/a^2$. It is clear that the boundary $\Re z=x_A$ intersects the circle $C_0$ at the point $x_A$. To determine the intersection of the other boundary with $C_0$, we proceed according to the sign of $\Delta_\Delta$. Suppose that $\Delta_\Delta<0$. Then $J_\Delta=\emptyset$ by definition, and \[ \Re \bg{x_\Delta^\pm}=x_A-\frac{2c}{a^2} \quad\hbox{ and }\quad \Im \bg{x_\Delta^\pm}=\pm\frac{2\sqrt{-\Delta_\Delta}}{a^2}. \] It follows that \[ \bg{x_\Delta^\pm-x_B}^2 =\bgg{x_A-\frac{2c}{a^2}-x_B}^2+\biggl(\frac{2\sqrt{-\Delta_\Delta}}{a^2}\biggr)^2 =(x_A-x_B)^2. \] Thus the points $x_\Delta^\pm$ lie on the intersection of the boundary $\Re z=x_A-2c/a^2$ and the circle $C_0$. Since the intersection contains at most two points, the points $x_\Delta^\pm$ constitute the intersection. Hence the set $\clubsuit=C_0\cap S_0$ is the circular arc $\wideparen{x_\Delta^-x_Ax_\Delta^+}$. When $\Delta_\Delta=0$, the points $x_\Delta^\pm=x_A-2c/a^2$ coincide with each other. As a consequence, we have $C_0\cap S_0=C_0$ and $\clubsuit=J_\Delta\cup C_0=C_0$. Below we can suppose that $\Delta_\Delta>0$. Note that \begin{equation}\label{B:xA} B(x_A)=c(x_A-x_B).
\end{equation} When $B(x_A)\le 0$, we claim that $C_0\cap S_0=\{x_A\}$. Let $z\in C_0\cap S_0$. If $c>0$, then $x_A\le x_B$ by \cref{B:xA}. Since~$z\in C_0$, we have $\Re z\ge x_A$. Since $z\in S_0$, we have $c(\Re z-x_A)\le 0$. Therefore, we infer that $\Re z=x_A$, and $z=x_A$ consequently. Otherwise $c<0$. Then $x_A\ge x_B$ by \cref{B:xA}. In this case, $z\in C_0$ implies $\Re z\le x_A$, and $z\in S_0$ implies $\Re z\ge x_A$. Hence $z=x_A$ for the same reason. This proves the claim. Since $\Delta(x_A)=4B(x_A)\le0$, we have $x_A\in J_\Delta$. Hence $\clubsuit=J_\Delta$. When $B(x_A)>0$, we claim that $C_0\subset S_0$. Let $z\in C_0$. One may show $c(\Re z-x_A)\le 0$ in the same fashion as in the case $B(x_A)\le0$. By geometric interpretation and the condition $\Delta_\Delta>0$, we deduce that \[ |\Re z-x_A|\le (\text{the diameter of $C_0$}) =2|x_A-x_B|<|2c/a^2|. \] This proves the claim and hence $\clubsuit=J_\Delta\cup C_0$. \end{proof} We remark that $z\in\spadesuit$ if and only if $\overline{z}\in\spadesuit$. Since $\Delta_\Delta\le 0$ implies $B(x_A)>0$, the case ``$\Delta_\Delta>0$ and $B(x_A)\le0$'' in \cref{thm:lz} can be reduced to ``$B(x_A)\le0$''. \begin{cor}\label{cor:rr} Let $a,b,c,d\in\mathbb{R}$ and $ac\ne0$. Let $\{W_n(z)\}_n$ be a sequence of polynomials satisfying \cref{rec2:linear} with $W_0(z)=1$ and $W_1(z)=z$. If every polynomial $W_n(z)$ for large $n$ is real-rooted, then $B(x_A)\le0$, and $\Delta_\Delta>0$ as a consequence. \end{cor} \begin{proof} Since every polynomial $W_n(z)$ for large $n$ is real-rooted, we have $\spadesuit\cup\clubsuit\subset\mathbb{R}$. By \cref{thm:lz}, we find either $\clubsuit=J_\Delta$, or $\clubsuit=C_0$ and $C_0$ degenerates to a single point. In the former case, we find $B(x_A)\le0$. In the latter case, we have $\Delta_\Delta=0$ and $x_A=x_B$, which is impossible, since it would imply \[ 0=\Delta_\Delta=c^2-a^2B(x_A)=c^2, \] contradicting $c\ne0$. This completes the proof.
\end{proof} When $\clubsuit=J_\Delta\cup C_0$, it turns out that the set $\clubsuit$ looks like a lollipop; see \cref{fig:lollipop}. \begin{figure}[h] \includegraphics[width=7cm]{lem34-1} \includegraphics[width=7cm]{lem34-2} \caption{The zero distribution of $W_{30}(z)$ for the parameters $(a,b,c,d)=(1,\,-2,\,2,\,-1)$ and $(a,b,c,d)=(1,\,2,\,-2,\,-1)$, for which $(x_A,\,x_B)=(2,\,1/2)$ and $(x_A,\,x_B)=(-2,\,-1/2)$ respectively, and $B(x_A)=3$ in both cases.}\label{fig:lollipop} \end{figure} \begin{thm}\label{thm:dd>0:BxA>0} Suppose $\Delta_\Delta>0$ and $B(x_A)>0$. Then $J_\Delta\cap C_0=\{2x_B-x_A\}$, and the part of $J_\Delta$ outside the circle $C_0$ is longer than the part of $J_\Delta$ inside $C_0$. \end{thm} \begin{proof} By \cref{thm:lz}, we have $\clubsuit=J_\Delta\cup C_0$. First, let $x_0=2x_B-x_A$ be the real point on $C_0$ other than~$x_A$. Since \[ \Delta(x_0)=-\frac{4B(x_A)\Delta_\Delta}{c^2}<0, \] we have $x_0\in J_\Delta$. Second, the centre of the circle $C_0$ is not on the interval $J_\Delta$, since $\Delta(x_B)=A^2(x_B)>0$. It follows that $J_\Delta\cap C_0=\{x_0\}$. Thirdly, note that \begin{equation}\label{pf1} x_0-\frac{x_\Delta^-+x_\Delta^+}{2} =\frac{1}{c}\cdot \frac{2\Delta_\Delta}{a^2}. \end{equation} If $c>0$, then $x_B<x_A$ by \cref{B:xA}. It follows that $x_0<x_B$. Thus the interval $J_\Delta$ meets the circle $C_0$ from the left. By \cref{pf1}, we have $x_0>(x_\Delta^-+x_\Delta^+)/2$. Thus the part of $J_\Delta$ outside the circle $C_0$ is longer than the part of $J_\Delta$ inside. The other case $c<0$ can be handled in the same way. \end{proof} \section{The interlacing zeros for Case $(+,-,+,-)$}\label{sec:rr} Here is the main result of this section. \begin{thm}\label{thm:rr} Let $a,c>0$ and $b,d<0$. Let $\{W_n(z)\}_n$ be a sequence of polynomials satisfying \cref{rec2:linear} with $W_0(z)=1$ and $W_1(z)=z$. Then every polynomial $W_n(z)$ is real-rooted if and only if $x_A\le x_B$.
\end{thm} The necessity part of \cref{thm:rr} can be seen directly from \cref{cor:rr}. The sufficiency part will be handled for the case $x_A<x_B$ in \cref{thm:rr:<}, and for the case $x_A=x_B$ in \cref{thm:rr:=}. Throughout this section, we suppose that $x_A\le x_B$, which implies that $\Delta_\Delta>0$ and $x_\Delta^\pm\in\mathbb{R}$. The zeros of the function $g(z)$ are \[ x_g^\pm=\begin{cases} \displaystyle \frac{b+c}{2(1-a)}\pm\frac{\sqrt{\Delta_g}}{2\abs{1-a}},&\text{if $a\ne1$},\\[8pt] \displaystyle -{d\over b+c},&\text{if $a=1$ and $b+c\ne0$}, \end{cases} \] where $\Delta_g=(b+c)^2+4d(1-a)$. We define two numbers $u$ and $v$ by \begin{equation}\label{interval:uv} (u,v)=\begin{cases} (x_\Delta^-,\,x_\Delta^+),&\text{if $a<2$ and $F\le0$};\\[4pt] (x_g^-,\,x_g^+),&\text{if $a>2$ and $F<0$};\\[4pt] (x_g^+,\,x_\Delta^+),&\text{if $a<1$ and $F>0$};\\[4pt] (x_g^-,\,x_\Delta^+),&\text{otherwise}; \end{cases} \end{equation} where $F=\Delta_g-\Delta_\Delta=d(a-2)^2+bc(2-a)+b^2$. Note that $(u,v)=(x_\Delta^-,\,x_\Delta^+)$ if $a=1$ and $b+c=0$. Furthermore, we have $u,v\in\mathbb{R}$ since $\Delta_g>\Delta_\Delta>0$ whenever $a\ge 2$ or $F>0$. As will be seen in \cref{thm:rr:<,thm:rr:=}, we have $u<v$ and the interval $(u,v)$ is the best bound for the zeros of $W_n(z)$. \subsection{Case $x_A<x_B$} We determine the signs of $W_n(u)$ and $W_n(v)$ in \cref{lem:uv}. \begin{lem}\label{lem:uv} Let $a,c>0$ and $b,d<0$. Let $\{W_n(z)\}_n$ be a sequence of polynomials satisfying \cref{rec2:linear} with $W_0(z)=1$ and $W_1(z)=z$. Suppose that $x_A< x_B$. Then we have \begin{align} &u\le x_\Delta^-<x_A<x_\Delta^+\le v<x_B,\label[ineq]{uv}\\[4pt] &u<0<v,\label[ineq]{u0v}\\[4pt] &W_n(u)(-1)^n>0,\label[ineq]{W:u}\\[4pt] &W_n(v)>0,\label[ineq]{W:v}\quad\hbox{ and }\quad\\[4pt] &\{u,v\}\subseteq\spadesuit\cup\clubsuit.\label{uv:club} \end{align} \end{lem} \begin{proof} The premise $x_A<x_B$ implies $\Delta(x_A)=4B(x_A)<0$. 
It follows that \begin{align} &x_A\in (x_\Delta^-,\,x_\Delta^+),\qquad x_\Delta^+>0,\qquad A(x_\Delta^+)>0>A(x_\Delta^-),\quad\hbox{ and }\quad\notag\\ &h(x_\Delta^+)=(2-a)x_\Delta^+-b\ge -b>0\qquad\text{if $a\le 2$}.\label[ineq]{h:xd2:+} \end{align} Since $\Delta(x_B)=A^2(x_B)>0$ and $x_\Delta^-<x_A<x_B$, we have $x_\Delta^+<x_B$. To confirm Relation \eqref{uv:club}, by \cref{thm:lz}, it suffices to show that \begin{equation}\label[ineq]{dsr:uv:club} A(x)h(x)<0,\qquad\text{for any $x\in\{u,v\}\backslash\{x_\Delta^-,\,x_\Delta^+\}$}. \end{equation} Let $x_h$ be the unique zero of the function $h(z)$ when $a\ne2$. Then $x_h=b/(2-a)$. We proceed according to the definition of the numbers $u$ and $v$. \begin{case}\label{case:a<2:F<=0} $a<2$, $F\le0$ and $[u,v]=J_\Delta$. It is routine to compute that \begin{equation}\label{hh:xd} h(x_\Delta^-)h(x_\Delta^+)=\frac{4F}{a^2}. \end{equation} Together with \cref{h:xd2:+}, we have $h(x_\Delta^-)\le 0$ and thus \[ x_\Delta^-\le x_h=\frac{b}{2-a}<0, \] verifying \cref{u0v}. By \cref{lem:00}, we have \begin{equation}\label{W:xd} W_n(x_\Delta^\pm) =\frac{A(x_\Delta^\pm)+nh(x_\Delta^\pm)}{2}\cdot\bgg{\frac{A(x_\Delta^\pm)}{2}}^{n-1}, \end{equation} which implies \cref{W:u,W:v}. \end{case} \begin{case}\label{case:a>2:F<0} $a>2$, $F<0$ and $[u,v]=[x_g^-,\,x_g^+]$. Observe that \begin{equation}\label[ineq]{g:xd} g(x_\Delta^\pm)=\frac{h^2(x_\Delta^\pm)}{4}\ge 0. \end{equation} Since the polynomial $g(z)$ is quadratic with negative leading coefficient, we can derive all inequalities in \eqref{uv} except $v<x_B$. Since $F<0$, we have $d(a-2)-bc<0$ and thus \[ g(x_B)=\frac{-d}{c^2}\bg{(a-1)d-bc}<\frac{-d}{c^2}\bg{(a-2)d-bc}<0. \] Since $x_g^-<x_A<x_B$, we infer that $x_g^+<x_B$. On the other hand, by Vieta's formulas, we have \begin{equation}\label[ineq]{xg1xg2} x_g^-x_g^+=\frac{d}{a-1}, \end{equation} whose negativity verifies \cref{u0v}.
By \cref{lem:00}, we have \begin{equation}\label{W:xg} W_n(x_g^\pm)=(x_g^\pm)^n, \end{equation} which implies \cref{W:u,W:v}. It is routine to compute that \begin{equation}\label{hh:xg} h(x_g^-)h(x_g^+)=\frac{F}{a-1}\qquad\text{if $a\ne1$}. \end{equation} Thus $h(v)<0<h(u)$. By \eqref{uv}, we have $A(u)<0<A(v)$. This proves \cref{dsr:uv:club}. \end{case} \begin{case}\label{case:a<1:F>0} $a<1$, $F>0$ and $[u,v]=[x_g^+,\,x_\Delta^+]$. In view of \cref{W:xd,W:xg,h:xd2:+}, to confirm \cref{uv,u0v,W:u,W:v,dsr:uv:club}, we shall show that \[ x_g^+\le x_\Delta^-,\qquad x_g^+<0,\quad\hbox{ and }\quad h(x_g^+)>0. \] In fact, we note that the polynomial $g(z)$ is quadratic with leading coefficient positive. On the one hand, \cref{hh:xg} gives $x_h\in (x_g^-,\,x_g^+)$. This confirms $h(x_g^+)>0$ immediately. By \cref{hh:xd}, we can deduce that $x_h<x_\Delta^-$, since otherwise one would have the absurd inequality \[ 0<x_\Delta^+<x_h=\frac{b}{2-a}<0. \] Thus \cref{g:xd} implies $(x_g^-,\,x_g^+)\cap J_\Delta=\emptyset$. Moreover, the whole interval $(x_g^-,\,x_g^+)$ lies to the left of $J_\Delta$. This proves $x_g^+\le x_\Delta^-$. On the other hand, by \cref{xg1xg2} we have $x_g^-x_g^+>0$. Since $x_g^-<x_h<0$, we find $x_g^+<0$. \end{case} \begin{case}\label{case:remain} For all remaining cases we have $[u,v]=[x_g^-,\,x_\Delta^+]$. This time, to confirm \cref{uv,u0v,W:u,W:v,dsr:uv:club}, we shall show that \[ x_g^-\le x_\Delta^-,\qquad x_g^-<0,\qquad h(x_\Delta^+)\ge0,\quad\hbox{ and }\quad h(x_g^-)>0. \] In fact, when $a=1$, in view of \cref{case:a<2:F<=0}, we now have $F>0$ and thus $b+c<0$. Note that $g(z)=-(b+c)z-d$. It follows from \cref{g:xd} that $x_g^-\le x_\Delta^-$. Since $g(0)=-d>0$, we obtain $x_g^-<0$. By \cref{h:xd2:+}, we have $h(x_\Delta^+)\ge0$. It is routine to compute that \[ h(x_g^-)=x_g^--b=-\frac{d}{b+c}-b=-\frac{F}{b+c}>0. \] Now, in view of \cref{case:a<2:F<=0,case:a<1:F>0}, we have $a>1$. 
Consequently, one may derive $J_\Delta\subseteq[x_g^-,\,x_g^+]$ and $x_g^-<0$ as in \cref{case:a>2:F<0}. We shall handle the two inequalities involving $h$ according to the value range of $a$. If $a=2$, then the function $h(z)=-b$ reduces to a positive constant and we are done. Now we can suppose that $a\ne 2$. \begin{enumerate}[leftmargin=20pt] \item If $a>2$, then \[ h(x_\Delta^-)+h(x_\Delta^+) =\frac{4}{a^2}\bg{(a-2)c-ab}>0. \] In view of \cref{case:a>2:F<0}, we have $F\ge 0$. By \cref{hh:xd}, we have $h(x_\Delta^-)h(x_\Delta^+)\ge 0$. Therefore, we infer that $h(x_\Delta^+)\ge 0$. Since the polynomial $h(z)$ is strictly decreasing and $x_g^-<x_\Delta^+$, we have $h(x_g^-)>h(x_\Delta^+)\ge 0$. \item If $1<a<2$, by \cref{h:xd2:+}, it suffices to show that $h(x_g^-)>0$. In view of \cref{case:a<2:F<=0}, we have $F>0$. By \cref{h:xd2:+,hh:xd}, we have $h(x_\Delta^-)>0$ and $x_h<x_\Delta^-$. By \cref{hh:xg}, we have $h(x_g^-)h(x_g^+)>0$. Since $J_\Delta\subseteq[x_g^-,\,x_g^+]$, we deduce that $x_h<x_g^-$, i.e., $h(x_g^-)>0$. \end{enumerate} \end{case} This completes the proof. \end{proof} Let $X,Y\subset\mathbb{R}$ be such that $|X|-|Y|\in\{0,1\}$. We say that {\em $X$ interlaces $Y$} if the elements $x_i$ of $X$ and the elements $y_j$ of $Y$ can be arranged so that $x_1\le y_1\le x_2\le y_2\le\cdots$, and that {\em $X$ strictly interlaces $Y$} if no equality holds in the ordering. \Cref{lem:crt:itl} is Lemma 3.3 of \cite{GMTW16-10}, where it was used in an inductive proof of the real-rootedness of the polynomials $W_n(z)$ defined by \cref{rec2:linear} with $a>0$, $b\in\mathbb{R}$, $c=0$ and $d<0$. \begin{lem}[Gross et al.~\cite{GMTW16-10}]\label{lem:crt:itl} Let $\{W_n(z)\}_n$ be a sequence of polynomials satisfying \cref{rec:AB}. Let $m\ge0$ and $\alpha,\beta\in\mathbb{R}$.
Suppose that the polynomial $W_{m+2}(x)$ has degree $m+2$, and that $B(x)<0$ for all $x\in R_{m+1}$, \hbox{$W_{m} (\alpha)W_{m+2}(\alpha)>0$}, $W_{m} (\beta)W_{m+2}(\beta)>0$, $|R_{m+1}|=m+1$, $R_{m+1}\subset(\alpha,\beta)$, and $R_{m+1}$ strictly interlaces~$R_{m}$. Then we have $|R_{m+2}|=m+2$, $R_{m+2}\subset(\alpha,\beta)$, and $R_{m+2}$ strictly interlaces $R_{m+1}$. \end{lem} Now we are in a position to show the real-rootedness, together with the interlacing property and the best bound for the zeros. \begin{thm}\label{thm:rr:<} Let $a,c>0$ and $b,d<0$ such that $x_A<x_B$. Let $\{W_n(z)\}_n$ be a sequence of polynomials satisfying \cref{rec2:linear} with $W_0(z)=1$ and $W_1(z)=z$. Then every polynomial $W_n(z)$ is real-rooted. Denote by $R_n$ the zero set of~$W_n(z)$. Then $R_n\subset (u,v)$, and the set $R_{n+1}$ strictly interlaces $R_n$. Moreover, the bound $(u,v)$ is sharp, in the sense that both the numbers $u$ and $v$ are limits of zeros. \end{thm} \begin{proof} We proceed by induction, applying \cref{lem:crt:itl} with $(\alpha,\beta)=(u,v)$. Note that $R_1=\{0\}$. By \cref{lem:uv}, we have $u<0<v$. By definition, any singleton set strictly interlaces the empty set~$R_0$. Now we can suppose, for some $m\ge0$, that $|R_{m+1}|=m+1$, $R_{m+1}\subset (u,v)$, and $R_{m+1}$ strictly interlaces $R_m$. From \cref{rec2:linear}, the polynomial $W_{m+2}(z)$ has degree $m+2$. By \cref{lem:uv}, we have $B(x)<0$ for $x\in R_{m+1}$, $W_{m}(u)W_{m+2}(u)>0$ and $W_{m}(v)W_{m+2}(v)>0$. By \cref{lem:crt:itl}, we obtain the real-rootedness, the bound $(u,v)$ and the strict interlacing property. By \cref{thm:lz}, we have $\{x_\Delta^\pm\}\subseteq\clubsuit$. By \cref{lem:uv}, we have $\{u,v\}\backslash\{x_\Delta^\pm\}\subseteq\spadesuit$. Hence both the numbers $u$ and $v$ are limits of zeros. This completes the proof.
\end{proof} We remark that the sharpness of the bound $(u,v)$ can also be shown by a totally different method, demonstrated in the proof of Theorem 4.5 in \cite{GMTW16-10}. \subsection{Case $x_A=x_B$} Suppose that $x_A=x_B$. Then \cref{interval:uv} reduces to \[ u=\begin{cases} x_\Delta^-,&\text{if $a<2$ and $F\le0$}\\[4pt] x_g^+,&\text{if $a<1$ and $F>0$}\\[4pt] x_g^-,&\text{otherwise} \end{cases} \qquad\quad\hbox{ and }\quad\qquad v=x_\Delta^+=x_A=x_B. \] In analogy with \cref{lem:uv}, we have \cref{lem:uv:=}. \begin{lem}\label{lem:uv:=} Let $a,c>0$ and $b,d<0$. If $x_A=x_B$, then $u\le x_\Delta^-$, $u<0$, $W_n(u)(-1)^n>0$, and $u\in\spadesuit$ if $u\ne x_\Delta^-$. \end{lem} \begin{proof} The same as the proof of \cref{lem:uv}. \end{proof} Now we can demonstrate the root distribution of the polynomials $\{W_n(z)\}$. \begin{thm}\label{thm:rr:=} Let $a,c>0$ and $b,d<0$ such that $x_A=x_B$. Let $\{W_n(z)\}_n$ be a sequence of polynomials satisfying \cref{rec2:linear} with $W_0(z)=1$ and $W_1(z)=z$. Then the function $U_n(z)=W_n(z)/A^{\lfloor n/2\rfloor}(z)$ is a polynomial, with all its zeros lying in the interval $(u,\,x_B)$. Moreover, the interval $(u,\,x_B)$ is sharp in the sense that both the numbers $u$ and $x_B$ are limits of zeros of the polynomials $U_n(z)$. \end{thm} \begin{proof} By \cref{rec2:linear}, the functions $U_n(z)$ satisfy the recurrence \begin{equation}\label[rec]{rec:U} U_n(z)=\begin{cases} \displaystyle \qquad U_{n-1}(z)+c'\cdot U_{n-2}(z),&\quad\text{if $n$ is even},\\[4pt] \displaystyle A(z)U_{n-1}(z)+c'\cdot U_{n-2}(z),&\quad\text{if $n$ is odd}, \end{cases} \end{equation} where $c'=c/a$, with $U_0(z)=1$ and $U_1(z)=z$. It follows immediately that the function $U_n(z)$ is a polynomial of degree $\lceil{n/2}\rceil$. Let $R_n'$ be the zero set of $U_n(z)$.
We shall show by induction that the zeros $z_j$ of $U_n(z)$ strictly interlace the zeros~$x_j$ of $U_{n-1}(z)$ from the left, in the interval $(u,\,x_B)$, i.e., \begin{equation}\label[rl]{interlacing:U} \begin{cases} u<z_1<x_1<z_2<\cdots<z_{\lceil{\frac n2}\rceil}<x_{\lceil{\frac{n-1}{2}}\rceil}<x_B, &\text{if $n$ is even};\\ u<z_1<x_1<z_2<\cdots<z_{\lceil{\frac{n-1}{2}}\rceil}<x_{\lceil{\frac{n-1}{2}}\rceil} <z_{\lceil{\frac{n}{2}}\rceil}<x_B, &\text{if $n$ is odd}. \end{cases} \end{equation} We make some preparations. First, by \cref{rec:U}, it is straightforward to show by induction that $U_n(x_B)>0$. Second, by \cref{lem:uv:=}, we have $u\le x_\Delta^-<x_\Delta^+=x_A$ and $W_n(u)(-1)^n>0$. Therefore, we have $A(u)<0$ and thus \[ U_n(u)(-1)^{\lceil{n/2}\rceil}>0. \] In particular, we have $U_2(u)<0$. Since $U_2(z)=z+c'$, we have $u<-c'<0<x_B$. This verifies the base case $n=2$. Let $n\ge3$. By the induction hypothesis, the set $R_{n-1}'$ strictly interlaces $R_{n-2}'$ from the left. Therefore, we have \[ U_{n-2}(x_j)(-1)^{\lceil n/2\rceil+j}>0\qquad \text{for $j\le \lceil{(n-1)/2}\rceil$.} \] By \cref{rec:U}, the number $U_n(x_j)$ has the same sign as the number $U_{n-2}(x_j)$, that is, $U_{n}(x_j)(-1)^{\lceil n/2\rceil+j}>0$. By using the intermediate value theorem, we derive the desired \eqref{interlacing:U}. As in the proof of \cref{thm:rr:<}, one may show the minimality of the interval $(u,x_B)$ as a bound for the zeros of the polynomials $W_n(z)$. Note that $x_\Delta^-\ne x_\Delta^+$. By \cref{thm:lz}, each point in the interval $J_\Delta$ is a limit of zeros of the polynomials~$W_n(z)$. Therefore, each point in $J_\Delta$ is a limit of zeros of the polynomials $U_n(z)$, and the interval $(u,x_B)=(u,x_\Delta^+)$ becomes the best bound for the union of the zeros of all polynomials $U_n(z)$. This completes the proof. \end{proof}
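The divisibility assertion in \cref{thm:rr:=} can be tested numerically; the sketch below (assuming \texttt{numpy}; the parameters are an arbitrary choice with $a,c>0$, $b,d<0$ and $x_A=x_B=1$) checks that $A^{\lfloor n/2\rfloor}(z)$ divides $W_n(z)$:

```python
import numpy as np

P = np.polynomial.polynomial

a, b, c, d = 1.0, -1.0, 2.0, -2.0     # x_A = -b/a = 1 = -d/c = x_B
w_prev, w_cur = np.array([1.0]), np.array([0.0, 1.0])   # W_0, W_1
polys = [w_prev, w_cur]
for _ in range(2, 11):
    w_prev, w_cur = w_cur, P.polyadd(P.polymul([b, a], w_cur),
                                     P.polymul([d, c], w_prev))
    polys.append(w_cur)

for n, wn in enumerate(polys):
    divisor = P.polypow([-1.0, 1.0], n // 2)   # A(z)^{floor(n/2)} = (z-1)^{...}
    quo, rem = P.polydiv(wn, divisor)
    # U_n = quo is a polynomial: the remainder vanishes (up to rounding)
    assert np.max(np.abs(rem)) < 1e-7 * np.max(np.abs(wn))
```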
\section{Introduction} In this paper we consider linear uniformly parabolic equations of the form \begin{equation}\label{Equation} u_t - \text{div}(A(x,\,t)\nabla u) = 0. \end{equation} Here $u : \mathbb{R}^{n+1} \rightarrow \mathbb{C}$, and the coefficients are bounded measurable, complex-valued functions satisfying \begin{equation}\label{Ellipticity} \text{Re}(A_{kl}(x,\,t)p_k\overline{p}_l) \geq \lambda |p|^2, \quad |A(x,\,t)(p)|^2 \leq \Lambda^2 |p|^2 \end{equation} for some constants $\lambda,\, \Lambda > 0$, and for all $(x,\,t) \in \mathbb{R}^{n+1}$ and $p \in \mathbb{C}^n$. By a solution we mean that $u \in L^2_{loc,\,t}(H^1_{loc,\,x})$ solves (\ref{Equation}) in the sense of distributions. We note that (\ref{Equation}) can be viewed as a uniformly parabolic system of the form \begin{equation}\label{System} \partial_tv^{\alpha} - \partial_k(B^{kl}_{\alpha\beta}(x,\,t)v^{\beta}_l) = 0, \quad 1 \leq k,\,l \leq n, \quad 1 \leq \alpha,\,\beta \leq 2. \end{equation} Here $u = v^1 + iv^2$ and $B_{11} = B_{22} = \text{Re}(A),\, B_{12} = -B_{21} = -\text{Im}(A)$. We briefly discuss the elliptic case \begin{equation}\label{EllipticEquation} \text{div}(A(x)\nabla u) = 0 \end{equation} in $\mathbb{R}^n$. Solutions to (\ref{EllipticEquation}) are $C^{\alpha}$ when $n = 2$ by work of Morrey \cite{Mo}. Real-valued solutions are $C^{\alpha}$ by fundamental work of De Giorgi \cite{DG1} and Nash \cite{Na}. There are classical counterexamples to continuity for solutions to elliptic systems when $n \geq 3$ (see \cite{DG2}, \cite{GM}, \cite{Ma}). Discontinuous solutions to (\ref{EllipticEquation}) were first constructed in dimension $n \geq 5$ \cite{MNP}, and later in dimension $n \geq 3$ \cite{F}. In general, the best regularity we have for (\ref{EllipticEquation}) is $u \in W^{1,\,2 + \delta}_{loc}$ for some $\delta(n,\,\lambda,\,\Lambda) > 0$ (see \cite{Gi}), which is only slightly better than the energy class of the solutions. 
In fact, for each $\gamma > 2$ there are solutions to (\ref{EllipticEquation}) that are not in $W^{1,\,\gamma}_{loc}$ (see \cite{F}). Interestingly, the parabolic problem (\ref{Equation}) has resisted a similar understanding. Real-valued solutions are $C^{\alpha}$ \cite{Na}. In general we have the higher-integrability results $\nabla u \in L^{2 + \delta}_{loc}$ and $u \in L^{\infty}_{loc,\, t}(L^{2+\delta}_{loc, \, x})$ for some $\delta(n,\,\lambda,\,\Lambda) > 0$ (see \cite{St}, \cite{NS}). There are also examples of discontinuity from smooth data when $n \geq 3$ (\cite{FM}, and \cite{SJM}, \cite{SJ2} for more general systems). However, the examples are in $L^{\infty}_{loc,\,t}(W^{1,\,2+\delta}_{loc,\,x})$, and are thus significantly more regular than the higher-integrability results predict. When $n = 2$ the known results do not imply continuity of solutions (unlike in the elliptic case), a question that remained open for some time (see e.g. \cite{SJM}, \cite{JS}, \cite{SJ1}, \cite{SJ2}). We recently settled this problem with a counterexample \cite{M1}. Still, the example in \cite{M1} is barely irregular enough to develop a discontinuity (it is e.g. in $L^{\infty}_{loc,\,t}(L^{p}_{loc,\,x})$ for $p$ large), so the regularity gap between theory and examples remained large. The purpose of this paper is to complete the picture for (\ref{Equation}) by constructing solutions in dimension $n \geq 2$ that are exactly as irregular as the parabolic higher-integrability results allow. We also prove some Liouville theorems which explain why previous approaches only produced ``elliptic" discontinuities. Our results connect the regularity problem for (\ref{Equation}) in $\mathbb{R}^{n+1}$, in parabolic geometry, to that for the elliptic equation (\ref{EllipticEquation}) in $\mathbb{R}^{n+2}$. We make this connection precise in the next section. \section{Results} In this section we state our results.
We will deal with ``spiraling'' self-similar solutions to (\ref{Equation}) of the form \begin{equation}\label{Ansatz} u(x,\,t) = (-t)^{-\frac{\mu}{2}}\,e^{-\frac{i}{2}\log(-t)}\, w\left(\frac{x}{(-t)^{1/2}}\right). \end{equation} These are invariant under $u \rightarrow \lambda^{\mu}e^{i\log \lambda} u(\lambda x,\, \lambda^2t).$ We obtain a solution to (\ref{Equation}) on $\mathbb{R}^n \times (-\infty,\,0)$ with coefficients $A(x/(-t)^{1/2})$ if $w$ solves the elliptic equation \begin{equation}\label{SelfSimEquation} \text{div}(A(x)\nabla w) = \frac{1}{2}(iw + \mu w + x \cdot \nabla w) \end{equation} on $\mathbb{R}^n$, and $A$ satisfies (\ref{Ellipticity}) for some $\lambda,\,\Lambda > 0$. Furthermore, the solution defined by (\ref{Ansatz}) is smooth up to $t = 0$ away from $x = 0$ and develops a ``spiraling $-\mu$-homogeneous'' discontinuity at $t = 0$ provided $\mu \geq 0$ and \begin{equation}\label{Asymptotics} w = |x|^{-\mu}g(x/|x|)e^{-i\log |x|}(1 + \mathcal{E}(|x|^{-2})) \,\, \text{ on } \,\, \mathbb{R}^n \backslash B_1. \end{equation} Here $g \in C^{\infty}(S^{n-1})$ and $\mathcal{E}$ is a smooth function with $\mathcal{E}(0) = 0$. We can extend the solution to positive times e.g. by solving the heat equation with initial data $u(x,\,0) := |x|^{-\mu}g(x/|x|)e^{-i\log |x|}$, provided $\mu < n$. Our first result is: \begin{thm}\label{Counterexample} If $n \geq 2$ and $0 \leq 2\mu < n$, then there exists a nontrivial solution to (\ref{SelfSimEquation}) on $\mathbb{R}^n$ that satisfies (\ref{Asymptotics}). 
\end{thm} \noindent By taking $\mu$ arbitrarily close to $\frac{n}{2}$ we obtain as a consequence: \begin{cor}\label{Optimality} For all $n \geq 2$ and $\delta > 0$, there exists a solution to (\ref{Equation}) on $\mathbb{R}^{n+1}$ such that $$\lim_{t \rightarrow 0^{-}} \|u\|_{L^{2+\delta}_x(B_1 \times \{-t\})} = \infty, \quad \lim_{t \rightarrow 0^{-}} \|\nabla u\|_{L^{2+\delta}(B_1 \times (-1,\,-t))} = \infty.$$ \end{cor} \noindent (The ellipticity ratio $\lambda / \Lambda$ degenerates as $\delta \rightarrow 0$, in accordance with the higher-integrability results). We conclude, as in the elliptic case, that solutions to parabolic systems are only slightly better than their energy class. Our remaining results are Liouville theorems for (\ref{SelfSimEquation}). It is natural to ask whether one can construct solutions that decay any faster than we managed. Our first Liouville theorem shows this is not possible: \begin{thm}\label{Liouville} Assume that $w \in H^1_{loc}(\mathbb{R}^n)$ solves (\ref{SelfSimEquation}), with $|w| = O(|x|^{-\mu})$ and $2\mu \geq n$. Then $w \equiv 0$. \end{thm} \noindent There are nontrivial $-\mu$-homogeneous solutions to elliptic systems of the form $\text{div}(A(x)\nabla u) = 0$ in $\mathbb{R}^n$ provided $2\mu < n-2$, and there is a Liouville theorem for $-\mu$-homogeneous solutions on $\mathbb{R}^n \backslash \{0\}$ in the equality case (see \cite{M2}). Thus, Theorems \ref{Counterexample} and \ref{Liouville} mirror the elliptic results in dimension $n+2$. This agrees with the observation that the parabolic energy $L^{\infty}_t(L^2_x) + L^2_t(H^1_x)$ in $\mathbb{R}^{n+1}$ and the elliptic energy $H^1$ in $\mathbb{R}^{n+2}$ are invariant under the matching rescalings $$u \rightarrow \lambda^{n/2}u(\lambda x,\,\lambda^2 t), \quad \text{ resp. } \quad u \rightarrow \lambda^{n/2}u(\lambda x).$$ Theorem \ref{Liouville} is a consequence of parabolic energy estimates.
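The invariance of the ansatz (\ref{Ansatz}) under $u \rightarrow \lambda^{\mu}e^{i\log \lambda} u(\lambda x,\, \lambda^2t)$ holds for an arbitrary profile $w$, and is easy to confirm numerically; in the sketch below (not part of the paper) the profile and all parameter values are arbitrary choices:

```python
import cmath, math

def w(s):
    # arbitrary smooth profile; the invariance does not depend on it
    return cmath.exp(-s**2) * (1.0 + 0.3j)

def u(x, t, mu):
    # the self-similar ansatz (Ansatz), defined for t < 0
    tau = -t
    return tau**(-mu/2) * cmath.exp(-0.5j * math.log(tau)) * w(x / math.sqrt(tau))

mu, lam, x, t = 0.7, 2.0, 1.3, -0.9
rescaled = lam**mu * cmath.exp(1j * math.log(lam)) * u(lam*x, lam**2 * t, mu)
assert abs(rescaled - u(x, t, mu)) < 1e-12
```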
We can extend it to the ``elliptic regime'' $2\mu \geq n-2$ when $w$ has the monotonicity property \begin{equation}\label{Monotonicity} (2\mu + x \cdot \nabla)|w|^2 \geq 0: \end{equation} \begin{thm}\label{EllipticLiouville} Assume that $w \in H^1_{loc}(\mathbb{R}^n)$ solves (\ref{SelfSimEquation}), with $|w| = O(|x|^{-\mu})$ and $2\mu \geq n-2$. If in addition $w$ satisfies (\ref{Monotonicity}), then $w \equiv 0$. \end{thm} \noindent It is easy to check that previous examples (\cite{FM}, and \cite{SJM}, \cite{SJ2} for more general systems) satisfy condition (\ref{Monotonicity}), which explains why they have ``elliptic'' discontinuities (that is, $n \geq 3$ and $2\mu < n-2$). \vspace{2mm} The paper is organized as follows. In Section \ref{CounterexampleSection} we prove Theorem \ref{Counterexample}. In Section \ref{LiouvilleTheorems} we prove Theorems \ref{Liouville} and \ref{EllipticLiouville}. Finally, in Section \ref{OpenQuestions} we list some open questions. \section{Proof of Theorem \ref{Counterexample}}\label{CounterexampleSection} In this section we prove Theorem \ref{Counterexample}. We exploit the useful observation that if $\text{Im}(A)$ is symmetric, then the ellipticity condition (\ref{Ellipticity}) is satisfied provided $\text{Re}(A)$ is uniformly positive definite and $|A|$ is bounded (see \cite{F}). \begin{rem} Heuristically, this structure allows strong coupling between components when we view (\ref{Equation}) as the system (\ref{System}). The example in \cite{M1} has skew-symmetric imaginary coefficients, which corresponds to the symmetry $B^{kl}_{\alpha\beta} = B^{lk}_{\beta\alpha}$ of the system coefficients. In that case it is important to estimate the size of $\text{Im}(A)$ since it affects the ellipticity condition. \end{rem} \subsection{Reduction to ODE System} We first reduce (\ref{SelfSimEquation}) to an ODE system. Let $r = |x|$ and let $\nu = r^{-1}x$ be the unit radial vector.
We search for solutions of the form \begin{equation}\label{SolutionForm} w = \varphi(r)g(\nu)e^{-i\log r}. \end{equation} Then \begin{equation}\label{Gradient} \nabla w = ge^{-i\log r}(\varphi'(r) - ir^{-1}\varphi)\nu + \varphi(r)e^{-i\log r}r^{-1}\nabla_{S^{n-1}}g. \end{equation} Here and below $\nabla_{S^{n-1}}$ and $\Delta_{S^{n-1}}$ denote the usual gradient and Laplace operators on the sphere. If $$B = f(r) \nu \otimes \nu + h(r)(I - \nu \otimes \nu)$$ then we have $$B\nabla w = ge^{-i\log r}r^{n-1}(f\varphi' - ir^{-1}f\varphi) \frac{\nu}{r^{n-1}} + h\varphi e^{-i\log r}r^{-1} \nabla_{S^{n-1}}g.$$ We will choose $\varphi$ such that $\varphi'$ and $r^{-1}\varphi$ are bounded. Using that $\nu / r^{n-1}$ is divergence-free away from the origin we compute \begin{align*} \text{div}(B\nabla w) &= \\ &ge^{-i\log r}\left[\frac{(r^{n-1}f\varphi')'}{r^{n-1}} - \left(f - \frac{\Delta_{S^{n-1}}g}{g} h\right)\frac{\varphi}{r^2} - i\left(\frac{(r^{n-2}f \varphi)'}{r^{n-1}} + \frac{f \varphi'}{r}\right)\right]. \end{align*} Let $g$ be an eigenfunction of $\Delta_{S^{n-1}}$ with eigenvalue $-\lambda_g < 0$. Then the previous expression becomes $$\text{div}(B\nabla w) = ge^{-i\log r} \left[\frac{(r^{n-1}f\varphi')'}{r^{n-1}} - (f + \lambda_g h)\frac{\varphi}{r^2} - i \frac{(r^{n-2}f \varphi^2)'}{r^{n-1}\varphi}\right].$$ Thus, if we take coefficients \begin{equation}\label{Coefficients} A = \alpha I + i (\beta(r) \nu \otimes \nu + \gamma(r)(I - \nu \otimes \nu)) \end{equation} with $\alpha > 0$ constant, and $g$ is any linear function restricted to the sphere, we obtain \begin{align*} \text{div}(A\nabla w) &= ge^{-i\log r} \left[\alpha \left(\frac{(r^{n-1}\varphi')'}{r^{n-1}} - n\frac{\varphi}{r^2}\right) + \frac{(r^{n-2}\beta\varphi^2)'}{r^{n-1} \varphi} \right. \\ &+ \left. i \left(\frac{(r^{n-1}\beta\varphi')'}{r^{n-1}} - (\beta + (n-1)\gamma) \frac{\varphi}{r^2} - \alpha \frac{(r^{n-2}\varphi^2)'}{r^{n-1}\varphi}\right)\right]. 
\end{align*} Since $$iw + \mu w + x \cdot \nabla w = ge^{-i\log r}(\mu \varphi + r\varphi'),$$ the equation (\ref{SelfSimEquation}) becomes the ODE system \begin{equation}\label{ODESystem} \begin{cases} \frac{(r^{n-2}\beta\varphi^2)'}{r^{n-1} \varphi} = \frac{1}{2}(\mu \varphi + r\varphi') + n\alpha \frac{\varphi}{r^2} - \alpha \frac{(r^{n-1}\varphi')'}{r^{n-1}}, \\ (n-1)\gamma \frac{\varphi}{r^2} = -\alpha \frac{(r^{n-2}\varphi^2)'}{r^{n-1}\varphi} + \frac{(r^{n-1}\beta\varphi')'}{r^{n-1}} - \beta \frac{\varphi}{r^2}. \end{cases} \end{equation} We will fix $\varphi \sim r^{-\mu}$ and $\alpha > 0$ depending on $\mu$. Then the first equation determines $\beta$, and the second one $\gamma$. By the remark at the beginning of the section, the point is to make choices such that $\beta$ and $\gamma$ are bounded. \subsection{Solving the ODE System} Integrating the first equation in (\ref{ODESystem}) we obtain \begin{equation}\label{IntegratedEquation} \begin{split} \beta &= \frac{1}{4}\left(r^2 + \frac{2\mu-n}{r^{n-2}\varphi^2} \int_0^r \varphi^2(s)s^{n-1}\,ds \right) \\ &+ \frac{n\alpha}{r^{n-2}\varphi^2} \int_0^r \varphi^2(s)s^{n-3}\,ds \\ &+ \frac{\alpha}{r^{n-2}\varphi^2} \int_0^r \varphi'^2(s)s^{n-1}\,ds - \alpha\frac{r\varphi'}{\varphi}. \end{split} \end{equation} \begin{rem} It follows easily that if $2\mu \geq n$ and $\varphi = O(r^{-\mu})$, then $\beta$ is unbounded (compare to Theorem \ref{Liouville}). \end{rem} We define \begin{equation} \varphi(r) = \begin{cases} r,\, \quad 0 \leq r < 3/4 \\ r^{-\mu} + C_{\mu}r^{-\mu-2}, \quad r > 1 \\ \text{positive and smooth,} \quad 1/2 < r < 3/2 \end{cases} \end{equation} where $C_{\mu} \geq 0$ will be chosen later. \begin{rem}\label{InterestingCases} By Theorem \ref{EllipticLiouville} it will be necessary to take $C_{\mu} > 0$ when $2\mu \geq n-2$ (and in particular, to generate discontinuities in the case $n = 2$). 
\end{rem} \vspace{2mm} For $r < 3/4$ it is easy to check that $\beta$ and $\gamma$ are of the form $c_1(n,\,\alpha) + c_2(n,\,\mu)r^2$ (with $c_i$ linear in $\alpha$ and $\mu$) so we only need to analyze the solutions for $r$ large. We divide into three cases. \vspace{2mm} {\bf Case $1$: $2\mu < n-2$.} We take $C_{\mu} = 0$ and $\alpha = 1$. It is easy to check that $\beta$ and $\gamma$ have the form $c_1 + c_2r^{2-n+2\mu}$ for $r > 1$, which is bounded. \vspace{2mm} {\bf Case $2$: $n-2 < 2\mu < n$.} Now the quantities $$D := \int_0^{\infty} (\varphi^2 - s^{-2\mu})s^{n-1}\,ds, \quad E := \int_0^{\infty} \varphi^2 s^{n-3}\,ds, \quad F := \int_{0}^{\infty} \varphi'^2 s^{n-1}\,ds$$ are bounded, for any fixed $C_{\mu} \geq 0$. The solution (\ref{IntegratedEquation}) becomes $$\beta = \left(-\frac{n-2\mu}{4}D + \alpha(nE + F)\right)r^{2\mu - n + 2} + \mathcal{R}(1).$$ Here and below, $\mathcal{R}(1)$ denotes any smooth function on $(1,\,\infty)$ whose $j^{th}$ derivative is $O(r^{-j})$ as $r \rightarrow \infty$ for each $j \geq 0$. Using the definition of $\varphi$ we estimate \begin{align*} D &\geq -\int_0^1 s^{n-1-2\mu}\,ds + 2C_{\mu}\int_1^{\infty} s^{n-3-2\mu}\,ds \\ &= -\frac{1}{n-2\mu} + \frac{2C_{\mu}}{2\mu - n + 2}. \end{align*} We conclude that $$-\frac{n-2\mu}{4}D \leq \frac{1}{4} - \frac{n-2\mu}{2(2\mu-n+2)}C_{\mu} < 0$$ provided we choose $C_{\mu}$ large. We may then choose $\alpha > 0$ small so that $$-\frac{n-2\mu}{4}D + \alpha(nE + F) = 0,$$ hence $$\beta = \mathcal{R}(1).$$ Solving the second equation in (\ref{ODESystem}) for $\gamma$ gives $$\gamma = \mathcal{R}(1),$$ which completes this case. \vspace{2mm} {\bf Case $3$: $2\mu = n-2$.} This case is similar to the case $2\mu > n-2$, except to leading order $\beta$ grows logarithmically. 
Computing (\ref{IntegratedEquation}) gives $$\beta = \left(-C_{\mu} + \alpha \left(n + \frac{1}{4}(n-2)^2\right)\right)\log r + \mathcal{R}(1).$$ Choosing $C_{\mu}$ and $\alpha$ to satisfy the relation $$C_{\mu} = \left(n + \frac{1}{4}(n-2)^2\right)\alpha$$ we arrive at the same conclusion as in Case $2$, completing the construction. \subsection{Proof of Theorem \ref{Counterexample}} \begin{proof}[{\bf Proof of Theorem \ref{Counterexample}}:] For $0 \leq 2\mu < n$, take $\varphi,\, g,\, \alpha,\, \beta,\, \gamma$ as constructed above. Then the function $$w = \varphi(r)g(\nu)e^{-i \log r}$$ solves the equation (\ref{SelfSimEquation}) in $\mathbb{R}^n$ with bounded coefficients $$A = \alpha I + i(\beta(r) \nu \otimes \nu + \gamma(r) (I-\nu \otimes \nu))$$ and has the asymptotics (\ref{Asymptotics}). Since $\alpha > 0$ is constant and $\text{Im}(A)$ is symmetric, the coefficients satisfy the ellipticity condition (\ref{Ellipticity}), completing the proof. \end{proof} \begin{rem} In our construction, $w$ is Lipschitz but no better at $0$, and smooth but not analytic away from $0$. This is a consequence of choices we made for computational convenience. It is not hard to modify the construction so that $w$ is analytic on $\mathbb{R}^n$, e.g. by taking $w = \varphi(r)g(\nu)e^{-\frac{i}{2}\log(1+r^2)}$ with $g$ as above and $$\varphi = r\left((1+r^2)^{-\frac{\mu+1}{2}} + C_{\mu}(1+r^2)^{-\frac{\mu+3}{2}}\right).$$ The coefficients $A(x)$ also become analytic with these modifications. \end{rem} \section{Liouville Theorems}\label{LiouvilleTheorems} In this section we prove the Liouville theorems Theorem \ref{Liouville} and Theorem \ref{EllipticLiouville}. \subsection{Proof of Theorem \ref{Liouville}} \begin{proof}[{\bf Proof of Theorem \ref{Liouville}}] Let $\psi \in C^{\infty}_0(\mathbb{R}^n)$ be real-valued. 
Multiplying (\ref{SelfSimEquation}) by $\overline{w}\psi^2$ we obtain \begin{equation}\label{KeyInequality} 2\text{Re}\left( \text{div}(A\nabla w)\overline{w}\psi^2 \right) = \frac{1}{2}(2\mu |w|^2 + x \cdot \nabla |w|^2)\psi^2. \end{equation} Integrating by parts and using the ellipticity condition (\ref{Ellipticity}) we get \begin{equation}\label{Caccioppoli} \begin{split} \int_{\mathbb{R}^n} (-\lambda |\nabla w|^2\psi^2 &+ C(\lambda,\,\Lambda)|w|^2|\nabla \psi|^2)\,dx \\ &\geq \frac{2\mu - n}{2} \int_{\mathbb{R}^n} |w|^2\psi^2\,dx - \frac{1}{2}\int_{\mathbb{R}^n} |w|^2 x \cdot \nabla (\psi^2)\,dx. \end{split} \end{equation} Since $2\mu \geq n$, the first term on the right side is non-negative. We now fix our choice of $\psi$. Let $\psi_1$ be a smooth, radially decreasing function supported in $B_2$ with $\psi_1 \equiv 1$ in $B_1$, and let $\psi_R := \psi_1(R^{-1}x)$. Take $\psi = \psi_R$. Then the second term on the right side of (\ref{Caccioppoli}) is non-negative, so the right side is non-negative. Using that $|w|^2|\nabla \psi|^2 = O(R^{-2\mu-2})$ in $B_{2R} \backslash B_R$ we conclude that $$\int_{B_R} |\nabla w|^2\,dx = O(R^{n-2\mu-2}) = O(R^{-2}),$$ completing the proof. \end{proof} \subsection{Proof of Theorem \ref{EllipticLiouville}} \begin{proof}[{\bf Proof of Theorem \ref{EllipticLiouville}}] We start again with the identity (\ref{KeyInequality}). By (\ref{Monotonicity}) the right side of (\ref{KeyInequality}) is non-negative. Integrating by parts gives the Caccioppoli inequality $$\int_{\mathbb{R}^n} |\nabla w|^2\psi^2\,dx \leq C(\lambda,\,\Lambda) \int_{\mathbb{R}^n} |w|^2|\nabla \psi|^2\,dx.$$ Choosing $\psi$ as before, we recover the inequality $$\int_{B_{R}} |\nabla w|^2\,dx = O(R^{n-2\mu-2}),$$ which proves the theorem when $2\mu > n-2$. 
In the critical case $2\mu = n-2$, use instead $$\psi = \begin{cases} 1 \text{ in } B_1, \\ 1-\log(r)/\log(R) \text{ in } B_R \backslash B_1, \\ 0 \text{ in } \mathbb{R}^n \backslash B_R \end{cases} $$ to obtain $$\int_{B_{\sqrt{R}}} |\nabla w|^2\,dx = O\left(\frac{1}{\log R}\right).$$ \end{proof} \section{Some Questions}\label{OpenQuestions} To conclude we list some open questions. \vspace{2mm} \begin{enumerate} \item Our examples have coefficients with symmetric imaginary part. Similar constructions might be possible with skew-symmetric imaginary coefficients, using techniques from \cite{M1}. In this setting the imaginary coefficients play a role in ellipticity. \vspace{2mm} \item For elliptic systems there is a sharp condition on the spectrum of the coefficients that guarantees continuity of solutions \cite{Ko}. Sufficient conditions are known in the parabolic case (\cite{Ko}, \cite{Ka}). It would be interesting to investigate how closely our counterexamples match these conditions. \vspace{2mm} \item Solutions to parabolic systems in dimension $n \geq 3$ can be discontinuous on very large sets \cite{SJ1}. It is natural to ask how large the discontinuity set can be when $n = 2$. Known results imply spatial continuity at almost every time, which is false when $n \geq 3$ by elliptic examples. \vspace{2mm} \item Parabolic systems with the quasilinear structure \begin{equation}\label{QuasilinearSystem} u_t - \text{div}(A(u)\nabla u) = 0 \end{equation} have a well-developed partial regularity theory and are important in applications \cite{GS}. Here the coefficients depend smoothly on $u$. Constructing solutions to (\ref{QuasilinearSystem}) becomes easier when $u \in \mathbb{R}^m$ for $m$ large because there is more room to ``disperse $u$.'' \cite{M1} contains examples of discontinuity formation for (\ref{QuasilinearSystem}) when $n = 2,\, m = 4$. One can improve to $n = 2,\, m = 3$ using similar techniques \cite{M3}. 
Continuity for solutions to (\ref{QuasilinearSystem}) in the case $n = m = 2$ (in particular, the $\mathbb{C}$-valued scalar case) remains open. It seems possible in view of Theorem \ref{EllipticLiouville} that the restrictive geometry of the target could work in favor of regularity (see the discussion in \cite{M3}). \end{enumerate} \section*{Acknowledgements} This work was supported by NSF grant DMS-1501152 and by the ERC grant ``Regularity and Stability in PDEs''. I am grateful to John Ball and Jan Kristensen for discussions, and to the Oxford Mathematical Institute for its generous hospitality during the time this work was completed.
\section{INTRODUCTION} Quantum teleportation \cite{Bennett96} enables networking participants to move an unknown quantum state between the nodes of a quantum network \cite{Gisin07}. Quantum teleportation experiments have been realized in laboratories \cite{Pirandola15,Bouwmeester97,Zhang06}, free space \cite{Yin12,Xia18}, and even ground to satellite \cite{Ren17}. The ideal teleportation of a qubit with an unknown state $\rho$ acts as an identity unitary transformation, $\chi_{I}$, on the transmitted state, i.e., $\chi_{I}(\rho)=\rho$, as illustrated in Figs.~\ref{basicidea}(a) and \ref{basicidea}(b). However, if such a quantum process is attacked by an eavesdropper, or manipulated by untrusted networking participants, the performance of all the networking tasks underlying the quantum teleportation process becomes questionable \cite{Pirandola15}. Thus, the problem of identifying genuinely quantum teleportation through quantum networks, and ruling out any classical strategies of mimicry, poses an interesting but significant challenge to both quantum-information processing and practical implementation. In particular, while it is known that networking teleportation is fueled by quantum operations and entangled pairs shared between participants, it is not yet clear how the teleportation task can be utilized to quantitatively characterize the quantum correlations underlying the network. In order to tackle this problem, we introduce the concept of a genuinely classical process (GCP) to simulate the ideal quantum teleportation process, $\chi_{I}$, and provide a strategy for mimicking teleportation by classical physics. The proposed formalism not only provides a benchmark of faithful teleportation, but also gives the means to classify the quantum correlations between the quantum nodes. 
In contrast to existing theories, which utilize the \textit{state characteristics} to verify teleportation \cite{He15,Chiu16,Cavalcanti17} and quantum correlations (e.g., Bell nonlocality \cite{Brunner14}, nonbilocality \cite{Branciard10,Branciard12,Saunderse17,Carvacho17}, and non-$N$ locality \cite{Tavakoli14,Rosset16,Tavakoli16,Lee18}), the proposed formalism is truly task-oriented, and is thus well suited to the characterization of general many-node networking teleportation and its underlying quantum resources. Moreover, the formalism can be readily implemented in a wide variety of present experiments on teleportation, as will be shown in subsequent sections. \begin{figure}[t] \includegraphics[width=8.3cm]{Concept.pdf} \caption{Quantum teleportation and its classical mimicry. (a) Implementation of teleportation. The sender node (Alice) and receiver node (Bob) of the network first share a qubit pair from an entanglement source (E). Alice performs quantum joint-state measurement on the transmitted qubit with a state $\rho$ and half of the entangled pair held by her in the basis of Bell states (not shown). She then sends her measurement result to Bob. Finally, depending on Alice's measurement outcome, Bob performs local operations on his half of the entangled pair to recover the unknown state. (b) Notably, this input-output procedure acts as an identity quantum process $\chi_{I}$. (c) The proposed formalism of a genuinely classical process (GCP) $\chi_{GC}$ is used to simulate $\chi_{I}$ and derive faithful criteria for experiments. (d) The GCP formalism is sufficiently general to encompass the local hidden variable (LHV) model for mimicking teleportation using a classical source (S).}\label{basicidea} \end{figure} \section{Genuinely classical processes} We define a GCP as a set of three steps describing the system state and its evolution. 
In particular, we assume that the input particle undergoes a generic physical process and decays into a classical system that is considered to be a physical object with properties satisfying the assumption of realism \cite{Brunner14}. The system then evolves in accordance with classical stochastic theory \cite{Breuer&Petruccione02}. Finally, once the process is complete, the system resides in its final output state with properties satisfying the assumption of realism. Since a GCP treats the initial system as a physical object with properties satisfying the assumption of realism, the system can be modeled as a state described by a fixed set of physical properties $\textbf{v}_{\xi}$. Assume that the system is described by three properties, say $V_{1}$, $V_{2}$ and $V_{3}$, where each property has two possible states. There therefore exist $2^{3}=8$ sets underlying the classical object: $\textbf{v}_{\xi}(\text{v}_{1},\text{v}_{2},\text{v}_{3})$, where $\text{v}_{1},\text{v}_{2},\text{v}_{3}\in\{+1,-1\}$ represent the possible measurement outcomes for $V_{1}$, $V_{2}$ and $V_{3}$, respectively. The subsequent classical evolution of the system changes the system from an initial state $\textbf{v}^{(a)}_{\xi}$ to a final state $\textbf{v}^{(b)}_{\mu}$ according to the transition probabilities $\Omega_{\xi\mu}$. Therefore, the relationship between a specific input state of the $i$th physical property of the system, e.g., $\text{v}_{i}=v^{(a)}_{i}$, and a specific output state of the $j$th physical property $v^{(b)}_{j}$ can be characterized by \begin{equation} P(v^{(b)}_{j}|v^{(a)}_{i})=\sum_{\xi,\mu}P(\textbf{v}^{(a)}_{\xi}|v^{(a)}_{i})\Omega_{\xi\mu}P(v^{(b)}_{j}|\textbf{v}^{(b)}_{\mu}).\label{resultingstate} \end{equation} [see Fig.~\ref{basicidea}(c)]. Let process tomography (PT), a particular application of the quantum operations formalism \cite{Chuang96,Nielsen&Chuang00}, be used to systematically exploit the experimentally measurable quantities given in Eq. 
(\ref{resultingstate}). After PT, the GCP can be completely characterized with a positive Hermitian matrix, called the process matrix, of the form \begin{equation} \chi_{GC}= \frac{1}{4}\left[ \begin{matrix} \hat{I}_{gc}+\hat{V}_{gc,3} & \hat{V}_{gc,1}+i\hat{V}_{gc,2} \\ \hat{V}_{gc,1}-i\hat{V}_{gc,2} & \hat{I}_{gc}-\hat{V}_{gc,3} \end{matrix} \right],\label{process_tomography} \end{equation} where $\hat{I}_{gc}\equiv\rho_{v^{(a)}_{i}=+1}+\rho_{v^{(a)}_{i}=-1}$ and $\hat{V}_{gc,i}\equiv\rho_{v^{(a)}_{i}=+1}-\rho_{v^{(a)}_{i}=-1}$ for $i=1,2,3$ (see Appendix \ref{derived_process_matrix} for details). Using Eq. (\ref{resultingstate}) and state tomography \cite{Vogel89}, the density operator of the output system conditioned on a specific initial state $v^{(a)}_{i}$ is given by \begin{equation} \rho_{v^{(a)}_{i}}=\frac{1}{2}(\hat{I}+\sum_{j=1}^{3}\sum_{v^{(b)}_{j}=\pm 1}v^{(b)}_{j}P(v^{(b)}_{j}|v^{(a)}_{i})\hat{V}^{(b)}_{j}),\label{rhogc} \end{equation} where $\hat{I}$ denotes the identity operator and the observables $\hat{V}_{j}^{(b)}$ are the quantum analogs of the physical properties $V_{j}^{(b)}$ and are complementary to each other. Notably, the above description of a system and its evolution extends the idea of the quantum output state represented by a density operator in a classical process (CP) \cite{Hsieh17}, denoted as $\chi_{C}$. As shown in Appendix \ref{comparison}, $\chi_{GC}$ can fully describe $\chi_{C}$. \section{Bell nonlocality} Suppose that a process of interest is created and its normalized process matrix, $\chi_{\text{expt}}$, is derived from experimentally available data using the PT procedure, as described above. Suppose further that the process fidelity of $\chi_{\text{expt}}$ and $\chi_{I}$ is used to evaluate the performance of the experimental process.
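The tomography pipeline behind the process matrix, Eqs.~(\ref{resultingstate}), (\ref{rhogc}) and (\ref{process_tomography}), can be illustrated with a minimal numerical sketch. The sketch below adopts two simplifying assumptions that are \emph{not} the setting used in the text: unrotated observables $\hat{V}^{(b)}_{j}=\hat{V}^{(a)}_{j}$ (i.e., $U=\hat{I}$) and classical identity dynamics $\Omega_{\xi\mu}=\delta_{\xi\mu}$. Under these assumptions the assembled $\chi_{GC}$ coincides with the Choi state of the ideal process, which illustrates why a nontrivial rotation $U$ is required to separate classical mimicry from $\chi_{I}$.

```python
import numpy as np
from itertools import product

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])
I2 = np.eye(2, dtype=complex)
V = [X, Y, Z]  # simplified (unrotated) observables; the text instead uses U != I

states = list(product([+1, -1], repeat=3))  # the 2^3 realistic states v_xi
Omega = np.eye(8)                           # classical identity dynamics (assumption)

def prob(j, v, i, w):
    """P(v_j^(b) = v | v_i^(a) = w): uniform weight over the realistic
    states consistent with v_i^(a) = w, propagated through Omega."""
    init = [x for x, s in enumerate(states) if s[i] == w]
    return sum(Omega[x, m] for x in init
               for m, s in enumerate(states) if s[j] == v) / len(init)

def rho(i, w):
    """Conditional output density operator reconstructed by state tomography."""
    out = I2.copy()
    for j in range(3):
        out = out + (prob(j, +1, i, w) - prob(j, -1, i, w)) * V[j]
    return out / 2

I_gc = rho(0, +1) + rho(0, -1)              # = identity, same for any i here
V_gc = [rho(i, +1) - rho(i, -1) for i in range(3)]
chi_GC = 0.25 * np.block([[I_gc + V_gc[2], V_gc[0] + 1j * V_gc[1]],
                          [V_gc[0] - 1j * V_gc[1], I_gc - V_gc[2]]])

phi = np.zeros(4, dtype=complex); phi[0] = phi[3] = 1 / np.sqrt(2)
chi_I = np.outer(phi, phi.conj())           # Choi state of the ideal process
F = np.trace(chi_GC @ chi_I).real           # = 1 in this unrotated setting
```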
For a given set of observables $\{\hat{V}_{j}^{(a)},\hat{V}^{(b)}_{j}|j=1,2,3\}$, if the process fidelity satisfies \begin{equation} F_{\text{expt}}\equiv \text{tr}(\chi_{\text{expt}}\chi_{I})> F_{GC}\equiv\max_{\chi_{GC}} \text{tr}(\chi_{GC}\chi_{I}),\label{fidelity} \end{equation} then the experimental process $\chi_{\text{expt}}$ is qualified as truly nonclassical and is close to teleportation. The overriding goal of Eq. (\ref{fidelity}) is to rule out the best classical mimicry of ideal teleportation $\chi_{I}$. The best achievable classical mimicry can be evaluated by solving the following maximization task via semidefinite programming (SDP) with MATLAB \cite{Lofberg, sdpsolver}: $\max_{\chi_{GC}}\hspace{3pt}\text{tr}(\chi_{GC}\chi_{I})$, such that $\chi_{GC}\geq 0,\hspace{3pt}\text{tr}(\chi_{GC})=1$, $\Omega_{\xi\mu} \geq 0$ $\forall\ \xi$,$\mu$. The above constraints ensure that the GCP matrix $\chi_{GC}$ satisfies the defining properties of both a process matrix and a density operator. Here the computational cost depends on the number of possible transition probabilities $\Omega_{\xi\mu}$ under evaluation, and thus relies only on the dimension of the input system (see Appendix \ref{Computational_cost}). Since the observables for the PT procedure are chosen as $\hat{V}^{(a)}_{1}=X, \hat{V}^{(a)}_{2}=Y,$ $\hat{V}^{(a)}_{3}=Z$ for the input states and $\hat{V}^{(b)}_{1}=UXU^{\dag}, \hat{V}^{(b)}_{2}=UYU^{\dag},$ $\hat{V}^{(b)}_{3}=UZU^{\dag}$ for the output states, where $U=\left|0\right\rangle\!\!\left\langle0\right|+\exp(i\pi/4)\left|1\right\rangle\!\!\left\langle1\right|$ and $X$, $Y$, and $Z$ are the Pauli matrices, the clearest distinction possible is obtained between the classical result and the quantum mechanical prediction. The closest similarity to teleportation is \begin{equation} F_{GC}\sim0.8536,\label{fgc} \end{equation} under the above measurement setting (see Appendix \ref{Clearest_distinction}).
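The threshold (\ref{fgc}) can be cross-checked against the phase-damping mimicry (\ref{processmatrix8536}) by comparing Choi matrices. In the sketch below we identify the reported value $0.8536$ with $(2+\sqrt{2})/4$; this identification is our assumption and is not stated explicitly in the text.

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

phi = np.zeros(4, dtype=complex); phi[0] = phi[3] = 1 / np.sqrt(2)
chi_I = np.outer(phi, phi.conj())      # Choi state of ideal teleportation

def choi(kraus_ops):
    """Normalized Choi matrix of a qubit channel from its Kraus operators."""
    return sum(np.kron(K, I2) @ chi_I @ np.kron(K, I2).conj().T
               for K in kraus_ops)

p = (2 + np.sqrt(2)) / 4               # our reading of the value 0.8536 (assumed)
chi_GC = choi([np.sqrt(p) * I2, np.sqrt(1 - p) * Z])  # phase damping with weight p
F_GC = np.trace(chi_GC @ chi_I).real   # process fidelity, ~0.8536
F_avg = (2 * F_GC + 1) / 3             # average state fidelity, ~0.9024
```

The last line reproduces the average state fidelity bound $\bar{F}_{GC,s}\sim0.9024$ quoted below via $\bar{F}_{s}=(2F+1)/3$.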
(Note that the same measurement setting is used for all the networking cases presented in the remainder of the text.) It is worth noting that, for the best classical simulation (\ref{fgc}), $\chi_{GC}$ mimics $\chi_{I}$ as a phase damping process \cite{Nielsen&Chuang00} \begin{equation} \chi_{GC}(\rho)=0.8536\hat{I}\rho\hat{I}^{\dag}+0.1464Z\rho Z^{\dag},\label{processmatrix8536} \end{equation} with noise intensity $0.1464$ (see Appendix \ref{Clearest_distinction}). The performance inspection described here relies only on the preparation of four different input states \cite{four_input} and the relevant output state tomography for PT. Therefore, existing reported experiments on teleporting qubits are sufficient for checking the teleportation performance \cite{Pirandola15,Yin12,Bouwmeester97,Ren17,Xia18,Zhang06}. It is noted that the criterion proposed above is stricter than the existing criterion used to identify faithful teleportation as a means of ruling out the measure-prepare strategy (a direct classical mimicry strategy) \cite{Pirandola15,Measure-prepare_strategy,Massar95}. The best capability for the measure-prepare strategy to mimic teleportation is $F_{\text{expt}}=0.5$, i.e., the average state fidelity of the input and output states, $\bar{F}_{\text{expt,s}}=2/3\sim0.6667$, where $\bar{F}_{\text{expt,s}}=(2F_{\text{expt}}+1)/3$ \cite{Gilchrist05}. However, according to the fidelity criterion proposed in Eqs. (\ref{fidelity}) and (\ref{fgc}), the average state fidelity is $\bar{F}_{\text{expt,s}}>\bar{F}_{GC,s}\sim0.9024$. This result implies that not all entangled states can demonstrate a teleportation process that goes beyond $\chi_{GC}$ \cite{Cavalcanti17}. A GCP can be treated as an input-output transformation implemented by sharing local hidden variables (LHVs) between the parties involved (Alice and Bob). 
Equation~(\ref{resultingstate}) in the LHV model thus becomes $P(v^{(b)}_{j}|v^{(a)}_{i})=2\sum_{\lambda}P(v^{(a)}_{i}|\lambda)P(\lambda)P(v^{(b)}_{j}|\lambda)$. See Appendix \ref{process_LHV} for details. As shown in Fig.~\ref{basicidea}(d), LHV $\lambda$ describes the connection between the input state $v^{(a)}_{i}$ and the output state $v^{(b)}_{j}$. Moreover, the distribution $P(\lambda)$ determines the process matrix $\chi_{GC}$, by which one can consider the correlation of $\chi_{GC}$ as \textit{Bell local}. Therefore, since $F_{\text{expt}}>F_{GC}$, the resulting process possesses \textit{Bell nonlocality} to enable the networking participants to perform qualified experimental teleportation. This approach to testing the Bell local model is different from that of existing Bell tests, which all use Bell-like inequalities \cite{Brunner14,Branciard10,Branciard12,Saunderse17,Carvacho17,Tavakoli14,Rosset16,Tavakoli16}. (Notably, the manner in which a concrete information task together with its implementation can be described by a LHV model for networking process is not included in these standard Bell tests.) \section{Quantum correlations of three-node network} Teleporting unknown qubits through a linear network composed of three quantum nodes can be implemented by repeating the bipartite teleportation procedure twice in parallel [Fig.~\ref{three-node}(a)]. For example, assume that it is desired to teleport a qubit with state $\rho$ from the first node in a network (say, Alice) to the end node in the network (say, Charlie) through an intermediate node (say, Bob). Since the overall teleportation procedure consists of two ideal input-output subprocesses connecting Alice and Bob, $\chi_{I1}$, and Bob and Charlie, $\chi_{I2}$, respectively, the resultant process, $\chi_{I12}=\chi_{I1}\circ\chi_{I2}$, is still an identity unitary transformation with the mapping $\chi_{I12}(\rho)=\rho$, where $\circ$ denotes the concatenation operator. 
In other words, the general criterion given in Eq. (\ref{fidelity}) for identifying teleportation between two nodes still holds for three-node quantum networks. \begin{figure}[t] \includegraphics[width=6.5cm]{three-node.pdf} \caption{Networking teleportation and its classical mimicry. (a) A three-node teleportation process is implemented with two entanglement sources ($E_{1,2}$) and two Bell state measurements on Alice's node and Bob's node, respectively (not shown). Two dependent classical sources ($S_{1,2}$) are used for teleportation simulation under the Bell local model, and hence (b) the resulting process $\chi_{GC12}$ is a GCP involving all three participants. By contrast, if the sources are independent, the underlying correlations become bilocal (c), and the resulting process $\chi_{GC1|2}$ is composed of two individual GCPs, i.e., $\chi_{GC1}$ and $\chi_{GC2}$.}\label{three-node} \end{figure} From a classical viewpoint, the three-node networking task described above can be simulated using the same LHV model as that used for the two-node case. That is, the distribution of LHV $P(\lambda)$ determines the resultant GCP, where LHV $\lambda$ correlates Alice's inputs and Charlie's outputs and then results in a specific process [Fig.~\ref{three-node}(b)]. For the measurement setting given above, the closest similarity to the three-node teleportation process that can be achieved by $\chi_{GC12}$ is quantified as \begin{equation} F_{GC12}\equiv \max_{\chi_{GC12}}\hspace{3pt}\text{tr}(\chi_{GC12}\chi_{I12})\simeq0.8536.\label{FGC12} \end{equation} The correlation of an experimental three-node qubit transmission process is then Bell nonlocal if the experimental process $\chi_{\text{expt}12}$ satisfies the criterion $F_{\text{expt}12}\equiv\text{tr}(\chi_{\text{expt}12}\chi_{I12})>F_{GC12}$. Ideal three-node teleportation requires two entangled pairs. 
When transmitting qubits between distant nodes in a general network, these pairs are inevitably generated by two spatially separated independent sources \cite{Saunderse17,Carvacho17}. As a result, the classical strategy using a single LHV $\lambda$ to mimic teleportation must be modified accordingly. Given the assumption of independent sources, one can reasonably assign an individual LHV to each subprocess. Let $\lambda_{1}$ and $\lambda_{2}$ be the LHVs assigned to the state transmissions between Alice and Bob, and Bob and Charlie, respectively [see Fig.~\ref{three-node}(c)]. The relation between the inputs and outputs of each subprocess is totally determined by the underlying LHVs $\lambda_{k}$ ($k=1,2$). In the three-node case, the single-variable distribution $P(\lambda)$ of the original two-node LHV model is thus replaced by the factorized joint distribution $P(\lambda_{1})P(\lambda_{2})$, which implies that $P(v^{(b)}_{j}|v^{(a)}_{i})=P(v^{(b)}_{j})$. Therefore, $P(\lambda_{1})$ and $P(\lambda_{2})$ determine their individual GCPs, say $\chi_{GC1}$ and $\chi_{GC2}$, respectively. The resulting GCP in the three-node network is specified by $\chi_{GC1|2}\equiv\chi_{GC1}\circ\chi_{GC2}$, where the correlation of $\chi_{GC1|2}$ is referred to as \textit{bilocal}. The maximum fidelity of $\chi_{GC1|2}$ and $\chi_{I12}$ can be regarded as a threshold for the bilocal model. For the present case, the fidelity threshold is given as \begin{equation} F_{GC1|2}\equiv \max_{\chi_{GC1|2}}\hspace{3pt}\text{tr}(\chi_{GC1|2}\chi_{I12})\simeq0.7500\label{FGC1|2} \end{equation} (see Appendix \ref{derived_fidelities}).
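As a plausibility check on the value in (\ref{FGC1|2}), again assuming (our reading, not stated in the text) that the single-link optimum is the phase-damping process (\ref{processmatrix8536}) with identity weight $p=(2+\sqrt{2})/4$: concatenating two such links leaves the transmitted state unflipped only when the $Z$ error occurs in neither or in both links, and the resulting identity weight matches the bilocal threshold.

```python
import numpy as np

p = (2 + np.sqrt(2)) / 4        # single-link identity weight (assumed value)
# Z errors in the two links compose: Z @ Z = I, so the concatenated process
# acts as the identity with probability p^2 + (1 - p)^2.
F_bilocal = p**2 + (1 - p)**2   # = 0.75, matching the bilocal threshold
```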
When each subprocess matrix is experimentally measured by PT as $\chi_{\text{expt}k}$ for $k=1,2$, the correlation of the joint process $\chi_{\text{expt}1|2}=\chi_{\text{expt}1}\circ\chi_{\text{expt}2}$ is \textit{nonbilocal} (or possesses \textit{nonbilocality}), if $F_{\text{expt}1|2}\equiv\text{tr}(\chi_{\text{expt}1|2}\chi_{I12})>F_{GC1|2}$. If the receiver, Charlie, manipulates the received system under quantum operations and trusts his measurement equipment, then the above-mentioned bilocal model becomes a LHV-LHS (local hidden state \cite{Wiseman07}) hybrid model provided that the sources are independent. The transmission between Bob and Charlie can then be described by a classical process matrix \cite{Hsieh17}, $\chi_{C2}$, while the subprocess for Alice and Bob is specified by $\chi_{GC1}$. However, if the two subprocesses share the same pair (i.e., the sources are dependent), then the LHS model specifies the resulting process as being classical by $\chi_{C12}$. For the present measurement setting, the closest similarities between $\chi_{C12}$ and $\chi_{I12}$, and between the hybrid process $\chi_{GC1|C2}\equiv\chi_{GC1}\circ\chi_{C2}$ and an ideal teleportation process, are given as follows: \begin{eqnarray} &&F_{C12}\equiv \max_{\chi_{C12}}\hspace{3pt}\text{tr}(\chi_{C12}\chi_{I12})\simeq0.6830,\label{FC12}\\ &&F_{GC1|C2}\equiv \max_{\chi_{GC1|C2}}\hspace{3pt}\text{tr}(\chi_{GC1|C2}\chi_{I12})\simeq0.5985\label{FGC1|C2} \end{eqnarray} (see Appendix \ref{derived_fidelities}). Thus, $F_{\text{expt}12}>F_{C12}$ implies steering in $\chi_{\text{expt}12}$, while $F_{\text{expt}1|2}>F_{GC1|C2}$ implies that the networking process $\chi_{\text{expt}1|2}$ has a nonlocality-steering hybrid correlation. \begin{figure}[t] \includegraphics[width=8.7cm]{hybridprocess_noise.pdf} \caption{Quantum correlations in noisy three-node teleportation. 
When the entangled state $\left|\phi^+\right\rangle=(\left|00\right\rangle+\left|11\right\rangle)/\sqrt{2}$ created by source $E_{k}$ mixes with white noise and becomes $\rho_{E_{k}}=(1-p_{\text{noise}k})\left|\phi^+\right\rangle\!\!\left\langle\phi^+\right|+p_{\text{noise}k}\hat{I}/4$ for $k=1,2$, the experimental process fidelities, i.e., (a) $F_{\text{expt}12}$, $F_{\text{expt}1|2}$ and (b) $F_{\text{expt112}}$, decrease with increasing noise intensity $p_{\text{noise}k}$. (a) Applying the fidelity thresholds given in Eqs.~(\ref{FGC12})-(\ref{FGC1|C2}), the underlying quantum correlations can be discriminated in accordance with Eq.~(\ref{levels}). Meanwhile, (b) the nonbilocality and nonlocality-steering hybrid correlations can be identified through the criteria given in Eq.~(\ref{levels2}).}\label{hybridprocess_noise} \end{figure} The fidelity thresholds in Eqs.~(\ref{FGC12})-(\ref{FGC1|C2}) suggest the existence of the following hierarchy between the Bell nonlocality, nonbilocality, steering and nonlocality-steering hybrid correlations of the teleportation process: \begin{equation} \begin{split} &F_{GC12}<F_{\text{expt}12} \ \leq 1 \ \ \ \ \ \ \ \ \ \ \text{Bell nonlocality},\\ &F_{GC1|2}<F_{\text{expt}1|2} \leq F_{GC12} \ \ \ \text{nonbilocality}, \\ &F_{C12}<F_{\text{expt}12} \leq \ F_{GC1|2}\ \ \ \ \ \text{steering},\\ &F_{GC1|C2}<F_{\text{expt}1|2} \leq F_{C12}\ \ \ \text{nonlocality-steering hybrid}. \end{split}\label{levels} \end{equation} Compared with the process $\chi_{GC1|2}$ under the bilocal model, the process $\chi_{GC12}$ under the Bell-local model achieves a better simulation of ideal teleportation in terms of the process fidelity. Thus, the correlations of a networking process $\chi_{\text{expt}12}$ that are identified as nonlocal through the threshold in Eq.~(\ref{FGC12}) always go beyond the bilocal description $\chi_{GC1|2}$. However, nonbilocality of $\chi_{\text{expt}1|2}$ does not necessarily imply the existence of Bell nonlocality.
(Note that the nonbilocality, steering and nonlocality-steering hybrid correlations can be compared and analyzed in an analogous manner.) In addition to the criteria for discriminating correlations given in Eq.~(\ref{levels}), the experimental process matrices $\chi_{\text{expt}12}$ and $\chi_{\text{expt}1}$ alone can be sufficient to verify the underlying correlations for teleportation. For example, consider the process fidelity $F_{\text{expt112}}\equiv\text{tr}(\chi_{\text{expt}1}\circ\chi_{\text{expt}12}\chi_{I12})$. The nonbilocality and nonlocality-steering hybrid correlations of $\chi_{\text{expt}12}$ can then be identified according to the following criteria: \begin{equation} \begin{split} &F_{\text{expt112}}>F_{GC1|2}\ \ \ \ \ \text{nonbilocality}, \\ &F_{\text{expt112}}>F_{GC1|C2}\ \ \ \text{nonlocality-steering hybrid}. \end{split}\label{levels2} \end{equation} When the correlation is bilocal, the fidelity $F_{\text{expt112}}$ attains its maximum $F_{GC1|2}$ when $\chi_{\text{expt}1}\circ\chi_{\text{expt}12}=\chi_{GC1|2}$; under the LHV-LHS hybrid assumption, it attains its maximum $F_{GC1|C2}$ when $\chi_{\text{expt}1}\circ\chi_{\text{expt}12}=\chi_{GC1|C2}$. See Fig.~\ref{hybridprocess_noise} for an illustrative example of the correlation discrimination in (\ref{levels}) and the identification criteria in (\ref{levels2}) for noisy entanglement sources. It is worth emphasizing here that $\chi_{\text{expt}1}$, $\chi_{\text{expt}2}$, and $\chi_{\text{expt}12}$ provide a complete description of the operations and errors involved in the three-node experiment. Thus, the resulting process fidelities can be used to reveal the experimental performance by referring to the correlation hierarchy in (\ref{levels}) and the identification criteria in (\ref{levels2}).
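The discrimination in Eqs.~(\ref{levels}) and (\ref{levels2}) amounts to comparing measured fidelities against fixed thresholds, which can be sketched in a few lines of Python. This is an illustration only: the function name is ours, and the Bell-local threshold $F_{GC12}$ (given by Eq.~(\ref{FGC12}) earlier in the paper, not quoted in this section) is taken as an input parameter.

```python
# Sketch of the correlation discrimination in Eq. (levels); helper names
# are illustrative, not from the paper.
F_GC1_2 = 0.7500   # bilocal threshold, Eq. (FGC1|2)
F_C12 = 0.6830     # classical (LHS) threshold, Eq. (FC12)
F_GC1_C2 = 0.5985  # LHV-LHS hybrid threshold, Eq. (FGC1|C2)

def discriminate(f_expt12, f_expt1_2, f_gc12):
    """Return the correlations certified by the hierarchy in Eq. (levels).

    f_expt12  : fidelity of the joint experimental process chi_expt12
    f_expt1_2 : fidelity of the composed process chi_expt1|2
    f_gc12    : Bell-local threshold F_GC12 from Eq. (FGC12)
    """
    certified = []
    if f_gc12 < f_expt12 <= 1:
        certified.append("Bell nonlocality")
    if F_GC1_2 < f_expt1_2 <= f_gc12:
        certified.append("nonbilocality")
    if F_C12 < f_expt12 <= F_GC1_2:
        certified.append("steering")
    if F_GC1_C2 < f_expt1_2 <= F_C12:
        certified.append("nonlocality-steering hybrid")
    return certified
```

For instance, with an illustrative value $F_{GC12}=0.85$, measured fidelities $(F_{\text{expt}12},F_{\text{expt}1|2})=(0.70,0.62)$ certify steering and the nonlocality-steering hybrid correlation.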
Such an inspection is beneficial in evaluating and improving primitive operations in networking teleportation from the viewpoints of trusted versus untrusted measurement devices and dependent versus independent sources in the experiment. \section{Non-$N$-locality} The correlation discrimination method introduced above can be readily extended to explore the quantum correlations in general many-node teleportation networks. For example, in the following, we demonstrate the quantitative characterization of non-$N$-local correlations \cite{Tavakoli14,Rosset16,Tavakoli16} of networking teleportation involving $N$ independent entanglement sources. (Note that the nonlocality-steering hybrid correlation can be characterized in the same way.) For ideal $(N+1)$-node teleportation, the resultant process remains an identity operation $\chi_{I1N}(\rho)=\rho$, where $\chi_{I1N}=\prod_{k=1}^{N}\chi_{Ik}=\chi_{I1}\circ\chi_{I2}\circ\cdots\circ\chi_{IN}$ and $\chi_{Ik}$ denotes the ideal input-output subprocess connecting the $k$th node and the $(k+1)$th node. In the $N$-local model, the whole process $\chi_{GC1|N}\equiv\prod_{k=1}^{N}\chi_{GCk}$ is composed of the subprocesses $\chi_{GCk}$ between the $k$th node and the $(k+1)$th node, each having its own underlying LHV $\lambda_{k}$. Non-$N$-locality then exists if $F_{\text{expt}1|N}\equiv\text{tr}(\chi_{I1N}\prod_{k=1}^{N}\chi_{\text{expt}k})>F_{GC1|N}\equiv \max_{\chi_{GC1|N}}\hspace{3pt}\text{tr}(\chi_{GC1|N}\chi_{I1N})$, where $F_{GC1|3}\simeq0.6768$, $F_{GC1|4}\simeq0.6250$, and $\lim_{N\rightarrow \infty}F_{GC1|N}\simeq0.5000$ (see Appendix \ref{derived_fidelities}).
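As a numerical remark, the quoted thresholds $F_{GC1|2}\simeq0.7500$, $F_{GC1|3}\simeq0.6768$, $F_{GC1|4}\simeq0.6250$ and the limiting value $0.5$ are all consistent with the closed form $(1+2^{-N/2})/2$. The snippet below records this observation; it is a fit to the four quoted values, not the derivation of Appendix \ref{derived_fidelities}.

```python
def f_gc1_n(n):
    """Closed form consistent with the quoted N-local thresholds F_GC1|N.

    This is an observed fit to the four values quoted in the text,
    not the appendix derivation: F_GC1|N = (1 + 2**(-N/2)) / 2.
    """
    return (1 + 2 ** (-n / 2)) / 2
```

The formula reproduces $0.7500$, $0.6768$, and $0.6250$ for $N=2,3,4$, and tends to $0.5$ as $N\rightarrow\infty$.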
Finally, we extend the idea of $F_{\text{expt112}}$ and introduce the experimental process fidelity $F_{\text{expt}11N}\equiv\text{tr}(\chi_{I1N}\chi_{\text{expt}1}\circ\prod_{k=2}^{N}\chi_{\text{expt}1k})$, where $\chi_{\text{expt}1k}$ describes experimental teleportation from the first node in the network to the $(k+1)$th node through all $k-1$ intermediate nodes between them. Since the maximum value of $F_{\text{expt}11N}$ predicted by the $N$-local model is $F_{GC1|N}$, it follows that $F_{\text{expt}11N}>F_{GC1|N}$ implies the existence of non-$N$-locality in experimental teleportation. \section{Summary and outlook} We have proposed a formalism, referred to as a genuinely classical process, to characterize and identify both true quantum teleportation and the underlying quantum correlations in many-node networking teleportation. We have shown that quantum-information processing can be employed to quantitatively discriminate quantum correlations. The proposed formalism is well suited to the analysis of existing experiments, and faithfully evaluates the performance of all the operations required for teleportation through quantum networks. Such a task-oriented approach raises several interesting questions, including how one can identify generic truly quantum networking tasks, such as one-way quantum computation in many-node networks, and what multipartite quantum correlations exist behind multipartite distributed quantum-information processing. \\ \begin{acknowledgments} We thank J.-W. Pan for helpful comments and discussions. This work is partially supported by the Ministry of Science and Technology, Taiwan, under Grant Numbers MOST 107-2628-M-006-001-MY4 and MOST 107-2627-E-006-001. H. Lu was partially supported by the Major Program of the Shandong Province Natural Science Foundation (grant ZR2018ZB0649). \end{acknowledgments}
\section{Introduction} This paper concerns invariants of 3-manifolds that are of interest both in geometry and in computational topology. For computational purposes, 3-manifolds are often expressed by a \emph{triangulation}, that is by gluing a collection of tetrahedra. For example, this is true for 3-manifold software {\tt SnapPea}, developed by Weeks in the early 1980s, now maintained and distributed as {\tt SnapPy}~\cite{snappy}, and for {\tt Regina}, developed by Burton~\cite{burton04-regina}. These programs have been influential in the development of 3-manifold geometry and topology. Computational topology considers the running time of algorithms. For an algorithm that takes a triangulation as input, the running time frequently depends on some measure of the ``simplicity'' of the triangulation. Note, however, that a 3-manifold can have many different triangulations. Therefore, it is important to produce triangulations that are as simple as possible. Here, we will evaluate the simplicity of a triangulation by its \emph{treewidth}. The treewidth of a triangulation is a measure of the sparsity of the gluing relations between tetrahedra; see \refdef{Treewidth}. It was first developed in graph theory~\cite{robertson86-algorithmic}, then adapted to 3-manifold triangulations. In recent years, several algorithms have been developed that are highly efficient for triangulations with low treewidth~\cite{DBLP:conf/icalp/BurtonMS15,DBLP:conf/soda/BurtonS13}, and so we would like to find triangulations of 3-manifolds with treewidth bounded in terms of well-understood properties of the manifold. One property is geometry. By the geometrisation theorem proved by Perelman (\cite{perelman02, perelman03}, or see~\cite{kleiner08-perelman}), every closed orientable 3-manifold decomposes into geometric pieces, and the hyperbolic pieces are among the most prevalent and least understood.
If a closed 3-manifold admits a hyperbolic structure, then that structure is a topological invariant of the manifold \cite{Mostow}, and so it is natural to ask if the hyperbolic geometric properties of the manifold can bound the treewidth of a triangulation. A hyperbolic invariant that has received much attention is the hyperbolic volume. By work of J{\o}rgensen and Thurston, a hyperbolic 3-manifold $M$ that has a lower bound on injectivity radius admits a triangulation with $O(\operatorname{vol}(M))$ tetrahedra (\cite{Thurston:notes}, see also \cite{KobayashiRieck}). However, if we put no restrictions on injectivity radius, then no such result holds: for a sufficiently large constant $C>0$, there are infinitely many closed hyperbolic 3-manifolds with volume bounded above by $C$, and therefore a finite number of tetrahedra cannot triangulate them all. For example, such manifolds are obtained by Dehn filling a hyperbolic manifold with finite volume, using the fact that volume decreases under Dehn filling (\cite{Thurston:notes}, or see \cite[Chapter~E]{BenedettiPetronioHyperbolicGeom}). Nevertheless, we prove in this paper that any hyperbolic 3-manifold with bounded volume admits a triangulation with bounded treewidth. \begin{theorem}\label{Thm:TreeWidth} There exists a universal constant $c>0$ such that a hyperbolic 3-manifold $M$ with volume $\operatorname{vol}(M)$ admits a triangulation with treewidth at most $c \cdot \operatorname{vol}(M)$. \end{theorem} In computer science, parameterized complexity classifies computational difficulty in terms of multiple parameters as input. Some problems that are known to require superpolynomial time in terms of the input alone, under standard computational complexity assumptions, can be solved by algorithms that are exponential in one fixed parameter, but only polynomial in the size of another. Thus they can be solved efficiently for low values of the first parameter.
An important example of this from graph theory is Courcelle's theorem, which states that many graph theory problems can be decided in linear time in the treewidth of the graph~\cite{Courcelle}. Recently, Courcelle's theorem has been adapted to 3-manifold topology by Burton and Downey~\cite{DBLP:journals/jct/BurtonD17}. Along with the rich theory of parameterized algorithms and standard dynamic programming techniques, this has led to the development of several algorithms in 3-manifold topology that are both theoretically and practically efficient, provided the input triangulation has small treewidth. Some of these parameterized algorithms have been implemented in the 3-manifold software {\tt Regina}~\cite{burton04-regina}, and have led to significant improvement in practical computations. In practice, the treewidth parameter is strongly dependent on the triangulation chosen for representing a manifold, and obtaining low treewidth triangulations can be difficult; see~\cite{DBLP:conf/soda/MariaS17} for a discussion. Unfortunately, a manifold that has a simple topological or geometric description can often be represented by a triangulation that has extremely large treewidth, with no obvious combinatorial simplifications. Therefore it is important to identify triangulations of a manifold whose treewidth is bounded by topological or geometric properties of the manifold, as in \refthm{TreeWidth}. \smallskip We also consider the converse to \refthm{TreeWidth}, and show it does not hold. In \refthm{BddTW}, we show that there exists a sequence of closed hyperbolic manifolds with bounded treewidth and volume approaching infinity. Thus, while volume gives an upper bound on treewidth, it does not give a lower bound. On the other hand, recent work of Husz{\'a}r, Spreer, and Wagner~\cite{HuszarSpreerWagner} implies that there is a sequence of 3-manifolds whose treewidth approaches infinity. 
A corollary of our result is that any such examples that are hyperbolic have volume also approaching infinity. In fact, one family of examples is the family of small manifolds with large genus constructed by Agol~\cite{Agol:Small}. For the $n$-th manifold in this family, combining work of~\cite{Agol:Small} with~\cite{HuszarSpreerWagner}, the treewidth is at least $n/2$, but the volume is $O(n^2)$. It would be interesting to find a family for which volume and treewidth grow proportionally, in order to determine whether there is hope of improving \refthm{TreeWidth}. \subsection{Crushing, carving-width, and treewidth} The proofs of \refthm{TreeWidth} and \refthm{BddTW} modify triangulations using the crushing procedure developed by Jaco and Rubinstein~\cite{JacoRubinstein:0Eff} and simplified by Burton~\cite{Burton:Crushing}. In order to prove the theorems, we show in \refcor{carving-width} that crushing does not increase a different measure of the sparsity of the gluing relations of a triangulation, namely the \emph{carving-width}; see \refdef{Carvingwidth}. For a 3-manifold triangulation, the carving-width and the treewidth differ by at most a small multiplicative constant; see \refthm{boundcngtw}. Thus crushing any finite number of times affects the treewidth by at most a multiplicative constant. Note that a usual pipeline for practical computation in 3-manifold topology consists of first simplifying a triangulation using efficient implementations of the crushing procedure, and then running computations. Corollaries \ref{Cor:carving-width} and \ref{Cor:CrushingTreewidth} guarantee that this approach does not affect the computational complexity of parameterized algorithms using the carving-width or treewidth as a parameter, such as~\cite{DBLP:journals/jct/BurtonD17,DBLP:conf/icalp/BurtonMS15,DBLP:conf/soda/BurtonS13}. Thus these results on crushing and computational complexity are important, and likely of independent interest.
\subsection{Outline} In \refsec{Treewidth} we review results on treewidth and carving-width. Then in \refsec{Crushing}, we prove that the Jaco-Rubinstein crushing procedure does not increase the carving-width of a triangulation. To prove \refthm{TreeWidth}, we use the fact that there is a universal Margulis constant $\mu$ such that any hyperbolic manifold $M$ can be obtained from its $\mu$-thick part $M^{\geq \mu}$ by hyperbolic Dehn filling; see \refsec{HyperbolicReview}. The proof begins by taking a geodesic triangulation of $M^{\geq \mu}$ with $O(\operatorname{vol}(M))$ \emph{fat} tetrahedra, i.e.\ tetrahedra with volume bounded from below (\refsec{geometry}). Next, we describe how to perform the Dehn filling without increasing the treewidth of the whole triangulation (\refsec{combinatorics}). We consequently obtain a triangulation with treewidth $O(\operatorname{vol}(M))$, and describe an explicit algorithm to construct it. The number of tetrahedra of the triangulation depends solely on the hyperbolic volume $\operatorname{vol}(M)$ and the slopes of the hyperbolic Dehn surgeries. Finally, in \refsec{cst_tw}, we prove there exists a family of closed hyperbolic 3-manifolds with unbounded volume that admits triangulations with constant treewidth. \section{Triangulations, carving-width, and treewidth}\label{Sec:Treewidth} In this section, we define several necessary terms and fix notation. Let $M$ be a closed 3-manifold. A {\em cell-decomposition} of $M$ is a pairwise-disjoint collection of $n$ oriented, compact, convex linear 3-cells $\Delta_1,\ldots,\Delta_n$ equipped with affine maps that identify (or ``glue together'') their faces in pairs, so that the underlying topological space is homeomorphic to $M$. The {\em dual graph} of a cell decomposition is the graph, with multiple arcs and loops, having a node for every 3-cell $\Delta_i$, and an arc $(\Delta_i,\Delta_j)$ for every face gluing between 3-cells $\Delta_i$ and $\Delta_j$.
A \emph{generalised triangulation} of $M$ is a cell-decomposition where all 3-cells are abstract tetrahedra. Its dual graph is naturally a 4-valent graph, corresponding to gluings of triangular faces. Generalised triangulations are widely used across major 3-manifold software packages, and they allow the representation of a rich variety of 3-manifolds using very few tetrahedra. We also encounter \emph{ideal triangulations} in this work, which are triangulations from which vertices have been removed. A removed vertex is called an \emph{ideal vertex}. \begin{remark}\label{Rem:Notation} We will be discussing both graphs and triangulations. We will refer to \emph{nodes} and \emph{arcs} of graphs, to clearly distinguish these from the \emph{vertices} and \emph{edges} of triangulations. \end{remark} The \emph{carving-width}, also known as \emph{congestion}, is a graph parameter introduced by Seymour and Thomas~\cite{Seymour1994}. \begin{definition}\label{Def:Carvingwidth} Let $\mathcal{G}$ be a graph, possibly with loops and multiple arcs between nodes, defined on $n$ nodes, and let $T$ be an unrooted binary tree, with all internal nodes of degree 3, and with $n$ leaves. An {\em embedding} $\pi$ of $\mathcal{G}$ into $T$ is an injective mapping from the nodes of $\mathcal{G}$ to the leaves of $T$. To every pair of endpoints $(u, v)$ of an arc in $\mathcal{G}$, there corresponds a unique path $p(\pi(u),\pi(v))$ in $T$, connecting leaves $\pi(u)$ and $\pi(v)$. Define the {\em congestion} of an embedding $\pi$ to be: \[ \operatorname{cng}(\pi) = \max_{a \ \text{arc of} \ T} \left| \{ (u,v) \ \text{in} \ \mathcal{G} : p(\pi(u),\pi(v)) \ \text{contains} \ a \}\right|, \] where note we count a multiple arc only once in the formula. Here $|\cdot|$ denotes the number of elements in a set. The {\em carving-width} $\operatorname{cng}(\mathcal{G})$ of a graph $\mathcal{G}$ is the minimal congestion over all its embeddings into binary trees. 
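For a fixed embedding $\pi$, the congestion in \refdef{Carvingwidth} is straightforward to compute. The following Python sketch (function names are ours; an illustration, not part of any proof) drops loops and counts multiple arcs once, exactly as stipulated in the definition.

```python
def path_edges(tree, a, b):
    """Edges of the unique path between nodes a and b of a tree
    (depth-first search; tree is an adjacency dict)."""
    stack = [(a, None, [])]
    while stack:
        node, parent, path = stack.pop()
        if node == b:
            return path
        for nxt in tree[node]:
            if nxt != parent:
                stack.append((nxt, node, path + [frozenset((node, nxt))]))
    raise ValueError("no path: not a connected tree")

def congestion(arcs, tree, pi):
    """Congestion of the embedding pi of a graph into a binary tree.
    Loops are disregarded and multiple arcs counted once, as in the text."""
    simple = {frozenset(a) for a in arcs if a[0] != a[1]}
    load = {}
    for arc in simple:
        u, v = tuple(arc)
        for e in path_edges(tree, pi[u], pi[v]):
            load[e] = load.get(e, 0) + 1
    return max(load.values(), default=0)
```

For instance, embedding the 3-node daisy chain graph of \refexa{LST} (a loop at node 1, double arcs between consecutive nodes) into the star with three leaves gives congestion two.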
The \emph{carving-width of a cell-decomposition} $\operatorname{cng}(\mathfrak{T})$ is the carving-width of its dual graph. Finally, we define the \emph{carving-width of a 3-manifold} $M$, denoted by $\operatorname{cng}(M)$, to be the minimal carving-width over all its generalised triangulations. \end{definition} In this article, our definition of congestion differs from the literature by counting a multiple arc only once in the tree embedding. However, for a graph dual to a triangulation of a 3-manifold, this only affects the carving-width by a constant multiple, since all dual graphs of triangulations have constant maximal degree four. Also, note that a loop arc $(u,u)$ leads to paths $p(\pi(u),\pi(u))$ of length 0 in a tree embedding, and can be disregarded when computing carving-width. We give an example of a tree embedding in \reffig{example}. The following additional example will be important to our applications. \begin{example}\label{Example:LST} Let $M$ be a solid torus. We describe a well-known triangulation of $M$, discussed in detail by Jaco and Rubinstein~\cite{JacoRubinstein:LayeredTriang}, called a \emph{layered} triangulation or \emph{layered solid torus}. At the core is a triangle with two sides identified to form a M\"obius band. The first tetrahedron is glued such that two of its faces glue to the core triangle, one on either side of the triangle. By gluing correctly, the result is homeomorphic to a solid torus and has boundary consisting of exactly two triangles; see~\cite{JacoRubinstein:LayeredTriang} for details. Additional tetrahedra may now be added inductively. At each step, a single tetrahedron is attached to the existing triangulation such that two of its faces are glued to the two boundary faces, covering a boundary edge (which then becomes an interior edge). The result is a solid torus with two triangular boundary faces. 
For any fixed slope on the torus, there exists a layered solid torus for which that slope is the meridian, i.e.\ bounds a disc; see for example~\cite[Theorem~4.1]{JacoSedgwick}. Consider now the dual graph of a layered solid torus. The tetrahedron at the core with its two faces identified gives a loop at the corresponding node. The other two faces are identified to a single tetrahedron, giving two arcs to the next node. Any additional node is connected by two arcs to the previous node, and by two arcs to the next node. This forms a simple daisy chain. See \reffig{daisy}, left. \end{example} \begin{figure} \centering \import{figures/}{Daisy_CarvingWidth.pdf_tex} \caption{On the left is the daisy chain graph. Shown on the right is an embedding into a tree, indicating that the carving-width is at most two.} \label{Fig:daisy} \end{figure} Note that if there are only two nodes in the daisy chain graph, the carving-width is $1$. In the more general case we obtain the following. \begin{lemma}\label{Lem:DaisyChain} The daisy chain graph with $n\geq 3$ nodes, arising as the dual of a layered solid torus, has carving-width two. \end{lemma} \begin{proof} The carving-width is at least two, because the carving-width of a graph is at least the maximal degree of a node after identifying multiple arcs, which is two for the daisy chain. We now show that the carving-width is at most two. Number the nodes of the daisy chain linearly as in \reffig{daisy}, by $a_1, a_2, \dots, a_{n-1}, a_n$, where the node $a_1$ has a loop and two arcs running to $a_2$, the node $a_2$ has an additional two arcs running to $a_3$, etc., and $a_n$ is 2-valent (two faces of the corresponding tetrahedron lie on the boundary of the layered solid torus). Form an unrooted binary tree with $n$ leaves as follows. Start with the unrooted binary tree $T_3$ with three leaves and a single 3-valent node. Label the leaves $b_1$, $b_2$, $b_3$ in a cyclic manner.
Note the two paths from $b_1$ to $b_2$ and from $b_2$ to $b_3$ run over the arc connecting the leaf $b_2$ exactly twice, and other arcs exactly once. Now inductively increase the size of the tree until it has $n$ leaves. Given an unrooted tree $T_k$ with $k$ leaves labelled $b_1, \dots, b_k$ in a cyclic manner, form $T_{k+1}$ by attaching two new leaves to the $k$-th leaf, making it a 3-valent node, and label all leaves as in $T_k$ except the two new leaves, labelled $b_k$ and $b_{k+1}$, such that the labelling on $T_{k+1}$ is still cyclic. See \reffig{daisy}, right. Note that paths from $b_i$ to $b_{i+1}$ for $i<k-1$ are identical in $T_k$ and $T_{k+1}$. For $i=k-1$ in $T_{k+1}$, the path from $b_{k-1}$ to $b_k$ runs over the arc connecting the leaf $b_{k-1}$, along the arc that connects the new 3-valent node in $T_{k+1}$, and then along the arc connecting the leaf $b_k$. The path from $b_k$ to $b_{k+1}$ runs over the two new arcs. Thus by induction, all paths from $b_i$ to $b_{i+1}$, $1\leq i\leq k$, run over arcs connecting leaves $b_2, \dots, b_k$ exactly twice, and all other arcs exactly once. Continue until $k+1=n$, and map each $a_i$ to $b_i$ in $T_n$. Thus the carving-width is at most two. \end{proof} A fundamental property of carving-width is that it decreases when taking {\em immersions}. \begin{definition}\label{Def:Immersion} Let $\mathcal{G}$ be a graph, with adjacent arcs $(u,v)$ and $(v,w)$. A \emph{lifting} of $uvw$ consists of removing all arcs $(u,v)$ and $(v,w)$ from $\mathcal{G}$, and adding arc $(u,w)$. An \emph{immersion} of $\mathcal{G}$ is a graph $\mathcal{H}$ that can be obtained from $\mathcal{G}$ by a sequence of liftings, and arc and node removals. 
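The inductive construction above can also be checked mechanically. The sketch below (an illustration with our own helper names, not part of the proof) builds the daisy chain graph and the caterpillar tree produced by the induction, embeds $a_i \mapsto b_i$, and evaluates the congestion.

```python
def daisy_arcs(n):
    """Dual graph of a layered solid torus on n tetrahedra: a loop at
    node 1 and double arcs between consecutive nodes."""
    arcs = [(1, 1)]
    for i in range(1, n):
        arcs += [(i, i + 1), (i, i + 1)]
    return arcs

def caterpillar(n):
    """Unrooted binary tree with leaves b1..bn along a spine c1..c_{n-2},
    mirroring the inductive construction in the proof (n >= 3)."""
    tree = {}
    def add(x, y):
        tree.setdefault(x, []).append(y)
        tree.setdefault(y, []).append(x)
    for i in range(1, n - 2):            # spine edges c_i -- c_{i+1}
        add('c%d' % i, 'c%d' % (i + 1))
    add('b1', 'c1')                      # b1 attaches at one end
    for i in range(1, n - 1):            # leaf b_{i+1} attaches to c_i
        add('b%d' % (i + 1), 'c%d' % i)
    add('b%d' % n, 'c%d' % (n - 2))      # bn attaches at the other end
    return tree

def path_edges_in(tree, a, b):
    """Edges of the unique path between a and b in a tree."""
    stack = [(a, None, [])]
    while stack:
        node, parent, path = stack.pop()
        if node == b:
            return path
        for nxt in tree[node]:
            if nxt != parent:
                stack.append((nxt, node, path + [frozenset((node, nxt))]))
    raise ValueError("no path")

def congestion_of(arcs, tree, pi):
    """Congestion of the embedding pi (loops dropped, multiples once)."""
    load = {}
    for arc in {frozenset(a) for a in arcs if a[0] != a[1]}:
        u, v = tuple(arc)
        for e in path_edges_in(tree, pi[u], pi[v]):
            load[e] = load.get(e, 0) + 1
    return max(load.values(), default=0)
```

For every $n\geq 3$ tried, `congestion_of(daisy_arcs(n), caterpillar(n), ...)` evaluates to two, matching the lemma.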
Equivalently, $\mathcal{H}$ is an immersion of $\mathcal{G}$ if there exists a mapping $\pi$ of the nodes of $\mathcal{H}$ to the nodes of $\mathcal{G}$ such that every arc $(u,v)$ is sent to a path from $\pi(u)$ to $\pi(v)$ in $\mathcal{G}$, and distinct arcs in $\mathcal{H}$ lead to arc-disjoint paths in $\mathcal{G}$. \end{definition} The following is standard and follows from the definitions. \begin{lemma}\label{Lem:Immersion} If $\mathcal{H}$ is an immersion of $\mathcal{G}$, then \[\operatorname{cng}(\mathcal{G}) \geq \operatorname{cng}(\mathcal{H}).\] \end{lemma} \begin{proof} The nodes of $\mathcal{H}$ are a subset of the nodes of $\mathcal{G}$, so any embedding of $\mathcal{G}$ into a tree $T$ restricts to an embedding of $\mathcal{H}$ into the same tree. Form $T_{\mathcal{H}}$ from $T$ by removing leaves of $T$ that are not the image of nodes of $\mathcal{H}$, and viewing arcs adjacent to remaining 2-valent nodes as a single arc. Then paths in $T_{\mathcal{H}}$ between nodes coming from $\mathcal{H}$ are obtained by taking paths in $T$ between nodes of $\mathcal{H}$ and removing leaves and 2-valent nodes. If $\mathcal{H}$ differs from $\mathcal{G}$ by a lifting of $uvw$, then $\mathcal{H}$ contains an arc $(u,w)$ and no arcs $(u,v)$ and $(v,w)$, while $\mathcal{G}$ contains arcs $(u,v)$ and $(v,w)$. Note that the unique path in $T$ from $\pi(u)$ to $\pi(w)$ can be obtained by taking the union of paths from $\pi(u)$ to $\pi(v)$ and from $\pi(v)$ to $\pi(w)$ and removing all arcs traversed twice. Thus the arcs in $(\pi(u),\pi(w))$ in $T$ form a subset of those in the two paths $(\pi(u),\pi(v))$ and $(\pi(v),\pi(w))$. It follows that the congestion of $\mathcal{H}$ is at most that of $\mathcal{G}$. Finally, for any node or arc removal converting $\mathcal{G}$ to $\mathcal{H}$, the corresponding paths are removed from $T$, so the number of paths $p(\pi(u),\pi(v))$ running over a fixed arc $a$ in $T$, and hence in $T_{\mathcal{H}}$, can only decrease.
\end{proof} The carving-width of a graph is closely related to \emph{treewidth}, which plays a major role in combinatorial algorithms. The \emph{treewidth} of a graph was introduced by Robertson and Seymour~\cite{robertson86-algorithmic}, and is defined as follows. \begin{definition}\label{Def:Treewidth} Let $\mathcal{G}$ be a graph with loops and multiple arcs. A {\em tree decomposition} $(X, \{B_\tau\})$ of $\mathcal{G}$ consists of a tree $X$ and {\em bags} $B_\tau$ of nodes of $\mathcal{G}$ for each node $\tau$ of $X$, for which: \begin{enumerate} \item each node $u$ in $\mathcal{G}$ belongs to some bag $B_\tau$; \item for every arc $(u,v)$ in $\mathcal{G}$, there exists a bag $B_\tau$ containing both $u$ and $v$; \item for every node $u$ in $\mathcal{G}$, the bags containing $u$ form a connected subtree of $X$. \end{enumerate} The \emph{width} of this tree decomposition is defined as $\max_{\tau \in X} |B_\tau|-1$. The \emph{treewidth of $\mathcal{G}$}, denoted $\operatorname{tw}(\mathcal{G})$, is the smallest width of any tree decomposition of $\mathcal{G}$. \end{definition} Similarly, the treewidth of a cell-decomposition is the treewidth of its dual graph, and the treewidth of a 3-manifold is the minimal treewidth over all of its generalised triangulations. \reffig{example} shows the dual graph of a $9$-tetrahedra triangulation of a $3$-manifold, along with a possible tree decomposition. The largest bags have size three, and so the width of this tree decomposition is $3-1=2$. \begin{figure}[tb] \centering \includegraphics{figures/TW_modified.pdf} \caption{The dual graph of a $3$-manifold triangulation (left), a tree decomposition of width $2$ (centre), and a tree embedding of congestion $4$ (right).} \label{Fig:example} \end{figure} Finally, treewidth and carving-width are closely related, and enjoy similar properties.
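The three conditions of \refdef{Treewidth} are mechanical to verify for a candidate decomposition. The following Python sketch (our own helper, not from any 3-manifold package) validates a tree decomposition and returns its width.

```python
def decomposition_width(nodes, arcs, tree, bags):
    """Check the three conditions of the definition of a tree
    decomposition and return its width max|B_tau| - 1.

    tree: adjacency dict of the tree X; bags: dict mapping each node
    of X to its bag (a set of graph nodes)."""
    # (1) every node of G lies in some bag
    assert all(any(u in b for b in bags.values()) for u in nodes)
    # (2) every arc of G has both endpoints in a common bag
    for u, v in arcs:
        assert any(u in b and v in b for b in bags.values())
    # (3) the bags containing u induce a connected subtree of X
    for u in nodes:
        holding = {t for t, b in bags.items() if u in b}
        start = next(iter(holding))
        seen, stack = {start}, [start]
        while stack:
            t = stack.pop()
            for s in tree[t]:
                if s in holding and s not in seen:
                    seen.add(s)
                    stack.append(s)
        assert seen == holding
    return max(len(b) for b in bags.values()) - 1
```

For instance, the path graph on nodes $1,2,3$ with bags $\{1,2\}$ and $\{2,3\}$ on a two-node tree is a valid decomposition of width one.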
First, they only differ by a constant multiplicative factor: \begin{theorem}[Theorem~1 of \cite{DBLP:journals/jct/Bienstock90}] \label{Thm:boundcngtw} Let $\mathcal{G}$ be a graph of maximal degree $d$. Then, \[ \frac{2}{3} (\operatorname{tw}(\mathcal{G}) +1) \leq \operatorname{cng}(\mathcal{G}) \leq d (\operatorname{tw}(\mathcal{G}) +1). \] \end{theorem} Note that for dual graphs of generalised triangulations, the degree of every node is at most four, and treewidth and carving-width consequently differ by a small multiplicative constant. The decision problem associated to computing the treewidth or carving-width of a graph is NP-complete~\cite{Arnborg:1987:CFE:37170.37183,Seymour1994}. However, both treewidth and carving-width, together with an optimal tree decomposition or embedding into a tree, can be computed in time $O(f(k) \cdot n)$ on graphs with $n$ nodes and treewidth/carving-width at most $k$~\cite{DBLP:journals/siamcomp/Bodlaender96,10.1007/3-540-40996-3_17}. In the following, we use carving-width because of its favourable properties, and we connect it to the more widely used treewidth. \section{Crushing triangulations does not increase carving-width} \label{Sec:Crushing} This section focuses on compact manifolds with or without boundary, which are the main object of study of this article. However, all results cited and introduced extend naturally to ideal triangulations. \emph{Crushing} of triangulations is a fundamental technique introduced by Jaco and Rubinstein~\cite{JacoRubinstein:0Eff} to simplify 3-manifold triangulations. Let $\mathfrak{T}$ be a generalised triangulation of a 3-manifold $M$. A normal surface $S$ in $\mathfrak{T}$ is a properly embedded surface in $\mathfrak{T}$ that meets each tetrahedron in a (possibly empty) collection of curvilinear triangles and quads, as illustrated in \reffig{Cut}. A \emph{trivial surface} is a normal surface made only of triangles; it always triangulates the link of a vertex. 
Finally, a \emph{0-efficient triangulation} is a triangulation $\mathfrak{T}$ that either \begin{itemize} \item contains no non-trivial normal sphere, if $\mathfrak{T}$ is closed or ideal, or \item contains no non-trivial normal disk, if $\mathfrak{T}$ is bounded. \end{itemize} \begin{figure} \centering \includegraphics[width=5cm]{figures/cut.pdf} \caption{Tetrahedron cut by a normal surface. The intersection is a collection of disjoint normal disks (quads and triangles).} \label{Fig:Cut} \end{figure} Crushing was introduced in~\cite{JacoRubinstein:0Eff} (see also~\cite{Burton:Crushing}) as a means to construct 0-efficient triangulations. \begin{definition}\label{Def:Crushing} Let $S$ be a normal surface in a triangulation $\mathfrak{T}$. \emph{Crushing} the triangulation along $S$ consists of the following three steps: \begin{enumerate} \item \textbf{Cut} $\mathfrak{T}$ open along $S$, leading to a cell-decomposition with various cell types, presented in \reffig{Collapse}. \item \textbf{Collapse} each copy of $S$ to a point, using the quotient topology. This gives four types of cells: tetrahedra, \emph{3-sided footballs}, \emph{4-sided footballs}, and \emph{triangular purses} (all illustrated in \reffig{Collapse}). \item \textbf{Flatten} all non-tetrahedral cells to obtain a triangulation, i.e.\ flatten footballs into edges, and triangular purses into triangles as in \reffig{Collapse}. \end{enumerate} Conclude by separating tetrahedra joined by pinched vertices and edges. \end{definition} Following~\cite[Lemma 3]{Burton:Crushing}, the flattening step can be performed iteratively, one non-tetrahedral cell at a time. In particular, flattening a football or a triangular purse induces the flattening of bigonal faces in the adjacent cells, hence creating temporary cells of new types: triangular purses with one or two flattened bigons (also known as bigonal pyramids and triangular pillows, respectively), and 2-sided footballs (also known as bigonal pillows).
\begin{figure} \centering \import{figures/}{Collapse_modified.pdf_tex} \caption{Cut out tetrahedron containing quads, collapsing of the cells and flattening. The collapsing produces two triangular purses, and a collection of 3 and 4-sided footballs. Footballs are flattened into edges, and triangular purses into triangles.} \label{Fig:Collapse} \end{figure} In particular, we use the following property. \begin{theorem}[Jaco-Rubinstein~\cite{JacoRubinstein:0Eff}, see also Burton~\cite{Burton:Crushing}] Let $\mathfrak{T}$ be a generalised triangulation of a closed or bounded 3-manifold $M$. There is an algorithm to construct a finite family of triangulations $\mathfrak{T}_1, \ldots, \mathfrak{T}_n$ triangulating manifolds $M_1, \ldots, M_n$, such that $M = M_1 \# \ldots \# M_n$, and each $\mathfrak{T}_i$ is either 0-efficient, or can be shown to be a triangulation of $S^3$, $S^2 \times S^1$, $\mathbb{R}P^3$ or $L(3,1)$. The algorithm consists of finding normal spheres and disks in the original triangulation, and crushing them. \label{Thm:JRCrushing} \end{theorem} Jaco and Rubinstein prove that 0-efficient triangulations of a closed irreducible manifold have one vertex, and 0-efficient triangulations of a bounded irreducible $\partial$-irreducible manifold (without 2-sphere boundary components\footnote{This is a technicality that only rules out the simple case where the irreducible manifold is a 3-cell.}) have all vertices in the boundary, with exactly one vertex per boundary component. We now introduce the main combinatorial result of this article, namely that crushing does not increase the carving-width. We use this result repeatedly as a tool to manipulate hyperbolic manifolds in later sections. \begin{theorem}\label{Thm:crushing} Let $\mathfrak{T}$ be a generalised (or ideal, or bounded) triangulation of a 3-manifold $M$, and $S$ a normal surface in $\mathfrak{T}$. Let $\mathfrak{T}^*$ be the triangulation obtained after crushing $S$.
Then the dual graph of $\mathfrak{T}^*$ is an immersion of the dual graph of $\mathfrak{T}$. \end{theorem} \begin{proof} We track the evolution of the cell decomposition of the triangulation under the three steps (cut, collapse, flatten) of crushing $S$ in $\mathfrak{T}$, in order to describe the change to its dual graph. Let $\mathcal{G}$ be the dual graph of $\mathfrak{T}$. Cut $\mathfrak{T}$ along $S$. The result is a collection of cells; each tetrahedron that meets $S$ in $\mathfrak{T}$ is split into cells across normal disks of $S$ as in Figures \ref{Fig:Cut} and~\ref{Fig:Collapse}. Now collapse each normal disk of $S$ to a point, using the quotient topology, to obtain the cell complex $\mathfrak{T}'$. This operation splits every tetrahedron $\Delta$ of $\mathfrak{T}$ into a collection $C_\Delta = \{\Delta_0, \ldots, \Delta_n\}$ of cells, where $n$ is the number of normal disks in $\Delta \cap S$. The cells are of four types: tetrahedra, $3$-sided footballs, $4$-sided footballs, and triangular purses; see \reffig{Collapse}. Note that if $\Delta \cap S$ contains no quad, then $C_\Delta$ is made of exactly one (central) tetrahedron, and a possibly empty collection of $3$-sided footballs. If $\Delta \cap S$ contains quads, then $C_\Delta$ is made of two triangular purses, and a possibly empty collection of $3$-sided and $4$-sided footballs. Now flatten. We obtain a generalised triangulation $\mathfrak{T}^*$. The $3$ and $4$-sided footballs become edges, and thus have no dual nodes or arcs in $\mathfrak{T}^*$. Tetrahedra are not flattened; thus, for each $\Delta$ such that $\Delta \cap S$ contains no quads, the dual graph of $\mathfrak{T}^*$ has one node corresponding to the central tetrahedron of $C_\Delta$. For each $\Delta$ such that $\Delta \cap S$ contains quads, the two triangular purses of $C_\Delta$ flatten to triangles, thus removing the node corresponding to $\Delta$.
Note that if we perform this process one tetrahedron at a time, adjusting the dual graph one node at a time, then each node corresponding to a tetrahedron $\Delta$ that does not meet a quad will be replaced by a node corresponding to a tetrahedron of the crushing; the arcs from this node will run to the same nodes as before the replacement, since faces of the new tetrahedron are still glued to faces of adjacent tetrahedra. For each node corresponding to a tetrahedron $\Delta$ that meets a quad, the node is removed and two liftings are performed: flattening each triangular purse identifies two of its triangular faces and removes its node, so that faces of adjacent tetrahedra become glued through this triangle. The result is a lifting. See \reffig{Lifting}. Perform this process for each tetrahedron. Then separate tetrahedra joined by pinched vertices and edges, which does not affect the dual graph. We see that the final result is an immersion. \end{proof} \begin{figure} \centering \includegraphics[width=12cm]{figures/dualgraph.pdf} \caption{Local transformation of the dual graph at tetrahedron $\Delta$ from~\reffig{Collapse} when crushing iteratively at $\Delta$. $\mathcal{G}$ is the dual graph before crushing, $\mathcal{G}'$ is the dual graph after cutting, collapsing, and then flattening only the 3 and 4-sided footballs from $\Delta$ ($\Delta_1$ and $\Delta_2$ stand for the two triangular purses), and $\mathcal{G}^*$ is the dual graph after flattening the triangular purses. Note that some of the nodes $\Delta_i^j$ in $\mathcal{G}'$ may have already been removed when flattening adjacent footballs, and some of the bigon faces of triangular purses may already have been collapsed. This does not change the analysis, as it only removes nodes and arcs from the dual graph. $\mathcal{G}^*$ is obtained from $\mathcal{G}$ by lifting $\Delta^1\Delta\Delta^2$ and $\Delta^3\Delta\Delta^4$, then removing the node $\Delta$.
Because their corresponding cells have bigonal faces, and hence cannot be tetrahedra, the crossed-out nodes on $\mathcal{G}^*$ will be removed from the graph when flattening adjacent cells. The immersion of $\mathcal{G}^*$ into $\mathcal{G}$ is obtained by mapping the (non-removed) nodes $\Delta_i^j$ in $\mathcal{G}^*$ to $\Delta^j$ in $\mathcal{G}$.} \label{Fig:Lifting} \end{figure} \begin{corollary}\label{Cor:carving-width} Crushing does not increase carving-width. \end{corollary} \begin{proof} Immersion does not increase carving-width, by \reflem{Immersion}. \end{proof} \begin{corollary}\label{Cor:CrushingTreewidth} Crushing an arbitrary finite number of normal surfaces in a triangulation increases the treewidth by at most a multiplicative factor of six. \end{corollary} \begin{proof} This follows from \refcor{carving-width} and \refthm{boundcngtw}, using the fact that a graph coming from the dual of a 3-manifold triangulation has all nodes of degree at most four. \end{proof} \begin{figure} \centering \includegraphics[width=10cm]{figures/twincr.pdf} \caption{Liftings of 2-0-10 and 1-0-9 increase the treewidth of the graph.} \label{Fig:twincr} \end{figure} \begin{remark} Note that in general the two liftings pictured in \reffig{Lifting} may increase the treewidth of a graph. For example, consider the graphs in \reffig{twincr}, where the graph of the pentagonal prism (on the right) is obtained from liftings 2-0-10 and 1-0-9 in the graph on the left. The graph on the left has treewidth $3$: it admits $K_4$ as a minor ($\operatorname{tw} \geq 3$), and it has a path decomposition $(B_i)_{i = 1 \ldots 8}$, $B_i = \{0,i,i+1,i+2\}$, of width $3$. The pentagonal prism is a well-known obstruction to treewidth $3$, and has treewidth $4$~\cite{ARNBORG19901,doi:10.1002/net.3230200304}. \end{remark} In their seminal work on 0-efficient triangulations, Jaco and Rubinstein proved that a minimal triangulation (i.e.\ with a minimal number of tetrahedra) of a manifold is 0-efficient.
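The treewidth values quoted in the remark above can be verified mechanically on small graphs. The following sketch (not part of the paper; the function names are ours) computes exact treewidth with the standard elimination-ordering dynamic program, which is exponential in the number of nodes but instant on graphs of this size, and confirms that the pentagonal prism $C_5 \times K_2$ has treewidth $4$:

```python
from itertools import combinations

def treewidth(n, edges):
    """Exact treewidth via the elimination-ordering dynamic program:
    tw(S) = min over v in S of max(tw(S - v), fill-degree of v once
    S - v has been eliminated).  Exponential in n; fine for small n."""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def fill_degree(eliminated, v):
        # vertices outside `eliminated` (a bitmask) reachable from v
        # by paths whose interior vertices are all eliminated
        seen, stack, result = {v}, [v], set()
        while stack:
            w = stack.pop()
            for x in adj[w]:
                if x in seen:
                    continue
                seen.add(x)
                if (eliminated >> x) & 1:
                    stack.append(x)   # eliminated: pass through
                else:
                    result.add(x)     # still present: a neighbour
        return len(result)

    tw = {0: -1}                      # bitmask of eliminated set -> value
    for size in range(1, n + 1):
        for subset in combinations(range(n), size):
            S = sum(1 << v for v in subset)
            tw[S] = min(max(tw[S & ~(1 << v)],
                            fill_degree(S & ~(1 << v), v))
                        for v in subset)
    return tw[(1 << n) - 1]

# pentagonal prism C5 x K2: outer cycle 0..4, inner cycle 5..9, five spokes
prism = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 1) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)])
```

The same routine reproduces familiar values such as $\operatorname{tw}(K_4)=3$ and $\operatorname{tw}(C_5)=2$.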
In the same spirit, we deduce the following for triangulation width: \begin{corollary} Any closed, orientable, irreducible 3-manifold $M$, not $S^3$, $S^2 \times S^1$, $\mathbb{R}P^3$ or $L(3,1)$, admits a 0-efficient triangulation of optimal carving-width $\operatorname{cng}(M)$. Any compact, orientable, irreducible, $\partial$-irreducible 3-manifold $M$, not the 3-cell, admits a 0-efficient triangulation of optimal carving-width $\operatorname{cng}(M)$. \end{corollary} \begin{proof} Let $\mathfrak{T}$ be a triangulation of $M$ of carving-width $\operatorname{cng}(M)$. By~\refthm{JRCrushing}, one can crush normal spheres and disks in $\mathfrak{T}$ to get a 0-efficient triangulation $\mathfrak{T}^*$ of carving-width at most $\operatorname{cng}(M)$ by~\refcor{carving-width}. \end{proof} To conclude this section, we prove the following simple property of carving-width. \begin{lemma}\label{Lem:cngconnecting} Let $\mathcal{G}$ and $\mathcal{G}'$ be two graphs, and let $\mathcal{G} \# \mathcal{G}'$ be obtained by adding $m \geq 1$ arcs between nodes of $\mathcal{G}$ and nodes of $\mathcal{G}'$, not counting multiplicities. Then \[ \operatorname{cng}(\mathcal{G} \# \mathcal{G}') \leq \max \{ \operatorname{cng}(\mathcal{G}) + m-1, \operatorname{cng}(\mathcal{G}') + m-1, \text{\emph{max degree in}} \ \mathcal{G} \# \mathcal{G}' \}. \] If the $m$ arcs are incident to a single node $u$ in $\mathcal{G}$, then \[ \operatorname{tw}(\mathcal{G} \# \mathcal{G}') \leq \max \{ \operatorname{tw}(\mathcal{G}), \operatorname{tw}(\mathcal{G}') + 1 \}. \] \end{lemma} \begin{proof} Pick two optimal tree embeddings $\pi\from\mathcal{G}\to T$ and $\pi'\from\mathcal{G}'\to T'$, with arcs $a\in T$ and $a'\in T'$ realising the congestions $\operatorname{cng}(\pi)$ and $\operatorname{cng}(\pi')$, respectively. Without loss of generality, suppose $\operatorname{cng}(\pi)\geq\operatorname{cng}(\pi')$, hence at least as many paths run through $a$ as through $a'$.
Let $u$ in $\mathcal{G}$ and $v$ in $\mathcal{G}'$ be two nodes that are adjacent in $\mathcal{G} \# \mathcal{G}'$. Subdivide the only arcs incident to the leaves $\pi(u)$ and $\pi'(v)$ in the tree embeddings, and connect the two new nodes by an arc. This leads to a tree embedding $\Pi$ of $\mathcal{G}\#\mathcal{G}'$. Note that $m$ paths run through the arc connecting the trees $T$ and $T'$. The congestion $\operatorname{cng}(\Pi)$ will be largest if as many of those paths as possible also run through $a$. If $a$ is not the arc of $T$ incident to the leaf $\pi(u)$, then at most $m-1$ new paths run through $a$, because the path from $\Pi(u)$ to $\Pi(v)$ only runs through new arcs. If $a$ is the arc of $T$ incident to the leaf $\pi(u)$, then $\operatorname{cng}(\mathcal{G})$ paths run to $u$, hence $u$ is $\operatorname{cng}(\mathcal{G})$-valent. The arc $a$ is subdivided to form the new tree, and $\operatorname{cng}(\mathcal{G})$ paths from the tree $T$ will continue to run over the two new arcs obtained by subdividing $a$. If one of the $m$ new arcs between $\mathcal{G}$ and $\mathcal{G}'$ does not have an endpoint on $u$, then the corresponding path will run over the subarc of $a$ that does not meet the leaf $\Pi(u)$, whereas the path from $\Pi(u)$ to $\Pi(v)$ does not meet this subarc, and thus the congestion is at most $\operatorname{cng}(\mathcal{G})+m-1$. However, if all the $m$ new arcs between $\mathcal{G}$ and $\mathcal{G}'$ run from $u$ to nodes of $\mathcal{G}'$, then all $m$ new paths must also run over the arc connecting the leaf $\Pi(u)$. Thus the congestion will be at most $\operatorname{cng}(\mathcal{G})+m$, which is the degree of $u$ in $\mathcal{G}\#\mathcal{G}'$. For treewidth, pick two optimal tree decompositions $T$ and $T'$ for $\mathcal{G}$ and $\mathcal{G}'$ respectively, and let $u$ in $\mathcal{G}$ be the node to which all new arcs are incident. Let $(u,v)$, with $v$ in $\mathcal{G}'$, be a new arc.
Let $B_u$ be a bag of $T$ containing $u$, and $B_v$ be a bag of $T'$ containing $v$. Connecting $B_u$ and $B_v$ with an arc, and adding the node $u$ to all bags in $T'$, leads to a tree decomposition of $\mathcal{G} \# \mathcal{G}'$, and the result follows. \end{proof} \section{Triangulating thick hyperbolic manifolds} \label{Sec:geometry} In this section, we consider a finite volume compact hyperbolic $3$-manifold $M$ with boundary whose injectivity radius is bounded below. We show that such a manifold admits a triangulation with a bounded number of tetrahedra, where the bound is linear in volume. This result is not new; its proof is outlined in Thurston's notes \cite{Thurston:notes}, and proved carefully elsewhere, for example by Kobayashi and Rieck \cite{KobayashiRieck}. We step through highlights of the proof here for completeness, and also to discuss the algorithmic nature of the argument, in order to actually compute a triangulation. The algorithm will be summarised in \refsec{Algorithm}. \subsection{Hyperbolic manifolds}\label{Sec:HyperbolicReview} Here we review definitions and results in hyperbolic geometry that are most important to our results. For further information on hyperbolic 3-manifolds, see for example~\cite{BenedettiPetronioHyperbolicGeom}. \begin{definition}\label{Def:ThickPart} Let $\mu>0$, and let $M$ be a hyperbolic 3-manifold. The \emph{$\mu$-thick part of $M$}, denoted $\M^{\geq \mu}$, consists of all points $x\in M$ such that every geodesic loop based at $x$ has length at least $\mu$. Equivalently, $\M^{\geq \mu}$ is the set of points in $M$ with injectivity radius at least $\mu/2$. The complement of the $\mu$-thick part is the $\mu$-thin part. \end{definition} Recall that by the Margulis lemma, there exists a universal constant $\epsilon_3$ such that for any finite volume hyperbolic 3-manifold $M$, and any $\mu\leq\epsilon_3$, the $\mu$-thin part of $M$ consists only of tubes about geodesics and cusps~\cite{KazhdanMargulis}.
In the discussion below, we will always assume that $0<\mu\leq \epsilon_3$. Such a $\mu$ is said to be a 3-dimensional \emph{Margulis constant}. For a metric space $D$, let $B_D(x,r)$ denote the open ball of centre $x$ and radius $r > 0$ in $D$. Recall that the volume of a hyperbolic $3$-ball of radius $r > 0$ is given by: \[ \operatorname{vol}(B_{{\mathbb{H}}^3}(r)) = \pi (\sinh 2r - 2r); \] see, for example~\cite{FenchelHyperbolicGeom}. \subsection{Triangulating thick parts}\label{Sec:Meshing} In this section, for a fixed 3-dimensional Margulis constant $\mu$, we recall the argument of \cite{KobayashiRieck} to show that a small neighbourhood of the $\mu$-thick part of a hyperbolic 3-manifold $M$ can be triangulated with $O(\operatorname{vol}(M))$ tetrahedra. We start by setting notation. For $\mu>0$ a Margulis constant and any $d>0$, denote the metric $d$-neighbourhood of $\M^{\geq \mu}$ by $X:=N_d(\M^{\geq \mu})$. In \cite[Proposition~1.2]{KobayashiRieck}, it is shown that there exists $R:=R(\mu,d)$ such that for any complete finite volume hyperbolic 3-manifold $M$, and any $x\in X$, the injectivity radius at $x$ is at least $R$, and $X$ is obtained from $M$ by drilling out short geodesics and truncating cusps. Let $D=\min\{R,d\}$. \begin{definition}\label{Def:Net} Let $(X,\mathrm{dist}_X)$ be a metric space. For $\varepsilon > 0$, a set of points $P \subset X$ is $\varepsilon$-dense in $X$ if, for any $x \in X$, there is a point $p \in P$ such that $\mathrm{dist}_X(x,p) < \varepsilon$. For $1 \geq \delta > 0$, the set $P$ is $\delta \varepsilon$-separated if any two distinct points $p,q \in P$ satisfy $\mathrm{dist}_X(p,q) \geq \delta \varepsilon$. We call $P$ a $(\delta,\varepsilon)$-net if it is $\varepsilon$-dense and $\delta \varepsilon$-separated. Note that any $(\delta,\varepsilon)$-net is also a $(\delta',\varepsilon)$-net for any $\delta' \leq \delta$. \end{definition} \begin{lemma}\label{Lem:mesh} Let $\mu>0$ be a 3-dimensional Margulis constant and $d>0$.
Let $M$ be a hyperbolic manifold of finite volume $\operatorname{vol}(M)$, with $d$-neighbourhood of the thick part $\M^{\geq \mu}$ denoted by $X$. Then $X$ admits a $(\delta,\varepsilon)$-net of size \[ n \leq \frac{\operatorname{vol}(M)}{\pi (\sinh \varepsilon - \varepsilon)} \leq \frac{6}{\pi} \frac{\operatorname{vol}(M)}{\varepsilon^3} \] for any $\varepsilon \leq \mu$ and any $\delta \leq 1$. \end{lemma} \begin{proof} This is the standard iterative construction of nets. Fix an arbitrary $\varepsilon \leq \mu$. Set $P$ to be the empty set. While there exists a point $x$ in the set \[ X - \bigcup_{p \in P} B_{X}(p, \varepsilon), \] set $P$ to be $P \cup \{x\}$. At every stage of the procedure, the balls of radius $\varepsilon/2$ centred on the points of $P$ are disjoint and embedded in $M$. Consequently, \[ |P| \times \operatorname{vol}(B_{{\mathbb{H}}^3}(\varepsilon/2)) \leq \operatorname{vol}(M). \] Because $M$ has finite volume, the procedure terminates, and $P$ is a $(1,\varepsilon)$-net for $X$ by construction. \end{proof} Recall that $D=\min\{R,d\}$. Kobayashi and Rieck take a maximal $D$-separated set for $X$, but it suffices for their argument to let $\{x_1, \dots, x_N\}$ be a $(1,D)$-net for $X$. Now let $\{V_1, \dots, V_N\}$ be the \emph{Voronoi cells} in $M$ corresponding to $\{x_1, \dots, x_N\}$, namely the sets \[ V_i = \{p\in M \mid \mathrm{dist}(p,x_i) \leq \mathrm{dist}(p,x_j) \mbox{ for } j = 1, \dots, N\}. \] Kobayashi and Rieck show that the components of $V_i\cap X$ consist of handlebodies with universally bounded genus, with boundaries consisting of geodesic faces meeting in geodesic edges and vertices, and that the number of such faces and edges (and vertices) is universally bounded independent of $M$. After possibly perturbing the points $\{x_1, \dots, x_N\}$ slightly, they give an algorithm that builds, for each component $V_{i,j}$ of $V_i\cap X$, a 2-complex $K_{i,j}$.
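The greedy construction in the proof of \reflem{mesh} applies verbatim to any finite metric sample. A minimal sketch (an illustration only, not the paper's algorithm: the point set and distance function below are toy stand-ins for the manifold $X$):

```python
def greedy_net(points, dist, eps):
    """Greedy (1, eps)-net: scan the points and keep any point at
    distance >= eps from all points kept so far.  The result is
    eps-separated by construction, and eps-dense because every
    discarded point was within eps of an already-kept point."""
    net = []
    for x in points:
        if all(dist(x, p) >= eps for p in net):
            net.append(x)
    return net

# toy stand-in for X: a sampled segment with the Euclidean metric
pts = [i / 10 for i in range(101)]          # 0.0, 0.1, ..., 10.0
net = greedy_net(pts, lambda a, b: abs(a - b), 1.0)
```

On this sample the net consists of the integer points $0,1,\dots,10$, matching the volume-packing count in the lemma: disjoint balls of radius $\varepsilon/2$ around net points fill the sample.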
The complex $K_{i,j}$ has totally geodesic faces, a universally bounded number of faces and edges, and it cuts $V_{i,j}$ into a single ball $B_{i,j}$. By subdividing remaining faces into triangles, and then coning to the centre of the ball $B_{i,j}$, we obtain a triangulation of $V_{i,j}$ such that, by construction, the triangulations of distinct $V_{i,j}$ agree on their intersections. The above gives the following, which is \cite[Proposition~1.4]{KobayashiRieck}. \begin{proposition}\label{Prop:VoronoiTriangulations} Let $\mu$ be a 3-dimensional Margulis constant, and fix $d>0$. For any complete finite volume hyperbolic 3-manifold $M$, let $X$ denote the metric $d$-neighbourhood of $\M^{\geq \mu}$. Then there exists a constant $C=C(\mu,d)$ so that the following holds. \begin{enumerate} \item $M$ is decomposed into $N\leq C \operatorname{vol}(M)$ Voronoi cells $\{V_1, \dots, V_N\}$. \item $V_i\cap X$ is triangulated using at most $C$ tetrahedra for all $i=1, \dots, N$. \item For any $i,j\in\{1, \dots, N\}$, the triangulations in (2) coincide on $(V_i\cap X) \cap (V_j\cap X)$. \end{enumerate} \end{proposition} We now obtain the following consequence, which is \cite[Theorem~1.1]{KobayashiRieck}. \begin{theorem}\label{Thm:MinVolTet} Let $\mu$ be a 3-dimensional Margulis constant, and fix $d>0$. Then there exists a constant $v(\mu,d)>0$ such that for $M$ any closed hyperbolic 3-manifold with volume $\operatorname{vol}(M)$, the metric $d$-neighbourhood of the $\mu$-thick part $\M^{\geq \mu}$ admits a triangulation $\mathfrak{T}_B$ with number of tetrahedra at most \[ \frac{\operatorname{vol}(M)}{v(\mu,d)} = O(\operatorname{vol}(M)). \] \end{theorem} \begin{proof} Apply \refprop{VoronoiTriangulations} to $M$, decomposing $X$ into at most $C \operatorname{vol}(M)$ cells, each triangulated with at most $C$ tetrahedra. Because the triangulations match where the cells overlap, this gives a triangulation of $N_d(\M^{\geq \mu})$ with at most $C^2\operatorname{vol}(M)$ tetrahedra. Set $v(\mu,d)=1/C^2$.
\end{proof} Naturally, this triangulation has carving-width (and treewidth) $O(\operatorname{vol}(M))$. \section{Triangulations of closed manifolds} \label{Sec:combinatorics} For $\mu$ a Margulis constant, and fixed $d>0$, let $\M^{\geq \mu}$ denote the thick part of $M$, and let $X:=N_d(\M^{\geq \mu})$ denote its $d$-neighbourhood. Then $X$ is a compact manifold with a (possibly empty) collection of torus boundary components; see \cite[Proposition~2.1]{KobayashiRieck}. Let $\mathfrak{T}_B$ be the triangulation of $X$ constructed in \refthm{MinVolTet}. This triangulation contains at most $\operatorname{vol}(M)/v(\mu,d)$ tetrahedra. \begin{lemma}\label{Lem:0Efficient} There exists a triangulation $\mathfrak{T}_{JR}$ of $X$ that has exactly one vertex in each boundary component of $X$, no other vertices, and contains at most $\operatorname{vol}(M)/v(\mu,d)$ tetrahedra. \end{lemma} \begin{proof} For the proof, we wish to obtain a $0$-efficient triangulation, as in the work of Jaco and Rubinstein~\cite{JacoRubinstein:0Eff}, recalled in \refthm{JRCrushing}. To obtain a $0$-efficient triangulation from $\mathfrak{T}_B$, apply the crushing procedure. Crushing does not increase the number of tetrahedra. It may, however, affect the topology of the underlying manifold, but only in well-understood ways that are listed in \cite[Theorem~2]{Burton:Crushing}: \begin{itemize} \item It may undo connect sums, \item cut open along properly embedded disks, \item fill a boundary sphere with a 2-ball, or \item delete a 3-ball, 3-sphere, ${\mathbb{R}} P^3$, $L(3,1)$ or $S^2\times S^1$ component. \end{itemize} Since we start with $M$ hyperbolic, the interior of $X$ is also hyperbolic. Hence there are no connect sums, no boundary spheres, and no non-hyperbolic components. There are also no essential disks; thus if we cut along a properly embedded disk, the disk will cut off a ball, which we may ignore.
Thus repeatedly applying the crushing move gives a $0$-efficient triangulation, which has the properties required by the lemma. \end{proof} \begin{lemma}\label{Lem:FillingAndTreewidth} If $\mathfrak{T}$ is a triangulation of a 3-manifold $M$ with a torus boundary component $S$ such that $S$ inherits a one-vertex triangulation from $\mathfrak{T}$, then any Dehn filling of $M$ along $S$ can be given a triangulation with carving-width at most \[ \max \{ \operatorname{cng}(\mathfrak{T}) + 1, 4 \}. \] \end{lemma} \begin{proof} First, note that any one-vertex triangulation of the torus $S$ has two triangles, so the boundary component of $\mathfrak{T}$ corresponding to $S$ has two triangles, which are incident to at most two distinct tetrahedra. The Dehn filling can be obtained by attaching a layered solid torus, as in Example~\ref{Example:LST}, with gluing graph a simple daisy chain, with one loop arc with both endpoints on the node corresponding to the tetrahedron in the centre, and pairs of parallel arcs between nodes corresponding to the layers of tetrahedra. The layering is determined by the slope of the Dehn filling. After a finite number of steps, the meridian of the layered solid torus will correspond to the desired slope; see \cite{JacoRubinstein:LayeredTriang}. At this stage, the final pair of faces is attached to the tetrahedra in $M$ that form the torus boundary. Because the triangulation of $M$ and that of the layered solid torus are glued along two faces incident to at most two distinct tetrahedra, the carving-width after gluing is at most the maximum of the carving-widths of the two triangulations plus $1$ and the maximal degree in the dual graph after filling, which is $4$, by \reflem{cngconnecting}. Because a daisy chain graph has carving-width at most two, by \reflem{DaisyChain}, the result follows. \end{proof} \begin{theorem}\label{Thm:TWandVol} Let $\mu$ be a 3-dimensional Margulis constant and fix $d>0$.
Let $M$ be a closed hyperbolic 3-manifold with volume $\operatorname{vol}(M)$. Then the carving-width of $M$ is bounded above by $6 \cdot \operatorname{vol}(M)/v(\mu,d) = O(\operatorname{vol}(M))$. \end{theorem} \begin{proof} Start with the triangulation $\mathfrak{T}_{JR}$ of $X:= N_d(\M^{\geq \mu})$. The triangulation $\mathfrak{T}_{JR}$ is obtained in \reflem{0Efficient} by crushing, and by \refcor{carving-width} has carving-width at most the carving-width of triangulation $\mathfrak{T}_B$ in \refthm{MinVolTet}, which is naturally at most $4 \cdot \operatorname{vol}(M)/v(\mu,d) = O(\operatorname{vol}(M))$. We obtain $M$ from $X$ by Dehn filling the torus boundary components. Each boundary component inherits a triangulation from $\mathfrak{T}_{JR}$ with exactly one vertex and two triangles, and so there are at most $2 \cdot \operatorname{vol}(M)/v(\mu,d)$ boundary components. Consequently, performing the Dehn fillings increases the carving-width by at most $2 \cdot \operatorname{vol}(M)/v(\mu,d)$, by \reflem{FillingAndTreewidth}. \end{proof} By \refthm{boundcngtw}, this result also applies to treewidth. \subsection{Algorithm and computational complexity}\label{Sec:Algorithm} Our approach to constructing a triangulation with carving-width $O(\operatorname{vol}(M))$ for a closed hyperbolic 3-manifold $M$ is algorithmic. Given $M$ presented by its thick part and hyperbolic Dehn fillings, and an oracle to access its geometry, one can compute the $(1,D)$-net of \reflem{mesh} with $O(\mathrm{poly}(\operatorname{vol}(M)))$ calls to the oracle, where $\mathrm{poly}$ is a polynomial function. The procedure to compute Voronoi cells is polynomial in the size of the net, which is $O(\operatorname{vol}(M))$. Meshing in polynomial time, while maintaining the Voronoi cells, can be done using an intrinsic version of Delaunay refinement~\cite{edelsbrunner_2001}. 
Kobayashi and Rieck's algorithm to subdivide each cell $V_{i,j}$ into a ball by constructing a 2-complex $K_{i,j}$ requires first building a graph with desirable properties, in \cite[Lemma~4.2]{KobayashiRieck}, which is constructed by adding components of $V_{i,j}\cap\partial X$ one at a time, and then adding and removing edges that lie in geodesic faces. The number of components of $V_{i,j}\cap \partial X$ is universally bounded, depending on the universal bound on the genus of $V_{i,j}$, and thus for each $V_{i,j}$ this algorithm runs in constant time. One can then triangulate the balls $V_{i,j}-K_{i,j}$ in constant time, using the fact that the number of faces, edges, and vertices of each ball is universally bounded. This is done for each of the $O(\operatorname{vol}(M))$ cells. The algorithm using the crushing procedure in the proof of \reflem{0Efficient} is exponential in the number of tetrahedra, which is $O(\operatorname{vol}(M))$. However, the complexity becomes polynomial~\cite{Burton:Crushing} if, instead of $0$-efficient triangulations, we only require the triangulations $\mathfrak{T}_i$ of the decomposition described in \refthm{JRCrushing} to have exactly one vertex per boundary component if bounded, or to be one-vertex triangulations if closed. This is sufficient for the construction of \refthm{TWandVol}. The filling by Dehn surgery in the proof of \refthm{TWandVol} is linear time in the Dehn filling coefficients (and hence linear in the final number of tetrahedra for the output triangulation). Consequently, the procedure above constructs a triangulation with treewidth $O(\operatorname{vol}(M))$ for a closed hyperbolic 3-manifold $M$ in time \[ O\left( \mathrm{poly}(\operatorname{vol}(M)) \cdot \mathrm{Or} + \mathrm{poly}(\operatorname{vol}(M)) + n\right) \] and $O(n)$ memory, where $\mathrm{poly}$ denotes polynomial functions, $\mathrm{Or}$ is the time complexity for calling the geometry oracle, and $n$ is the number of tetrahedra of the output triangulation.
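The carving-width certificates used throughout are tree embeddings of dual graphs, and the congestion of such an embedding can be computed directly by splitting the tree at each arc. A generic sketch (our own illustration, not code from the paper), applied to a 4-cycle, whose carving-width is $2$:

```python
from collections import defaultdict

def congestion(tree_edges, leaf_of, graph_edges):
    """Congestion of a tree embedding: graph nodes are mapped to
    leaves of an (unrooted) tree via leaf_of; removing a tree arc
    splits the leaves in two, and the arc's congestion is the number
    of graph edges with one endpoint on each side."""
    adj = defaultdict(set)
    for a, b in tree_edges:
        adj[a].add(b)
        adj[b].add(a)

    def side(a, b):
        # tree nodes in a's component of the tree minus the arc (a, b)
        seen, stack = {a}, [a]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if x == a and y == b:
                    continue          # the removed arc
                if y not in seen:
                    seen.add(y)
                    stack.append(y)
        return seen

    best = 0
    for a, b in tree_edges:
        half = side(a, b)
        crossing = sum((leaf_of[u] in half) != (leaf_of[v] in half)
                       for u, v in graph_edges)
        best = max(best, crossing)
    return best

# 4-cycle embedded in a tree with two internal nodes 'r' and 's'
tree = [('r', 0), ('r', 1), ('r', 's'), ('s', 2), ('s', 3)]
c4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
cng = congestion(tree, {i: i for i in range(4)}, c4)
```

The maximum over all tree embeddings' congestions being minimised defines the carving-width; here the embedding shown already realises the optimum for the 4-cycle.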
Note that, by construction, we also get a tree decomposition of width $O(\operatorname{vol}(M))$. \section{Treewidth does not bound volume} \label{Sec:cst_tw} In this section, we prove that there exist families of manifolds with constant treewidth but unbounded volume. Our examples include both manifolds with boundary and closed manifolds. The manifolds with boundary that we consider are the exteriors of 2-bridge knots. There are many ways to describe 2-bridge knots; see for example \cite{BurdeZieschang}. For the purpose of this paper, a 2-bridge knot $K[a_{n-1}, \ldots, a_1]$ is described by a finite collection of integers $a_1, \dots, a_{n-1}$. The diagram of the knot $K[a_{n-1}, \ldots, a_1]$ consists of $n-1$ twist regions arranged linearly, with the $i$-th region containing $|a_i|$ crossings whose direction is determined by the sign of $a_i$, and with the twist regions connected as shown in \reffig{twobridgeknot}. In general, we may always assume that either all the $a_i$ are positive or all are negative, and that $|a_1|$ and $|a_{n-1}|$ are at least $2$. \begin{figure} \centering \import{figures/}{2BridgeDiagram.pdf_tex} \caption{The diagram of $K[a_{n-1}, \ldots , a_1]$, for $n$ even. The box labelled $\pm a_i$ denotes a horizontal twist region with $|a_i|$ crossings, with the sign of all crossings equal to the sign of $\pm a_i$. The crossing number is $C = |a_{n-1}| + \ldots + |a_1|$. } \label{Fig:twobridgeknot} \end{figure} \begin{proposition}\label{Prop:CuspedBddTW} The family of hyperbolic 2-bridge knots has unbounded volume but treewidth bounded by a constant. This is true whether we take treewidth corresponding to an ideal triangulation or corresponding to a finite triangulation of the knot exterior. \end{proposition} \begin{proof} By Theorem~B.3 of \cite{GueritaudFuter:2bridge}, the volume of a 2-bridge knot $K[a_{n-1}, \dots, a_1]$ is bounded below by $2\,v_3\,n$, where $v_3=1.0149\dots$ is a constant.
Thus letting $n$ approach infinity gives a sequence of 2-bridge knots with volume approaching infinity. To show that these knots have bounded treewidth, we need to describe a triangulation of the knot complements. We use the well-known triangulation of 2-bridge knot complements due to Sakuma and Weeks \cite{SakumaWeeks}; see also Gu{\'e}ritaud and Futer \cite{GueritaudFuter:2bridge}. We briefly review the description of the ideal triangulation here. More details can be found in the two previous references, or in \cite{Purcell:book}. For ease of exposition, we will only work with examples for which $n$ is even, each $a_i>0$, and $a_1$ and $a_{n-1}$ are both at least $2$. While a similar argument works when $n$ is odd or when the values $a_i$ are all negative, we will not need it for our purposes here. The easiest way to describe the triangulation is to start with the diagram of $K[a_{n-1}, \dots, a_1]$ as in \reffig{twobridgeknot}, and then isotope all the odd twist regions to be vertical, as in \reffig{tangle}. \begin{figure} \centering \import{figures/}{PosTangleDiagram.pdf_tex} \caption{Another diagram of a 2-bridge knot.} \label{Fig:tangle} \end{figure} With the diagram in the form of \reffig{tangle}, we may think of the crossings as nested, with the first crossing of the twist region of $a_1$ innermost, and the last crossing of the twist region of $a_{n-1}$ outermost. Recall that there are $C = a_1+\dots +a_{n-1}$ crossings in total. The complement of the knot is built from the following pieces: \begin{enumerate} \item A tangle containing the very first crossing on the inside. \item For the $i$-th crossing, $i=2, \dots, C-1$, a block homeomorphic to $S\times I$ where $S$ is a 4-punctured sphere. The block contains a single horizontal or vertical crossing, depending on whether the $i$-th crossing is horizontal or vertical. See \reffig{4PunctSphereBlocks}. It is stacked onto the previous block.
\item A tangle containing the very last crossing on the outside. \end{enumerate} \begin{figure} \centering \includegraphics{figures/4PunctSphereBlocks.pdf} \caption{Vertical (left) and horizontal (right) blocks of the form $S\times I$. The 4-punctured spheres on the outside and inside correspond to $S\times\{1\}$ and $S\times\{0\}$, respectively. Figure from \cite{Purcell:book}.} \label{Fig:4PunctSphereBlocks} \end{figure} There are $2(C-3)$ tetrahedra in the decomposition, which we now describe. First, ignore the innermost and outermost tangles containing a single crossing. For the $i$-th crossing, $i=2, \dots, C-2$, there is a pair of tetrahedra lying between the $i$-th and $(i+1)$-st blocks $S\times I$. The ideal edges of these tetrahedra are edges that are either horizontal or vertical on one of the surfaces $S\times\{0\}$ and $S\times\{1\}$, for the $i$-th or $(i+1)$-st block. When we isotope all these edges to lie between the two blocks, we see the form of the two tetrahedra, as in \reffig{Pillowcase}, left. One tetrahedron, denoted $T_i^1$, lies in front of the $i$-th block, and one, denoted $T_i^2$, lies behind. \begin{figure} \centering \includegraphics{figures/SolidPillowcases.pdf} \hspace{.75in} \import{figures/}{TetrFaces.pdf_tex} \caption{On the far left, the edges of the tetrahedron are shown. Middle: two faces of the tetrahedron $T_i^1$ lying on the surface $S\times\{0\}\subset S\times I$ for the $(i+1)$-st block. Right: Position of those two faces when isotoped to $S\times\{1\}$ on the $(i+1)$-st block. Figure from \cite{Purcell:book}.} \label{Fig:Pillowcase} \end{figure} We need to determine how the faces of the tetrahedra glue. Note that the faces on the outside are isotopic, through the block outside them, to triangles on that block. Consider the two outer faces of $T_i^1$ on $S\times\{0\}$ on the $(i+1)$-st block. These two faces are shown in \reffig{Pillowcase} for the case where the crossing is vertical.
When we isotope through the $(i+1)$-st block, the triangles isotope to the triangles shown on the right of \reffig{Pillowcase}. Note that one lies in front, and one lies in back. The one in front will be glued to an inside face of $T_{i+1}^1$, and the one in the back will be glued to an inside face of $T_{i+1}^2$. Similarly, if the crossing in the $(i+1)$-st block is horizontal, triangle faces of $T_i^1$ on the outside isotope to two triangles, one on the front and one on the back of $S\times\{1\}$ in the $(i+1)$-st block. Thus one will be glued to $T_{i+1}^1$ and the other to $T_{i+1}^2$. An identical argument, isotoping in the other direction, then implies that one of the inside faces of $T_i^1$ is glued to a face of $T_{i-1}^1$ and the other inside face is glued to $T_{i-1}^2$. To complete the dual graph of the triangulation, we must determine how inner faces of $T_2^1$ and $T_2^2$ glue when we insert the innermost tangle containing a single crossing. As described in \cite{GueritaudFuter:2bridge}, inserting this tangle glues the two faces on the block $S\times I$ containing the second crossing to each other. The two triangles in the front of $S\times\{0\}$ are glued, and the two triangles in the back of $S\times\{0\}$ are glued. However, recall that we view tetrahedra $T_2^1$ and $T_2^2$ as lying between the 2nd and 3rd blocks. As in \reffig{Pillowcase}, right, isotoping the inner faces of $T_2^1$ and $T_2^2$ through the block $S\times I$ to $S\times\{0\}$ puts the two faces of $T_2^1$ on opposite sides of $S\times\{0\}$. Thus when we attach the tangle containing the first crossing, we glue each inner face of $T_2^1$ to an inner face of $T_2^2$. A similar argument shows each outer face of $T_{C-2}^1$ is glued to an outer face of $T_{C-2}^2$. Thus the graph dual to the triangulation of the 2-bridge knot complement has the form shown in \reffig{2BridgeDual}. 
\begin{figure} \centering \import{figures/}{2BridgeDualGraph.pdf_tex} \caption{The form of the dual graph to a 2-bridge knot triangulation.} \label{Fig:2BridgeDual} \end{figure} A tree decomposition of the graph of \reffig{2BridgeDual} is shown in \reffig{2BridgeTreeDecomp}. There are $C-4$ bags in the tree decomposition. Each bag contains exactly four nodes, namely $T_i^1$, $T_i^2$, $T_{i+1}^1$, and $T_{i+1}^2$ for $i=2, 3, \dots, C-3$. Thus the width of this tree decomposition is $4-1=3$. Note that it is constant, independent of the values of $a_i$ and the number of crossings $C$ and twist regions $n$. \begin{figure} \centering \import{figures/}{2BridgeTreeDecomp.pdf_tex} \caption{A tree decomposition of an ideal triangulation of a 2-bridge knot.} \label{Fig:2BridgeTreeDecomp} \end{figure} Finally, the above argument holds for ideal triangulations. It might be preferable to work with finite triangulations, i.e., tetrahedra with only finite vertices and not ideal vertices. We can modify the above decomposition into a finite triangulation by first truncating the ideal vertices. This turns an ideal tetrahedron into a polyhedron with four triangular faces (from truncating) and four hexagonal faces (one for each face of the previous ideal tetrahedron). Add a finite vertex to the centre of each hexagonal face and cone to obtain six triangles. Then add a finite vertex to the centre of the polyhedron and cone to all the faces. The result is a subdivision of the ideal tetrahedron into 28 finite tetrahedra. To obtain a tree decomposition of the finite triangulation, take the same tree decomposition as in \reffig{2BridgeTreeDecomp}. Replace $T_i^j$ with the 28 finite tetrahedra in the subdivision of $T_i^j$. This gives a tree decomposition, although it is likely not optimal. However, the size of each bag in the decomposition is $28\cdot 4 = 112$. Thus the width of this decomposition is $111$, which is constant, independent of $C$ and $n$. 
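As a purely illustrative arithmetic check of the counts above (not part of the proof proper), the subdivision and bag sizes can be tallied as follows; the function names are ours.

```python
def subdivision_tetrahedra():
    """Count the finite tetrahedra obtained from one truncated ideal tetrahedron."""
    triangle_faces = 4        # one triangle per truncated ideal vertex
    hexagon_faces = 4         # one hexagon per face of the ideal tetrahedron
    # coning each hexagon from a vertex at its centre yields 6 triangles
    boundary_triangles = triangle_faces + 6 * hexagon_faces
    # coning the polyhedron from a central vertex gives one tetrahedron per triangle
    return boundary_triangles

def decomposition_width(tets_per_node, nodes_per_bag=4):
    """Width of a tree decomposition whose bags hold nodes_per_bag
    (possibly subdivided) tetrahedra each."""
    return tets_per_node * nodes_per_bag - 1

assert decomposition_width(1) == 3                           # ideal triangulation
assert subdivision_tetrahedra() == 28
assert decomposition_width(subdivision_tetrahedra()) == 111  # finite triangulation
```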
\end{proof} \smallskip \begin{remark}~\label{Rem:twknotcomplement} It is folklore that given a link diagram of treewidth $k$ (seen as a 4-valent graph), one can construct an ideal triangulation of its link exterior with treewidth $O(k)$. More explicitly, this can be done using {\tt SnapPy}'s link complement triangulation algorithm~\cite{snappy}. Specifically, {\tt SnapPy}'s procedure first constructs a cell complex with four cells per crossing in the diagram, whose dual graph connects the four cells around a crossing in a square, and whose arcs run along the link diagram otherwise; this graph has treewidth at most $4$ times the treewidth of the diagram. The procedure concludes by contracting the cell complex along the link diagram, which induces \emph{arc contractions} in the dual graph that can only reduce the treewidth~\cite{robertson86-algorithmic}, and by adding a constant number of cells (increasing the treewidth by a constant) to get an ideal triangulation. However, the construction of \refprop{CuspedBddTW} above gives a more natural triangulation with a smaller treewidth. \end{remark} We have found manifolds with boundary with unbounded volume and bounded treewidth. We wish to find closed examples. To do so, we will perform Dehn filling on the 2-bridge knots from above, by attaching a layered solid torus. However, we need to ensure that there is a triangulation of the manifold with only one vertex on the boundary, with bounded treewidth. \begin{theorem}\label{Thm:BddTW} There exists a sequence of closed hyperbolic manifolds $M_n$ with bounded treewidth and unbounded volume. \end{theorem} \begin{proof} The sequence will be obtained by Dehn filling 2-bridge knot complements. By Proposition~\ref{Prop:CuspedBddTW}, there exists a sequence of 2-bridge knots with volume approaching infinity but with treewidth bounded by a constant. 
By virtue of \refcor{carving-width} (and \refthm{boundcngtw}), after applying the 0-efficiency construction of Jaco and Rubinstein~\cite{JacoRubinstein:0Eff}, any (compact) 2-bridge knot exterior admits a triangulation with constant treewidth and one vertex on each boundary component. Now perform a very high Dehn filling on each knot in the sequence. The volume decreases by a bounded amount; see for example~\cite{FKP:DFVolJP}. Hence the sequence still has volume approaching infinity. We construct the triangulation of the Dehn filling using \reflem{FillingAndTreewidth}, which increases the treewidth by one, by \reflem{cngconnecting}. Thus the treewidth remains bounded. \end{proof} \section*{Acknowledgements} CM would like to thank Arnaud de Mesmay and Jonathan Spreer for discussions that led to the simple argument connecting treewidth of link diagrams and link exteriors in \refrem{twknotcomplement}. JP was supported in part by the Australian Research Council.
\section{Introduction}\label{s1} The present paper seeks to investigate the relations between compositions of restriction and induction functors when applied to modules over dihedral groups, and in particular to study certain algebras $A_{P,\mathcal{M}}$ derived from these relations. \subsection{Motivation} Let $S_n$ be the symmetric group on $n$ elements, and $S_n$-mod the category of its finitely generated modules over $\mathbb{C}$. Then we have the usual induction functor $\ind_n:S_n\text{-mod}\rightarrow S_{n+1}\text{-mod}$ and restriction functor $\res_n:S_{n+1}\text{-mod}\rightarrow S_{n}\text{-mod}$. Consider now the direct sum of all of the categories $S_n$-mod, as $n$ ranges over the nonnegative integers. This category comes equipped with two exact endofunctors $\ind$ and $\res$ obtained by adding up all $\ind_n$ and $\res_n$ respectively. By taking the Grothendieck group of the whole construction, we obtain a vector space with two linear operators $[\res]$ and $[\ind]$ respectively. The classical Branching rule for the symmetric group (cf., e.g.,\cite[p. 77]{Sa01}) implies that these two linear operators satisfy the defining relations of the Heisenberg algebra (as is used for instance in \cite{Kh14}), namely \begin{equation*} [\res][\ind]-[\ind][\res]=[\id]. \end{equation*} Moreover, it is known that the above equality admits the functorial ``upgrade'' \begin{equation*} \res\circ\ind\cong\id+\ind\circ\res. \end{equation*} From the study of the decomposition numbers of certain Hecke algebras emerged a refinement of the above to fields of positive characteristic, $p$. Using eigenvalues of Jucys-Murphy elements, the induction and restriction functors decompose into $p$ summands. In this way, the above gives rise to a specific representation of the affine Kac-Moody algebra $\hat{\mathfrak{sl}}_p$, (cf. \cite{LLT95}, \cite{LLT96}, \cite{Ar96}, and \cite{Gr99}). 
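On the level of the Grothendieck group, the relation $[\res][\ind]-[\ind][\res]=[\id]$ can be checked directly from the branching rule by bookkeeping with partitions: simple $S_n$-modules correspond to partitions of $n$, induction adds a box to the Young diagram in all valid ways, and restriction removes one. The following sketch (the partition encoding and function names are ours, purely illustrative) verifies the relation for all partitions of small size.

```python
from collections import Counter

def ind(lam):
    """Add one box to the Young diagram of lam in all valid ways (branching)."""
    out = []
    for i in range(len(lam)):
        if i == 0 or lam[i - 1] > lam[i]:
            out.append(lam[:i] + (lam[i] + 1,) + lam[i + 1:])
    out.append(lam + (1,))
    return out

def res(lam):
    """Remove one box from the Young diagram of lam in all valid ways."""
    out = []
    for i in range(len(lam)):
        if i == len(lam) - 1 or lam[i] > lam[i + 1]:
            mu = lam[:i] + (lam[i] - 1,) + lam[i + 1:]
            if mu and mu[-1] == 0:
                mu = mu[:-1]
            out.append(mu)
    return out

def partitions(n, max_part=None):
    """Generate all partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

# check [res][ind] = [id] + [ind][res] in the Grothendieck group
for n in range(1, 7):
    for lam in partitions(n):
        lhs = Counter(mu for nu in ind(lam) for mu in res(nu))
        rhs = Counter([lam]) + Counter(mu for nu in res(lam) for mu in ind(nu))
        assert lhs == rhs, lam
```

Multisets of partitions here play the role of nonnegative integer combinations of classes of simple modules.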
This approach has since spawned many interesting results connecting the representation theories of various Hecke algebras with those of other algebras (cf., e.g., \cite{Ar01}, \cite{Ja05}, \cite{Ar06}, \cite{BK09}, \cite{Kh14}, \cite{RS15}, and \cite{MV18}). The original motivation for the present paper comes from the attempt to investigate a similar construction (in the case of modules over the complex numbers) for dihedral groups rather than the symmetric groups. There are several significant differences. While the symmetric groups are naturally included into each other with respect to the usual linear order on their set of indices (the set of nonnegative integers), natural inclusions of dihedral groups are given by the divisibility partial order on the set of their orders. As a consequence, we have infinitely many ``elementary'' induction and restriction functors, naturally indexed by prime numbers. The aim of this paper is to understand the basic combinatorics which these functors generate on the level of the Grothendieck group. \subsection{Contents} The paper is structured as follows. In Section \ref{s2}, we fix some notation and recall basic facts about the dihedral groups, their modules, and restrictions and inductions of the latter. In Section \ref{s3}, we describe the actions of the restriction and induction functors on all simple modules over the dihedral groups. We use this to define the algebras $A_{P,\mathcal{M}}$, which depend on a choice of a set $P$ of prime numbers and a collection (satisfying a few closedness conditions) $\mathcal{M}$ of simple dihedral group modules. These algebras will be the main objects of study in this paper. Section \ref{s4} contains a couple of results on these algebras which can be obtained without restrictions on the orders of the involved dihedral groups. Section \ref{s5} is devoted to the study of the case of dihedral groups of the form $D_{2n}$ where $n$ is odd. 
We give a presentation and describe a basis of $A_{P,\mathcal{M}}$ in Theorems \ref{babybasisthm} and \ref{basisthm}. Furthermore, we describe the center of $A_{P,\mathcal{M}}$ in Theorem \ref{cthm} and use central idempotents to obtain a decomposition of $A_{P,\mathcal{M}}$ into a direct sum of two indecomposable algebras in Theorems \ref{babydecompthm} and \ref{decompthm}. In Corollary \ref{bicyccor}, we see that in certain cases, the indecomposable components of $A_{P,\mathcal{M}}$ can be described as tensor powers of the semigroup algebra of the classical bicyclic monoid (cf., e.g, \cite{CP64}). Finally, in Section \ref{s6}, we discuss the more difficult case involving all dihedral groups, tie up some loose ends, and speculate on possible further directions of study. \subsection*{Acknowledgements} The author is very much indebted to his advisor, Volodymyr Mazorchuk, for valuable discussions on the content of the paper as well as its presentation. \section{Preliminaries}\label{s2} \subsection{Miscellaneous notation, assumptions, and conventions} By $\mathbb{N}$ we denote the set of nonnegative integers; the set of positive integers we denote by $\mathbb{Z}_{>0}$. We use double brackets to denote intervals (open, closed or half-open) of integers. For instance $\llbracket 1,4\rrparenthesis=\{1,2,3\}$. All vector spaces (in particular modules, algebras etc) considered will be complex. All modules considered will be left modules. By angled brackets $\langle A|B\rangle$ we mean the algebraic structure (of a kind specified by the context) generated by the elements $A$ subject to the relations $B$. \subsection{Dihedral groups} For each integer $n\ge 3$, the dihedral group $D_{2n}$ is defined by \begin{equation*} D_{2n}=\langle r_n,s_n|r_n^n=1, s_nr_ns_n=r_n^{-1}\rangle, \end{equation*} and may be identified with its natural (real) representation, which is the group of symmetries of the regular $n$-gon inscribed in the unit circle such that $(1,0)$ is a vertex. 
Under this identification, $r_n$ corresponds to a rotation by $2\pi/n$, and $s_n$ corresponds to reflection with respect to the horizontal axis. Note that we leave dihedral groups undefined for $n=1,2$, in contrast to the Coxeter group convention, where it is natural to define dihedral groups also for these $n$. This will have important consequences for the structure of our main objects of study. For a brief discussion of the case where the dihedral groups for $n=1,2$ are also defined, see Section \ref{s6}. \subsection{Modules over dihedral groups} For any integer $n\ge 3$, let us define $V_{a,b}(n)$ to be the one-dimensional complex $D_{2n}$-module with $r_n$-action given by multiplication by $a$ and $s_n$-action given by multiplication by $b$. Here $b\in\{1,-1\}$, and $a=1$ if $n$ is odd while $a\in\{1,-1\}$ if $n$ is even. Also define for any integers $k$ and $n\ge 3$ the two-dimensional complex $D_{2n}$-module $W_k(n)$ with $r_n$-action given by $\begin{pmatrix} e^{2\pi i k/n}&0\\0&e^{-2\pi i k/n}\end{pmatrix}$ and $s_n$-action given by $\begin{pmatrix} 0&1\\1&0\end{pmatrix}$ (both matrices are with respect to the standard basis). Let us for technical reasons also define $V_{a,b}(n)=0$ and $W_k(n)=0$ whenever $n$ is not an integer greater than or equal to 3, or (in the latter case) when $k$ is not an integer. The modules $W_k(n)$ are further described in the following easy lemma. \begin{mylem} \label{wlem} \begin{enumerate} \item[$($i$)$] If $k\equiv \pm l\text{ (mod $n$)}$, then $W_k(n)\cong W_l(n)$. \item[$($ii$)$] The module $W_k(n)$ is indecomposable (hence simple) if $k\not\in\frac{1}{2}\mathbb{Z} n$. \item[$($iii$)$] If $k\in \mathbb{Z} n$, then $W_k(n)\cong V_{1,1}(n)\oplus V_{1,-1}(n)$. \item[$($iv$)$] If $k\in \frac{1}{2}\mathbb{Z} n\backslash \mathbb{Z} n$, then $W_k(n)\cong V_{-1,1}(n)\oplus V_{-1,-1}(n)$. 
\end{enumerate} \end{mylem} \begin{proof} Statement (i) holds because if $k\equiv l\text{ (mod $n$)}$ then $\id:W_k(n)\xrightarrow{\sim} W_l(n)$, and if $k\equiv -l\text{ (mod $n$)}$ then $s_n\cdot:W_k(n)\xrightarrow{\sim} W_l(n)$. Statements (ii), (iii) and (iv) hold because $\begin{pmatrix} 0&1\\1&0\end{pmatrix}$ has eigenvectors $(1,1)$ and $(1,-1)$, neither of which is an eigenvector of $\begin{pmatrix} e^{2\pi i k/n}&0\\0&e^{-2\pi i k/n}\end{pmatrix}$ unless $k\in\frac{1}{2}\mathbb{Z} n$, in which case they span the summands $V_{1,1}(n)$ and $V_{1,-1}(n)$ (respectively $V_{-1,1}(n)$ and $V_{-1,-1}(n)$). \end{proof} The classification of simple $D_{2n}$-modules is given in the following proposition, and a proof can be found e.g. as Theorems 3.4.1 and 3.4.2 in \cite{So14}. \begin{myprop} The simple $D_{2n}$-modules have either dimension 1 or 2. These are of the forms \begin{enumerate} \item[$($i$)$] $V_{a,b}(n)$, for any integer $n\ge 3$, for $b\in\{1,-1\}$, for $a=1$ in case of odd $n$, and for $a\in\{1,-1\}$ in case of even $n$. \item[$($ii$)$] $W_k(n)$, for any integer $n\ge 3$ and $k\in \llbracket 1,n/2\rrparenthesis$. \end{enumerate} Also, these modules are nonisomorphic. \end{myprop} \section{The restriction and induction functors}\label{s3} Let $D_{2n}\text{-Mod}$ denote the category of all left $D_{2n}$-modules. For any integer $p$, there is a natural inclusion \begin{equation} \label{incleq} \begin{aligned} D_{2n}&\hookrightarrow D_{2pn}\\ r_n&\mapsto r_{pn}^{p}\\ s_n&\mapsto s_{pn}. \end{aligned} \end{equation} Since any such inclusion factors into ones where $p$ is prime, we will throughout the rest of this text without loss of generality assume that $p$ is prime. 
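The matrices defining $W_k(n)$, and the dichotomy in the lemma above, can be verified numerically. The following is a minimal sketch (the helper functions are ours): it checks the defining relations $r_n^n=1$ and $s_nr_ns_n=r_n^{-1}$ on $W_k(n)$, and that the eigenvectors $(1,1)$ and $(1,-1)$ of the $s_n$-matrix are eigenvectors of the $r_n$-matrix exactly when $k\in\frac{1}{2}\mathbb{Z}n$.

```python
import cmath

def R(k, n):
    """Matrix of the r_n-action on W_k(n) in the standard basis."""
    w = cmath.exp(2j * cmath.pi * k / n)
    return ((w, 0), (0, 1 / w))

S = ((0, 1), (1, 0))    # matrix of the s_n-action
I2 = ((1, 0), (0, 1))   # identity

def mul(A, B):
    """Product of 2x2 matrices given as nested tuples."""
    return tuple(tuple(sum(A[i][t] * B[t][j] for t in range(2))
                       for j in range(2)) for i in range(2))

def close(A, B, eps=1e-9):
    return all(abs(A[i][j] - B[i][j]) < eps for i in range(2) for j in range(2))

def is_R_eigenvector(v, k, n, eps=1e-9):
    """Check whether v is an eigenvector of R(k, n) (vanishing cross product)."""
    w = tuple(sum(R(k, n)[i][j] * v[j] for j in range(2)) for i in range(2))
    return abs(w[0] * v[1] - w[1] * v[0]) < eps

n, k = 7, 2
Rn = I2
for _ in range(n):
    Rn = mul(Rn, R(k, n))
assert close(Rn, I2)                              # r_n^n = 1
assert close(mul(S, mul(R(k, n), S)), R(-k, n))   # s_n r_n s_n = r_n^{-1}

assert not is_R_eigenvector((1, 1), 2, 7)   # W_2(7) is simple
assert is_R_eigenvector((1, 1), 7, 7)       # k in Zn: splits as V_{1,1} + V_{1,-1}
assert is_R_eigenvector((1, -1), 3, 6)      # k in (1/2)Zn \ Zn: splits as V_{-1,1} + V_{-1,-1}
```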
With respect to these inclusions, we have the induction and restriction functors \begin{align*} \ind_n^{pn}: D_{2n}\text{-Mod}&\rightarrow D_{2pn}\text{-Mod}\\ M&\mapsto D_{2pn}\otimes_{\mathbb{C}[D_{2n}]} M \end{align*} and \begin{align*} \res_n^{pn}: D_{2pn}\text{-Mod}&\rightarrow D_{2n}\text{-Mod}\\ M&\mapsto M_{|D_{2n}}=D_{2n}\otimes_{\mathbb{C}[D_{2n}]} M \end{align*} respectively. In particular, it is understood that we will only consider induction and restriction between dihedral groups whose order differ by a prime factor. In what follows we will -- somewhat sloppily -- write $\res_p$ and $\ind_p$ instead of $\res_n^{pn}$ and $\ind_n^{pn}$ whenever the intended functors should be clear from the context. The following proposition describes the actions of the induction and restriction functors on simple modules. \begin{myprop} \label{resindprop} Restriction and induction act as follows on simple dihedral group modules. \begin{enumerate} \item[$($i$)$] $\res_p V_{a,b}(np)\cong\begin{cases} V_{a,b}(n), & \mbox{if } p\ne 2,\\ V_{|a|,b}(n) & \mbox{if } p=2. \end{cases}$ \item[$($ii$)$] $\res_p W_k(np)\cong W_k(n)\cong \begin{cases} W_{\pm k \text{ (mod $n$)}\in \llbracket 1,n/2\rrparenthesis}(n), & \mbox{if } k\not\in\frac{1}{2}\mathbb{Z} n,\\ V_{1,1}(n)\oplus V_{1,-1}(n) & \mbox{if } k\in\mathbb{Z} n,\\ V_{-1,1}(n)\oplus V_{-1,-1}(n) & \mbox{if $n$ is even and } k\in\frac{1}{2}\mathbb{Z} n\backslash \mathbb{Z} n. \end{cases}$ \item[$($iii$)$] $\ind_p V_{a,b}(n)\cong \begin{cases} V_{a,b}(pn)\oplus\bigoplus_{j\in \llbracket 1,\frac{p-1}{2}\rrbracket}W_{jn}(pn), & \mbox{if } p\ne 2\text{ and }a=1,\\ V_{a,b}(pn)\oplus\bigoplus_{j\in \llbracket 1,\frac{p-1}{2}\rrbracket}W_{(j-\frac{1}{2})n}(pn), & \mbox{if }p\ne 2\text{ and }a=-1\text{ (where $n$ is even)},\\ V_{1,b}(pn)\oplus V_{-1,b}(pn) & \mbox{if } p=2\text{ and }a=1,\\ W_{n/2}(pn) & \mbox{if } p=2\text{ and }a=-1\text{ (where $n$ is even)}. 
\end{cases}$ \item[$($iv$)$] $\ind_p W_k(n)\cong \begin{cases} W_k(pn)\oplus\bigoplus_{j\in\llbracket 1,\frac{p-1}{2}\rrbracket}(W_{-k+jn}(pn)\oplus W_{k+jn}(pn)), & \mbox{if } p\ne 2,\\ W_k(pn)\oplus W_{-k+n}(pn) & \mbox{if } p=2. \end{cases}$ \end{enumerate} \end{myprop} \begin{proof} {\bf Part (i):} The generator $r_n$ acts on $V_{a,b}(pn)_{|D_{2n}}$ via $r_{pn}^p$, which acts by $a$ if $p$ is odd, and by $|a|$ if $p$ is even. The generator $s_n$ acts on $V_{a,b}(pn)_{|D_{2n}}$ via $s_{pn}$, which acts by $b$. {\bf Part (ii):} The generator $r_n$ acts on $W_k(pn)_{|D_{2n}}$ via $r_{pn}^p$, which acts by $\begin{pmatrix} e^{2\pi i kp/(pn)}&0\\0&e^{-2\pi i kp/(pn)}\end{pmatrix}=\begin{pmatrix} e^{2\pi i k/n}&0\\0&e^{-2\pi i k/n}\end{pmatrix}$. The generator $s_n$ acts on $W_k(pn)_{|D_{2n}}$ via $s_{pn}$, which acts by $\begin{pmatrix} 0&1\\1&0\end{pmatrix}$. Hence we have the isomorphism $\res_p W_k(np)\cong W_k(n)$. The rest now follows from Lemma \ref{wlem}. {\bf Part (iii):} This is done by applying Frobenius reciprocity. {\bf Part (iv):} Likewise. \end{proof} The action of various inductions and restrictions on the simple modules over the dihedral groups defines a partial order on those simple modules: for modules $M$ and $N$ we define $M\le N$ if and only if $M\cong N$ or $M$ is a summand of some restriction of $N$. We may conveniently illustrate these actions using Hasse diagrams with respect to these partial orders; this is done in Figures 1 and 2. These diagrams are analogous to the Bratteli diagrams used for instance to study restriction and induction in the case of symmetric groups (see for instance \cite{BS05}), but here the underlying ordering of the group algebras is not linear and furthermore the trivial group algebra is not included. We will call these graphs \emph{induction/restriction diagrams}. 
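The restriction rules in parts (i) and (ii) of the proposition above are simple enough to implement directly. The following sketch (the tuple encoding of simple modules is ours, purely illustrative) normalises the parameter $k$ as in Lemma \ref{wlem} and checks that restriction preserves dimension, as it must for restriction along a group inclusion.

```python
def res_p(p, module):
    """Restriction from D_{2pn} to D_{2n} of a simple module, following
    parts (i) and (ii) of the proposition.  Simple modules are encoded
    (our convention) as ('V', a, b, n) or ('W', k, n)."""
    if module[0] == 'V':
        _, a, b, pn = module
        n = pn // p
        return [('V', a if p != 2 else abs(a), b, n)]
    _, k, pn = module
    n = pn // p
    k = k % n
    if k == 0:                        # k in Zn: splits with trivial r-action
        return [('V', 1, 1, n), ('V', 1, -1, n)]
    if 2 * k == n:                    # k in (1/2)Zn \ Zn, n even
        return [('V', -1, 1, n), ('V', -1, -1, n)]
    return [('W', min(k, n - k), n)]  # normalise k into [1, n/2)

def dim(module):
    return 1 if module[0] == 'V' else 2

# restriction preserves dimension
for p, M in [(3, ('W', 4, 15)), (3, ('W', 5, 15)), (2, ('W', 3, 12)),
             (2, ('V', -1, 1, 8)), (5, ('V', 1, -1, 15))]:
    assert sum(dim(N) for N in res_p(p, M)) == dim(M)
```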
\begin{comment} \dots &&&&&&&&&&&\\ W_m(p^2m)&&&&&&&&&&&\\ &&&&&&&&&&&\\ &&&&&&&&&&&\\ &&&&&&&&&&&\\ &&&&&&&&&&& \end{comment} \begin{sidewaysfigure} \label{figure1} \begin{centering} \xymatrix@!=1pc@C=0.01pt{ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&{\scriptstyle W_{1}(pm)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{-1+m}(pm)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{1+m}(pm)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{-1+2m}(pm)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle \dots}&&&&&{\scriptstyle W_{1+\frac{p-1}{2}m}(pm)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&{\scriptstyle \dots}&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&{\scriptstyle W_1(m)}\ar@{-}[uullllllllllll]\ar@{-}[uulllllll]\ar@{-}[uull]\ar@{-}[uurrr]\ar@{-}[uurrrrrrrrrrrrr]&&&&&&&&&&&&&&&&&&&&&&&{\scriptstyle W_2(m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&\dots&&&&{\scriptstyle W_{\frac{m-1}{2}}(m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&&&&&&&&&&\\ 
&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&{\scriptstyle W_m(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{-m+pm}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{m+pm}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{-m+2pm}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle \dots}&&&&&{\scriptstyle W_{m+\frac{p-1}{2}pm}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle V_{1,1}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{pm}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle W_{2pm}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle \dots}&&&&&{\scriptstyle W_{\frac{p-1}{2}pm}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle V_{1,-1}(p^2m)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&{\scriptstyle \dots}&&&&&&&&{\scriptstyle \dots}&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&{\scriptstyle V_{1,1}(pm)}\ar@{-}[uu]\ar@{-}[uurrrrr]\ar@{-}[uurrrrrrrrrr]\ar@{-}[uurrrrrrrrrrrrrrrrrrrr]&&&&&{\scriptstyle W_m(pm)}\ar@{-}[uullllllllllllllllllllllllllllllllll]\ar@{-}[uulllllllllllllllllllllllllllll]\ar@{-}[uullllllllllllllllllllllll]\ar@{-}[uulllllllllllllllllll]\ar@{-}[uulllllllll]&&&&&{\scriptstyle W_{2m}(pm)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle \dots}&&&&&{\scriptstyle W_{\frac{p-1}{2}}(pm)}\ar@{-}[ul]\ar@{-}[ull]\ar@{-}[u]\ar@{-}[ur]\ar@{-}[urr]&&&&&{\scriptstyle V_{1,-1}(pm)}\ar@{-}[uu]\ar@{-}[uulllll]\ar@{-}[uulllllllllllllll]\ar@{-}[uullllllllllllllllllll]&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&{\scriptstyle \dots}&&&&&&&&&&&&&&{\scriptstyle \dots}&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&{\scriptstyle 
V_{1,1}(m)}\ar@{-}[uu]\ar@{-}[uurrrrr]\ar@{-}[uurrrrrrrrrr]\ar@{-}[uurrrrrrrrrrrrrrrrrrrr]&&&&&&&&&&&&&&&&&&&&&&&&&{\scriptstyle V_{1,-1}(m)}\ar@{-}[uu]\ar@{-}[uulllll]\ar@{-}[uulllllllllllllll]\ar@{-}[uullllllllllllllllllll]&& } \end{centering} \caption{The induction/restriction diagram of all simple $D_{2mp^l}$-modules, where $p$ is an odd prime, where $l\in\mathbb{N}$, and where $m$ is either equal to $p$ or odd and not divisible by $p$. The diagram has $\frac{m+1}{2}$ connected components.} \end{sidewaysfigure} \begin{sidewaysfigure} \label{figure2} \begin{centering} \xymatrix@!=1pc@C=0.01pt{ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ {\scriptstyle V_{1,1}(8m)}\ar@{-}[ur]\ar@{-}[u]&&&&{\scriptstyle V_{-1,1}(8m)}\ar@{-}[ur]&&&&{\scriptstyle W_{m}(8m)}\ar@{-}[ul]\ar@{-}[ur]&&&&{\scriptstyle W_{2m}(8m)}\ar@{-}[ul]\ar@{-}[ur]&&&&{\scriptstyle W_{-m+4m}(8m)}\ar@{-}[ul]\ar@{-}[ur]&&&&{\scriptstyle V_{-1,-1}(8m)}\ar@{-}[ul]&&&&{\scriptstyle V_{1,-1}(8m)}\ar@{-}[ul]\ar@{-}[u]&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ {\scriptstyle V_{1,1}(4m)}\ar@{-}[uu]\ar@{-}[uurrrr]&&&&{\scriptstyle 
V_{-1,1}(4m)}\ar@{-}[uurrrrrrrr]&&&&&&&&{\scriptstyle W_{m}(4m)}\ar@{-}[uullll]\ar@{-}[uurrrr]&&&&&&&&{\scriptstyle V_{-1,-1}(4m)}\ar@{-}[uullllllll]&&&&{\scriptstyle V_{1,-1}(4m)}\ar@{-}[uu]\ar@{-}[uullll]&&&&&{\scriptstyle W_1(4m)}\ar@{-}[ul]\ar@{-}[ur]&&&&{\scriptstyle W_{-1+2m}(4m)}\ar@{-}[ul]\ar@{-}[ur]&&&&{\scriptstyle W_{-1+m}(4m)}\ar@{-}[ul]\ar@{-}[ur]&&&&{\scriptstyle W_{1+m}(4m)}\ar@{-}[ul]\ar@{-}[ur]&&&&&&&&\textcolor{gray}{{\scriptstyle W_{-1+4}(8)}}\ar@{-}@[gray][ur]\ar@{-}@[gray][ul]&&&&\textcolor{gray}{{\scriptstyle W_{1+4}(8)}}\ar@{-}@[gray][ur]\ar@{-}@[gray][ul]&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ {\scriptstyle V_{1,1}(2m)}\ar@{-}[uu]\ar@{-}[uurrrr]&&&&{\scriptstyle V_{-1,1}(2m)}\ar@{-}[uurrrrrrrr]&&&&&&&&&&&&&&&&{\scriptstyle V_{-1,-1}(2m)}\ar@{-}[uullllllll]&&&&{\scriptstyle V_{1,-1}(2m)}\ar@{-}[uu]\ar@{-}[uullll]&&&&&&&{\scriptstyle W_1(2m)}\ar@{-}[uull]\ar@{-}[uurr]&&&&&&&&{\scriptstyle W_{-1+m}(2m)}\ar@{-}[uull]\ar@{-}[uurr]&&&&&&&&&&&&\textcolor{gray}{{\scriptstyle W_1(4)}}\ar@{-}@[gray][uurr]\ar@{-}@[gray][uull]&&&&&&&&&&&&\\ &&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&&\\ {\scriptstyle V_{1,1}(m)}\ar@{-}[uu]\ar@{-}[uurrrr]&&&&&&&&&&&&&&&&&&&&&&&&{\scriptstyle V_{1,-1}(m)}\ar@{-}[uu]\ar@{-}[uullll]&&&&&&&&&&&{\scriptstyle W_1(m)}\ar@{-}[uullll]\ar@{-}[uurrrr]&&&& \dots&&&&{\scriptstyle W_{\frac{m-1}{2}}(m)}\ar@{-}[ul]\ar@{-}[ur]&&&&&&\textcolor{gray}{{\scriptstyle V_{-1,1}(2)}}\ar@{-}@[gray][uurr]&&&&\textcolor{gray}{{\scriptstyle V_{-1,-1}(2)}}\ar@{-}@[gray][uull]&&&&&&&&&& } \end{centering} \caption{The induction/restriction diagram of all simple $D_{2mp^l}$-modules, where $p=2$, where $l\in\mathbb{N}$, and where $m$ is either equal to $p$ or not divisible by $p$. 
The diagram has $\frac{m+1}{2}$ connected components; the grayed out (rightmost) component is excluded in case $m\ne p$, and included in place of the $\frac{m-1}{2}$ components to its left in case $m=p$.} \end{sidewaysfigure} \subsection{Induction and restriction module structure on Grothendieck groups of dihedral group modules} Define induction and restriction functors $\ind_p$ and $\res_p$ on $\bigoplus_{n\ge 3} D_{2n}\text{-Mod}$ by setting \begin{equation*} {\ind_p}_{|D_{2n}\text{-Mod}}=\ind_n^{pn} \end{equation*} and \begin{equation*} {\res_p}_{|D_{2n}\text{-Mod}}=\begin{cases} \res_{n/p}^n, & \mbox{if }p|n\text{ and }n\ne p,\\ 0, & \mbox{otherwise}. \end{cases}, \end{equation*} and extending via additivity. These functors then also induce endomorphisms on the Grothendieck group \begin{equation*} \groth[\bigoplus_{n\ge 3} D_{2n}\text{-Mod}], \end{equation*} where we note that the split Grothendieck group and the regular Grothendieck group coincide because of Maschke's Theorem. By further abuse of notation, the induced functors will also be denoted by $\res_p$ and $\ind_p$ respectively. \subsection{The algebras \texorpdfstring{$A_{P,\mathcal{M}}$}{APM}} For $P$ being any set of primes, define $A_P$ to be the free algebra generated by the symbols $\res_p$ and $\ind_p$ with $p\in P$. By abuse of notation, let us sometimes omit the set brackets of singletons and also write $A_p=A_{\{p\}}$. Then the complexified Grothendieck group \begin{equation*} \mathcal{G}=\mathbb{C}\otimes_{\mathbb{Z}}\groth[\bigoplus_{n\ge 3} D_{2n}\text{-Mod}] \end{equation*} becomes an $A_P$-module with action induced by the actions of $\res_p$ and $\ind_p$ on the Grothendieck group. For any submodule $\mathcal{M}\subset \mathcal{G}$, let $\ann_{A_P}(\mathcal{M})$ be the ideal of elements of $A_P$ that annihilate each element of $\mathcal{M}$, and let \begin{equation*} A_{P,\mathcal{M}}=A_{P}/\ann_{A_P}(\mathcal{M}). 
\end{equation*} The action of $A_P$ on $\mathcal{M}$ induces an action of $A_{P,\mathcal{M}}$ on $\mathcal{M}$ in the obvious way. In what follows we will study the algebra $A_{P,\mathcal{M}}$ as well as $\ann_{A_P}(\mathcal{M})$, the latter being the kernel of the natural surjection \begin{equation*} \varphi_{P,\mathcal{M}}:A_P\rightarrow A_{P,\mathcal{M}}. \end{equation*} We observe that for any $z_1,z_2\in A_P$ we have that $\varphi_{P,\mathcal{M}}(z_1)=\varphi_{P,\mathcal{M}}(z_2)$ if and only if $z_1M=z_2M$ for all $M\in\mathcal{M}$, a fact that will be used extensively in the proofs to follow. By further abuse of notation, we will often let $\res_p$ and $\ind_p$ denote also the images $\varphi_{P,\mathcal{M}}(\res_p)$ and $\varphi_{P,\mathcal{M}}(\ind_p)$ respectively when no confusion should occur. The following lemma is obvious. \begin{mylem} \label{wloglem} Let $P$ be some set of primes, let $\mathcal{N}\subset\mathcal{M}\subset\mathcal{G}$ be $A_P$-submodules and let $z\in\ker(\varphi_{P,\mathcal{M}})$. Then also $z\in\ker(\varphi_{P,\mathcal{N}})$. \end{mylem} It is clear that the representation of $A_P$ by $\mathcal{M}$ factors through $A_{P,\mathcal{M}}$ via a natural surjection and that $A_{P,\mathcal{M}}$ is terminal with this property. The following proposition offers another way to think about $A_{P,\mathcal{M}}$. \begin{myprop} The $A_P$-module $A_{P,\mathcal{M}}$ satisfies that for any nonzero $a\in A_{P,\mathcal{M}}$ there exists a homomorphism of $A_P$-modules \begin{equation*} A_{P,\mathcal{M}}\rightarrow \mathcal{M} \end{equation*} which does not annihilate $a$, and $A_{P,\mathcal{M}}$ is the unique maximal quotient of the regular $A_{P}$-module that satisfies this property. \end{myprop} \begin{proof} The homomorphisms $A_{P,\mathcal{M}}\rightarrow \mathcal{M}$ correspond precisely to the mappings $1\mapsto M\in\mathcal{M}$ by the extension $a\mapsto aM$ for all $a\in A_{P,\mathcal{M}}$. 
The quotient $A_{P,\mathcal{M}}$ of $A_P$ is obtained by quotienting out precisely the set of $a\in A_P$ for which $aM=0$ for every $M\in\mathcal{M}$, and these $a$ are precisely those for which there cannot exist a homomorphism of $A_P$-modules $A_{P,\mathcal{M}}\rightarrow \mathcal{M}$ which does not annihilate $a$. \end{proof} \subsection{Termini and nadirs} We conclude the present section by introducing certain features of monomials in $A_P$, which will be very useful to consider in the coming proofs. Throughout this subsection, let $z\in A_P$ be a (non-commutative) nonzero monomial, i.e., up to a scalar, a sequence of various $\res_q$ and $\ind_q$ symbols, where $q$ ranges over $P$, and let $p\in P$ be fixed. We call the total number of $\ind_p$ in $z$ minus the total number of $\res_p$ in $z$ the \emph{terminus} of $z$ with respect to $p$. Such a terminus will most often be denoted by $e_p$. We call a terminal subsequence (i.e., a right monomial factor) $z'$ of $z$ a \emph{nadir in} $z$ with respect to $p$ if the number of $\ind_p$ minus the number of $\res_p$ in $z'$ is minimal over all terminal subsequences of $z$. This number will be called the \emph{nadir of} $z$ with respect to $p$, and will most often be denoted by $d_p$. Note that the word nadir will thus be used in two different (albeit related) ways distinguished by the choice of preposition. In particular, the nadir of a sequence with respect to a fixed prime number is unique, while a nadir in $z$ with respect to that prime is not necessarily unique. If $z'$ is a nadir in $z$ with respect to all $p\in P$ simultaneously, then we call $z'$ a \emph{total nadir} in $z$. The following lemma is a first example of results which rely on these concepts. \begin{mylem} \label{nadirlem} Let $P$ be any set of primes, and let $z\in A_P$ be a monomial. For $p\in P$, let $d_p$ be the nadir of $z$ with respect to $p$. 
\begin{enumerate} \item[$($i$)$] If there is no total nadir in $z$, then, for every simple $D_{2n}$-module $L$, we have $zL=0$ if and only if $p^{-d_p}\not|n$ for some $p\in P$. \item[$($ii$)$] If there is a total nadir in $z$, then, for any simple $D_{2n}$-module $L$, we have that $zL=0$ if and only if $p^{-d_p}\not|n$ for some $p\in P$ or $n=2\prod_{p\in P}p^{-d_p}$. \end{enumerate} \end{mylem} \begin{proof} For $z$ of degree 1, the statement of the lemma is clear from the definitions of the actions of $\res_p$ and $\ind_p$ on simple $D_{2n}$-modules. From Proposition \ref{resindprop} we see that the structure constants of these actions are all nonnegative, so $zL=0$ if and only if at some point in computing $zL$, a $\res_p$ is applied to $D_{2m}$-modules with $p\not|m$ or $m=p$. The result follows. \end{proof} \section{Relations between restrictions and inductions for modules over dihedral groups of general order}\label{s4} We may say a few things about the relations in the general algebras $A_{P,\mathcal{M}}$. \begin{myprop} \label{relsprop} For any set of primes $P$, any $A_P$-submodule $\mathcal{M}\subset\mathcal{G}$, and any $p,q\in P$, the following equalities hold. \begin{enumerate} \item[$($i$)$] $\varphi_{P,\mathcal{M}}(\res_p\res_q)=\varphi_{P,\mathcal{M}}(\res_q\res_p)$. \item[$($ii$)$] $\varphi_{P,\mathcal{M}}(\ind_p\ind_q)=\varphi_{P,\mathcal{M}}(\ind_q\ind_p)$. \end{enumerate} \end{myprop} \begin{proof} {\bf Part (i):} In cases where $\res_p$ or $\res_q$ acts by 0, the statement is clear. Assume therefore that this is not the case. The compositions $D_{2n}\hookrightarrow D_{2pn}\hookrightarrow D_{2pqn}$ and $D_{2n}\hookrightarrow D_{2qn}\hookrightarrow D_{2pqn}$ of inclusions as in \eqref{incleq} are the same, hence for any $D_{2pqn}$-module $M$ we have \begin{equation*} (M_{|D_{2pn}})_{|D_{2n}}\cong (M_{|D_{2qn}})_{|D_{2n}} \end{equation*} so that $\res_p\res_qM=\res_q\res_pM$.
Hence $\res_p\res_q-\res_q\res_p$ lies in the kernel of $\varphi_{P,\mathcal{M}}$. {\bf Part (ii):} This follows from part (i) by Frobenius reciprocity. \end{proof} \begin{myprop} \label{somecommprop} For any distinct primes $p$ and $q$ and any simple $D_{2pn}$-module $L\in\mathcal{G}$ where $n>1$ (equivalently any simple $D_{2m}$-module such that $\res_pL\ne 0$), we have that \begin{equation*} \res_p\ind_qL=\ind_q\res_pL. \end{equation*} \end{myprop} \begin{proof} We have that \begin{align*} \ind_q\res_p L&\cong \mathbb{C}[D_{2qn}]\otimes_{\mathbb{C}[D_{2n}]}\mathbb{C}[D_{2n}]\otimes_{\mathbb{C}[D_{2n}]}\mathbb{C}[D_{2pn}]\otimes_{\mathbb{C}[D_{2pn}]}L\\&\cong \mathbb{C}[D_{2qn}]\otimes_{\mathbb{C}[D_{2n}]}\mathbb{C}[D_{2pn}]\otimes_{\mathbb{C}[D_{2pn}]}L \end{align*} and \begin{align*} \res_p\ind_qL&\cong \mathbb{C}[D_{2qn}]\otimes_{\mathbb{C}[D_{2qn}]}\mathbb{C}[D_{2pqn}]\otimes_{\mathbb{C}[D_{2pn}]}\mathbb{C}[D_{2pn}]\otimes_{\mathbb{C}[D_{2pn}]}L\\&\cong \mathbb{C}[D_{2pqn}]\otimes_{\mathbb{C}[D_{2pn}]}L. \end{align*} It then suffices to show that the homomorphism of $\mathbb{C}[D_{2qn}]$-$\mathbb{C}[D_{2pn}]$-bimodules \begin{align*} f:\mathbb{C}[D_{2qn}]\otimes_{\mathbb{C}[D_{2n}]}\mathbb{C}[D_{2pn}]&\rightarrow \mathbb{C}[D_{2pqn}]\\x\otimes_{\mathbb{C}[D_{2n}]}y&\mapsto xy \end{align*} induced by the inclusion \eqref{incleq} is in fact an isomorphism. It is clear from \eqref{incleq} that $f(1\otimes_{\mathbb{C}[D_{2n}]}s_{pn})=s_{pqn}$, that $f(1\otimes_{\mathbb{C}[D_{2n}]}r_{pn})=r_{pqn}^q$, and that $f(r_{qn}\otimes_{\mathbb{C}[D_{2n}]} 1)=r_{pqn}^p$. Since $p$ and $q$ are distinct primes, the Diophantine equation $pu+qv=1$ is solvable in integers $u$ and $v$, so that we have $f(r_{qn}^u\otimes_{\mathbb{C}[D_{2n}]}r_{pn}^v)=r_{pqn}^{pu+qv}=r_{pqn}$. Since $s_{pqn}$ and $r_{pqn}$ generate $D_{2pqn}$, we get that $f$ is surjective.
The module $\mathbb{C}[D_{2qn}]\otimes_{\mathbb{C}[D_{2n}]}\mathbb{C}[D_{2pn}]$ has dimension $2qn\cdot 2pn/(2n)=2pqn$, which is also the dimension of $\mathbb{C}[D_{2pqn}]$. Hence $f$ must indeed be an isomorphism. \end{proof} \begin{comment} \begin{myprop} \label{rels2prop} Let $p$ be a prime number and let $M$ be a simple $D_{2pn}$-module. Then the following hold. \begin{enumerate} \item[$($i$)$] $\res_p\circ\res_p\circ\ind_p M\cong \res_p\circ\ind_p\circ\res_p M \text{ if $(p,M)\ne(2,V_{-1,b}(pn))$}$. \item[$($ii$)$] $\res_p\circ\ind_p\circ\ind_p M\cong \ind_p\circ\res_p\circ\ind_p M \text{ if $(p,M)\ne(2,V_{1,b}(pn))$}$. \end{enumerate} \end{myprop} \begin{proof} \mbox{} \begin{enumerate} \item[$($i$)$] For $p\ne 2$, the result is immediate from applying $\res_p$ to the expressions for $\res_p\circ\ind_p$ and $\ind_p\circ\res_p$ in the proof of Proposition \ref{relsprop}. [The $p=2$ part is done by direct computation, and is so far omitted in this preprint.] \item[$($ii$)$] For $p\ne 2$, the result follows from the previous part and Frobenius reciprocity. \end{enumerate} \end{proof} \end{comment} \section{Results for the algebras \texorpdfstring{$A_{P,\mathcal{M}}$}{APM} for modules over dihedral groups of order not divisible by 4}\label{s5} Throughout this section, $P$ will be a set of odd prime numbers, and $\mathcal{M}\subset\mathcal{G}$ will be an $A_P$-submodule spanned by simple modules over dihedral groups $D_{2n}$ with $n$ odd. For the main results we will furthermore require that for each $n$, either all simple $D_{2n}$-modules or none lie in $\mathcal{M}$. This latter condition means that $\mathcal{M}$ will consist of entire induction/restriction diagrams as in Figure 1, rather than merely some connected components. Allowing for $\mathcal{M}$ which do not satisfy this condition would give rise to an unwieldy number of additional cases, although it seems that these too should in principle be amenable to the methods used in the paper.
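The terminus and nadir bookkeeping from the previous section is purely combinatorial, so it can be checked mechanically when working through the examples of this section. The following minimal Python sketch (the encoding of a monomial as a left-to-right list of (operation, prime) pairs, and all function names, are our own illustrations, not notation from the paper) implements the definitions directly; the monomials below are those of Example \ref{nadirex}.

```python
def terminus(word, p):
    """Terminus of a monomial w.r.t. p: the total number of ind_p factors
    minus the total number of res_p factors occurring in the word."""
    return sum(1 if op == "ind" else -1 for op, q in word if q == p)

def nadir(word, p):
    """Nadir of the word w.r.t. p: the minimal terminus over all terminal
    subsequences (right monomial factors), the empty factor included."""
    return min(terminus(word[i:], p) for i in range(len(word) + 1))

def has_total_nadir(word, primes):
    """True iff some right factor is simultaneously a nadir in the word
    with respect to every prime in `primes` (a total nadir)."""
    return any(all(terminus(word[i:], p) == nadir(word, p) for p in primes)
               for i in range(len(word) + 1))

# Monomials of Example \ref{nadirex}, leftmost factor first:
# z1 = ind_5 res_5 ind_3 res_3, z2 = ind_3 res_3 ind_5 res_5,
# and the "missing link" z3 = ind_3 ind_5 res_3 res_5.
z1 = [("ind", 5), ("res", 5), ("ind", 3), ("res", 3)]
z2 = [("ind", 3), ("res", 3), ("ind", 5), ("res", 5)]
z3 = [("ind", 3), ("ind", 5), ("res", 3), ("res", 5)]
```

Consistently with the discussion in Example \ref{nadirex}, this reports that neither $z_1$ nor $z_2$ has a total nadir, while the reordering $z_3=\ind_3\ind_5\res_3\res_5$ does.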
The main objective of the present section is to find a basis and a generating set of relations for the algebra $A_{P,\mathcal{M}}$. We will need to consider two cases, depending on whether or not $\mathcal{M}$ contains a $D_{2n}$-module with all prime factors of $n$ lying in $P$. These cases will be developed in parallel, and culminate in Theorems \ref{babybasisthm} and \ref{basisthm} respectively. \subsection{A translation of the induction/restriction diagrams} Lemma \ref{partfunlem} below may seem quite technical, but it formalizes something which is fairly easy to corroborate on an intuitive level by looking at the induction/restriction diagram in Figure 1. It morally says the following: Pick any vertex (i.e. simple module) and consider the subdiagram formed by adding all vertices which are both connected to the starting one and also lie at the same level as or higher than the starting one. Then this subdiagram is isomorphic to one of the connected components of the entire induction/restriction diagram. It may nevertheless be preferable to skip ahead at this point and refer back to the lemma and its proof when it is used in Proposition \ref{nadirprop} and Lemma \ref{deplem}. Examples \ref{nadirex} and \ref{depex} illustrate the latter results and should shed further light also on the ideas behind Lemma \ref{partfunlem}. The following notation for the two-dimensional simple modules over the dihedral groups will prove convenient in the statement of the lemma. For an odd prime $p$, let \begin{equation*} \mathcal{I}=\llbracket 1,p\rrbracket^* \end{equation*} be the free monoid generated by $\llbracket 1,p\rrbracket$. Let $n\ge 3$ be an odd integer, let $k\in \mathbb{Z}$, and set $W_{k}^{()}(n)=W_{k}(n)$. For $I\in\mathcal{I}$, let $\len(I)$ denote the length of the word $I$.
For each odd prime $p$, define inductively $W_{k}^I(n)$ for all $I\in\mathcal{I}$ by considering the set \begin{equation*} K_{p,n,I}=\{ k'\in\mathbb{N}: 0<k'<\frac{np^{\len(I)+1}}{2}, \res_p(W_{k'}(np^{\len(I)+1}))=W_{k}^I(n)\}, \end{equation*} by letting $k_{p,n,I,j}$ be the $j$th smallest element in $K_{p,n,I}$ for each $j\in\llbracket 1,p\rrbracket$ (this choice of ordering is not essential), and finally defining \begin{equation*} W_{k}^{I(j)}(n)=W_{k_{p,n,I,j}}(np^{\len(I)+1}). \end{equation*} \begin{mylem} \label{partfunlem} Let $P$ be some set of odd primes, let $p\in P$, and let $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules, for $n$ odd, and such that for a fixed $n$ either every simple $D_{2n}$-module or none lies in $\mathcal{M}$. Let $J$ range over $\mathbb{N}$. Define linear partial functions (i.e. linear maps each defined only on some subspace of its domain) $\Phi_{p,m,J}$ by linearly extending \begin{align*} \Phi_{p,m,J}:\mathcal{M}&\rightarrow\mathcal{M}\\ V_{a,b}(mp^{J'})&\mapsto V_{a,b}(mp^{J'+J})\\ W_{k}(mp^{J'})&\mapsto W_{kp^{J}}(mp^{J'+J}), \end{align*} for all $k$ divisible by $p$, for all $J'\in \mathbb{N}$ and for all odd $m$ with either $m=p$ or $p\not|m$ such that the modules lie in $\mathcal{M}$. Also define, for all $k,k'\in\mathbb{Z}$ and odd $m$ satisfying that $m=p$ or $p\not|m$ and $k\in\llbracket 1,\frac{m-1}{2}\rrbracket$, linear partial functions $\Psi_{p,k,m,k',J}$ by linearly extending \begin{align*} \Psi_{p,k,m,k',J}:\mathcal{M}&\rightarrow\mathcal{M}\\ W_k^I(m)&\mapsto W_{k'}^I(mp^J) \end{align*} for all $I\in\mathcal{I}$. The following statements then hold. \begin{enumerate} \item[$($i$)$] For any $D_{2mp^l}$-module $M$ and any $J\in\mathbb{N}$, there is a $\Gamma\in\{\Phi_{p,m,J},\Psi_{p,k,m,k',J}:k,k'\in\mathbb{Z}\}$ such that $M\in\dom(\Gamma)$. \item[$($ii$)$] The domains $\dom(\Phi_{p,m,J})$ and $\dom(\Psi_{p,k,m,k',J})$ are closed under the action of $A_{P,\mathcal{M}}$.
\item[$($iii$)$] There exist partial linear functions \begin{equation*} \Phi_{p,m,J}^{-1}:\mathcal{M}\rightarrow\mathcal{M} \end{equation*} and \begin{equation*} \Psi_{p,k,m,k',J}^{-1}:\mathcal{M}\rightarrow\mathcal{M} \end{equation*} which are partial inverses to $\Phi_{p,m,J}$ and $\Psi_{p,k,m,k',J}$ respectively, i.e. $\Phi_{p,m,J}^{-1}\circ \Phi_{p,m,J}=\id_{\dom(\Phi_{p,m,J})}$, $\Phi_{p,m,J}\circ \Phi_{p,m,J}^{-1}=\id_{\dom(\Phi^{-1}_{p,m,J})}$, $\Psi_{p,k,m,k',J}^{-1}\circ \Psi_{p,k,m,k',J}=\id_{\dom(\Psi_{p,k,m,k',J})}$, and $\Psi_{p,k,m,k',J}\circ \Psi_{p,k,m,k',J}^{-1}=\id_{\dom(\Psi^{-1}_{p,k,m,k',J})}$. \item[$($iv$)$] Let $M\in\mathcal{M}$ be a simple module and $z\in A_{P,\mathcal{M}}$ be a monomial such that $zM\ne 0$. If $M\in\dom(\Phi_{p,m,J})$, we have \begin{equation*} \Phi_{p,m,J}zM=z\Phi_{p,m,J}M. \end{equation*} If $M\in\dom(\Psi_{p,k,m,k',J})$, we have \begin{equation*} \Psi_{p,k,m,k',J}zM=z\Psi_{p,k,m,k',J}M. \end{equation*} \end{enumerate} \end{mylem} \begin{proof} {\bf Part (i):} Every $V_{a,b}(mp^l)$ and every $W_k(mp^l)$ with $p|k$ lies in the domain of some $\Phi_{p,m,J}$, while every $W_k(mp^l)$ with $p\not|k$ lies in the domain of some $\Psi_{p,k,m,k',J}$. {\bf Part (ii):} It is clear from Proposition \ref{resindprop} that the set of modules of the forms $V_{a,b}(mp^{J'})$ and $W_k(mp^{J'})$ with $k$ divisible by $p$ is closed under the action of $A_{P,\mathcal{M}}$, and that the same holds for the set of modules of the form $W_k^I(m)$ where $k\in\llbracket 1,\frac{m-1}{2}\rrbracket$. {\bf Part (iii):} The partial functions $\Phi_{p,m,J}$ and $\Psi_{p,k,m,k',J}$ are clearly injective on their domains, so they have partial inverses with domains $\im(\Phi_{p,m,J})$ and $\im(\Psi_{p,k,m,k',J})$ respectively. {\bf Part (iv):} Let us consider only the case of the partial functions $\Phi_{p,m,J}$ (the proof of the statement for $\Psi_{p,k,m,k',J}$ is analogous). Let $A,B\in \dom(\Phi_{p,m,J})$ for some $p$, $m$ and $J$.
Under the assumption $\res_p(A)\ne 0$, we have that \begin{equation} \label{commeq} \begin{aligned} 1=\dim\Hom(A,\ind_p B)&\Leftrightarrow\\ 1=\dim\Hom(\res_p A,B)&\Leftrightarrow\\ 1=\dim\Hom(\res_p\Phi_{p,m,J}(A),\Phi_{p,m,J}(B))&\Leftrightarrow\\ 1=\dim\Hom(\Phi_{p,m,J}(A),\ind_p\Phi_{p,m,J}(B)). \end{aligned} \end{equation} Here all $\Hom$ spaces are taken in the category $\bigoplus_{n\ge 3} D_{2n}\text{-Mod}$. We use Frobenius reciprocity for the first and third equivalences. For the second equivalence, we can use Proposition \ref{resindprop} to verify for every pair of simple modules $A$ and $B$ that $B$ occurs as a summand of $\res_p(A)$ if and only if $\Phi_{p,m,J}(B)$ occurs as a summand of $\res_p(\Phi_{p,m,J}(A))$. For instance, $V_{1,1}(mp^2)$ is a summand of $\res_p(W_{mp^2}(mp^3))\cong V_{1,1}(mp^2)\oplus V_{1,-1}(mp^2)$, and indeed $\Phi_{p,m,1}(V_{1,1}(mp^2))=V_{1,1}(mp^3)$ is a summand of $\res_p(\Phi_{p,m,1}(W_{mp^2}(mp^3)))=\res_p(W_{mp^3}(mp^4))\cong V_{1,1}(mp^3)\oplus V_{1,-1}(mp^3)$. Because the dimensions of $\Hom(\res_pA,B)$ for various modules $B$ encode the result of applying $\res_p$ to $A$, because the dimensions of $\Hom(A,\ind_pB)$ for various modules $A$ encode the result of applying $\ind_p$ to $B$, and because part (i) ensures that $A,B\in\dom(\Phi_{p,m,J})$ causes no restriction in this encoding, we get that the desired result follows for $z$ of degree 1. From this, the more general result is immediate. \end{proof} \begin{myprop} \label{nadirprop} Let $P$ be a set of odd primes, and let $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd. Let $z_1\in A_P$ be a monomial, and let $z_2$ be the result of reordering the factors of $z_1$ in a way such that the relative order of factors $\res_p$ and $\ind_p$ for each fixed $p\in P$ is unchanged. Assume that at least one of the following conditions holds. \begin{enumerate} \item Either none or both of $z_1$ and $z_2$ have a total nadir.
\item There is no simple $D_{2n}$-module in $\mathcal{M}$ such that all prime factors of $n$ belong to $P$. \end{enumerate} Then \begin{equation*} \varphi_{P,\mathcal{M}}(z_1)=\varphi_{P,\mathcal{M}}(z_2). \end{equation*} \end{myprop} \begin{proof} By Lemma \ref{wloglem} we may assume that $\mathcal{M}$ satisfies the assumptions of Lemma \ref{partfunlem}. Let $L\in\mathcal{M}$ be any simple $D_{2n}$-module. We have by Lemma \ref{nadirlem} that either $z_1L=0=z_2L$ or $z_1L\ne 0\ne z_2L$ (here we use one of the two assumptions). In the first case we are done, so let us consider the latter case. If the second assumption holds, the result follows immediately from Propositions \ref{relsprop} and \ref{somecommprop}. If at least one of the assumptions holds, the following argument applies. By Lemmata \ref{nadirlem} and \ref{partfunlem} we may (using the notation of the latter lemma) pick a sequence of \begin{equation*} \Gamma_i\in\{\Phi_{p,m,J},\Psi_{p,k,m,k',J}|p,m,J,k,k'\text{ ranging over all possibilities allowed by Lemma \ref{partfunlem}}\} \end{equation*} with the index $i$ ranging from 1 to some positive integer $l$, and furthermore a partial inverse $\Gamma_i^{-1}$ of each $\Gamma_i$, such that $z'\Gamma_1\circ\dots\circ\Gamma_lL$ is well-defined and nonzero for all possible results $z'$ of reordering the factors of $z_1$. By Proposition \ref{somecommprop} we have for $p,q\in P$ distinct that \begin{equation*} \res_p\ind_qL'=\ind_q\res_pL' \end{equation*} for any module $L'$ such that $\res_pL'\ne 0$.
Also using Proposition \ref{relsprop}, we by Lemma \ref{partfunlem} and the choice of our $\Gamma_i$ then have \begin{align*} z_1L&=\Gamma_l^{-1}\circ\dots\circ\Gamma_1^{-1}\circ\Gamma_1\circ\dots\circ\Gamma_lz_1L\\&=\Gamma_l^{-1}\circ\dots\circ\Gamma_1^{-1}z_1\Gamma_1\circ\dots\circ\Gamma_lL\\&=\Gamma_l^{-1}\circ\dots\circ\Gamma_1^{-1}z_2\Gamma_1\circ\dots\circ\Gamma_lL\\&=\Gamma_l^{-1}\circ\dots\circ\Gamma_1^{-1}\circ\Gamma_1\circ\dots\circ\Gamma_lz_2L=z_2L. \end{align*} \end{proof} \begin{myex} \label{nadirex} Neither of the monomials $z_1=\ind_5\res_5\ind_3\res_3$ and $z_2=\ind_3\res_3\ind_5\res_5$ has a total nadir, so according to Proposition \ref{nadirprop}, we have $\varphi_{P,\mathcal{M}}(z_1)=\varphi_{P,\mathcal{M}}(z_2)$ for any $P$ containing $3$ and $5$, and $\mathcal{M}\subset\mathcal{G}$ as in the proposition statement. In particular, we should expect \begin{equation*} z_1V_{1,1}(15)=z_2V_{1,1}(15). \end{equation*} This is indeed the case, as confirmed by the following computations: \begin{align*} &z_1V_{1,1}(15)=\ind_5\res_5\ind_3\res_3V_{1,1}(15)=\ind_5\res_5\ind_3V_{1,1}(5)\\&=\ind_5\res_5(V_{1,1}(15)\oplus W_{5}(15))=\ind_5(V_{1,1}(3)\oplus W_2(3))\\&=V_{1,1}(15)\oplus W_3(15)\oplus W_6(15)\oplus W_2(15)\oplus W_1(15)\oplus W_5(15)\oplus W_4(15)\oplus W_7(15), \end{align*} and \begin{align*} &z_2 V_{1,1}(15)=\ind_3\res_3\ind_5\res_5 V_{1,1}(15)=\ind_3\res_3\ind_5 V_{1,1}(3)\\&=\ind_3\res_3(V_{1,1}(15)\oplus W_3(15)\oplus W_6(15))=\ind_3(V_{1,1}(5)\oplus W_2(5)\oplus W_1(5))\\&=V_{1,1}(15)\oplus W_5(15)\oplus W_2(15)\oplus W_3(15)\oplus W_7(15)\oplus W_1(15)\oplus W_4(15)\oplus W_6(15), \end{align*} which agree by direct comparison. It is not, however, the case that $z_1V_{1,1}(15)$ is invariant under elementary transpositions of the factors of $z_1$, even when the composition of these transpositions takes $z_1$ to $z_2$.
Indeed, we have such transpositions \begin{align*} &z_1=\ind_5\res_5\ind_3\res_3\rightsquigarrow \ind_5\ind_3\res_5\res_3\rightsquigarrow \ind_3\ind_5\res_5\res_3\\&\rightsquigarrow \ind_3\ind_5\res_3\res_5\rightsquigarrow \ind_3\res_3\ind_5\res_5=z_2 \end{align*} but \begin{equation*} \ind_3\ind_5\res_3\res_5 V_{1,1}(15)=\ind_3\ind_5\res_3 V_{1,1}(3)=\ind_3\ind_5 0=0\ne z_1 V_{1,1}(15). \end{equation*} This is because $\ind_3\ind_5\res_3\res_5$ has a total nadir, so we cannot apply Proposition \ref{somecommprop}. In order to circumvent this problem in the proof of Proposition \ref{nadirprop}, for the ``missing link'' $\ind_3\ind_5\res_3\res_5$ we instead compute \begin{align*} &\Phi_{3,5,1}^{-1}\ind_3\ind_5\res_3\res_5\Phi_{3,5,1}V_{1,1}(15)=\Phi_{3,5,1}^{-1}\ind_3\ind_5\res_3\res_5 V_{1,1}(45)\\&=\Phi_{3,5,1}^{-1}\ind_3\ind_5 V_{1,1}(3)=\Phi_{3,5,1}^{-1}\ind_3(V_{1,1}(15)\oplus W_3(15)\oplus W_6(15))\\&=\Phi_{3,5,1}^{-1}(V_{1,1}(45)\oplus W_{15}(45)\oplus W_3(45)\oplus W_{12}(45)\oplus W_{18}(45)\oplus W_6(45)\oplus W_9(45)\oplus W_{21}(45))\\&=V_{1,1}(15)\oplus W_5(15)\oplus W_1(15)\oplus W_4(15)\oplus W_6(15)\oplus W_2(15)\oplus W_3(15)\oplus W_7(15). \end{align*} This is indeed equal to $z_1V_{1,1}(15)$ and $z_2V_{1,1}(15)$. \end{myex} \begin{mylem} \label{deplem} Let $P$ be some set of odd primes and let $\mathcal{M}\subset\mathcal{G}$ be any $A_{P,\mathcal{G}}$-submodule spanned by $D_{2n}$-modules with odd $n$, such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$. Let also $S\subset A_{P}$ be a set of monomials whose image in $A_{P,\mathcal{M}}$ is linearly dependent and minimal with this property. Then the following hold. \begin{enumerate} \item[$($i$)$] The respective termini of the elements in $S$ with respect to each prime in $P$ are equal, and the respective nadirs of the elements in $S$ with respect to each prime in $P$ are equal.
\item[$($ii$)$] If in addition $\mathcal{M}$ consists of $D_{2n}$-modules ($n$ is not necessarily fixed) with all prime factors of $n$ belonging to $P$, then either all elements in $S$ have a total nadir or none has. \end{enumerate} \end{mylem} \begin{proof} Let $\gamma\in A_P$ be a nonzero linear combination of elements in $S$ whose image in $A_{P,\mathcal{M}}$ is zero. {\bf Part (i):} Let $M\in\mathcal{M}$ be an arbitrary simple $D_{2n}$-module. By assumption, we have $\gamma(M)=0$. Fix an arbitrary $p\in P$, and let $e_p$ be the largest terminus and $d_p$ the largest nadir of any element in $S$ with respect to $p$. Write $\gamma=\alpha'+\beta'$, where the termini of the terms in $\alpha'$ with respect to $p$ equal $e_p$ while the termini of the terms in $\beta'$ with respect to $p$ are less than $e_p$. Then $\alpha'(M)$ is a linear combination of $D_{2np^{e_p}}$-modules (or trivially zero if $np^{e_p}$ is not an integer $\ge 3$) while $\beta'(M)$ is a linear combination of modules over other groups, so we must have $\alpha'(M)=0$. Hence by the minimality of $S$, there are no monomial terms in $\gamma$ except for those occurring already in $\alpha'$, so we must have $\beta'=0$ and $\gamma=\alpha'$. As for the corresponding statement for nadirs, write $n=mp^{J_1}$, where either $p\not|m$ or $m=p$. Write also $\gamma=\alpha+\beta$, where the nadirs of the terms in $\alpha$ with respect to $p$ equal $d_p$ while the nadirs of the terms of $\beta$ with respect to $p$ are less than $d_p$. It follows from our assumptions that $\alpha\ne 0$. Assume towards a contradiction that also $\beta\ne 0$. Since $M$ is arbitrary, it now suffices to show that also $\alpha(M)=0$ in order to contradict the minimality of $S$. We may without loss of generality assume that $J_1\ge -d_p$, since otherwise $\alpha(M)=0$ trivially.
If $\res_p^{-d_p}(M)$ has nonzero projection onto some $V_{a,b}(m')$, then $M$ is either of the form $M=V_{a,b}(mp^{J_1})$ or of the form $M=W_k(mp^{J_1})$ for some $k$ divisible by $mp^{J_1+d_p}$. Define in this case $\Gamma=\Phi_{p,m,J_1+d_p}$. Then either $\Gamma(V_{a,b}(mp^{-d_p}))=M$ or $\Gamma(W_{kp^{-J_1-d_p}}(mp^{-d_p}))=M$. If instead $\res_p^{-d_p}(M)$ has zero projection onto all one-dimensional simple dihedral group modules, then $M$ may be written in the forms $M=W_k(mp^{J_1})=W_{k'}^{I\cdot I'}(m')$ with $\res_p^{-d_p}(M)=W_k(mp^{J_1+d_p})=W_{k'}^{I}(m')$ simple. Define in this case $\Gamma=\Psi_{p,1,m,k,J_1+d_p}$. Then $\Gamma(W_1^{I'}(m))=M$. Let $\Gamma^{-1}$ be the partial inverse of $\Gamma$ (the existence of which is given by part (iii) of Lemma \ref{partfunlem}). In any of the above cases we have that $M\in\im(\Gamma)$, so that $M\in\dom(\Gamma^{-1})$. Let $M'=\Gamma^{-1}(M)$. Using the assumption on $\gamma$, we compute \begin{equation*} 0=\gamma(M')=(\alpha+\beta)(M')=\alpha(M'), \end{equation*} where for the last equality we used Lemma \ref{nadirlem} together with the facts that the nadirs of the terms of $\beta$ with respect to $p$ are smaller than $d_p$ and $M'$ is a $D_{2mp^{-d_p}}$-module. This implies that indeed \begin{equation*} 0=\Gamma(\alpha(M'))=\alpha(\Gamma(M'))=\alpha(M), \end{equation*} where for the second equality we used part (iv) of Lemma \ref{partfunlem} together with the fact that no monomial term in $\alpha$ annihilates $M'$. This latter fact in turn follows from $M'$ being a $D_{2mp^{-d_p}}$-module, the terms of $\alpha$ having nadir $d_p$ with respect to $p$, and Lemma \ref{nadirlem}. {\bf Part (ii):} Let $M\in\mathcal{M}$ be an arbitrary simple $D_{2n}$-module (where we have assumed that the prime factors of $n$ all belong to $P$). By our assumption on $\gamma$, we have $\gamma(M)=0$.
Write $\gamma=\alpha+\beta$, where this time no monomial term in $\alpha$ has a total nadir while every monomial term in $\beta$ does. Assume towards a contradiction that $\alpha\ne 0\ne\beta$ (that $\alpha\ne 0$ implies in particular that $|P|\ge 2$). By part (i) we may assume that for each $p\in P$, the same number $d_p$ is the nadir of every monomial term of $\gamma$ with respect to $p$. We may also assume, as in part (i), that $np^{d_p}$ is an integer $\ge 3$ for every $p\in P$. In particular, we may assume that $n$ is not a prime power, since otherwise $\alpha(M)=0$ already because the terms of $\alpha$ must have nonzero nadirs with respect to at least two different primes. Fix any total order on $P$, and an indexing such that for $p_i,p_j\in P$ we have $p_i<p_j$ if and only if $i<j$, where $i,j\in\llbracket 1,|P|\rrbracket$. Define some partial functions $\Gamma_i$ and modules $M_i$ as follows. Let first $M_0=M$. Then inductively let $\Gamma_{i+1}$ be constructed out of $M_i$ and $p_{i+1}$ as $\Gamma$ was constructed out of $M$ and $p$ in part (i), and set $M_{i+1}=\Gamma_{i+1}^{-1}(M_i)$, up to $i=|P|-1$. If $M_i$ is a $D_{2n_i}$-module, note that $n_i$ will contain a factor $p^{-d_p}$ for every $p\in P$. Since $d_p\ne 0$ for at least two choices of $p\in P$, no $n_i$ is a prime power. In particular, it follows that the $m$-value in each step $i$ of the construction will satisfy $p_i\not|m$, hence that the finally obtained module $M_{|P|}$ is a $D_{2\prod_{p\in P}p^{-d_p}}$-module. Similarly to part (i), we now compute \begin{equation*} 0=\gamma(M_{|P|})=(\alpha+\beta)(M_{|P|})=\alpha(M_{|P|}), \end{equation*} where for the last equality we used part (ii) of Lemma \ref{nadirlem} together with the facts that the terms of $\beta$ have a total nadir and $M_{|P|}$ is a $D_{2\prod_{p\in P}p^{-d_p}}$-module.
This as before implies that \begin{equation*} 0=\Gamma_1\circ\dots\circ\Gamma_{|P|}(\alpha(M_{|P|}))=\alpha(\Gamma_1\circ\dots\circ\Gamma_{|P|}(M_{|P|}))=\alpha(M), \end{equation*} where for the second equality we used part (iv) of Lemma \ref{partfunlem} together with the fact that no monomial term of $\alpha$ annihilates $M_{|P|}$. This latter fact in turn follows from $M_{|P|}$ being a $D_{2\prod_{p\in P}p^{-d_p}}$-module, the terms of $\alpha$ having no total nadir, and Lemma \ref{nadirlem}. \end{proof} \begin{myex} \label{depex} Consider the following situation, which is a very special case of the proof of part (i) of Lemma \ref{deplem}. Let $P$ be a set of odd primes with $3\in P$, and let $\alpha,\beta\in A_P$ be linear combinations of monomials such that the nadir of every monomial term of $\alpha$ with respect to $3$ is $-1$ while the nadir of every monomial term of $\beta$ with respect to $3$ is $-2$. Assume that \begin{equation*} (\alpha+\beta)W_{-1+5}(15)=0. \end{equation*} We will show that \begin{equation*} \alpha W_{-5+15}(45)=0. \end{equation*} Note that the action of $\res_3$ maps the modules $W_5(45)$, $W_{-5+15}(45)$ and $W_{5+15}(45)$ to $W_{5}(15)$. Hence $W_{-5+15}(45)=W_5^{(2)}(15)$. Similarly, $W_{-1+5}(15)=W_1^{(2)}(5)$. Because every monomial term of $\beta$ must annihilate $W_{-1+5}(15)$ by Lemma \ref{nadirlem}, we have \begin{align*} 0=(\alpha+\beta)W_{-1+5}(15)=\alpha W_{-1+5}(15), \end{align*} from which it follows that \begin{align*} 0=\Psi_{3,1,5,5,1}\alpha W_{-1+5}(15)=\alpha\Psi_{3,1,5,5,1}W_{-1+5}(15)=\alpha W_{-5+15}(45). \end{align*} \end{myex} \begin{mycor} \label{welldefcor} Let $P$ be some set of odd primes and let $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$.
Furthermore, let $p\in P$ be arbitrary, and let $z_1,z_2\in A_P$ be monomials such that $\varphi_{P,\mathcal{M}}(z_1)=\varphi_{P,\mathcal{M}}(z_2)$. Then the respective termini and nadirs of $z_1$ and $z_2$ with respect to $p$ are equal. \end{mycor} \begin{proof} If $\varphi_{P,\mathcal{M}}(z_1)=\varphi_{P,\mathcal{M}}(z_2)$, then $z_1-z_2\in\ker(\varphi_{P,\mathcal{M}})$. Now apply Lemma \ref{deplem} to $S=\{z_1,z_2\}$. \end{proof} For an image, $z\in A_{P,\mathcal{M}}$, of a monomial in $A_P$, we define termini and nadirs of $z$ to be those of a monomial representative in $A_P$. By Corollary \ref{welldefcor}, these are well-defined. \subsection{Additional relations of \texorpdfstring{$A_{P,\mathcal{M}}$}{APM}} \begin{mylem} \label{rellem} Let $P$ be a set of odd primes, and let $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd. Let $p\in P$. Then \begin{equation*} \varphi_{P,\mathcal{M}}(\res_p\res_p\ind_p\ind_p)=\varphi_{P,\mathcal{M}}((p+1)\res_p\ind_p-p). \end{equation*} \end{mylem} \begin{proof} By direct computation using Proposition \ref{resindprop} (or by looking at Figure 1) we have for any $n\ge 3$ that \begin{align*} (p -(p+1)\res_p\ind_p+\res_p\res_p\ind_p\ind_p)(W_k(n))&=pW_k(n)-(p+1)pW_k(n)+p^2W_k(n)\\&=0, \end{align*} and \begin{align*} &(p -(p+1)\res_p\ind_p+\res_p\res_p\ind_p\ind_p)(V_{a,b}(n))\\&=pV_{a,b}(n)-(p+1)(V_{a,b}(n)+\frac{p-1}{2}(V_{a,b}(n)+V_{a,-b}(n)))+V_{a,b}(n)\\&+\frac{p-1}{2}(V_{a,b}(n)+V_{a,-b}(n))+p\frac{p-1}{2}(V_{a,b}(n)+V_{a,-b}(n))=0. \end{align*} The desired result follows. \end{proof} \begin{mylem} \label{mixrel} Let $P$ be a set of odd primes, and let $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd. Let $p,q\in P$.
Then \begin{equation*} \varphi_{P,\mathcal{M}}(\frac{1}{p-1}(\res_p\ind_p-p))=\varphi_{P,\mathcal{M}}(\frac{1}{q-1}(\res_q\ind_q-q)). \end{equation*} \end{mylem} \begin{proof} For $n\ge 3$, we have \begin{equation*} (\res_p\ind_p-p)W_k(n)=0=(\res_q\ind_q-q)W_k(n). \end{equation*} Also \begin{equation*} \frac{1}{p-1}(\res_p\ind_p-p)V_{1,b}(n)=\frac{-V_{1,b}(n)+V_{1,-b}(n)}{2}=\frac{1}{q-1}(\res_q\ind_q-q)V_{1,b}(n). \end{equation*} \end{proof} The following is a corollary of Proposition \ref{relsprop}, Proposition \ref{nadirprop} and Lemma \ref{mixrel}. Note that from here on we will abuse notation and relax the distinction between $\res_p$ and $\ind_p$ on the one hand and their images under $\varphi_{P,\mathcal{M}}$ on the other. \begin{mycor} \label{ccor} Let $P$ be a set of odd primes, let $p\in P$, and let $\mathcal{M}\subset\mathcal{G}$ be any $A_{P,\mathcal{G}}$-submodule spanned by $D_{2n}$-modules with odd $n$. Then the center of $A_{P,\mathcal{M}}$ contains the element $\res_p\ind_p$. \end{mycor} \begin{proof} We may without loss of generality assume that $P$ contains all odd primes. It suffices to show that $\res_p\ind_p$ commutes with $\res_q$ and $\ind_q$ for all primes $q$. If $q=p$ we may pick any odd prime $p'\ne p$ and first use Lemma \ref{mixrel} to rewrite \begin{equation*} \res_p\ind_p=\frac{p-1}{p'-1}(\res_{p'}\ind_{p'}-p')+p. \end{equation*} Thus we may assume that $q\ne p$. It is clear that $\res_q\res_p\ind_p$, $\res_p\ind_p\res_q$, $\ind_q\res_p\ind_p$ and $\res_p\ind_p\ind_q$ all have total nadirs (note that we may ignore the factor $\res_p\ind_p$ when determining whether a terminal subword is a total nadir; for instance, $\res_q$ and $\res_p\ind_p\res_q$ are both total nadirs in $\res_p\ind_p\res_q$), so the result now follows from Proposition \ref{nadirprop}.
\end{proof} \begin{mylem} \label{nadirendlem} Let $P$ be some set of odd primes and let $\mathcal{M}\subset\mathcal{G}$ be any $A_{P,\mathcal{G}}$-submodule spanned by $D_{2n}$-modules with odd $n$, such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$. Let $p\in P$, and let $z\in A_{p,\mathcal{M}}\subset A_{P,\mathcal{M}}$ be the image of a monomial. Let further $d_p$ be the nadir and $e_p$ be the terminus of $z$ with respect to $p$. Let $p_1\in P$ be arbitrary. Then the following hold. \begin{enumerate} \item[$($i$)$] If $d_p=0$, then $z$ may be written as a linear combination of monomials of the form \begin{equation*} (\res_{p_1}\ind_{p_1})^t\ind_p^l \end{equation*} with $t\in\{ 0,1\}$ and $l\in\mathbb{N}$. In particular, the empty subword is a nadir in $z$ with respect to $p$. \item[$($ii$)$] If $d_p=e_p$, then $z$ may be written as a linear combination of monomials of the form \begin{equation*} (\res_{p_1}\ind_{p_1})^t\res_p^k \end{equation*} with $t\in\{ 0,1\}$ and $k\in\mathbb{N}$. In particular, $z$ is a nadir in $z$ with respect to $p$. \item[$($iii$)$] If $0\ne d_p\ne e_p$, then $z$ may be written as a linear combination of monomials of the form \begin{equation*} (\res_{p_1}\ind_{p_1})^t\ind_p^l\res_p^k \end{equation*} with $t\in\{ 0,1\}$ and $k,l\in\mathbb{Z}_{>0}$. In particular, neither the empty subword nor $z$ is a nadir in $z$ with respect to $p$. \end{enumerate} \end{mylem} \begin{proof} The result follows immediately from Lemma \ref{rellem}, Lemma \ref{mixrel}, and Corollary \ref{ccor}. \end{proof} \subsection{A basis for \texorpdfstring{$A_{P,\mathcal{M}}$}{APM}} We may now describe a basis for our algebra $A_{P,\mathcal{M}}$. Following Theorem \ref{basisthm} is an example which illustrates the proof in some very specific cases. \begin{mythm} \label{babybasisthm} Let $P$ be a set of odd primes.
Let also $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$, and furthermore such that there is no simple $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$. Fix any total order on $P$ (say the restriction of the usual one on $\mathbb{N}$) and index the elements of $P$ by $p_i<p_j$ with $i,j\in\llbracket 1,|P|\rrbracket$ if and only if $i<j$. Then the monomials of the forms \begin{equation*} (\res_{p_1}\ind_{p_1})^t\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}} \end{equation*} with $t\in\{ 0,1\}$ and $k_i,l_i\in\mathbb{N}$ form a basis of $A_{P,\mathcal{M}}$, and the relations of $A_{P,\mathcal{M}}$ are generated by the following ones. \begin{enumerate} \item[$($i$)$] $\res_p\res_q=\res_q\res_p$. \item[$($ii$)$] $\ind_p\ind_q=\ind_q\ind_p$. \item[$($iii$)$] $\ind_q\res_p=\res_p\ind_q$,\\ for $p\ne q$. \item[$($iv$)$] $\res_p\res_p\ind_p\ind_p=(p+1)\res_p\ind_p-p$. \item[$($v$)$] $\frac{1}{p-1}(\res_p\ind_p-p)=\frac{1}{q-1}(\res_q\ind_q-q)$. \end{enumerate} \end{mythm} \begin{proof} That the monomials $(\res_{p_1}\ind_{p_1})^t\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}}$ span $A_{P,\mathcal{M}}$ follows readily from Proposition \ref{nadirprop} and Lemma \ref{nadirendlem}. For the proof of linear independence, we refer to the first part of the proof of linear independence for Theorem \ref{basisthm}, which applies mutatis mutandis here too (note that while the proof of Theorem \ref{basisthm} refers to the present theorem, it does so only in the final paragraph of the linear independence proof, so there is no circularity). 
Since every $z\in A_{P,\mathcal{M}}$ can be written as a linear combination of the basis elements using the relations (i)-(v) (via Proposition \ref{nadirprop}, Lemma \ref{rellem}, Corollary \ref{ccor}, and Lemma \ref{nadirendlem}), these relations indeed generate all relations of $A_{P,\mathcal{M}}$. \end{proof} \begin{mythm} \label{basisthm} Let $P$ be a set of odd primes. Let also $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$, and furthermore such that there is a simple $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$. Fix any total order on $P$ (say the restriction of the usual one on $\mathbb{N}$) and index the elements of $P$ by $p_i<p_j$ with $i,j\in\llbracket 1,|P|\rrbracket$ if and only if $i<j$. Then the monomials of the forms \begin{enumerate} \item[$($i$)$] $(\res_{p_1}\ind_{p_1})^t\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}}$\\ with $t\in\{ 0,1\}$, and $k_i,l_i\in\mathbb{N}$, \item[$($ii$)$] $(\res_{p_1}\ind_{p_1})^t\ind_{p_1}^{l_1}\res_{p_1}^{k_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_{|P|}}^{k_{|P|}}$\\ with $t\in\{ 0,1\}$, and $k_i,l_i\in\mathbb{N}$ such that $k_i\ne 0\ne l_i$ for at least two choices of $i$, \item[$($iii$)$] $(\res_{p_1}\ind_{p_1})^t\res_{p_i}^k\ind_{p_j}^l$\\ with $t\in\{ 0,1\}$, with $i\ne j$, and $k,l\in\mathbb{Z}_{>0}$, \item[$($iv$)$] $(\res_{p_1}\ind_{p_1})^t\res_{p_{i\text{ $($mod }|P|)+1}}\ind_{p_i}^l\res_{p_i}^k\ind_{p_{i\text{ $($mod }|P|)+1}}$\\ with $t\in\{ 0,1\}$, and $k,l\in\mathbb{Z}_{>0}$, \item[$($v$)$] $(\res_{p_1}\ind_{p_1})^t\ind_{p_j}^l\res_{p_j}^k\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}$\\ with $t\in\{ 0,1\}$, with $j\in\llbracket 1,|P|\rrbracket$, with $k,l\in\mathbb{Z}_{>0}$, and $l_i\in\mathbb{N}$ such that $l_j=0$ but $l_i\ne 0$ for at least one $i$, \item[$($vi$)$] 
$(\res_{p_1}\ind_{p_1})^t\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}}\ind_{p_j}^l\res_{p_j}^k$\\ with $t\in\{ 0,1\}$, with $j\in\llbracket 1,|P|\rrbracket$, with $k,l\in\mathbb{Z}_{>0}$, and $k_i\in\mathbb{N}$ such that $k_j=0$ but $k_i\ne 0$ for at least one $i$, \end{enumerate} form a basis of $A_{P,\mathcal{M}}$, and the relations of $A_{P,\mathcal{M}}$ are generated by the following ones. \begin{enumerate} \item[$($i$)$] $\res_p\res_q=\res_q\res_p$. \item[$($ii$)$] $\ind_p\ind_q=\ind_q\ind_p$. \item[$($iii$)$] $z_1=z_2$,\\ where $z_2$ is the result of reordering the factors of $z_1$ in a way such that the relative order of factors $\res_p$ and $\ind_p$ for each fixed $p\in P$ is unchanged, and where either both or none of $z_1$ and $z_2$ has a total nadir. \item[$($iv$)$] $\res_p\res_p\ind_p\ind_p=(p+1)\res_p\ind_p-p$. \item[$($v$)$] $\frac{1}{p-1}(\res_p\ind_p-p)=\frac{1}{q-1}(\res_q\ind_q-q)$. \end{enumerate} \end{mythm} \begin{proof} We will use Lemma \ref{nadirendlem} and also use the notation $d_p$ and $e_p$ from that lemma. Let us first show that an arbitrary monomial $z\in A_{P,\mathcal{M}}$ can be written as a linear combination of monomials of the forms in the theorem statement. For $i\in \llbracket 1,|P|\rrbracket$, let $z_i$ be the maximal subword of $z$ consisting entirely of factors $\res_{p_i}$ and $\ind_{p_i}$. First consider the case when there is a total nadir in $z$. We may then write $z=z'z''$, where $z''$ is a total nadir in $z$. Let $z'_i$ be the maximal subword of $z'$ consisting entirely of factors $\res_{p_i}$ and $\ind_{p_i}$, and $z''_i$ be the maximal subword of $z''$ consisting entirely of factors $\res_{p_i}$ and $\ind_{p_i}$. By Proposition \ref{nadirprop} we have \begin{equation*} z=z'_1\dots z'_{|P|}z''_1\dots z''_{|P|}.
\end{equation*} Now apply Lemma \ref{nadirendlem} to write each $z'_i$ as a linear combination of monomials of the form $(\res_{p_1}\ind_{p_1})^t\ind_{p_i}^l$ with $t\in\{0,1\}$ and $l\in\mathbb{N}$ depending on $i$, and also write each $z''_i$ as a linear combination of monomials of the form $(\res_{p_1}\ind_{p_1})^t\res_{p_i}^k$ with $t\in\{0,1\}$ and $k\in\mathbb{N}$ depending on $i$. Now apply Lemma \ref{rellem} and Corollary \ref{ccor} to see that $z$ may be written as a linear combination of monomials of the form (i). Next we consider several cases where there is no total nadir in $z$. Note that this in particular may only be the case if $|P|>1$. Since there is no total nadir in $z$, we can not have that $d_p=0$ for all $p\in P$, and neither that $d_p=e_p$ for all $p\in P$. Let us say that $z$ \emph{starts on a $p$-edge} if $d_p\ne 0$ while $d_q=0$ for $q\ne p$, and let us say that $z$ \emph{ends on a $p$-edge} if $d_p\ne e_p$ while $d_q=e_q$ for $q\ne p$. First consider the case where $z$ neither starts nor ends on a $p$-edge for any $p\in P$. Then by part (iii) of Lemma \ref{nadirendlem}, we have for at least two values of $i$ that there is a nadir in $z_i$ with respect to $p$ which equals neither the empty word nor the entire $z_i$. Similarly to above we then have by Proposition \ref{nadirprop} that \begin{equation*} z=z_1\dots z_{|P|}, \end{equation*} where the $z_i$ may be written as a linear combination of monomials as in parts (i)-(iii) of Lemma \ref{nadirendlem}, with part (iii) being the case for at least two different values of $i$. As above, apply Lemma \ref{rellem} and Corollary \ref{ccor} to see that $z$ may be written as a linear combination of monomials of the form (ii) in the theorem statement. Next consider the case where $z$ starts on a $p_i$-edge and ends on a $p_j$-edge, where $i\ne j$. 
Then by parts (i) and (ii) of Lemma \ref{nadirendlem}, we have that $z_j$ is not a nadir in $z_j$ with respect to $p_j$ and that the empty subword is not a nadir in $z_i$ with respect to $p_i$. Thus we have by Proposition \ref{nadirprop} that \begin{equation*} z=z'z_iz_j, \end{equation*} where $z'$ is any rearrangement of the factors in $z$ that do not lie in $z_i$ or $z_j$. Note that by Lemma \ref{nadirendlem} we have that $z'$ can be written as a linear combination of elements of the form $(\res_{p_1}\ind_{p_1})^t$ with $t\in\{0,1\}$, that $z_i$ can be written as a linear combination of elements of the form $(\res_{p_1}\ind_{p_1})^t\res_{p_i}^k$ with $t\in\{0,1\}$ and $k$ depending on $i$, and finally that $z_j$ can be written as a linear combination of elements of the form $(\res_{p_1}\ind_{p_1})^t\ind_{p_j}^l$ with $t\in\{0,1\}$ and $l$ depending on $j$. As before, apply Lemma \ref{rellem} and Corollary \ref{ccor} to see that $z$ may be written as a linear combination of monomials of the form (iii) in the theorem statement. Next consider the case where $z$ starts and ends on a $p_i$-edge. Let $z=z'z''$ where $z''$ is a nadir in $z$ with respect to $p_i$. Then the empty subword and $z$ are both nadirs in $z$ with respect to any $p_j$ with $j\ne i$. Because there is no total nadir in $z$, we must then have that $z''$ contains a factor $\ind_q$ for some $q\ne p_i$, and that $z'$ contains the factor $\res_q$. Using Proposition \ref{nadirprop}, Lemma \ref{rellem} and Corollary \ref{ccor} we may write $z$ as a linear combination of monomials of the form $(\res_{p_1}\ind_{p_1})^t\res_q\ind_{p_i}^l\res_{p_i}^k\ind_q$. By Lemma \ref{nadirendlem} we may assume that $k,l> 0$, and by Lemma \ref{rellem} together with Proposition \ref{nadirprop} we may assume that $q$ is any prime different from $p_i$, say $q=p_{i\text{ $($mod }|P|)+1}$, obtaining the form stated in part (iv). Next consider the case where $z$ starts on a $p_j$-edge but does not end on a $p$-edge for any $p\in P$.
By parts (i) and (iii) of Lemma \ref{nadirendlem} together with Proposition \ref{nadirprop}, we may write $z$ as a linear combination of monomials of the form stated in part (v), which have no total nadir because $\res_{p_j}^k\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}$ is the unique nadir in $(\res_{p_1}\ind_{p_1})^t\ind_{p_j}^l\res_{p_j}^k\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}$ with respect to $p_j$, and this can not be a nadir with respect to every other $p_i$. Finally consider the case where $z$ ends on a $p_j$-edge but does not start on a $p$-edge for any $p\in P$. By parts (ii) and (iii) of Lemma \ref{nadirendlem} together with Proposition \ref{nadirprop}, we may write $z$ as a linear combination of monomials of the form stated in part (vi), which similarly to the monomials in case (v) have no total nadir. Now for linear independence. Let us first consider the case where every (rather than just some) simple $D_{2n}$-module $M$ in $\mathcal{M}$ satisfies that every prime factor of $n$ lies in $P$. It is clear from Lemma \ref{nadirlem} that the monomials of the forms (i)-(vi) are nonzero. From Lemma \ref{deplem} we may immediately rule out all linear dependences except for ones of the forms \begin{equation*} cz=\res_{p_1}\ind_{p_1}z, \end{equation*} where $z$ is of one of the forms (i)-(vi) and $c\in\mathbb{C}$. Assume towards a contradiction that there exist some such $z$ and $c$. In particular we must have \begin{equation*} (c-\res_{p_1}\ind_{p_1})zW_1(n)=0 \end{equation*} for arbitrary $n$ satisfying $W_1(n)\in\mathcal{M}$. Note from Proposition \ref{resindprop} that $zW_1(n)$ for some $n$ chosen using Lemma \ref{nadirlem} is a nonzero linear combination of modules of the form $W_{k'}(n')$ (for some fixed $n'$ and various $k'$), and by the same proposition that $\res_{p_1}\ind_{p_1}W_{k'}(n')=p_1W_{k'}(n')$, so that we must have $c=p_1$. Similarly, we have $\res_{p_1}\ind_{p_1}(V_{a,b}(n')+V_{a,-b}(n'))=p_1(V_{a,b}(n')+V_{a,-b}(n'))$. 
Again from Proposition \ref{resindprop} we see that the following property of an element $M\in\mathcal{M}$ with coefficients in $\mathbb{N}$ (with respect to the natural basis of simple dihedral group modules) is invariant under the action of any monic monomial in $A_P$: for every $m$ for which at least one of the two coefficients is nonzero, the $V_{1,1}(m)$-coefficient is larger than the $V_{1,-1}(m)$-coefficient. Combining this with the above paragraph, we get that \begin{equation*} (p_1-\res_{p_1}\ind_{p_1})zV_{1,1}(n)=c'(p_1-\res_{p_1}\ind_{p_1})V_{1,1}(n') \end{equation*} for some $c'\in\mathbb{C}^*$ and some $n'$. However, we may by direct computation verify that $(p_1-\res_{p_1}\ind_{p_1})V_{1,1}(n')=\frac{p_1-1}{2}(V_{1,1}(n')-V_{1,-1}(n'))\ne 0$, contradicting \begin{equation*} (p_1-\res_{p_1}\ind_{p_1})zV_{1,1}(n)=0. \end{equation*} Since every $z\in A_{P,\mathcal{M}}$ can be written as a linear combination of the basis elements using the relations (i)-(v) (via Proposition \ref{nadirprop}, Lemma \ref{rellem}, Corollary \ref{ccor}, and Lemma \ref{nadirendlem}), these relations indeed generate all relations of $A_{P,\mathcal{M}}$. Finally consider the more general case where some (but not necessarily every) simple $D_{2n}$-module $M\in\mathcal{M}$ satisfies that every prime factor of $n$ lies in $P$. Let $\mathcal{M}\cong\mathcal{N}\oplus\mathcal{N}'$, where $\mathcal{N}\subset\mathcal{M}$ is the submodule that is spanned by those $M$ that satisfy the aforementioned condition, and $\mathcal{N}'$ is spanned by those which do not. From Lemma \ref{wloglem}, we see that $A_{P,\mathcal{N}}$ has all the relations of $A_{P,\mathcal{M}}$ but potentially additional ones as well. Any such additional relation would be of the form $z=0$, where $z$ annihilates every module in $\mathcal{N}$ but not every module in $\mathcal{N}'$.
However, we have by Theorem \ref{babybasisthm} a complete description of the relations of $A_{P,\mathcal{N}'}$, from which we see that every relation of $A_{P,\mathcal{N}}$ is also a relation of $A_{P,\mathcal{N}'}$, hence of $A_{P,\mathcal{M}}$. \end{proof} \begin{myex} We will exhibit examples of how monomials may be written as a linear combination of the monomials (i)-(vi) in Theorem \ref{basisthm} by going through the steps described in the proof. Let $P=\{3,5,7\}$. {\bf Case (i): } \begin{align*} &\ind_3\res_5^2\res_3\ind_5=\ind_3\res_3\res_5^2\ind_5=\ind_3\res_3\res_5(\res_5\ind_5)\\&=(\res_5\ind_5)\ind_3\res_3\res_5=(\frac{5-1}{3-1}(\res_3\ind_3-3)+5)\ind_3\res_3\res_5\\&=(2\res_3\ind_3-1)\ind_3\res_3\res_5=2(\res_3\ind_3)\ind_3\res_3\res_5-\ind_3\res_3\res_5. \end{align*} {\bf Case (ii): } \begin{align*} &\ind_3\res_3\ind_5\res_3\ind_3\res_5\ind_3\res_3=\ind_3(\res_3\res_3\ind_3\ind_3)\res_3\ind_5\res_5\\&=\ind_3((3+1)\res_3\ind_3-3)\res_3\ind_5\res_5\\&=4(\res_3\ind_3)\ind_3\res_3\ind_5\res_5-3\ind_3\res_3\ind_5\res_5. \end{align*} {\bf Case (iii): } \begin{align*} &\res_5\res_7\res_3\ind_5\ind_7\ind_5\\&=(\res_7\ind_7)\res_3(\res_5\ind_5)\ind_5\\&=(\res_7\ind_7)(\res_5\ind_5)\res_3\ind_5\\&=(\frac{7-1}{3-1}(\res_3\ind_3-3)+7)(\frac{5-1}{3-1}(\res_3\ind_3-3)+5)\res_3\ind_5\\&=6(\res_3\res_3\ind_3\ind_3)\res_3\ind_5-7(\res_3\ind_3)\res_3\ind_5+2\res_3\ind_5\\&=6((3+1)\res_3\ind_3-3)\res_3\ind_5-7(\res_3\ind_3)\res_3\ind_5+2\res_3\ind_5\\&=17(\res_3\ind_3)\res_3\ind_5-16\res_3\ind_5. 
\end{align*} {\bf Case (iv): } \begin{align*} &\res_7\ind_3\res_3^2\ind_7=\frac{1}{5}((5+1)\res_5\ind_5-\res_5\ind_5\res_5\ind_5)\res_7\ind_3\res_3^2\ind_7\\&=\frac{6}{5}(\res_5\ind_5)\res_7\ind_3\res_3^2\ind_7-\frac{1}{5}(\res_5\ind_5)(\res_5\ind_5)\res_7\ind_3\res_3^2\ind_7\\&=\frac{6}{5}(\res_7\ind_7)\res_5\ind_3\res_3^2\ind_5-\frac{1}{5}(\res_5\ind_5)(\res_7\ind_7)\res_5\ind_3\res_3^2\ind_5\\&=\frac{6}{5}(\frac{7-1}{3-1}(\res_3\ind_3-3)+7)\res_5\ind_3\res_3^2\ind_5\\&-\frac{1}{5}(\frac{5-1}{3-1}(\res_3\ind_3-3)+5)(\frac{7-1}{3-1}(\res_3\ind_3-3)+7)\res_5\ind_3\res_3^2\ind_5\\&=\frac{18}{5}(\res_3\ind_3)\res_5\ind_3\res_3^2\ind_5-\frac{12}{5}\res_5\ind_3\res_3^2\ind_5\\&-\frac{6}{5}(\res_3\ind_3\res_3\ind_3)\res_5\ind_3\res_3^2\ind_5+\frac{7}{5}(\res_3\ind_3)\res_5\ind_3\res_3^2\ind_5\\&-\frac{2}{5}\res_5\ind_3\res_3^2\ind_5\\&=\frac{18}{5}(\res_3\ind_3)\res_5\ind_3\res_3^2\ind_5-\frac{12}{5}\res_5\ind_3\res_3^2\ind_5\\&-\frac{6}{5}((3+1)\res_3\ind_3-3)\res_5\ind_3\res_3^2\ind_5+\frac{7}{5}(\res_3\ind_3)\res_5\ind_3\res_3^2\ind_5\\&-\frac{2}{5}\res_5\ind_3\res_3^2\ind_5\\&=\frac{1}{5}(\res_3\ind_3)\res_5\ind_3\res_3^2\ind_5+\frac{4}{5}\res_5\ind_3\res_3^2\ind_5. \end{align*} {\bf Case (v): } \begin{align*} \res_3\ind_5\ind_3\res_5\ind_3\res_5=\ind_5\res_5^2(\res_3\ind_3)\ind_3=(\res_3\ind_3)\ind_5\res_5^2\ind_3. \end{align*} {\bf Case (vi): } \begin{align*} \res_3\ind_5\res_3\res_5\ind_3\res_5=\res_3(\res_3\ind_3)\ind_5\res_5^2=(\res_3\ind_3)\res_3\ind_5\res_5^2. \end{align*} \end{myex} \subsection{The center, and a decomposition, of \texorpdfstring{$A_{P,\mathcal{M}}$}{APM}} \begin{mylem} \label{idemlem} Let $P$ be a set of odd primes and let $\mathcal{M}\subset\mathcal{G}$ be any $A_{P}$-submodule generated by $D_{2n}$-modules with $n$ odd. Let also $q\in P$ be arbitrary. 
Then $A_{P,\mathcal{M}}$ contains the central idempotents \begin{equation*} \epsilon_1=\frac{\res_q\ind_q-1}{q-1} \end{equation*} and \begin{equation*} \epsilon_2=\frac{q-\res_q\ind_q}{q-1}, \end{equation*} which satisfy \begin{equation*} \epsilon_1+\epsilon_2=1. \end{equation*} \end{mylem} \begin{proof} That $\epsilon_1$ and $\epsilon_2$ belong to the center of $A_{P,\mathcal{M}}$ is immediate from Corollary \ref{ccor}. That they are idempotents is shown by direct calculation and an application of Lemma \ref{rellem}: \begin{align*} \epsilon_1^2&=(\frac{\res_q\ind_q-1}{q-1})^2=\frac{(\res_q\ind_q)^2-2\res_q\ind_q+1}{q^2-2q+1}=\frac{\res_q\res_q\ind_q\ind_q-2\res_q\ind_q+1}{(q-1)^2}\\&=\frac{(q+1)\res_q\ind_q-q-2\res_q\ind_q+1}{(q-1)^2}=\frac{(q-1)(\res_q\ind_q-1)}{(q-1)^2}=\epsilon_1, \end{align*} and similarly for $\epsilon_2$. That $\epsilon_1+\epsilon_2=1$ also follows by direct computation. \end{proof} For a set of odd primes $P$ and $\mathcal{M}\subset\mathcal{G}$ an $A_{P}$-submodule spanned by $D_{2n}$-modules with $n$ odd and furthermore such that for each fixed $n$ either all simple $D_{2n}$-modules or none belong to $\mathcal{M}$, we define algebras \begin{equation*} T^1_{P,\mathcal{M}}=A_{P,\mathcal{M}}/\langle \res_{p}\ind_{p}-p\rangle \end{equation*} and \begin{equation*} T^2_{P,\mathcal{M}}=A_{P,\mathcal{M}}/\langle \res_{p}\ind_{p}-1\rangle, \end{equation*} where $p\in P$ is arbitrary. That these algebras are well-defined is part of the following Theorems \ref{babydecompthm} and \ref{decompthm}. \begin{mythm} \label{babydecompthm} Let $P$ be a set of odd primes and pick an arbitrary indexing of $P$ by $\llbracket 1,|P|\rrbracket$.
Let also $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$, and furthermore such that there is no simple $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$. Then the algebras $T^1_{P,\mathcal{M}}$ and $T^2_{P,\mathcal{M}}$ do not depend on the choice of $p$, and each has a basis consisting of the monomials of the forms \begin{equation*} \ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}} \end{equation*} with $k_i,l_i\in\mathbb{N}$, where we have identified monomials in $A_{P,\mathcal{M}}$ with their images under the natural projections $\pi_1: A_{P,\mathcal{M}}\rightarrow T^1_{P,\mathcal{M}}$ and $\pi_2: A_{P,\mathcal{M}}\rightarrow T^2_{P,\mathcal{M}}$ respectively. Furthermore we have isomorphisms \begin{align*} A_{P,\mathcal{M}}\xrightarrow{\sim} A_{P,\mathcal{M}}\epsilon_1&\oplus A_{P,\mathcal{M}}\epsilon_2\xrightarrow{\sim} T^1_{P,\mathcal{M}}\oplus T^2_{P,\mathcal{M}}\\ z\mapsto z\epsilon_1&\oplus z\epsilon_2\mapsto \pi_1(z)\oplus\pi_2(z), \end{align*} where $\epsilon_1$ and $\epsilon_2$ depend on some fixed $q\in P$, as in Lemma \ref{idemlem}. \end{mythm} \begin{proof} The proof of Theorem \ref{decompthm} applies here too, with the exception that we need to invoke Theorem \ref{babybasisthm} instead of Theorem \ref{basisthm} in it. \end{proof} \begin{mythm} \label{decompthm} Let $P$ be a set of odd primes and pick an arbitrary indexing of $P$ by $\llbracket 1,|P|\rrbracket$. Let also $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$, and furthermore such that there is a simple $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$. 
Then the algebras $T^1_{P,\mathcal{M}}$ and $T^2_{P,\mathcal{M}}$ do not depend on the choice of $p$, and each has a basis consisting of the monomials of the forms \begin{enumerate} \item[$($i$)$] $\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}}$\\ with $k_i,l_i\in\mathbb{N}$, \item[$($ii$)$] $\ind_{p_1}^{l_1}\res_{p_1}^{k_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_{|P|}}^{k_{|P|}}$\\ with $k_i,l_i\in\mathbb{N}$ such that $k_i\ne 0\ne l_i$ for at least two $i$, \item[$($iii$)$] $\res_{p_i}^k\ind_{p_j}^l$\\ with $i\ne j$, and $k,l\in\mathbb{Z}_{>0}$, \item[$($iv$)$] $\res_{p_{i\text{ $($mod }|P|)+1}}\ind_{p_i}^l\res_{p_i}^k\ind_{p_{i\text{ $($mod }|P|)+1}}$\\ with $k,l\in\mathbb{Z}_{>0}$, \item[$($v$)$] $\ind_{p_j}^l\res_{p_j}^k\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}$\\ with $j\in\llbracket 1,|P|\rrbracket$, with $k,l\in\mathbb{Z}_{>0}$, and $l_i\in\mathbb{N}$ such that $l_j=0$ but $l_i\ne 0$ for at least one $i$, \item[$($vi$)$] $\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}}\ind_{p_j}^l\res_{p_j}^k$\\ with $j\in\llbracket 1,|P|\rrbracket$, with $k,l\in\mathbb{Z}_{>0}$, and $k_i\in\mathbb{N}$ such that $k_j=0$ but $k_i\ne 0$ for at least one $i$, \end{enumerate} where we have identified monomials in $A_{P,\mathcal{M}}$ with their images under the natural projections $\pi_1: A_{P,\mathcal{M}}\rightarrow T^1_{P,\mathcal{M}}$ and $\pi_2: A_{P,\mathcal{M}}\rightarrow T^2_{P,\mathcal{M}}$ respectively. Furthermore we have isomorphisms \begin{align*} A_{P,\mathcal{M}}\xrightarrow{\sim} A_{P,\mathcal{M}}\epsilon_1&\oplus A_{P,\mathcal{M}}\epsilon_2\xrightarrow{\sim} T^1_{P,\mathcal{M}}\oplus T^2_{P,\mathcal{M}}\\ z\mapsto z\epsilon_1&\oplus z\epsilon_2\mapsto \pi_1(z)\oplus\pi_2(z), \end{align*} where $\epsilon_1$ and $\epsilon_2$ depend on some fixed $q\in P$, as in Lemma \ref{idemlem}. 
\end{mythm} \begin{proof} Because $\epsilon_1$ and $\epsilon_2$ are central idempotents which add up to $1$ by Lemma \ref{idemlem}, we indeed have the isomorphism \begin{align*} A_{P,\mathcal{M}}&\xrightarrow{\sim} A_{P,\mathcal{M}}\epsilon_1\oplus A_{P,\mathcal{M}}\epsilon_2\\ z&\mapsto z\epsilon_1\oplus z\epsilon_2. \end{align*} Let us show that we have \begin{align*} A_{P,\mathcal{M}}\epsilon_1&\xrightarrow{\sim} T^1_{P,\mathcal{M}}\\ z\epsilon_1&\mapsto \pi_1(z), \end{align*} as well as the claimed basis of $T^1_{P,\mathcal{M}}$ (the corresponding proofs for $T^2_{P,\mathcal{M}}$ are done analogously). In $A_{P,\mathcal{M}}$, we have \begin{align*} \res_p\ind_p\epsilon_1&=\res_p\ind_p\frac{\res_{q}\ind_{q}-1}{q-1}=(\frac{p-1}{q-1}(\res_{q}\ind_{q}-q)+p)\frac{\res_{q}\ind_{q}-1}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_{q}\ind_{q}-1}{q-1}+\frac{p-1}{q-1}\frac{\res_{q}\res_{q}\ind_{q}\ind_{q}-\res_{q}\ind_{q}}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_{q}\ind_{q}-1}{q-1}+\frac{p-1}{q-1}\frac{(q+1)\res_{q}\ind_{q}-q-\res_{q}\ind_{q}}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_{q}\ind_{q}-1}{q-1}+q\frac{p-1}{q-1}\frac{\res_{q}\ind_{q}-1}{q-1}=p\epsilon_1. \end{align*} Thus we have a natural epimorphism \begin{align*} A_{P,\mathcal{M}}/\langle \res_{p}\ind_{p}-p\rangle &\twoheadrightarrow A_{P,\mathcal{M}}\epsilon_1\\ 1&\mapsto\epsilon_1. \end{align*} From this epimorphism and the basis of $A_{P,\mathcal{M}}$ given in Theorem \ref{basisthm} we get that the elements of the form \begin{align*} z\epsilon_1=\frac{1}{q-1}(\res_{q}\ind_{q}z-z) \end{align*} with $z$ of the form (i)-(vi) span $A_{P,\mathcal{M}}\epsilon_1$, and by the same theorem that they are even linearly independent.
\end{proof} \begin{myprop} \label{tisoprop} For a set of odd primes $P$ and $\mathcal{M}\subset\mathcal{G}$ an $A_{P}$-submodule spanned by $D_{2n}$-modules with $n$ odd and furthermore such that for each fixed $n$ either all simple $D_{2n}$-modules or none belong to $\mathcal{M}$, we have a mapping \begin{align*} T^1_{P,\mathcal{M}}&\rightarrow T^2_{P,\mathcal{M}}\\ \res_p&\mapsto \res_p\\ \ind_p&\mapsto p\ind_p \end{align*} which extends to an isomorphism of algebras. \end{myprop} \begin{proof} The relation $\res_p\ind_p=p$ in $T^1_{P,\mathcal{M}}$ is preserved by the mapping because of the relation $\res_p\ind_p=1$ in $T^2_{P,\mathcal{M}}$. The other relations (relations (i)-(v) as given in Theorems \ref{babybasisthm} and \ref{basisthm} respectively) are preserved because they either are special cases of the previous relation or are homogeneous in $\ind_p$. \end{proof} Let $B=\langle a,b|ab=1\rangle$ be the bicyclic algebra. \begin{mycor} \label{bicyccor} Let $P$ be a set of odd primes. Let also $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$, and furthermore such that there is no simple $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$. Then \begin{equation*} A_{P,\mathcal{M}}\cong (\bigotimes_{p\in P}B)^2. \end{equation*} \end{mycor} \begin{proof} Thanks to Theorem \ref{babydecompthm} and Proposition \ref{tisoprop}, it suffices to show that \begin{equation*} T^2_{P,\mathcal{M}}\cong \bigotimes_{p\in P}B. \end{equation*} Let, for every $p\in P$, the algebra $B_p=\langle a_p,b_p|a_pb_p=1\rangle$ be a copy of the bicyclic algebra. 
Then, considering the basis of $T^2_{P,\mathcal{M}}$ given in Theorem \ref{babydecompthm} as well as the relations $\res_p\ind_p=1$ in $T^2_{P,\mathcal{M}}$ and relations (i)-(iii) of Theorem \ref{babybasisthm}, we clearly have an isomorphism defined by \begin{align*} T^2_{P,\mathcal{M}}&\xrightarrow{\sim} \bigotimes_{p\in P}B_p\\ \res_p&\mapsto a_p\\ \ind_p&\mapsto b_p. \end{align*} \end{proof} \begin{mylem} \label{linindlem} With the setup of Theorem \ref{decompthm}, for any two different monomials $z_1$ and $z_2$, either both of the form (i) or both of one of the forms (ii)-(vi) from that theorem, there exists some $q\in P$ such that the respective termini or the respective nadirs of $z_1$ and $z_2$ with respect to $q$ are different. \end{mylem} \begin{proof} It is clear from considering whether $z$ starts or ends on various $p$-edges in the proof of Theorem \ref{basisthm} that the lemma statement holds for $z_1$ and $z_2$ of different forms (ii)-(vi). For each fixed form (i)-(vi) it is straightforward to verify that the termini and nadirs with respect to the $p\in P$ uniquely determine a monomial of that form. \end{proof} \begin{mylem} \label{clem} Let $P$ be a set of odd primes, and let $\mathcal{M}\subset\mathcal{G}$ be an $A_{P}$-submodule spanned by $D_{2n}$-modules with $n$ odd and furthermore such that for each fixed $n$ either all simple $D_{2n}$-modules or none belong to $\mathcal{M}$. Then the only central elements of $T^2_{P,\mathcal{M}}$ which are linear combinations of monomials which have a total nadir are the scalars. \end{mylem} \begin{proof} Assume towards a contradiction that $z\in T^2_{P,\mathcal{M}}\backslash\mathbb{C}$ lies in the center and is a linear combination of monomials each having a total nadir. Then $z$ can be written as a linear combination of monomials of the form (i) as in Theorem \ref{decompthm}. First consider the case where $z$ contains some factor $\res_p$ for some fixed $p\in P$.
Let $z=z_1+z_2$ where the monomial terms of $z_1$ contain a factor $\res_p$ while the monomial terms of $z_2$ do not. Then \begin{equation*} \ind_pz-z\ind_p=\ind_pz_1-z_1\ind_p. \end{equation*} Multiplying a monomial term of $z_1$ by $\ind_p$ from the right increases both the nadir and the terminus of the monomial with respect to $p$ by 1, while leaving the other termini and nadirs unchanged. It then follows from Lemmata \ref{nadirlem} and \ref{linindlem} that this multiplication does not annihilate any monomial terms of $z_1$. The largest nadir of any term of $\ind_pz_1-z_1\ind_p$ with respect to $p$ is clearly to be found in $z_1\ind_p$ and not in $\ind_pz_1$. But then $\ind_pz-z\ind_p$ will contain a nonzero multiple of such a term, contradicting $\ind_pz-z\ind_p=0$. The case where the monomial terms of $z$ only have factors $\ind_p$ for various $p\in P$ is handled by fixing one such $p\in P$, considering the expression \begin{equation*} \res_p z-z\res_p, \end{equation*} and applying a similar argument. \end{proof} \begin{mythm} \label{cthm} Let $P$ be a set of odd primes, let $p\in P$ be arbitrary, and let $\mathcal{M}\subset\mathcal{G}$ be an $A_{P}$-submodule spanned by $D_{2n}$-modules with $n$ odd and furthermore such that for each fixed $n$ either all simple $D_{2n}$-modules or none belong to $\mathcal{M}$. Then the center of $A_{P,\mathcal{M}}$ is generated by 1 and $\res_p\ind_p$. \end{mythm} \begin{proof} By Theorems \ref{babydecompthm} and \ref{decompthm}, together with Proposition \ref{tisoprop}, it suffices to show that the center of $T^2_{P,\mathcal{M}}$ is $\mathbb{C}$. Assume towards a contradiction that $z$ lies in the center of $T^2_{P,\mathcal{M}}$ but not in $\mathbb{C}$. Let $l$ be the degree of $z$ (as a polynomial in various $\res$ and $\ind$). 
For arbitrary monic monomials $z',z''\in T^2_{P,\mathcal{M}}$, let $e'_q$ be the terminus of $z'$ and $d'_q$ the nadir of $z'$, and also $e''_q$ the terminus of $z''$ and $d''_q$ the nadir of $z''$, with respect to $q$ for all $q\in P$. Note that $z'z''$ and $z''z'$ both have terminus $e'_q+e''_q$ with respect to $q$, and that by assumption we have the relation \begin{equation*} z'z=zz' \end{equation*} in $T^2_{P,\mathcal{M}}$. Using Lemma \ref{deplem} and the fact that the monomials involved in the extra relation $\res_q\ind_q=1$ all have the same termini, we may assume that all monomial terms in $z$ have the same termini (with respect to every $q\in P$). Using (for instance) the same indexing of the primes in $P$ as in Theorem \ref{basisthm}, let \begin{equation*} x=\res_{p_1}^l\res_{p_2}^l\dots\res_{p_{|P|}}^l. \end{equation*} Let furthermore $z=z_1+z_2$, where the monomial terms of $z_1$ have a total nadir while the monomial terms of $z_2$ do not. Then $z_1$ can be written as a linear combination of basis elements of the form (i) in Theorem \ref{decompthm}. Note that $xz'$ is a total nadir in $xz'$ if $z'$ has degree at most $l$, because for each $q\in P$, the number of factors $\res_q$ in $x$ is greater than or equal to the number of factors $\ind_q$ in $z'$. Also, $z''x$ is a nadir in $z'x$ with respect to $q$ if and only if $z''$ is a nadir in $z'$ with respect to $q$. In particular, $xz_1$, $z_1x$, and $xz_2$ will all have a total nadir, while $z_2x$ will not. Write these expressions in the basis of Theorem \ref{decompthm} to see that it then follows from $xz-zx=0$ that $z_2x=0$. Now, either $z_2=0$ (which is in particular the case if there is no $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$, as in Theorem \ref{babybasisthm}) or $z_2\ne 0$. In the first case, we are done by Lemma \ref{clem}.
In the second case, let $u$ be an arbitrary monomial term of $z_2$ when the latter is expressed in the basis given in Theorem \ref{decompthm}. Let $e_q$ be the terminus of $u$ with respect to $q\in P$ and $d_q$ be the nadir of $u$ with respect to $q$. Using (for instance) Lemma \ref{nadirlem} we see that $ux\ne 0$. Hence the relation $z_2x=0$ is nontrivial. Moreover, the terminus of $ux$ with respect to $q$ is $e_q-l$, and the nadir of $ux$ with respect to $q$ is $d_q-l$. In particular, we get by Lemma \ref{linindlem} that the linear independence of the terms of $z_2$ (expressed in the basis of Theorem \ref{decompthm}) is preserved by right multiplication by $x$. But then $z_2x$ is a linear combination of linearly independent elements in which not all coefficients are zero, contradicting $z_2x=0$. \end{proof} \begin{mycor} \label{indeccor} Let $P$ be a set of odd primes, and let $\mathcal{M}\subset\mathcal{G}$ be an $A_{P}$-submodule spanned by $D_{2n}$-modules with $n$ odd and furthermore such that for each fixed $n$ either all simple $D_{2n}$-modules or none belong to $\mathcal{M}$. Then the algebras $T^1_{P,\mathcal{M}}$ and $T^2_{P,\mathcal{M}}$ are indecomposable. \end{mycor} \begin{proof} If $T^1_{P,\mathcal{M}}$ or $T^2_{P,\mathcal{M}}$ were decomposable, then the identity of any summand would be a non-scalar central element of the sum. This element would, via Theorems \ref{babydecompthm} and \ref{decompthm}, correspond to a central element of $A_{P,\mathcal{M}}$ not generated by $1$ and $\res_p\ind_p$ for any $p\in P$, which would contradict Theorem \ref{cthm}. \end{proof} \section{Results for restriction and induction with respect to the prime 2, and further directions}\label{s6} This section deals with some partial results for the case where $2\in P$, and seeks to point towards some directions suitable for further investigation. The discussion is largely intended to convey intuition, at some expense of rigor.
Induction and restriction of $D_{2n}$-modules for $2\in P$ (and hence possibly even $n$) behave differently than in the case of odd primes only, as is evident from Proposition \ref{resindprop} (and also from Figures 1 and 2). This leads to the failure of analogues of some of the results leading up to the main result of the preceding section, Theorem \ref{basisthm}. We recover the following analogue of Lemma \ref{rellem}. \begin{myprop} \label{relprop} Let $P\ni 2$ be a set of prime numbers, and let $\mathcal{M}\subset\mathcal{G}$ be an $A_P$-submodule. Then \begin{equation*} \varphi_{P,\mathcal{M}}(\res_2\res_2\res_2\ind_2\ind_2\ind_2)=\varphi_{P,\mathcal{M}}(3\res_2\res_2\ind_2\ind_2-2\res_2\ind_2). \end{equation*} \end{myprop} \begin{proof} By direct computation using Proposition \ref{resindprop} (or by looking at Figure 2) we have for $n\ge 3$ that \begin{align*} &(2\res_2\ind_2-3\res_2\res_2\ind_2\ind_2+\res_2\res_2\res_2\ind_2\ind_2\ind_2)W_k(n)\\&=2\cdot 2W_k(n)-3\cdot 2^2W_k(n)+2^3W_k(n)=0, \end{align*} and \begin{align*} &(2\res_2\ind_2-3\res_2\res_2\ind_2\ind_2+\res_2\res_2\res_2\ind_2\ind_2\ind_2)V_{1,b}(n)\\&=2\cdot 2V_{1,b}(n)-3(3V_{1,b}(n)+V_{1,-b}(n))+5V_{1,b}(n)+3V_{1,-b}(n)=0, \end{align*} and \begin{align*} &(2\res_2\ind_2-3\res_2\res_2\ind_2\ind_2+\res_2\res_2\res_2\ind_2\ind_2\ind_2)V_{-1,b}(n)\\&=2(V_{-1,b}(n)+V_{-1,-b}(n))-3(2V_{-1,b}(n)+2V_{-1,-b}(n))+4V_{-1,b}(n)+4V_{-1,-b}(n)=0. \end{align*} The desired result follows. \end{proof} The above relation was indeed discovered in the same way as the relation of Lemma \ref{rellem}: for each $p\in P$, the actions of the monomials of the form $\res_p^l\ind_p^l$ on $\mathcal{M}$ are, because of the regularity of the induction/restriction diagrams, determined by their actions on a finite number of simple modules in $\mathcal{M}$; on the span of these modules, the infinitely many monomials $\res_p^l\ind_p^l$ act as endomorphisms, and hence only a finite number of them can be linearly independent.
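To make this finiteness argument concrete in the smallest case, one may record the actions on a single two-dimensional span as matrices. The coefficients below are simply read off from the computations in the proof of Proposition \ref{relprop} (which hold for arbitrary $b$, so the same formulas apply with $b$ replaced by $-b$): in the basis $(V_{1,b}(n),V_{1,-b}(n))$ we have \begin{equation*} \res_2\ind_2\mapsto\begin{pmatrix}2&0\\0&2\end{pmatrix},\qquad \res_2^2\ind_2^2\mapsto\begin{pmatrix}3&1\\1&3\end{pmatrix},\qquad \res_2^3\ind_2^3\mapsto\begin{pmatrix}5&3\\3&5\end{pmatrix}, \end{equation*} and indeed the third matrix equals three times the second minus twice the first, in accordance with Proposition \ref{relprop}.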
It is now not difficult to find the above linear dependence explicitly. Also essential for Theorem \ref{basisthm} were our ability to commute factors corresponding to different primes using Proposition \ref{nadirprop}, the linear independence of monomials of different nadirs and ``total nadirity'' by Lemma \ref{deplem}, and the simplification provided by each $\res_p\ind_p$ being central in $A_{P,\mathcal{M}}$ by Corollary \ref{ccor}. The first two of these results are a consequence of Lemma \ref{partfunlem}, which in turn relies on properties of the induction/restriction diagrams discussed in the paragraphs preceding the lemma. By inspection of the induction/restriction diagram for the prime $2$ (see Figure 2), it seems likely that, by a construction similar to that of Lemma \ref{partfunlem}, we may perform the ``upward translation'' required for Proposition \ref{nadirprop}. The ``downward translation'' used for Lemma \ref{deplem} seems to fail for the prime $2$, since either the module $W_m(4m)$ or the modules $V_{-1,b}(2m)$ (depending on the parity of $m$) cannot be translated to a module at the bottom level of the diagram. Nevertheless, it seems very plausible to me that this quite small gap in the proof of an analogue to Lemma \ref{deplem} may be bridged by other means. One may mention an additional case which has so far been swept under the rug, namely the case where $2\not\in P$ but where $\mathcal{M}$ contains some $D_{2n}$-module with $n$ even. Here, like above, the downward translation used for Lemma \ref{deplem} seems to fail, but my guess is that one could resolve this issue without much trouble and arrive at results similar to the Theorem \ref{babybasisthm} case. As for hopes of finding an analogue to Corollary \ref{ccor}, it is easily verified (e.g. by checking that $\res_2\res_2\ind_2V_{-1,1}(2m)\ne \res_2\ind_2\res_2V_{-1,1}(2m)$) that $\res_2\ind_2$ is not central in $A_{P,\mathcal{M}}$. Instead, we have the following relations.
\begin{myprop} Let $P\ni 2$ be a set of prime numbers, and let $\mathcal{M}\subset\mathcal{G}$ be an $A_P$-submodule. Then \begin{enumerate} \item[$($i$)$] $\varphi_{P,\mathcal{M}}(\ind_2\res_2\ind_2)=\varphi_{P,\mathcal{M}}(2\ind_2)$, \item[$($ii$)$] $\varphi_{P,\mathcal{M}}(\res_2\ind_2\res_2)=\varphi_{P,\mathcal{M}}(2\res_2)$. \end{enumerate} \end{myprop} \begin{proof} This is done by straightforward computation using Proposition \ref{resindprop} (or by looking at Figure 2) similarly to the proof of Proposition \ref{relprop}. \end{proof} We also have the following relation. \begin{myprop} Let $P\ni 2$ be a set of prime numbers, and let $\mathcal{M}\subset\mathcal{G}$ be an $A_P$-submodule. Then \begin{equation*} \varphi_{P,\mathcal{M}}((\res_2^2\ind_2^2)^4)=\varphi_{P,\mathcal{M}}(6(\res_2^2\ind_2^2)^3-8(\res_2^2\ind_2^2)^2). \end{equation*} \end{myprop} \begin{proof} This is again done by straightforward computation using Proposition \ref{resindprop} (or by looking at Figure 2) similarly to the proof of Proposition \ref{relprop}. \end{proof} There may well exist additional relations, but the problems of finding these and ultimately a basis for $A_{P,\mathcal{M}}$ are likely more difficult to solve than for the case of odd primes, although possibly within reasonable reach for future investigation. There are additional directions which would be natural to pursue on the topic of the algebras $A_{P,\mathcal{M}}$. One is that of the cases where $\mathcal{M}$ does not necessarily contain either all or none of the simple $D_{2n}$-modules for a fixed $n$. As mentioned in the beginning of Section \ref{s5}, this should not be too difficult. Another is to go the Coxeter route when defining the dihedral groups, and in particular obtain a well-defined group $D_{2n}$ also for $n=1,2$. This would remove the relevance of total nadirs in our proof, giving us no reason to distinguish between the cases of Theorem \ref{babybasisthm} and Theorem \ref{basisthm} respectively. 
I expect the outcome to be very similar to the Theorem \ref{babybasisthm} case. In all of these cases, it may be interesting to look into the representation theory of the algebras $A_{P,\mathcal{M}}$. Finally, we note that a more general class of algebras corresponding to induction/restriction diagrams of sufficient regularity should be amenable to methods used in this paper. Indeed, diagrams satisfying the property described in the discussion preceding Lemma \ref{partfunlem} should as noted above admit analogues to Proposition \ref{nadirprop} and Lemma \ref{nadirlem}, and also an analogue to Lemma \ref{rellem} and Proposition \ref{relprop} by the discussion succeeding the latter. \begin{comment} \begin{mythm} \label{bicycthm} Let $P$ be a set of odd primes. Let also $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$, and furthermore such that there is no simple $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$. For all $p\in P$, let $B_p=\langle a_p,b_p|a_pb_p=1\rangle$ denote a copy of the semigroup algebra of the bicyclic semigroup. Then \begin{equation*} A_{P,\mathcal{M}}\cong (\bigotimes_{p\in P}B_p)^2. \end{equation*} \end{mythm} \begin{proof} From Lemma \ref{idemlem} we get that \begin{equation*} A_{P,\mathcal{M}}\cong A_{P,\mathcal{M}}\epsilon_1\oplus A_{P,\mathcal{M}}\epsilon_2, \end{equation*} where $\epsilon_1$ and $\epsilon_2$ depend on some fixed $q\in P$, as in the lemma. 
For any $p\in P$, we have \begin{align*} \res_p\ind_p\epsilon_1&=\res_p\ind_p\frac{\res_q\ind_q-1}{q-1}=(\frac{p-1}{q-1}(\res_p\ind_p-q)+p)\frac{\res_q\ind_q-1}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_q\ind_q-1}{q-1}+\frac{p-1}{q-1}\frac{\res_q\res_q\ind_q\ind_q-\res_q\ind_q}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_q\ind_q-1}{q-1}+\frac{p-1}{q-1}\frac{(q+1)\res_q\ind_q-q-\res_q\ind_q}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_q\ind_q-1}{q-1}+q\frac{p-1}{q-1}\frac{\res_q\ind_q-1}{q-1}=p\epsilon_1. \end{align*} Using that $\res_p\ind_p$ is central by Proposition \ref{cprop}, we may in particular assume that any element $z\epsilon_1\in A_{P,\mathcal{M}}\epsilon_1$ is such that the monomial terms of $z$ contains no factor $\res_p\ind_p$ (where $p\in P$ is arbitrary). Note that it then follows from Lemma \ref{deplem} that all distinct monic such elements $z\epsilon_1$ are linearly independent. Therefore we have an isomorphism \begin{align*} A_{P,\mathcal{M}}\epsilon_1&\xrightarrow{\sim} \bigotimes_{p\in P}B_p\\ \res_p&\mapsto a_p\\ \ind_p&\mapsto pb_p. \end {align*} A completely analogous argument gives an isomorphism \begin{align*} A_{P,\mathcal{M}}\epsilon_2&\xrightarrow{\sim} \bigotimes_{p\in P}B_p\\ \res_p&\mapsto a_p\\ \ind_p&\mapsto b_p. \end {align*} The desired result follows. \end{proof} \begin{comment} We may for the $A_{P,\mathcal{M}}$ of Theorem \ref{babybasisthm} strengthen the assertion of Corollary \ref{ccor}. \begin{myprop} \label{cprop} Let $P$ be a set of odd primes and let $q\in P$ be arbitrary. Let also $\mathcal{M}\subset\mathcal{G}$ be some $A_P$-submodule spanned by simple $D_{2n}$-modules with $n$ odd and such that for each fixed $n$ either all or none of the simple $D_{2n}$-modules lie in $\mathcal{M}$, and furthermore such that there is no simple $D_{2n}$-module in $\mathcal{M}$ with all prime factors of $n$ belonging to $P$. Then the center of $A_{P,\mathcal{M}}$ equals $\langle 1,\mathbb{C}\res_q\ind_q\rangle$. 
\end{myprop} \begin{proof} Let $x_p$ abbreviate $\res_p$, and $y_p$ abbreviate $\ind_p$. Assume towards a contradiction that $z\in A_{P,\mathcal{M}}\backslash \langle 1,\mathbb{C}\res_q\ind_q\rangle$ is a central element. Fix the basis of $A_{P,\mathcal{M}}$ given by Theorem \ref{babybasisthm}. Then there is a $p'\in P$ such that $z$ has nonzero projection to at least one basis element where at least one exponent $l_{p'}^x$ or $l_{p'}^y$ is nonzero. Assume the latter case (the former case is similar). Then any basis element with $l_{p'}^y=0$ commutes with $x_{p'}$, while for $l_{p'}^y\ne 0$ we have (using the relations stated in Theorem \ref{mainthm}) that \begin{align*} &(\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x}))x_{p'}-x_{p'}(\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x}))\\&=\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1})-\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})x_{p'}y_{p'}(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&=\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1})-\frac{p'-1}{q-1}x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&-(p'-\frac{p'-1}{q-1}q)\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x}) \end{align*} and \begin{align*} &(x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x}))x_{p'}-x_{p'}(x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x}))\\&=x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1})-x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})x_{p'}y_{p'}(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&=x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1})-\frac{p'-1}{q-1}x_qx_qy_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&-(p'-\frac{p'-1}{q-1}q)x_qy_q\prod_{p\in 
P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&=x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1})-\frac{p'-1}{q-1}((q+1)x_qy_q-q)\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&-(p'-\frac{p'-1}{q-1}q)x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&=x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1})+(\frac{p'-1}{q-1}-p')x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x})\\&+\frac{p'-1}{q-1}q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y-1}x^{l_{p'}^x}) \end{align*} Let $c_1 \prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x})+c_2 x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x})$ be the (nonzero) projection of $z$ onto the basis elements where the $x_{p'}$-exponents $l_{p'}^x$ is the highest among all $x_{p'}$-exponents. By the above calculations we have that the projection of $zx_{p'}-x_{p'}z$ onto the basis elements where the $x_{p'}$-exponents are the highest equals \begin{equation*} c_1(\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1}))+c_2(x_qy_q\prod_{p\in P\backslash\{p'\}}(y^{l_p^y}x^{l_p^x})(y^{l_{p'}^y}x^{l_{p'}^x+1})). \end{equation*} This expression is nonzero, so $zx_{p'}-x_{p'}z\ne 0$, which contradicts that $z$ lies in the center, and we are done. \end{proof} \begin{comment} \begin{mythm} Let $P$ be a set of odd primes and let $\mathcal{M}\subset\mathcal{G}$ be the $A_{P,\mathcal{G}}$-submodule spanned by all $D_{2n}$-modules for all $n$ whose prime factors belong to $P$. For $i\in\{ 1,2\}$, let $T^i_{P,\mathcal{M}}$ be as defined in Theorem \ref{decompthm}. 
Define the algebras $K^i_{P,\mathcal{M}}$ to have basis where each element is a tuple \begin{equation*} (t,d,b)\text{, with }t\in\mathbb{Z}^{|P|}\text{, }d\in\mathbb{Z}^{|P|}\text{ and }b\in\begin{cases} \{\text{True}\}, & \mbox{if } d=0\text{ or }d=t,\\ \{\text{True, False}\}, & \mbox{otherwise}, \end{cases} \end{equation*} and multiplication defined by \begin{equation*} (t^2,d^2,b^2)(t^1,d^1,b^1)=\prod_{p\in P}p^{(2-i)\min(t^1_p-d^1_p,d^2_p)}(t=t^1+t^2,d=\min(d^1,t^1+d^2),b), \end{equation*} where the $\min$ is taken componentwise, and where \begin{equation*} b=\begin{cases} b^1\vee b^2, & \mbox{if } d=d^1,\\ b^1, & d\ne d^1\text{ and }d^1_p\le d_p\text{ (for all $p\in P$)},\\ b^2, & d\ne d^1\text{ and }d^1_p\ge d_p\text{ (for all $p\in P$)},\\ \text{True}, & \text{otherwise}. \end{cases} \end{equation*} Then we have isomorphisms \begin{align*} T^i_{P,\mathcal{M}}&\xrightarrow{\sim}K^i_{P,\mathcal{M}}\\ \ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}}&\mapsto (l-k, -k, \text{True})\\ \ind_{p_1}^{l_1}\res_{p_1}^{k_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_{|P|}}^{k_{|P|}}&\mapsto (l-k, -k, \text{False}) \text{ (if $l_{j_2},k_{j_2}\ne 0$ for some $j_1<j_2$ in $\llbracket 1,|P|\rrbracket$)}. \end{align*} \end{mythm} \begin{proof} For a tuple $(t,d,b)\in K^i_{P,\mathcal{M}}$, think of $t$ as recording the change in the order of a dihedral group when we apply a basis element of $A_{P,\mathcal{M}}$ to a module, think of $d$ as recording the nadirs of the same basis element with respect to each prime, and think of $b$ as recording whether there is a total nadir in that basis element or not. 
It is easily seen that the alleged isomorphism is a bijection from the set of basis elements $\ind_{p_1}^{l_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_1}^{k_1}\dots\res_{p_{|P|}}^{k_{|P|}}$ to the set of tuples $(t,d,\text{True})$, and from the set of basis elements $\ind_{p_1}^{l_1}\res_{p_1}^{k_1}\dots\ind_{p_{|P|}}^{l_{|P|}}\res_{p_{|P|}}^{k_{|P|}}$ to the set of tuples $(t,d,\text{False})$. \end{proof} \section{Results for $\mathcal{M}$ consting of $D_{2n}$-modules with not all prime factors of $n$ occurring in $P$} \begin{mylem} \label{stdlem} Let $P$ be a set of odd primes, and let $I\subset A_P$ be the ideal generated by the relations given in Proposition \ref{relsprop}, Lemma \ref{rellem}, and Lemma \ref{mixrel}. Fix some $q\in P$. Then the elements of the forms \begin{enumerate} \item[$($i$)$] $\prod_{p\in P}(\ind_p^{l_p'}\res_p^{l_p''})$, \item[$($ii$)$] $\res_q\ind_q\prod_{p\in P}(\ind_p^{l_p'''}\res_p^{l_p''''})$, \end{enumerate} form a basis of $A_P/I$. \end{mylem} \begin{proof} Let $x_p$ abbreviate $\res_p$, and $y_p$ abbreviate $\ind_p$. By Proposition \ref{relsprop} and Corollary \ref{ccor}, we may write any element of $A_P/I$ as a linear combination of monomials of the form $\prod_{p\in P}(x_py_p)^{j_p}\prod_{p\in P}(y^{l_p^y}x^{l_p^x})=\prod_{p\in P}(x_p^{j_p}y_p^{j_p})\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$. Applying Lemma \ref{rellem} and then Lemma \ref{mixrel}, we indeed obtain a linear combination of monomials of the forms $\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$ and $x_qy_q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$. To show that any element of $A_P/I$ has a unique expression as such a linear combination, we apply the Diamond Lemma from \cite{Be78}. Towards that end, consider the semigroup partial ordering $\le$ on words $\langle x_p,y_p|p\in P\rangle$ given by word length and, for words of equal length, the lexicographical ordering defined by $x_p<y_{q}$ for all $p,q\in P$ and for $q<p$ the inequalities $x_q<x_p$ and $y_q<y_p$. 
Define the reduction system $S$ on $A_P$ consisting of the pairs \begin{enumerate} \item[$($a$)$] $(\res_p\res_p\ind_p\ind_p, (p+1)\res_p\ind_p-p)$, \item[$($b$)$] $(\res_p\ind_p, \frac{p-1}{q-1}(\res_q\ind_q-q)+p)$, for the smallest $q\in P$, \item[$($c$)$] $(t,\sigma(t))$, for all monomials $t\in A_P$ and $\sigma(t)$ being the smallest (with respect to $\le$) monomial obtainable from $t$ by applying the commutativity relations of Proposition \ref{relsprop}. \end{enumerate} It is clear that $\le$ is compatible with $S$ and has the descending chain condition. If we can show that all ambiguities of $S$ are resolvable, the Diamond Lemma (Theorem 1.2. of \cite{Be78}) will give us that $z$ can be uniquely written as a linear combination of monomials that each is equivalent to (i) or (ii) modulo reordering of factors using the relations of Proposition \ref{relsprop}. The desired result will follow. Note that for every monomial $t\in A_P$ and prime $p\in P$, the monomial $\sigma(t)$ contains a factor $x_p^l y_p^l$ of maximal length $2l$ among all monomials obtainable from $t$ by applying the relations from Proposition \ref{relsprop}. Therefore \begin{equation*} \sigma(\tau(t))=\sigma(\tau'(\sigma(t))), \end{equation*} for any reductions $\tau$ and $\tau'$ both corresponding to the same one of (a), (b) or (c) and acting nontrivially on $t$ and $\sigma(t)$ respectively. In particular, since any reduction generated by (c) is a factor of $\sigma$, any ambiguity involving (c) is resolvable. Finally, the only possible ambiguity involving (a) and (b) is the inclusion ambiguity obtained by applying either reduction (a) or (b) to the (sub)word $x_px_py_py_p$, with result either $(p+1)x_py_p-p$ or $x_p(\frac{p-1}{q-1}(x_qy_q-q)+p)y_p=x_py_p(\frac{p-1}{q-1}(x_qy_q-q)+p)$. 
Applying further reductions to the two we get \begin{equation*} (p+1)x_py_p-p\rightsquigarrow (p+1)(\frac{p-1}{q-1}(x_qy_q-q)+p)-p, \end{equation*} respectively \begin{align*} x_py_p(\frac{p-1}{q-1}(x_qy_q-q)+p)&\rightsquigarrow (\frac{p-1}{q-1}(x_qy_q-q)+p)^2\\&=(\frac{p-1}{q-1})^2(x_qy_qx_qy_q-2qx_qy_q+q^2)+2p\frac{p-1}{q-1}(x_qy_q-q)+p^2\\&\rightsquigarrow (\frac{p-1}{q-1})^2(x_qx_qy_qy_q-2qx_qy_q+q^2)+2p\frac{p-1}{q-1}(x_qy_q-q)+p^2\\&\rightsquigarrow (\frac{p-1}{q-1})^2((q+1)x_qy_q-q-2qx_qy_q+q^2)+2p\frac{p-1}{q-1}(x_qy_q-q)+p^2. \end{align*} It is straightforward to verify (most easily using software) that these expressions are in fact equal, so that the ambiguity is resolvable. \end{proof} \begin{mythm} \label{mainthm} Let $P$ be a set of odd primes, and let $I\subset A_P$ be the ideal generated by the relations given in Proposition \ref{relsprop}, Lemma \ref{mixrel}, and Lemma \ref{rellem}, i. e. the relations \begin{enumerate} \item[$($i$)$] $\res_p\res_q=\res_q\res_p,$ \item[$($ii$)$] $\ind_p\ind_q=\ind_q\ind_p,$ \item[$($iii$)$] $\res_p\ind_q=\ind_q\res_p,$ \item[$($iv$)$] $\res_p\res_p\ind_p\ind_p=(p+1)\res_p\ind_p-p,$ \item[$($v$)$] $\frac{1}{p-1}(\res_p\ind_p-p)=\frac{1}{q-1}(\res_q\ind_q-q).$ \end{enumerate} Let also $\mathcal{M}\subset\mathcal{G}$ be the $A_{P,\mathcal{G}}$-submodule generated by all $D_{2n}$-modules for $n$ having its prime factors in $P$. Then we have a natural isomorphism \begin{align*} A_P/I\rightarrow A_{P,\mathcal{M}}. \end{align*} \end{mythm} \begin{proof} Let $x_p$ abbreviate $\res_p$, and $y_p$ abbreviate $\ind_p$. It is clear that we have a natural surjective homomorphism $A_P\rightarrow A_{P,\mathcal{M}}$ that factors through $A_P/I$. To show injectivity of the above homomorphism, we need to prove that there are no other elements in the kernel of this morphism than those already in $I$. Assume the opposite. Consider an arbitrary element $z$ of $A_P/I$ and fix some $q\in P$. 
By Lemma \ref{stdlem} we may assume that $z$ is a linear combination of monomials of the forms $\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$ and $x_qy_q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$. By Lemma \ref{deplem}, there is no linear dependence between such monomials with different exponents $l_p^y$ or $l_p^x$. Thus the only possible relation remaining is one involving two monomials $\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$ and $x_qy_q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$. By direct computation we have \begin{equation*} x_qy_q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})W_1(\prod_{p\in P}p^{l_p^x+1})=q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})W_1(\prod_{p\in P}p^{l_p^x+1}), \end{equation*} but \begin{equation*} x_qy_q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})V_{1,1}(\prod_{p\in P}p^{l_p^x+1})\ne q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})V_{1,1}(\prod_{p\in P}p^{l_p^x+1}), \end{equation*} where the inequality is easily seen to hold by noting that the LHS will contain some multiple of a module $V_{1,-1}(m)$, whereas the RHS will not. Thus also $\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$ and $x_qy_q\prod_{p\in P}(y^{l_p^y}x^{l_p^x})$ are linearly independent in $A_{P,\mathcal{M}}$, and we are done. \end{proof} We may now strengthen the assertion of Corollary \ref{ccor}. We may now decompose the $A_{P,\mathcal{M}}$ of Theorem \ref{mainthm} into a direct sum of algebras. \begin{mythm} \label{bicycthm} Let $P$ be a set of odd primes and let $\mathcal{M}\subset\mathcal{G}$ be the $A_{P,\mathcal{G}}$-submodule generated by all $D_{2n}$-modules for $n$ having its prime factors in $P$. For all $p\in P$, let $B_p=\langle a_p,b_p|a_pb_p=1\rangle$ denote a copy of the semigroup algebra of the bicyclic semigroup.Then \begin{equation*} A_{P,\mathcal{M}}\cong (\bigotimes_{p\in P}B_p)^2. 
\end{equation*} \end{mythm} \begin{proof} From Lemma \ref{idemlem} we get that \begin{equation*} A_{P,\mathcal{M}}\cong A_{P,\mathcal{M}}\epsilon_1\oplus A_{P,\mathcal{M}}\epsilon_2, \end{equation*} where $\epsilon_1$ and $\epsilon_2$ depend on some fixed $q\in P$, as in the lemma. For any $p\in P$, we have \begin{align*} \res_p\ind_p\epsilon_1&=\res_p\ind_p\frac{\res_q\ind_q-1}{q-1}=(\frac{p-1}{q-1}(\res_p\ind_p-q)+p)\frac{\res_q\ind_q-1}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_q\ind_q-1}{q-1}+\frac{p-1}{q-1}\frac{\res_q\res_q\ind_q\ind_q-\res_q\ind_q}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_q\ind_q-1}{q-1}+\frac{p-1}{q-1}\frac{(q+1)\res_q\ind_q-q-\res_q\ind_q}{q-1}\\&=(p-q\frac{p-1}{q-1})\frac{\res_q\ind_q-1}{q-1}+q\frac{p-1}{q-1}\frac{\res_q\ind_q-1}{q-1}=p\epsilon_1. \end{align*} Using that $\res_p\ind_p$ is central by Proposition \ref{cprop}, we may in particular assume that any element $z\epsilon_1\in A_{P,\mathcal{M}}\epsilon_1$ is such that the monomial terms of $z$ contains no factor $\res_p\ind_p$ (where $p\in P$ is arbitrary). Note that it then follows from Lemma \ref{deplem} that all distinct monic such elements $z\epsilon_1$ are linearly independent. Therefore we have an isomorphism \begin{align*} A_{P,\mathcal{M}}\epsilon_1&\xrightarrow{\sim} \bigotimes_{p\in P}B_p\\ \res_p&\mapsto a_p\\ \ind_p&\mapsto pb_p. \end {align*} A completely analogous argument gives an isomorphism \begin{align*} A_{P,\mathcal{M}}\epsilon_2&\xrightarrow{\sim} \bigotimes_{p\in P}B_p\\ \res_p&\mapsto a_p\\ \ind_p&\mapsto b_p. \end {align*} The desired result follows. \end{proof} Let us in the light of Theorem \ref{bicycthm} have a look at the representation theory of the algebra $A_{P,\mathcal{M}}$ of Theorem \ref{babybasisthm}. \begin{mythm} Let $I_1,I_2\subset P$ be disjoint with $I_1\cup I_2=P$, let $\lambda\in (\mathbb{C}^*)^{I_1}$, and let $X=(\mathbb{Z}_{\ge 0})^{I_2}$ (with the convention $(\mathbb{Z}_{\ge 0})^\varnothing =\{0\}$). 
The simple $\bigotimes_{p\in P}B_p$-modules are classified by modules $M_{\lambda,X}$ with underlying vector space $\{w_v|v\in X\}$ and $\bigotimes_{p\in P}B_p$-action \begin{align*} a_k\cdot w_v&=\begin{cases} w_{v-e_k}, & \mbox{if } k\in I_2\text{ and }v_k>0,\\ \lambda_k^{-1}w_v, & \mbox{if } k\in I_1,\\ 0, & \mbox{otherwise }. \end{cases}\\ b_k\cdot w_v&=\begin{cases} w_{v+e_k}, & \mbox{if } k\in I_2,\\ \lambda_kw_v, & \mbox{otherwise }. \end{cases}\\ \end{align*} \end{mythm} \begin{proof} Let $M$ be a simple $\bigotimes_{p\in P}B_p$-module. Pick some nonzero $w\in M$ and $p_1\in P$. If $\dim(\langle b_{p_1}\rangle w)<\infty$ then $b_{p_1}$ has an eigenvector in $\langle b_{p_1}\rangle w$, with an eigenvalue $\lambda(p_1)$. That $a_{p_1}b_{p_1}=1$ implies that $a_{p_1}$ has the same eigenvector with eigenvalue $\lambda(p_1)^{-1}$ (so in particular $\lambda(p_1)\ne 0$). That $a_{p}$ and $b_{p}$ commute with $a_{p'}$ and $b_{p'}$ whenever $p\ne p'$ (we will refer to this property as ``partial commutativity'') implies that the entire $(\bigotimes_{p\in P}B_p) w=M$ is an eigenspace for $b_{p_1}$ and $a_{p_1}$ with eigenvalues $\lambda(p_1)$ and $\lambda(p_1)^{-1}$ respectively. Define $I_1=\{p\in P|\dim(\langle b_p\rangle w)<\infty\}$ and $I_2=P\backslash I_1$. In case $I_2=\varnothing$, it is obvious that $M\cong M_{\lambda,X}$. In what follows, assume that $I_2\ne\varnothing$. Consider a $p_2\in I_2$. We claim that $\ker(a_{p_2}\cdot\_)\subset M$ is nonzero. Assume the opposite, i.e. that $a_{p_2}\cdot\_$ is injective. This map is also surjective since $a_{p_2}b_{p_2}=1$, hence bijective, and has inverse $b_{p_2}\cdot\_$. Because of the partial commutativity, we may impose a $\mathbb{Z}$-grading on $M$, with $\deg(a_{p_2})=-1$, $\deg(b_{p_2})=1$ and $\deg(a_p)=0=\deg(b_p)$ for $p\ne p_2$. For any $z\in \bigotimes_{p\in P}B_p$ we then have that $z(w+b_{p_2}w)$ either equals zero (in case $zw=0=zb_{p_2}w$) or is not homogeneous. 
In particular the homogeneous elements of $M$ are not of the form $z(w+b_{p_2}w)$, so $w+b_{p_2}w$ generates a proper submodule of $M$, contradicting the simplicity of $M$. The claim follows. Note that because of partial commutativity, we have that $\ker(a_p\cdot\_)$ is stable under the actions of $a_{p'}$ and $b_{p'}$ whenever $p\ne p'$. Since $M$ is simple, we thus have for each $p\in I_2$ that $a_p$ acts locally nilpotently on $M$. If they all exist, consider some $I_2'\subsetneq I_2$, some $p_3\in I_2\backslash I_2'$ and some nonzero $w\in \bigcap_{p\in I_2'}\ker(a_p\cdot\_)$. Let $l+1\in\mathbb{N}$ be minimal such that $a_{p_3}^{l+1}w=0$. Then $0\ne a_{p_3}^lw\in \bigcap_{p\in I_2'\cup\{p_3\}}\ker(a_p\cdot\_)$. By induction on $|I_2'|$ we may obtain a nonzero $u\in \bigcap_{p\in I_2}\ker(a_p\cdot\_)$. It is clear that the elements of the form $\prod_{p\in I_2} b_p^{l_p}u$, with $l_p\in\mathbb{Z}_{\ge 0}$, span $M$. We claim that they even form a basis. Assume that we have $\sum_{l\in K\subset(\mathbb{Z}_{\ge 0})^{I_2}}c_l(\prod_{p\in I_2}b_p^{l_p})u=0$. Then there exists some $l'\in K$ such that for any other $l\in K$ there is a $p\in I_2$ satisfying $l'_p>l_p$. Thus $\prod_{p\in I_2}a_p^{l'_p}\sum_{l\in K\subset(\mathbb{Z}_{\ge 0})^{I_2}}c_l(\prod_{p\in I_2}b_p^{l_p})u=c_{l'}u$, hence $c_{l'}=0$. By repeating this argument sufficiently many times (replacing $K$ by $K\backslash\{l'\}$ at each step), we obtain that $c_l=0$ for every $l\in K$. Therefore the elements $\prod_{p\in I_2} b_p^{l_p}u$ are linearly independent. It is now obvious that \begin{align*} M&\rightarrow M_{\lambda,X}\\ \prod_{p\in I_2} b_p^{v_p}u&\mapsto w_v \end{align*} defines an isomorphism, and also that different $M_{\lambda,X}$ are non-isomorphic. \end{proof} \begin{comment} The following Lemma generalizes Lemma \ref{wrellem}. 
\begin{mylem} Let $P$ be the set of all odd primes, and let $\mathcal{M}\subset\mathcal{G}$ be the $A_{P,\mathcal{G}}$-submodule generated by all $W_1(p)$ with $p\in P$. Let $I\subset A_P$ be the ideal generated by the relations of Proposition \ref{relsprop} and also all $\res_p\ind_p-p$ with $p\in P$. Then we have a natural isomorphism \begin{align*} A_P/I\rightarrow A_{P,\mathcal{M}}. \end{align*} \end{mylem} \begin{proof} Thanks to Proposition \ref{relsprop} and the computation $(\res_p\ind_p-p)W_1(p)=0$ is clear that we have a natural surjective morphism as in the lemma statement. It remains to show that this morphism is injective. \end{proof} \begin{myprop} Let $P$ be the set of all odd primes, and let $I\subset A_P$ be the ideal generated by the relations given in Proposition \ref{relsprop}, Proposition \ref{rels2prop} and Lemma \ref{rellem}. Let also $\mathcal{M}\subset\mathcal{G}$ be the $A_{P,\mathcal{G}}$-submodule generated by all $D_{2n}$-modules for odd $n$. Then we have a natural isomorphism \begin{align*} A_P/I\rightarrow A_{P,\mathcal{M}}. \end{align*} \end{myprop} \begin{proof} Let $x_p$ abbreviate $\res_p$, and $y_p$ abbreviate $\ind_p$. It is clear that we have a natural surjective homomorphism $A_P\rightarrow A_{P,\mathcal{M}}$ that factors through $A_P/I$. To show injectivity, we need to prove that there are no other elements in the kernel of this morphism than those already in $I$. Assume the opposite, and consider another element in the kernel. Then, by Proposition \ref{relsprop}, this element is a linear combination of products of (noncommutative) monomials in $x_p$ and $y_p$, where the $p$ differs between the different monomials. By Lemmata \ref{stdlem} and \ref{rellem}, we may assume that each such monomial is of one of the forms \begin{enumerate} \item[$($i$)$] $x^{j_1}y^{j_2}$ \item[$($ii$)$] $y^{j_2}x^{j_1}$ \item[$($iii$)$] $xyy^{j_2}x^{j_1}$. \end{enumerate} \end{proof} \end{comment}
\section{Introduction} Over the last few years there have been several measurements in the $B$-meson sector, more specifically in decays induced by the flavor changing neutral current (FCNC) quark level transition $ b \rightarrow s\, l^+ \, l^-$, which are incompatible with the predictions of the Standard Model (SM). The following measurements have attracted particular attention: \begin{itemize} \item In 2014, the LHCb collaboration reported the measurement of the ratio $R_K \equiv \Gamma(B^+ \to K^+ \,\mu^+\,\mu^-)/\Gamma(B^+ \to K^+\,e^+\,e^-)$ performed in the low dilepton invariant mass-squared ($q^2$) range ($1.0 \le q^2 \le 6.0 \, {\rm GeV}^2$) \cite{rk}, which deviates from the SM prediction of $\simeq 1$ \cite{Hiller:2003js, Bordone:2016gaq} by 2.6$\sigma$. This could be an indication of lepton flavor universality violation in the $b \to s l^+ l^-$ sector. \item The measurement of $R_K$ was recently corroborated by the measurement of $R_{K^*} \equiv \Gamma (B^0 \to K^{*0} \mu^+\mu^-)/\Gamma(B^0 \to K^{*0} e^+ e^-)$ in the low ($0.045 \le q^2 \le 1.1 \, {\rm GeV}^2$) and central ($1.1 \le q^2 \le 6.0 \, {\rm GeV}^2$) $q^2$ bins \cite{rkstar}. These measurements differ from the SM prediction of $\simeq 1$ \cite{Hiller:2003js, Bordone:2016gaq} by 2.2--2.4$\sigma$ and 2.4--2.5$\sigma$ in the low and central $q^2$ regions, respectively. \item The experimentally measured values of some of the angular observables in $B \to K^* \mu^+ \mu^-$ \cite{Kstarlhcb1,Kstarlhcb2,KstarBelle} disagree with their SM predictions \cite{sm-angular}. In particular, the angular observable $P'_5$ in the $q^2$ bin $4.3$--$8.68\,{\rm GeV}^2$ disagrees with the SM at the level of 4$\sigma$. The recent ATLAS \cite{kstaratlas} and CMS \cite{kstarcms} measurements confirm this disagreement. \item The measured value of the branching ratio of $B_s \to \phi \mu^+ \mu^-$ \cite{bsphilhc1,bsphilhc2} does not agree with its SM value. This disagreement is at the level of 3$\sigma$.
\end{itemize} The measurements of $R_K$ and $R_{K^*}$ could be an indication of the presence of new physics in the $b \to s \mu^+ \mu^-$ and/or $b \to s e^+ e^-$ sector, whereas the discrepancies in $P'_5$ and the branching ratio of $B_s \to \phi \mu^+ \mu^-$ are related to the $b \to s \, \mu^+ \, \mu^-$ sector only. Hence it is quite natural to account for all of these anomalies by assuming new physics only in the $b \to s \mu^+ \mu^-$ sector. A recent global fit \cite{Capdevila:2017bsm} also favours this point of view. In order to probe the Lorentz structure of new physics in $b \to s \, \mu^+ \, \mu^-$, several model-independent analyses have been performed. It is observed that the present $b \to s \, \mu^+ \, \mu^-$ data can be accommodated by new physics in the form of vector ($V$) and axial-vector ($A$) operators \cite{Capdevila:2017bsm, Altmannshofer:2017yso,DAmico:2017mtc,Hiller:2017bzc,Geng:2017svp,Ciuchini:2017mik,Celis:2017doq,Alok:2017sui,Alok:2017jgr}. However, there is no unique solution. For example, the new physics operator $O_9=(\bar{s} \gamma^\mu P_L b)\, (\bar{l} \gamma_\mu l)$ alone, as well as a combination of the operators $O_9$ and $O_{10}=(\bar{s} \gamma^\mu P_L b)\, (\bar{l} \gamma_\mu \gamma_5 l)$, can account for all of the anomalous $b \to s \, \mu^+ \, \mu^-$ data. Therefore one needs additional observables to discriminate between the various possible solutions, and hence to identify the Lorentz structure of new physics in the $b \to s \, \mu^+ \, \mu^-$ sector. In this work we consider the radiative leptonic decay of the $B_s$ meson to explore such a possibility. The decay $B_s \to \mu^+ \, \mu^-\, \gamma$ has several advantages over its non-radiative counterpart $B_s \to \mu^+ \, \mu^-$. In the SM, the decay $B_s \to \mu^+ \, \mu^-$ is chirally suppressed, and hence has a small branching ratio. On the other hand, the decay $B_s \to \mu^+ \, \mu^-\, \gamma$ is free from chirality suppression owing to the emission of a photon in addition to the muon pair.
The photon emission, however, suppresses the branching ratio of $B_s \to \mu^+ \, \mu^-\, \gamma$ by a factor of $\alpha_{em}$. The SM prediction for the branching ratio of $B_s \to \mu^+ \, \mu^-\, \gamma$ is of order $10^{-8}$, which should facilitate its experimental observation. Further, $B_s \to \mu^+ \, \mu^-\, \gamma$ is sensitive to a wider range of new physics operators than $B_s \to \mu^+ \, \mu^-$. It is sensitive to the new physics operators $O_9$ and $O_{10}$, and also to their chirality flipped counterparts $O'_9=(\bar{s} \gamma^\mu P_R b)\, (\bar{l} \gamma_\mu l)$ and $O'_{10}=(\bar{s} \gamma^\mu P_R b)\, (\bar{l} \gamma_\mu \gamma^5 l)$. Hence it is sensitive to all the new physics operators which can provide a possible explanation for the present $b \to s \, \mu^+ \, \mu^-$ anomalies. From the experimental point of view, the observation of the $B_s \to \mu^+ \, \mu^-\, \gamma$ decay is a challenging task. The photon in the final state is difficult to detect: in general, the detection efficiency for photons is smaller than that for charged leptons. Further, the photon makes the other daughter particles softer, which results in smaller reconstruction efficiencies. However, given that its branching ratio is $\sim 10^{-8}$, this decay mode might not be beyond the reach of the higher runs of the LHC. In Ref.~\cite{Dettori:2016zff}, a method was suggested for the detection of $B_s \to \mu^+ \, \mu^-\, \gamma$ making use of the event sample selected for the measurement of the branching ratio of $B_s \to \mu^+ \, \mu^-$; this would be applicable at Run 2 of the LHC. The decay $B_s \to \mu^+ \, \mu^-\, \gamma$ has been studied in \cite{Aliev:1996ud,Geng:2000fs,Dincer:2001hu,Kruger:2002gf,Melikhov:2004mk,Alok:2006gp,Balakireva:2009kn,Alok:2010zd,Alok:2011gv,Wang:2013rfa,Guadagnoli:2017quo,Kozachuk:2017mdk}.
In this work we perform a model independent analysis of the $B_{s} \to \mu^+ \, \mu^- \, \gamma$ decay by considering all new physics $V$ and $A$ operators. Apart from the branching ratio, $B(B_{s} \to \mu^+ \, \mu^- \, \gamma)$, we consider the ratio $R_{\gamma} \equiv \Gamma(B_s \to \mu^+\,\mu^-\, \gamma)/\Gamma(B_s \to e^+\,e^-\, \gamma)$ and the forward-backward asymmetry ($A_{FB}$) of muons. We obtain predictions for these observables for the various new physics solutions which provide a good fit to the present $b \to s \mu^+ \mu^-$ data. We intend to identify the new physics interactions which can produce large deviations in these observables. The paper is organized as follows. In Section II, we provide theoretical expressions for various observables in the $B_{s} \to \mu^+ \, \mu^- \, \gamma$ decay in the presence of new physics in the form of $V$ and $A$ operators. The predictions for $B(B_{s} \to \mu^+ \, \mu^- \, \gamma)$, $R_{\gamma}$ and the $A_{FB}$ of muons for the existing new physics solutions are presented in Section III. We provide concluding remarks in Section IV. \section{$B_{s} \to \mu^+ \, \mu^- \, \gamma$ decay } The effect of new physics in $b \rightarrow s\, l^+ \, l^- \gamma $ decays can be most conveniently probed by making use of the effective field theory approach, where the new physics contributions are encoded in the Wilson coefficients of the operators of the $b \rightarrow s\, l^+ \, l^-$ effective Hamiltonian.
The decay $B_{s} \to l^+ \, l^- \, \gamma$ is induced by the effective Hamiltonian for the quark level transition $ b \rightarrow s\, l^+ \, l^-$ ($l=e,\,\mu$), which is given by \begin{eqnarray} \label{b2qll} &&H_{\rm eff}^{\rm SM} (b\to s\, l^{+}\,l^{-})\, =\, {\frac{G_{F}}{\sqrt2}}\, {\frac{\alpha_{\rm em}}{2\pi}}\, V_{tb}V^*_{ts}\, \left[\,-2i\,m_b\, {\frac{C_{7\gamma}(\mu)}{q^2}}\cdot \bar s\sigma_{\mu\nu}q^{\nu}\left (1+\gamma_5\right )b \cdot{\bar l}\gamma^{\mu}l \right.\nonumber\\ &&\left.\qquad\qquad\quad +\, C_{9}(\mu)\cdot\bar s \gamma_{\mu}\left (1\, -\,\gamma_5 \right) b \cdot{\bar l}\gamma^{\mu}l \, +\, C_{10}(\mu)\cdot\bar s \gamma_{\mu}\left (1\, -\,\gamma_5 \right) b \cdot{\bar l}\gamma^{\mu}\gamma_{5}l \right], \end{eqnarray} where $G_F$ is the Fermi constant, and $V_{ts}$, $V_{tb}$ are elements of the Cabibbo-Kobayashi-Maskawa (CKM) matrix. The Wilson coefficients $C_9$ and $C_{10}$ above are associated with the standard short-distance semi-leptonic operators $O_9$ and $O_{10}$, respectively, \begin{align} O_{9} &= (\bar{s} \gamma^\mu P_L b)\, (\bar{l} \gamma_\mu l) \,\,\,\,,\,\,\,\, O_{10} = (\bar{s} \gamma^\mu P_L b)\, (\bar{l} \gamma_\mu \gamma^5 l) \label{eq:O9O10} \end{align} where $P_{L,R} = (1 \mp \gamma^5)/2$. The remaining dominant contribution to this decay comes from the Wilson coefficient $C_{7\gamma}$ associated with the magnetic penguin operator $O_{7} = (\bar{s} \sigma_{\mu\nu} q^\nu P_R b )\, (\bar{l} \gamma^\mu l)$.
The operators $O_{7,9,10}$ present in the effective Hamiltonian contributing to the $B_s^0 \rightarrow \mu^+ \mu^- \gamma$ amplitude can be parameterised in terms of the $B_s \rightarrow \gamma^*$ form factors as follows \cite{Kozachuk:2017mdk}, \begin{eqnarray} \label{real} \label{ffs} \nonumber \langle \gamma^* (k,\,\epsilon)|\bar s \gamma_\mu\gamma_5 b|\bar B_s(p)\rangle &=& i\, e\,\epsilon^*_{\alpha}\, \left ( g_{\mu\alpha} \, k'k-k'_\alpha k_\mu \right )\,\frac{F_A(k'^2,k^2)}{M_{B_s}}, \\ \langle \gamma^*(k,\,\epsilon)|\bar s\gamma_\mu b|\bar B_s(p)\rangle &=& e\,\epsilon^*_{\alpha}\,\epsilon_{\mu\alpha \xi \eta}k'_\xi k_\eta\frac{F_V(k'^2,k^2)}{M_{B_s}}, \\ \langle\gamma^*(k,\,\epsilon)|\bar s \sigma_{\mu\nu}\gamma_5 b|\bar B_s(p) \rangle\, k'^{\nu} &=& e\,\epsilon^*_{\alpha}\,\left( g_{\mu\alpha}\,k'k- k'_{\alpha}k_{\mu}\right)\, F_{TA}(k'^2, k^2), \nonumber \\ \langle \gamma^*(k,\,\epsilon)|\bar s \sigma_{\mu\nu} b|\bar B_s(p)\rangle\, k'^{\nu} &=& i\, e\,\epsilon^*_{\alpha}\epsilon_{\mu\alpha \xi \eta} k'_\xi k_\eta F_{TV}(k'^2, k^2)\,, \nonumber \end{eqnarray} where $k$ is the momentum of the photon emitted from the valence quark of the $B_s$ meson and $k'$ is the momentum flowing from the $b\rightarrow s$ penguin vertex. The form factors relevant for this process are $F_i (k'^2 = q^2, k^2 = 0) = F_i (q^2)$ and $F_i (k'^2=0, k^2 = q^2) = F_i (0,q^2)$, with $q^2$ being the invariant mass squared of the lepton pair and $i = \{V, A, TV, TA \}$. In this work we consider both single and double pole parametrizations of these form factors. The form factors $F_i(q^2)$ in the single pole parametrization are given by, \begin{equation} \label{modifiedpole-s} F_i(q^2)= \beta_i \frac{f_{Bs} m_{Bs}}{\Delta_i + 0.5\, m_{Bs}(1-q^2/m_{Bs}^2) } \,, \end{equation} where the parameters $f_{Bs}$, $\beta_i$ and $\Delta_i$ can be found in Ref.~\cite{Kruger:2002gf}.
The double pole parametrization is a modified pole form that explicitly takes into account the poles at $q^2 = M_R^2$, where $M_R$ is the mass of the relevant resonance, and provides better precision over the entire $q^2$ range. These form factors are parameterized as, \begin{equation} \label{modifiedpole-d} F_i(q^2)=\frac{F_i(0)}{(1-{q^2}/{M_{R_i}^2})(1-\sigma_1({q^2}/{M_{R_i}^2})+\sigma_2({q^2}/{M_{R_i}^2})^2)}\,. \end{equation} The details of the calculation of these form factors as well as the numerical values of the parameters can be found in Ref.~\cite{Kozachuk:2017mdk}. Further, the form factors $F_i(0, q^2)$ for $i = TV, TA$ are given by, \begin{eqnarray} \label{vmd} F_{TV,TA}(0, q^2) = F_{TV,TA}(0, 0)\, -\,\sum_V\,2\,f_V^{\rm e.m.} g^{B\to V}_+(0)\frac{q^2/M_V}{q^2\, -\, M^2_V\, +\, iM_V\Gamma_V}, \end{eqnarray} where the values of the mass and width of the vector meson resonances, $M_V$ and $\Gamma_V$, respectively, and the $B\to V$ transition form factors $g^{B\to V}_+(0)$ can be found in Ref.~\cite{Kozachuk:2017mdk}. The two form factor parametrizations agree to within about 10\% in the low-$q^2$ region below 15 $\mathrm{GeV}^2$ and deviate from each other by up to 20\% in the high-$q^2$ region. In our analyses we take the systematic uncertainty arising from the form factors to be about 10\% for both parametrizations. The global analyses of the $b \rightarrow s l^+ l^-$ anomalies have shown that if new physics is present, it contributes mainly via the operators $O_9$ and $O_{10}$ and their chirality flipped counterparts. Therefore, to study new physics effects in the $B_{s} \rightarrow l^+ l^- \gamma $ transition, we consider new physics in the form of the $O^{(')}_9$ and $O^{(')}_{10}$ operators.
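As an illustration, the two pole parametrizations above can be sketched numerically. The routine below is our own sketch: the actual values of $\beta_i$, $\Delta_i$, $F_i(0)$, $\sigma_{1,2}$ and $M_{R_i}$ must be taken from Refs.~\cite{Kruger:2002gf,Kozachuk:2017mdk}, so all numerical arguments shown here are placeholders.

```python
M_BS = 5.367  # B_s meson mass in GeV (approximate, for illustration only)

def ff_single_pole(q2, beta, delta, f_bs=0.23, m_bs=M_BS):
    """Single pole form: F_i(q^2) = beta_i * f_Bs * m_Bs /
    (Delta_i + 0.5 * m_Bs * (1 - q^2/m_Bs^2)).
    beta, delta are the parameters beta_i, Delta_i of Kruger:2002gf;
    f_bs = 0.23 GeV is a placeholder decay constant."""
    return beta * f_bs * m_bs / (delta + 0.5 * m_bs * (1.0 - q2 / m_bs**2))

def ff_double_pole(q2, f0, m_r, sigma1, sigma2):
    """Double pole form: F_i(q^2) = F_i(0) / ((1 - x)(1 - sigma1*x + sigma2*x^2)),
    with x = q^2 / M_R^2; parameter values come from Kozachuk:2017mdk."""
    x = q2 / m_r**2
    return f0 / ((1.0 - x) * (1.0 - sigma1 * x + sigma2 * x**2))
```

Both forms reduce to a constant at $q^2=0$ and rise toward the resonance pole; with the published parameter sets one can check the 10-20\% level of agreement between the two parametrizations quoted above.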
The effective Hamiltonian in the presence of these additional new physics operators is, \begin{eqnarray} H_{\mathrm{eff}}(b\to s \, l^{+} \, l^{-}) = H^{\rm SM}_{\mathrm{eff}} (b\to s \, l^{+} \, l^{-}) + H^{\rm VA}_{\mathrm{eff}} (b\to s \, l^{+} \, l^{-}), \end{eqnarray} where $H^{\rm VA}_{\mathrm{eff}}(b\to s \, l^{+} \, l^{-})$ is given by, \begin{align} \nonumber H^{\rm VA}_{\mathrm{eff}}(b\to s \, l^{+} \, l^{-}) &= \frac{\alpha G_F}{\sqrt{2} \pi} V_{ts}^* V_{tb} \bigg[C_9^{NP} (\overline{s} \gamma^{\mu} P_L b)(\overline{l} \gamma_{\mu} l) + C_{10}^{NP} (\overline{s} \gamma^{\mu} P_L b)(\overline{l} \gamma_{\mu} \gamma_{5} l) \\ &~~~~~~~~~~~~~~~ + C_9'^{NP} (\overline{s} \gamma^{\mu} P_R b)(\overline{l} \gamma_{\mu} l) + C_{10 }'^{NP} (\overline{s} \gamma^{\mu} P_R b)(\overline{l} \gamma_{\mu} \gamma_{5} l) \bigg], \label{eq:NPHeff} \end{align} where $C_9^{NP}$ and $C_{10}^{NP} $ are the new physics couplings associated with the operators $O_{9}$ and $O_{10}$, respectively, while $C_9'^{NP}$ and $C_{10}'^{NP} $ are the coefficients of the primed operators $O_{9}^{'}$ and $O_{10}^{'}$, respectively, which are obtained by replacing $P_L$ by $P_R$ in Eq.~\eqref{eq:O9O10}. The decay $B_s^0 \rightarrow \mu^+ \mu^- \gamma$ receives contributions from several channels \cite{Kozachuk:2017mdk, Guadagnoli:2017quo}.
We present the amplitudes for these channels in the presence of the additional new physics VA operators defined in Eq.~\eqref{eq:NPHeff} as follows: \begin{itemize} \item The amplitude for the emission of a real photon from the valence quarks of the $B_s$ meson and a lepton pair from the FCNC vertex is given by, \begin{eqnarray} \label{A1} A^{(1)}&=& \langle\gamma (k,\,\epsilon),\,l^+(p_1),\,l^-(p_2)|H_{\rm eff}^{b\to s l^+l^-}|\bar B_s(p) \rangle\, =\, \frac{G_F}{\sqrt{2}}\, V_{tb}V^*_{ts}\,\frac{\alpha_{\rm em}}{2\pi}\, e\, \epsilon^*_{\alpha} \nonumber \\ && \times \left[ \Big((C_{9}^{\mathrm{eff}} + C_9^{NP} + C_9'^{NP})P^{\perp}_{\mu \alpha} \frac{F_{V}(q^2)}{M_{Bs}} - (C_{9}^{\mathrm{eff}} + C_9^{NP} - C_9'^{NP})P^{\parallel}_{\mu \alpha}\frac{F_{A}(q^2)}{M_{Bs}} \Big) \bar l (p_2)\gamma_{\mu} l (-p_1)\, \right. \nonumber \\ && \left. +\, \Big((C_{10} + C_{10}^{NP} + C_{10}'^{NP})P^{\perp}_{\mu \alpha} \frac{F_{V}(q^2)}{M_{Bs}}- (C_{10} + C_{10}^{NP} - C_{10}'^{NP}) P^{\parallel}_{\mu \alpha}\frac{F_{A}(q^2)}{M_{Bs}} \Big) \bar l (p_2)\gamma_{\mu} \gamma_{5} l (-p_1)\right. \nonumber \\ && \left. +\, \frac{2C_{7\gamma}(\mu)}{q^2}m_b \Big(P^{\perp}_{\mu \alpha} F_{TV}(q^2,0) - P^{\parallel}_{\mu \alpha} F_{TA}(q^2,0) \Big) \bar l (p_2)\gamma_{\mu}l (-p_1)\, \right], \end{eqnarray} where \begin{align} P^{\perp}_{\mu \alpha} &= \epsilon_{\mu\alpha \xi \eta} k'_\xi k_\eta \,\,\,,\,\,\, P^{\parallel}_{\mu \alpha} = i \left (g_{\mu\alpha}\, k'k\, -\, k'_{\alpha}k_{\mu}\right)\,. \end{align} In this process, the momentum flowing from the FCNC vertex is $k' = q$ and the momentum of the photon emitted from the valence quark is $k = p-q$, so that $k'^2 = q^2$ and $k^2 = 0$, and the form factors $F_{i}(q^2,0) = F_i(q^2)$ given in Eqs.~\eqref{modifiedpole-s} and \eqref{modifiedpole-d} contribute.
\item The amplitude for the emission of a virtual photon from the valence quark of the $B_s$ meson and a real photon from the FCNC vertex is, \begin{eqnarray} \label{A2} A^{(2)}&=&\langle\gamma (k',\,\epsilon),\, l^+(p_1),\,l^-(p_2)\left |H_{\rm eff}^{b\to s\gamma} \right|\bar B_s(p) \rangle\,= \frac{G_F}{\sqrt{2}}\,V_{tb}V^*_{ts} \frac{\alpha_{\rm em}}{2\pi}\, e\,\epsilon^*_{\mu}\bar l (p_2)\gamma_{\alpha} l (-p_1) \nonumber \\ &\times&\left[ \frac{2 m_b C_{7\gamma}(\mu)}{q^2} \Big(P^\perp_{\mu \alpha}\, F_{TV}(0,q^2) - P^\parallel_{\mu \alpha} F_{TA}(0,q^2)\Big) \right]\,, \end{eqnarray} where $k^2= q^2$ and $k'^2 =0$, and the form factors appearing in the amplitude are $F_{TV}(0,q^2)$ and $F_{TA}(0, q^2)$ defined in Eq.~\eqref{vmd}. \item The amplitude for bremsstrahlung emission from the leptons in the final state is given by, \begin{align} \label{bremsstrahlung} A^{\rm Brems}&= -i\, e\,\frac{G_F }{\sqrt{2}}\,\frac{\alpha_{\rm em}}{2\pi}\, V^*_{ts}V_{tb}\, \frac{f_{B_s}}{M_{B_s}}\, \frac{2\, m_{l}}{M_{Bs}}\, \bar l (p_2) \left ( \frac{(\gamma\epsilon^*)\,(\gamma p)}{\hat t-\hat m^2_{l}}\, -\, \frac{(\gamma p)\,(\gamma\epsilon^*)}{\hat u-\hat m^2_{l}} \right ) \gamma_5\, l (-p_1) \nonumber \\ & \,\,\,\, \times \left( C_{10}(\mu) + C_{10}^{NP}- C_{10}'^{NP}\right)\,. \end{align} This contribution is suppressed relative to $A^{(1)}$ by the lepton mass, but is important in the high-$q^2$ region, when $q^2$ approaches $M_{Bs}^2$. \end{itemize} In an effective theory approach, heavy degrees of freedom such as the top quark and the $W$ and $Z$ bosons are integrated out; however, the effects of lighter degrees of freedom, such as the charm and up quarks, need to be taken into account in the loops. The contribution of the charm quarks to the $B \rightarrow \gamma^* \gamma^*$ amplitude arises, at lowest order, from the charming penguin topology and the weak annihilation topology. We now discuss the contribution of the penguin diagrams containing charm quarks in the loop.
The amplitudes of these penguin diagrams can be found in Ref.~\cite{Kozachuk:2017mdk}. These amplitudes have the same Lorentz structure as the amplitudes $A^{(1)}$ and $A^{(2)}$ defined in Eq.~\eqref{A1} and Eq.~\eqref{A2}. Hence, the form factors in these penguin diagram amplitudes can be expressed as corrections to the Wilson coefficient $C_{9}$ appearing in the amplitudes $A^{(1)}$ and $A^{(2)}$. These corrections arising from the charm loop effects consist of a factorizable contribution and a non-factorizable contribution due to soft gluon exchanges. The non-factorizable corrections due to charm-loop effects for the $B\to\gamma l^+l^-$ amplitude have not yet been calculated. These non-factorizable corrections have been computed for the $B\to K^* l^+l^-$ amplitude \cite{Khodjamirian:2010vf}, and are a good approximation for $B \rightarrow \mu^+ \mu^- \gamma$ at low $q^2$. We implement the factorizable and non-factorizable corrections in the low-$q^2$ region by adding to $C_9$ a simplified $q^2$-dependent parameterization of these corrections, taken from Ref.~\cite{Khodjamirian:2010vf}. The amplitude of the weak annihilation diagrams including the QCD corrections is \cite{Kozachuk:2017mdk}, \begin{eqnarray} A^{WA}=-\frac{G_F}{\sqrt{2}}\alpha_{\rm em}e\, a_1 \{V_{ub}V^*_{us}+V_{cb}V^*_{cs}\} \frac{16}{3} \epsilon_{\mu \varepsilon^* q k}\frac{1}{q^2}\,\bar l \gamma_\mu l. \label{eq:WA} \end{eqnarray} The amplitude $A^{(2)}$ has a structure similar to the $C_{7\gamma}$ part of $A^{(1)}$ except for the form factors. Similarly, the Lorentz structure of the weak annihilation amplitude given in Eq.~\eqref{eq:WA} is the same as that of the $C_{7\gamma} P_{\mu \alpha}^\perp$ term in the amplitude $A^{(1)}$.
These two amplitudes can therefore be combined with $A^{(1)}$ by redefining the form factors as follows: \begin{align} \bar{F}_{TV}(q^2) &= F_{TV}(q^2,0) + F_{TV}(0, q^2)- \frac{16}{3} \frac{V_{ub}V_{us}^* + V_{cb}V_{cs}^*}{V_{tb}V_{ts}^*}\frac{a_1\,f_{B_s}}{C_{7\gamma}\,m_b}\,, \\ \bar{F}_{TA}(q^2) &= F_{TA}(q^2,0) + F_{TA}(0, q^2)\,. \end{align} The double differential decay rate for the $B_s \rightarrow \mu^+ \mu^- \gamma$ process in the presence of new physics $V$ and $A$ operators can now be calculated using the amplitudes and form factors defined above. It is expressed as a sum of three contributions: \begin{itemize} \item ${d^2\Gamma^{(1)}}/{d\hat s\, d\hat t}$, which receives contributions from the combined amplitudes ($A^{(1)} + A^{(2)}+ A^{(WA)}$), \item ${d^2\Gamma^{(2)}}/{d\hat s\, d\hat t}$, which is the contribution from the bremsstrahlung amplitude $A^{\mathrm{Brems}}$, and \item ${d^2\Gamma^{(12)}}/{d\hat s\, d\hat t}$, which is the mixing between the amplitudes $A^{(1 + 2 + WA)}$ and $A^{\mathrm{Brems}}$.
\end{itemize} These decay rates are given below: \begin{align} \label{Gamma1} \frac{d^2\Gamma^{(1)}}{d\hat s\, d\hat t}\, &=\, \frac{G^2_F\,\alpha^3_{em}\, M^5_{Bs}}{2^{10}\,\pi^4}\, \left |V_{tb}\, V^*_{ts} \right |^2 \left [ x^2\, B_0\left (\hat s,\,\hat t\right )\, +\, x\,\,\xi\left (\hat s,\hat t\right )\, B_1\left (\hat s,\,\hat t\right ) \, +\, \xi^2\left (\hat s,\hat t\right )\, B_2\left (\hat s,\,\hat t\right ) \right ], \end{align} where \begin{align} B_0\left (\hat s,\,\hat t\right )\, &=\, \left (\hat s\, +\, 4\hat m^2_{l} \right ) \left (F_1\left(\hat s\right )\, +\, F_2\left(\hat s\right )\right)\, -\, 8\hat m^2_{l}\,\Big[\left |C_{10}(\mu) + C_{10}^{NP} + C_{10}'^{NP} \right |^2 F^2_V(q^2 )\, + \nonumber\\ & \left |C_{10}(\mu) + C_{10}^{NP} - C_{10}'^{NP} \right |^2 F^2_A (q^2 )\Big], \nonumber \\ B_1\left (\hat s,\,\hat t\right )\, &=\, 8\,\Big[ \hat s\, F_V(q^2)\, F_A(q^2)\, Re \left[\left(C^{\rm eff\, *}_{9}(\mu, q^2)+ C_9^{*NP}\,+ C_9'^{*NP}\right)\, \left(C_{10}(\mu)+C_{10}^{NP}- C_{10}'^{NP} \right)\right]\, \nonumber\\ & +\, \hat m_b\, F_V(q^2)\, Re \left[ C^*_{7\gamma}(\mu)\, \bar F^*_{TA}(q^2)\, \left(C_{10}(\mu)+ C_{10}^{NP}-C_{10}'^{NP}\right) \right] \nonumber \\ & +\, \hat m_b\, F_A(q^2)\, Re \left[C^*_{7\gamma}(\mu)\, \bar F^*_{TV}(q^2)\, \left(C_{10}(\mu)+ C_{10}^{NP}-C_{10}'^{NP}\right) \right]\Big] ,\nonumber \\ B_2\left (\hat s,\,\hat t\right )\, &=\,\hat s\, \left (F_1\left(\hat s\right )\, +\, F_2\left(\hat s\right )\right), \\ \intertext{and} F_1\left (\hat s\right )\, &=\, \left (\left |C^{\rm eff}_{9}(\mu, q^2)+ C_9^{NP} + C_9'^{NP} \right |^2 + \left |C_{10}(\mu) + C_{10}^{NP} + C_{10}'^{NP} \right |^2 \right)F^2_V(q^2) + \left (\frac{2\hat m_b}{\hat s}\right )^2 \nonumber\\ & \left |C_{7\gamma}(\mu)\, \bar F_{TV}(q^2)\right |^2 + \frac{4\hat m_b}{\hat s}\, F_V(q^2)\, Re\left[ C_{7\gamma}(\mu)\, \bar F_{TV}(q^2)\, \left(C^{\rm eff\, *}_{9}(\mu, q^2)+ C_9^{*NP} + C_9'^{*NP} \right) \right ], \nonumber\\ F_2\left (\hat s\right )\, &=\, \left (\left |C^{\rm eff}_{9}(\mu, q^2)+ C_9^{NP} - C_9'^{NP} \right |^2 + \left |C_{10}(\mu) + C_{10}^{NP} - C_{10}'^{NP} \right |^2 \right)F^2_A(q^2) + \left (\frac{2\hat m_b}{\hat s}\right )^2 \nonumber\\ & \left |C_{7\gamma}(\mu)\, \bar F_{TA}(q^2)\right |^2 + \frac{4\hat m_b}{\hat s}\, F_A(q^2)\, Re\left[ C_{7\gamma}(\mu)\, \bar F_{TA}(q^2)\, \left(C^{\rm eff\, *}_{9}(\mu, q^2)+ C_9^{*NP} - C_9'^{*NP} \right) \right ]. \end{align} \begin{align} \label{Gamma2} \frac{d^2\Gamma^{(2)}}{d\hat s\, d\hat t} &= \frac{G^2_F\,\alpha^3_{em}\, M^5_{Bs}}{2^{10}\,\pi^4}\, \left |V_{tb}\, V^*_{ts} \right |^2\, \left (\frac{8\, f_{B_s}}{M_{Bs}}\right )^2\,\hat m^2_{l}\, \left |C_{10}(\mu) + C_{10}^{NP} - C_{10}'^{NP} \right |^2 \times \nonumber \\ & \,\,\,\left [ \frac{\hat s\, +\, x^2/2} {(\hat u\, -\,\hat m^2_{l})(\hat t\, -\,\hat m^2_{l})}\, -\,\left (\frac{x\,\hat m_{l}} {(\hat u\, -\,\hat m^2_{l})\, (\hat t\, -\,\hat m^2_{l})} \right )^2\, \right ] \\ \label{Gamma12} \frac{d^2\Gamma^{(12)}}{d\hat s\, d\hat t} &= -\frac{G^2_F\,\alpha^3_{em}\, M^5_{Bs}}{2^{10}\,\pi^4}\, \left |V_{tb}\, V^*_{ts} \right |^2\,\frac{16\, f_{B_s}}{M_{Bs}}\, \hat m^2_{l} \,\frac{x^2}{(\hat u\, -\,\hat m^2_{l})(\hat t\, -\,\hat m^2_{l})} \Bigg[\frac{2\, x\, \hat m_b}{\hat s}\, Re \Big[(C^*_{10}(\mu)+ C_{10}^{*NP} - C_{10}'^{* NP}) \nonumber \\ & \,\,\, C_{7\gamma}(\mu)\bar F_{TV}(q^2)\Big] \, +\, x\, F_V(q^2)\, Re \Big[(C^*_{10}(\mu)+C_{10}^{*NP} - C_{10}'^{*NP})(C^{\rm eff}_{9}(\mu, q^2)+ C_9^{NP} + C_9'^{NP}) \Big] \nonumber\\ & +\,\xi(\hat s,\hat t)\, F_A(q^2)\,\left |C_{10}(\mu)+ C_{10}^{NP} - C_{10}'^{NP} \right |^2 \Bigg].
\end{align} Here \begin{eqnarray} \label{mandelstam} \hat s\, =\,\frac{\left (p\, -\, k\right )^2}{M_{Bs}^2},\quad \hat t\, =\,\frac{\left (p\, -\, p_1\right )^2}{M_{Bs}^2},\quad \hat u\, =\,\frac{\left (p\, -\, p_2\right )^2}{M_{Bs}^2}, \end{eqnarray} with $\hat s\, +\,\hat t\, +\,\hat u\, =\, 1\, +\, 2\hat m^2_{l}$, $\hat m^2_{l}\, =\, m^2_{l}/M^2_{Bs}$, $\hat m_b\, =\, m_b/M_{Bs}$, and \cite{Kruger:2002gf} \begin{eqnarray} x\, =\, 1\, -\, \hat s,\qquad \cos\theta\, =\,\frac{\xi\left (\hat s,\hat t\right )} {x\,\sqrt{1\, -\, 4\hat m^2_l/\hat s}},\qquad \xi\left (\hat s,\hat t\right )\, =\,\hat u\, -\,\hat t. \label{eq:costheta} \end{eqnarray} The total differential branching ratio is then given by, \begin{equation} \frac{dB (B_s \to \mu^+ \, \mu^-\,\gamma)}{d q^2}= \frac{\tau_{Bs}}{m_{Bs}^2} \int d\hat t \Big(\frac{d^2\Gamma^{(1)}}{d\hat s\, d\hat t} + \frac{d^2\Gamma^{(2)}}{d\hat s\, d\hat t} + \frac{d^2\Gamma^{(12)}}{d\hat s\, d\hat t} \Big). \end{equation} We also consider the ratio, \begin{equation} R_{\gamma}(q^2) \equiv \frac{d\Gamma(B_s \to \mu^+ \, \mu^-\,\gamma)/d q^2}{d\Gamma (B_s \to e^+ \, e^-\,\gamma)/d q^2}, \end{equation} along with the forward-backward asymmetry of muons, \begin{eqnarray} A_{FB}(q^2)\,=\,\frac{\int\limits_0^1 d\cos\theta \, \frac{d^2\Gamma(B_{s}\to l^+l^-\gamma)}{dq^2 \, d\cos\theta}-\int\limits_{-1}^0 d\cos\theta \, \frac{d^2\Gamma(B_{s}\to l^+ l^-\gamma)}{dq^2 \, d\cos\theta}}{\frac{d\Gamma(B_{s}\to l^+l^-\gamma)}{dq^2}}, \end{eqnarray} where $\theta$, the angle between the momentum $\vec{p}$ of the $B_{s}$ meson and the momentum $\vec{p}_2$ of the lepton, can be expressed in terms of $\hat{t}$ as given in Eq.~\eqref{eq:costheta}. In the next section, we obtain predictions for these observables for the various new physics solutions which provide a good fit to the present $b \to s \mu^+ \mu^-$ data.
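As a simple numerical cross-check of the kinematics above, the constraint $\hat s + \hat t + \hat u = 1 + 2\hat m^2_l$ and the relation of Eq.~\eqref{eq:costheta} can be encoded in a short sketch of our own (the mass values used are approximate and for illustration only):

```python
import math

M_BS = 5.367   # B_s meson mass in GeV (approximate)
M_MU = 0.1057  # muon mass in GeV (approximate)

def cos_theta(s_hat, t_hat, m_l=M_MU, m_bs=M_BS):
    """cos(theta) from xi(s_hat, t_hat) = u_hat - t_hat and x = 1 - s_hat,
    with u_hat eliminated via the constraint
    s_hat + t_hat + u_hat = 1 + 2 * m_l_hat^2."""
    ml_hat2 = (m_l / m_bs) ** 2
    u_hat = 1.0 + 2.0 * ml_hat2 - s_hat - t_hat  # kinematic constraint
    xi = u_hat - t_hat
    x = 1.0 - s_hat
    return xi / (x * math.sqrt(1.0 - 4.0 * ml_hat2 / s_hat))
```

At the symmetric point $\hat t = \hat u$ the two leptons are emitted symmetrically, $\cos\theta = 0$, and such configurations do not contribute to $A_{FB}$.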
\section{Results and Discussions} After the measurement of $R_{K^*}$, several groups have performed global fits to all the $b \to s \, \mu^+ \, \mu^-$ data in order to identify single Wilson coefficients, or combinations of them, which provide a good fit to the data \cite{Capdevila:2017bsm, Altmannshofer:2017yso,DAmico:2017mtc,Hiller:2017bzc,Geng:2017svp,Ciuchini:2017mik,Celis:2017doq,Alok:2017sui}. In most of these analyses, three scenarios, (I) $C_9^{\mu \mu}$ (NP) $<0$, (II) $C_9^{\mu \mu}$ (NP) = - $C_{10}^{\mu \mu}$ (NP) $<0$ and (III) $C_9^{\mu \mu}$ (NP) = - $C_{9}^{'\mu \mu}$ (NP) $<0$, were suggested as an explanation of the anomalies in the $b \to s \, \mu^+ \, \mu^-$ sector \cite{Capdevila:2017bsm}. The numerical values of the Wilson coefficients corresponding to these scenarios are listed in Table~\ref{table-wc}. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|} \hline Scenario & WC & Operator \\ \hline (I) $C_9^{\mu \mu}$ (NP) & $-1.25 \pm 0.19$ & $[\bar{s}\gamma_{\mu}P_Lb] \,[\bar{\mu}\gamma^{\mu}\mu]$\\ \hline (II) $C_9^{\mu \mu}$ (NP) = - $C_{10}^{\mu \mu}$ (NP) & $-0.68 \pm 0.12$ & $[\bar{s}\gamma_{\mu}P_Lb] \,[\bar{\mu}\gamma^{\mu}P_L\mu]$\\ \hline (III) $C_9^{\mu \mu}$ (NP) = - $C_{9}^{'\mu \mu}$ (NP) & $-1.11 \pm 0.17$ & $[\bar{s}\gamma_{\mu}\gamma_5b] \,[\bar{\mu}\gamma^{\mu}\mu]$ \\ \hline \end{tabular} \caption{New physics scenarios suggested as an explanation for all $b \to s \, \mu^+ \, \mu^-$ data. The numerical values of Wilson coefficients are taken from \cite{Alok:2017sui}.} \label{table-wc} \end{center} \end{table} Ultimately, these scenarios must be realized in some new physics model. The simplest new physics models that can give rise to these scenarios involve the tree-level exchange of a leptoquark or a $Z'$ boson. There are three leptoquark models that can explain the data in the $b \to s \, \mu^+ \, \mu^-$ sector: the scalar isotriplet with $Y=1/3$ ($S_3$), the vector isosinglet with $Y=-2/3$ ($U_1$) and the vector isotriplet with $Y=-2/3$ ($U_3$).
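The best-fit scenarios of Table~\ref{table-wc} can be encoded in a small sketch of our own, which returns the combinations of couplings that, as in the amplitude $A^{(1)}$ of the previous section, multiply $F_V$ (with $C + C'$) and $F_A$ (with $C - C'$):

```python
# Best-fit NP Wilson coefficients from the table above; primes denote the
# chirality flipped operators. Scenario II imposes C9 = -C10, and
# scenario III imposes C9 = -C9'.
SCENARIOS = {
    "I":   {"C9": -1.25, "C10": 0.00, "C9p": 0.00, "C10p": 0.0},
    "II":  {"C9": -0.68, "C10": 0.68, "C9p": 0.00, "C10p": 0.0},
    "III": {"C9": -1.11, "C10": 0.00, "C9p": 1.11, "C10p": 0.0},
}

def np_combinations(name):
    """Combinations entering A^(1): the terms multiplying F_V carry C + C'
    (perpendicular structure), the terms multiplying F_A carry C - C'
    (parallel structure)."""
    c = SCENARIOS[name]
    return {
        "C9_perp":  c["C9"] + c["C9p"],
        "C9_par":   c["C9"] - c["C9p"],
        "C10_perp": c["C10"] + c["C10p"],
        "C10_par":  c["C10"] - c["C10p"],
    }
```

For example, in scenario (III) the $C_9$ shift cancels in the $F_V$ terms ($C_9 + C_9' = 0$) and doubles in the $F_A$ terms, which is why its imprint on the observables differs qualitatively from scenarios (I) and (II).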
All of these leptoquark models give rise to scenario (II). The first and third scenarios can only be achieved in $Z'$ models. The $Z'$ couples vectorially to $\bar{s}b$ in scenarios (I) and (II), and hence one can easily construct new physics models. Scenario (III) requires an axial-vector coupling of the $Z'$ to $\bar{s}b$, and hence it can only arise in contrived $Z'$ models. Further, scenario (III) predicts $R_K\sim 1$ at the best fit point and hence is in disagreement with the measurement. Therefore scenario (III) is disfavoured both theoretically and experimentally \cite{Alok:2017sui}. We obtain predictions for several observables in the $B_s \to \mu^+ \mu^- \gamma$ decay, namely the branching ratio, the ratio $R_{\gamma}$ of the differential distributions of $B_s \to \mu^+ \mu^- \gamma$ and $B_s \to e^+ e^- \gamma$, and the muon forward-backward asymmetry $A_{FB}$, for the new physics scenarios listed in Table~\ref{table-wc}. We look for large deviations in these observables as compared to their SM values. Furthermore, we study the capability of these observables to discriminate between the various new physics solutions, in particular scenarios (I) and (II). For completeness, we also include scenario (III) in our analysis. \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=82mm]{dBR-singlepole-lowq.pdf}& \includegraphics[width=82mm]{dBR-singlepole-highq.pdf}\\ \end{tabular} \caption{Left and right panels depict the differential branching ratio, $dB/dq^2$, in the low (2-6 $\mathrm{GeV}^2$) and high-$q^2$ (15.8-23 $\mathrm{GeV}^2$) regions, respectively, for the single pole parameterization of the form factors. The green band corresponds to the 1$\sigma$ range of the SM prediction. The red, blue and orange curves correspond to $dB/dq^2$ for scenarios (I), (II) and (III), respectively, at the best fit values of the new physics WCs.
} \label{fig-dbr1} \end{figure} \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=82mm]{dBR-lowq-new.pdf}& \includegraphics[width=82mm]{dBR-highq-new.pdf}\\ \end{tabular} \caption{Left and right panels depict the differential branching ratio, $dB/dq^2$, in the low (2-6 $\mathrm{GeV}^2$) and high-$q^2$ (15.8-23 $\mathrm{GeV}^2$) regions, respectively, for the double pole parameterization of the form factors. The green band corresponds to the 1$\sigma$ range of the SM prediction. The red, blue and orange curves correspond to $dB/dq^2$ for scenarios (I), (II) and (III), respectively, at the best fit values of the new physics WCs.} \label{fig-dbr2} \end{figure} The predictions for the various observables are obtained in the low- and high-$q^2$ regions. We choose the low-$q^2$ region as 2 $\mathrm{GeV}^2$ $\leq$ $q^2$ $\leq$ 6 $\mathrm{GeV}^2$, as the dominant contribution in this region comes from diagrams where the final state photon is emitted either from the bottom or the strange quark, and hence the decay is driven mainly by the $b \to s\, \mu^+ \,\mu^-$ effective Hamiltonian \cite{Alok:2010zd}. The high-$q^2$ region is chosen as 15.8 $\mathrm{GeV}^2$ $\leq$ $q^2$ $\leq$ 23 $\mathrm{GeV}^2$, the reasons for which are explained below. The differential branching ratio $dB/dq^2$ in the low- and high-$q^2$ regions corresponding to the single and double pole form-factor parametrizations for the various new physics scenarios and the SM is depicted in Figs.~\ref{fig-dbr1} and \ref{fig-dbr2}, respectively. From these figures, it can be seen that none of the new physics scenarios can provide a large deviation from the SM prediction. This can be further seen from the integrated values of the branching ratio of $B_s \to \mu^+ \, \mu^-\, \gamma$. The integrated values of $\mathcal{B}(B_s \to \mu^+ \, \mu^-\, \gamma)$ in the SM and in the various new physics scenarios for the single and double pole parametrizations of the form factors are given in Table~\ref{tab2}.
Here we have added uncertainties in the form-factors, CKM matrix elements \cite{pdg} and the contribution of the light vector meson $\phi$ (in the low $q^2$ region) \cite{Khodjamirian:2010vf} in quadrature. From the table, it can be seen that the predictions for all new physics scenarios are consistent with the SM. This conclusion is independent of the choice of form factor parametrization considered in this work. Therefore the decay $B_s \to \mu^+ \, \mu^-\, \gamma$ is expected to be observed with a branching ratio close to its SM prediction. \begin{table}[h] \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Scenario & \multicolumn{2}{|c|}{$\mathcal{B}$: Double Pole} & \multicolumn{2}{|c|}{$\mathcal{B}$: Single Pole} \\ \hline & Low $(\times 10^{-10})$ & High $(\times 10^{-10})$ & Low $(\times 10^{-10})$& High $(\times 10^{-10})$ \\ \hline SM &$\,1.50 \pm 0.29 \, $ & $\,2.44 \pm 0.36\, $ & $\, 1.80 \pm 0.16\, $ & $\,4.97 \pm 0.78\,$\\ \hline (I) $C_9^{\mu \mu}$ (NP) & $\, 1.89 \pm 0.37 \, $ & $\, 2.07 \pm 0.28 \,$ & $\,2.24 \pm 0.22\,$ & $\,4.05 \pm 0.61\,$ \\ \hline (II) $C_9^{\mu \mu}$ (NP) = - $C_{10}^{\mu \mu}$ (NP) & $\,1.51 \pm 0.33 \,$ & $\, 1.67 \pm 0.26 \,$ & $1.71 \pm 0.18\,$ & $\,3.39 \pm 0.57\,$ \\ \hline (III) $C_9^{\mu \mu}$ (NP) = - $C_{9}^{'\mu \mu}$ (NP) & $\, 1.71 \pm 0.34 \,$ & $\, 2.35 \pm 0.36 \,$ & $\,2.07 \pm 0.20\,$ & $\,4.52 \pm 0.74\,$ \\ \hline \end{tabular} \caption{Integrated values of the branching ratio $\mathcal{B}(B_s \rightarrow \mu^+ \mu^- \gamma)$ in the low $q^2$ (2-6 $\mathrm{GeV}^2$) and high $q^2$ (15.8-23 $\mathrm{GeV}^2$) region. 
} \label{tab2} \end{center} \end{table} \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=82mm]{Rgamma-singlepole_lowq.pdf}& \includegraphics[width=82mm]{Rgamma-singlepole.pdf}\\ \end{tabular} \caption{Left and right panels depict $R_{\gamma}(q^2)$ in the low (2-6 $\mathrm{GeV}^2$) and high-$q^2$ (15.8-23 $\mathrm{GeV}^2$) regions, respectively, for the single pole parametrization of the form factors. The green band corresponds to the 1$\sigma$ range of the SM prediction. The red, blue and orange curves correspond to $R_{\gamma}$ for scenarios (I), (II) and (III), respectively, at the best fit values of the new physics WCs.} \label{fig-rs} \end{figure} \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[width=82mm]{Rgamma_new_lowq.pdf}& \includegraphics[width=82mm]{Rgamma_new_highq.pdf}\\ \end{tabular} \caption{Left and right panels depict $R_{\gamma}(q^2)$ in the low (2-6 $\mathrm{GeV}^2$) and high-$q^2$ (15.8-23 $\mathrm{GeV}^2$) regions, respectively, for the double pole parametrization of the form factors. The green band corresponds to the 1$\sigma$ range of the SM prediction. The red, blue and orange curves correspond to $R_{\gamma}$ for scenarios (I), (II) and (III), respectively, at the best fit values of the new physics WCs.} \label{fig-rd} \end{figure} The ratio $R_{\gamma}(q^2)$ in the low- and high-$q^2$ regions, corresponding to the single and double pole parametrizations of the form factors, in the SM and for the various new physics scenarios is presented in Figs.~\ref{fig-rs} and \ref{fig-rd}, respectively. It can be seen from the figures that the SM prediction of $R_{\gamma}(q^2)$ is close to $1$ in the entire low-$q^2$ region for both form factor parametrizations. In the high-$q^2$ region, $R_{\gamma}(q^2) \sim 1$ for $q^2<$ 18 $\rm GeV^2$.
Above $q^2 =$ 18 $\rm GeV^2$, the value of $R_{\gamma}(q^2)$ starts to increase from unity, and at the extreme end of the $q^2$ spectrum $R_{\gamma}(q^2)$ increases up to $3.5$ for the double pole parametrization. The value of $R_\gamma$ deviates from unity mainly due to lepton mass effects in the bremsstrahlung contribution to the $B_s \rightarrow l^+ l^- \gamma$ decay rate. At low $q^2$, the bremsstrahlung amplitude is suppressed by $\mathcal{O}(m_l/M_{Bs})$ compared to the amplitude $A^{(1)}$. At higher $q^2$ values, when $q^2 \rightarrow M_{Bs}^2$, the contribution from bremsstrahlung, being proportional to $m_l^2/M_{Bs}^3\,\times (M_{Bs}^4 + q^4)/(M_{Bs}^2- q^2)$, starts to dominate the total branching ratio and hence raises $R_\gamma$ above 1. The contribution from the interference amplitude, being proportional to $m_l^2\,(M_{Bs}^2-q^2)^2$, is small compared to the bremsstrahlung contribution at large $q^2$. The dominant contribution to $R_\gamma(q^2)$ in the high-$q^2$ region comes from the terms containing the Wilson coefficients $C_9$ and $C_{10}$, while the contribution from terms containing $C_{7\gamma}$ is small. The uncertainties in the form factors cancel to a large extent in the ratio $R_{\gamma}$. However, this cancellation becomes less effective as $q^2$ increases, in particular in the extreme high-$q^2$ region \cite{Guadagnoli:2017quo}. Hence the uncertainty in $R_{\gamma}$ grows with $q^2$ in the high-$q^2$ region. This is true for both single and double pole parametrizations of the form factors, with the uncertainty being larger in the case of the double pole parametrization. For this reason we truncate the high-$q^2$ region at 23 $\rm GeV^2$. It can be seen from the left panel of Fig.~\ref{fig-rs} that $R_{\gamma}(q^2)$ in the low-$q^2$ region, corresponding to the best fit values of the WCs for scenarios (I) and (III), lies well beyond the SM band.
For scenario (III), the deviation from the SM band is most prominent for $q^2 \sim$ 4.5 $\rm GeV^2$ - 6 $\rm GeV^2$. In the high-$q^2$ region, as can be seen from the right panel of Fig.~\ref{fig-rs}, the $R_{\gamma}(q^2)$ curves for scenarios (I) and (II) lie well outside the SM band, the maximum deviation occurring for scenario (II). The deviation is smaller for scenario (III). This agrees with the findings of \cite{Guadagnoli:2017quo}, where, using the single pole approximation for the form factors, it was shown that the $R_{\gamma}(q^2)$ curve in the high-$q^2$ region for scenario (II) falls well outside the SM range of $R_{\gamma}(q^2)$. The predictions for $R_{\gamma}(q^2)$ using the double pole parametrization of the form factors are given in Fig.~\ref{fig-rd}. From the left panel of the figure, one can infer that the results in the low-$q^2$ region are almost the same as in the single pole case. In the high-$q^2$ range, however, the results differ: the $R_{\gamma}(q^2)$ curve for scenario (III) lies within the SM band, while the curves for scenarios (I) and (II) still lie outside it, though with a smaller deviation than in the single pole case.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Scenario & \multicolumn{2}{|c|}{$R_\gamma$: Double Pole} & \multicolumn{2}{|c|}{$R_\gamma$: Single Pole} \\
\hline
 & Low & High & Low & High \\
\hline
SM & $\,0.99 \pm 0.01\,$ & $\,1.37 \pm 0.07\,$ & $\,0.99 \pm 0.01\,$ & $\,1.15 \pm 0.02\,$ \\
\hline
(I) $C_9^{\mu \mu}$ (NP) & $\,1.24 \pm 0.05\,$ & $\,1.17 \pm 0.08\,$ & $\,1.23 \pm 0.05\,$ & $\,0.94 \pm 0.04\,$ \\
\hline
(II) $C_9^{\mu \mu}$ (NP) = $-C_{10}^{\mu \mu}$ (NP) & $\,0.99 \pm 0.04\,$ & $\,0.94 \pm 0.07\,$ & $\,0.95 \pm 0.04\,$ & $\,0.83 \pm 0.04\,$ \\
\hline
(III) $C_9^{\mu \mu}$ (NP) = $-C_{9}^{'\mu \mu}$ (NP) & $\,1.13 \pm 0.04\,$ & $\,1.32 \pm 0.08\,$ & $\,1.14 \pm 0.04\,$ & $\,1.05 \pm 0.05\,$ \\
\hline
\end{tabular}
\caption{Integrated values of $R_\gamma$ in the low-$q^2$ (2 $\mathrm{GeV}^2$ $\leq q^2 \leq$ 6 $\mathrm{GeV}^2$) and high-$q^2$ (15.8 $\mathrm{GeV}^2$ $\leq q^2 \leq$ 23 $\mathrm{GeV}^2$) regions.}
\label{table-rg}
\end{center}
\end{table}

The integrated values of $R_{\gamma}(q^2)$, denoted $R_{\gamma}$, for the single and double pole parametrizations are listed in Table~\ref{table-rg}. Here the uncertainties due to the form factors are added in quadrature. In the low-$q^2$ region, we have included additional uncertainties related to the light vector meson $\phi$. For the new physics scenarios, the errors in the new physics WCs, as given in Table~\ref{table-wc}, are also included. For the single pole parametrization of the form factors, the predictions for $R_{\gamma}$ in the low-$q^2$ region for scenarios (I) and (III) deviate from the SM prediction by 4$\sigma$ and 3$\sigma$, respectively. $R_{\gamma}$ for scenario (II) is consistent with the SM. For the double pole form factor parametrization, $R_{\gamma}$ for scenario (II) is consistent with the SM, while $R_{\gamma}$ for scenarios (I) and (III) deviates from the SM at the level of 4.1$\sigma$ and 2.8$\sigma$, respectively.
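A deviation "at the level of $n\sigma$" between a new physics prediction and the SM value can be estimated, in the simplest treatment, as a pull with uncertainties combined in quadrature. The sketch below shows only this naive formula with generic numbers; the significances quoted in the text come from the full analysis, which may treat (partially correlated) form factor uncertainties differently:

```python
from math import sqrt

def pull(x, sx, y, sy):
    """Naive significance (in sigma) of the difference between two values,
    assuming independent Gaussian uncertainties combined in quadrature."""
    return abs(x - y) / sqrt(sx**2 + sy**2)

# Generic illustration: 1.5 +/- 0.1 versus 1.0 +/- 0.1 differ by
# 0.5 / sqrt(0.02) ~ 3.5 sigma under this naive treatment.
print(pull(1.5, 0.1, 1.0, 0.1))
```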
Thus the conclusions in the low-$q^2$ region are almost independent of the choice of form factor parametrization. For the single pole case, the predictions for $R_{\gamma}$ in the high-$q^2$ region for scenarios (I), (II) and (III) deviate from the SM prediction by 3.5$\sigma$, 5.3$\sigma$ and 1.4$\sigma$, respectively. For the double pole case, $R_{\gamma}$ for scenario (III) is consistent with the SM, while $R_{\gamma}$ for scenarios (I) and (II) deviates from the SM at the level of 1.3$\sigma$ and 3$\sigma$, respectively. Thus the conclusions in the high-$q^2$ region rely heavily on the choice of form factor parametrization.

We now consider NP effects in the FB asymmetry of muons in $B_s \to \mu^+ \, \mu^-\, \gamma$. For the single pole parametrization, the SM predictions for the integrated values of $A_{FB}(q^2)$, $\langle A_{FB}(q^2) \rangle$, in the low- and high-$q^2$ regions are $0.48 \pm 0.05$ and $-0.58\pm 0.03$, respectively. For scenarios (I), (II) and (III), the predictions for $\langle A_{FB}(q^2) \rangle$ in the low-$q^2$ region are $(0.58 \pm 0.02)$, $(0.55 \pm 0.03)$ and $(0.42 \pm 0.05)$, respectively. In the high-$q^2$ region, the predictions for NP scenarios (I), (II) and (III) are $(-0.42 \pm 0.05)$, $(-0.56 \pm 0.04)$ and $(-0.64 \pm 0.06)$, respectively. Thus the predictions for all NP scenarios are consistent with the SM values. These conclusions remain the same for the double pole parametrization as well.

\section{Conclusions}
The measurements of several observables in decays induced by the quark-level transition $b \to s \mu^+ \mu^-$ do not agree with the predictions of the SM. These measurements can be considered hints of physics beyond the SM. Several new physics scenarios, all in the form of vector and axial-vector operators, have been suggested as explanations of the anomalies in the $b \to s \mu^+ \mu^-$ sector. It is therefore worthwhile to consider the impact of these solutions on other related decay modes.
In this work we study new physics effects in the radiative leptonic decay of the $B_s$ meson in the light of the present $b \to s \mu^+ \mu^-$ data. We consider contributions to the $b \to s \mu^+ \mu^- \gamma$ decay from: (i) direct emission of real or virtual photons from the valence quarks of the $B_s$ meson, (ii) emission of a real photon from the $b \rightarrow s$ loop, (iii) weak annihilation, and (iv) bremsstrahlung from the leptons in the final state. We compute the branching ratio of $B_s \to \mu^+ \mu^- \gamma$, the ratio $R_{\gamma}$ of the differential distributions of $B_s \to \mu^+ \mu^- \gamma$ and $B_s \to e^+ e^- \gamma$, and the muon forward-backward asymmetry $A_{FB}$ in the presence of the additional new physics vector and axial-vector operators. We consider the form factors relevant for this decay in both the single pole and the modified pole parametrizations and obtain predictions for these quantities for all allowed new physics solutions. We find that for all allowed new physics solutions, the predicted values of the branching ratio and $A_{FB}$ are consistent with the SM. However, a large deviation in $R_{\gamma}$ from its SM value is allowed for some of the new physics solutions. The prediction for $R_{\gamma}$ in the low-$q^2$ region (2-6 $\rm GeV^2$) for the $C_9^{\mu \mu}$ (NP) $<0$ solution deviates from the SM prediction at the level of 4$\sigma$ for both the single and modified pole parametrizations of the form factors. The conclusions in the high-$q^2$ region (16-23 $\rm GeV^2$) rely heavily on the choice of form factor parametrization. However, both parametrizations allow a 3$\sigma$ deviation in $R_{\gamma}$ for the $C_9^{\mu \mu}$ (NP) = $-C_{10}^{\mu \mu}$ (NP) $<0$ solution. Hence the measurement of $R_{\gamma}$ can be useful in identifying the Lorentz structure of NP in the $b \to s \mu^+ \mu^-$ transition.

\section{Acknowledgment}
We are thankful to Diego Guadagnoli for useful suggestions regarding our analyses related to $R_{\gamma}$.
We also thank S. Uma Sankar, Dinesh Kumar and Jacky Kumar for useful discussions and suggestions.
\section{System Overview}
vitrivr is an open-source content-based multimedia retrieval stack, capable of retrieving not only video, but also images, audio, and 3D models~\cite{rossetto2018open}. The vitrivr stack is the open-source continuation of the IMOTION system, which has participated in the Video Browser Showdown for several years~\cite{rossetto2015imotion,rossetto2016imotion,rossetto2017enhanced}. vitrivr supports many different query modes, such as Query-by-Sketch and Query-by-Example, as well as text-based queries for text-on-screen, dialog, or semantic concepts. The mechanisms used for visual content-based queries are already detailed in previous publications~\cite{rossetto2014cineast,rossetto2015imotion,rossetto2016imotion,rossetto2017enhanced,rossetto2018competitive} and have remained the same for the version used at the Video Browser Showdown 2018. Figure~\ref{fig:screenshot} shows a screenshot of the vitrivr user interface.

\begin{figure*}
\centering
\includegraphics[width=0.9\textwidth]{vitrivr_screenshot_01.png}
\caption{Screenshot of the vitrivr user interface}
\label{fig:screenshot}
\end{figure*}

\section{Additions}
This section provides a brief overview of the additions that were made after the submission of the original publication~\cite{rossetto2018competitive}. They concern both the user interface and the textual features used.

In order to improve the usability and the retrieval effectiveness in a time-sensitive and competitive environment, the user interface was adapted. Since the user interface was originally designed to support retrieval across different types of multimedia, it was not optimized to display a high density of video data. Two additional data views have therefore been added to the interface to improve its capabilities in this regard. The first one fills the screen with representative thumbnail images of video segments and arranges them by their similarity to the query, independently of their video of origin.
The second view, an example of which can be seen in Figure~\ref{fig:screenshot}, groups the same thumbnails by video and displays them in temporal order within each group. In these views, the result set can be expanded by manually loading neighboring segments of a given segment in order to gain a better overview of the video's content. Loading all segments of a video in such a way is also possible. Segments can also be annotated with different colors in order to quickly find them again when switching between views or re-ordering the segments based on different similarity criteria. The video player was extended to support higher playback speeds as well as submitting a result directly based on the current playback position.

With respect to the data used for retrieval, in particular to execute textual queries, two data sources were used: the spoken words were already provided with the collection as the result of an automatic speech recognition (ASR) process. This ASR data was transformed into approximate subtitles by thresholding the probabilities associated with the recognized words. For object labeling and scene text recognition, the Google Cloud Vision API\footnote{\url{https://cloud.google.com/vision/}} was used. The text-based retrieval was performed using the Apache Solr\footnote{\url{http://lucene.apache.org/solr/}} text search platform with fuzzy queries in order to compensate for minor mis-detections and typos.

\section{System use and performance}
During the competition, both `expert' and `novice' users relied heavily on the text-based retrieval capabilities, at least in a first attempt. This was probably due to the small amount of time needed to specify a textual query as opposed to creating a visual sketch, which was rather time-consuming. Text-based queries have also been shown to work very well in cases where there is distinctive dialog in the target sequence or clearly readable text visible on-screen.
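The ASR-to-subtitle transformation described above can be sketched as follows. The `(word, confidence)` data layout and the 0.7 threshold are illustrative assumptions, not details of the actual vitrivr pipeline:

```python
# Sketch of turning per-word ASR output into approximate subtitle text by
# dropping low-confidence words. The (word, confidence) layout and the 0.7
# threshold are illustrative assumptions.

def to_subtitle(recognized, threshold=0.7):
    """Keep only words whose recognition confidence reaches the threshold."""
    return " ".join(word for word, conf in recognized if conf >= threshold)

asr_output = [("the", 0.95), ("quick", 0.40), ("brown", 0.88), ("fox", 0.91)]
print(to_subtitle(asr_output))  # low-confidence "quick" is dropped
```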
Textual queries also perform well when a detectable object is being displayed in the scene which is uncommon enough to serve as an effective filter while being sufficiently common to still be recognized by one of the available object detectors. In case none of these prerequisites for an effective textual query were present, the visual query served as a fallback option. During the competition, many tasks would not lend themselves well to text-based queries and for some of them, even visual queries proved largely ineffective. Some possible reasons for this are elaborated on in Section~\ref{sec:lessons-learned}. The overall placements for the different task types were as follows: \begin{itemize} \item 1$^{st}$ place in pre-competition textual KIS tasks (100/100 points). These points were not counted towards the final points as this was a separate session the day prior to the actual competition. \item 2$^{nd}$ place in competition textual KIS tasks (55/100 points) \item 4$^{th}$ place in visual expert KIS tasks (79/100 points) \item 5$^{th}$ place in expert AVS tasks (64/100 points) \item 7$^{th}$ place in novice AVS tasks (35/100 points) \item no placement in novice visual KIS tasks (0/100 points) \item 7$^{th}$ place overall (47/100) \end{itemize} \section{Lessons Learned} \label{sec:lessons-learned} The experiences during the competition as well as the subsequent analysis of the results uncovered several lessons for future participations, which can be summarized as follows: \begin{itemize} \item As in 2017, we used the master shot references provided with the IACC.3. While this worked well in 2017, there were problems with this segmentation in 2018. In many instances, the segmentation was too coarse so that the actual target scene was part of a much longer segment and could therefore not be found. \item The fuzzy search method used for textual queries, while adequate for ASR and OCR data, proved disadvantageous for concepts. 
For example, results for the query `toast' would include beach settings, as they were tagged with `coast'.
\item The actual browsing capabilities of the system turned out to be insufficient in situations where no sufficiently selective query could be formulated. In the case of many possibly relevant results, the submission mechanism for AVS also turned out not to be very effective.
\item For several queries, the same elements dominated the result set independent of query variations. Methods to increase result diversity should be explored in the future.
\end{itemize}

\section{Conclusion}
The results show that the overall approaches used for the vitrivr stack are sound, but that there is still room for improvement. The wide range in placement shows that the system does not yet reliably produce high-quality results in a competitive setting. The lessons discussed above will serve as a guide for improving this reliability in the future.

\section*{Acknowledgements}
This work was partly supported by the Chist-Era project IMOTION with contributions from the Swiss National Science Foundation (SNSF, contract no. 20CH21\_151571). The authors would like to thank our `novice' user for operating the system during the novice tracks of the competition.

\balance
\bibliographystyle{ACM-Reference-Format}
\section{Introduction}
\label{sec:introduction}
\IEEEPARstart{T}{his} demo file is intended to serve as a ``starter file'' for IEEE Computer Society journal papers produced under \LaTeX\ using IEEEtran.cls version 1.8b and later. I wish you the best of success.

\hfill mds

\hfill August 26, 2015

\subsection{Subsection Heading Here}
Subsection text here.

\subsubsection{Subsubsection Heading Here}
Subsubsection text here.

\section{Conclusion}
The conclusion goes here.

\appendices
\section{Proof of the First Zonklar Equation}
Appendix one text goes here.

\section{}
Appendix two text goes here.

\section*{Acknowledgments}
The authors would like to thank...
\section{Introduction}
This document provides instructions for submitting papers to the 49th International Symposium on Microarchitecture (MICRO), 2016. In an effort to respect the efforts of reviewers and in the interest of fairness to all prospective authors, we request that all submissions to MICRO 2016 follow the formatting and submission rules detailed below. Submissions that violate these instructions may not be reviewed, at the discretion of the program chair, in order to maintain a review process that is fair to all potential authors.

An example file (formatted using the MICRO'16 submission format) that contains the formatting guidelines can be downloaded from here: \href{http://www.microarch.org/micro49/samplepaper.pdf}{Sample PDF}. The content of this document mirrors that of the submission instructions that appear on \href{http://www.microarch.org/micro49/submission.php}{this website}, where the paper submission site will be linked online shortly.

All questions regarding paper formatting and submission should be directed to the program chair.

\subsection{Format Highlights}
Note that there are some changes from last year.
\begin{itemize}
\item Paper must be submitted in printable PDF format.
\item Text must be in a minimum 10pt ({\bf not} 9pt) font.
\item Papers must be at most 11 pages, not including references.
\item No page limit for references.
\item Each reference must specify {\em all} authors (no {\em et al.}).
\item Authors may optionally suggest reviewers.
\item Authors of {\em all} accepted papers will be required to give a lightning presentation (about 90s) and a poster in addition to the regular conference talk.
\end{itemize}

\subsection{Paper Evaluation Objectives}
The committee will make every effort to judge each submitted paper on its own merits. There will be no target acceptance rate. We expect to accept a wide range of papers with appropriate expectations for evaluation --- while papers that build on significant past work with strong evaluations are valuable, papers that open new areas with less rigorous evaluation are equally welcome and especially encouraged. Given the wide range of topics covered by MICRO, every effort will be made to find expert reviewers, including providing the ability for authors to suggest additional reviewers.

\section{Paper Preparation Instructions}
\subsection{Paper Formatting}
Papers must be submitted in printable PDF format and should contain a {\bf maximum of 11 pages} of single-spaced two-column text, {\bf not including references}. You may include any number of pages for references, but see below for more instructions. If you are using \LaTeX~\cite{lamport94} to typeset your paper, then we suggest that you use the template here: \href{http://www.microarch.org/micro49/micro49-latex-template.tar.gz}{\LaTeX~Template}. This document was prepared with that template. If you use a different software package to typeset your paper, then please adhere to the guidelines given in Table~\ref{table:formatting}.

\begin{scriptsize}
\begin{table}[h!]
\centering \begin{tabular}{|l|l|} \hline \textbf{Field} & \textbf{Value}\\ \hline \hline File format & PDF \\ \hline Page limit & 11 pages, {\bf not including}\\ & {\bf references}\\ \hline Paper size & US Letter 8.5in $\times$ 11in\\ \hline Top margin & 1in\\ \hline Bottom margin & 1in\\ \hline Left margin & 0.75in\\ \hline Right margin & 0.75in\\ \hline Body & 2-column, single-spaced\\ \hline Space between columns & 0.25in\\ \hline Body font & 10pt\\ \hline Abstract font & 10pt, italicized\\ \hline Section heading font & 12pt, bold\\ \hline Subsection heading font & 10pt, bold\\ \hline Caption font & 9pt (minimum), bold\\ \hline References & 8pt, no page limit, list \\ & all authors' names\\ \hline \end{tabular} \caption{Formatting guidelines for submission. } \label{table:formatting} \end{table} \end{scriptsize} \textbf{Please ensure that you include page numbers with your submission}. This makes it easier for the reviewers to refer to different parts of your paper when they provide comments. Please ensure that your submission has a banner at the top of the title page, similar to \href{http://www.microarch.org/micro49/samplepaper.pdf}{this one}, which contains the submission number and the notice of confidentiality. If using the template, just replace XXX with your submission number. \subsection{Content} \noindent\textbf{\sout{Author List.}} Reviewing will be double blind; therefore, please do not include any author names on any submitted documents except in the space provided on the submission form. You must also ensure that the metadata included in the PDF does not give away the authors. If you are improving upon your prior work, refer to your prior work in the third person and include a full citation for the work in the bibliography. 
For example, if you are building on {\em your own} prior work in the papers \cite{nicepaper1,nicepaper2,nicepaper3}, you would say something like: ``While the authors of \cite{nicepaper1,nicepaper2,nicepaper3} did X, Y, and Z, this paper additionally does W, and is therefore much better.'' Do NOT omit or anonymize references for blind review. There is one exception to this for your own prior work that appeared in IEEE CAL, workshops without archived proceedings, etc., as discussed later in this document.

\noindent\textbf{Figures and Tables.} Ensure that the figures and tables are legible. Please also ensure that you refer to your figures in the main text. Many reviewers print the papers in gray-scale. Therefore, if you use colors for your figures, ensure that the different colors are highly distinguishable in gray-scale.

\noindent\textbf{References.} There is no length limit for references. {\bf Each reference must explicitly list all authors of the paper. Papers not meeting this requirement will be rejected.} Authors of NSF proposals should be familiar with this requirement. Knowing all authors of related work will help find the best reviewers. Since there is no length limit for the number of pages used for references, there is no need to save space here.

\section{Paper Submission Instructions}
\subsection{Guidelines for Determining Authorship}
IEEE guidelines dictate that authorship should be based on a {\bf substantial intellectual contribution}. It is assumed that all authors have had a significant role in the creation of an article that bears their names.
In particular, the authorship credit must be reserved only for individuals who have met each of the following conditions: \begin{enumerate} \item Made a significant intellectual contribution to the theoretical development, system or experimental design, prototype development, and/or the analysis and interpretation of data associated with the work contained in the article; \item Contributed to drafting the article or reviewing and/or revising it for intellectual content; and \item Approved the final version of the article as accepted for publication, including references. \end{enumerate} A detailed description of the IEEE authorship guidelines and responsibilities is available \href{https://www.ieee.org/publications_standards/publications/rights/Section821.html}{here}. Per these guidelines, it is not acceptable to award {\em honorary } authorship or {\em gift} authorship. Please keep these guidelines in mind while determining the author list of your paper. \subsection{Declaring Authors} Declare all the authors of the paper upfront. Addition/removal of authors once the paper is accepted will have to be approved by the program chair, since it potentially undermines the goal of eliminating conflicts for reviewer assignment. \subsection{Areas and Topics} Authors should indicate these areas on the submission form as well as specific topics covered by the paper for optimal reviewer match. If you are unsure whether your paper falls within the scope of MICRO, please check with the program chair -- MICRO is a broad, multidisciplinary conference and encourages new topics. \subsection{Declaring Conflicts of Interest} Authors must register all their conflicts on the paper submission site. Conflicts are needed to ensure appropriate assignment of reviewers. If a paper is found to have an undeclared conflict that causes a problem OR if a paper is found to declare false conflicts in order to abuse or ``game'' the review system, the paper may be rejected. 
We use the NSF conflict of interest guidelines for determining the conflict period for MICRO'16. Please declare a conflict of interest (COI) with the following people for any author of your paper:
\begin{enumerate}
\item Your Ph.D. advisor(s), post-doctoral advisor(s), Ph.D. students, and post-doctoral advisees, forever.
\item Family relations by blood or marriage, or their equivalent, forever (if they might be potential reviewers).
\item People with whom you have collaborated in the last FOUR years, including
\begin{itemize}
\item co-authors of accepted/rejected/pending papers,
\item co-PIs on accepted/rejected/pending grant proposals, and
\item funders (decision-makers) of your research grants, and researchers whom you fund.
\end{itemize}
\item People (including students) who shared your primary institution(s) in the last FOUR years.
\item Other relationships, such as close personal friendship, that you think might tend to affect your judgment or be seen as doing so by a reasonable person familiar with the relationship.
\end{enumerate}
``Service'' collaborations, such as co-authoring a report for a professional organization, serving on a program committee, or co-presenting tutorials, do not themselves create a conflict of interest. Co-authoring a paper that is a compendium of various projects with no true collaboration among the projects does not constitute a conflict among the authors of the different projects. On the other hand, there may be others not covered by the above with whom you believe a COI exists, for example, an ongoing collaboration which has not yet resulted in the creation of a paper or proposal. Please report such COIs; however, you may be asked to justify them. Please be reasonable. For example, you cannot declare a COI with a reviewer just because that reviewer works on topics similar to or related to those in your paper. The PC Chair may contact co-authors to explain a COI whose origin is unclear.
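The four-year collaboration window described above can be expressed as a simple check. This sketch models only the time-windowed categories; categories that are conflicts ``forever'' (advisors, family) are deliberately not modeled, and the year values are illustrative:

```python
# Sketch of the four-year collaboration window for conflicts of interest.
# Illustrative only: categories such as advisors and family members are
# conflicts "forever" per the guidelines and are not modeled here.

def within_coi_window(last_collab_year, submission_year=2016, window=4):
    """True if a past collaboration falls inside the COI window
    (collaborations within the last FOUR years)."""
    return submission_year - last_collab_year <= window

print(within_coi_window(2013), within_coi_window(2011))  # True False
```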
We hope to draw most reviewers from the PC and the ERC, but others from the community may also write reviews. Please declare all your conflicts (not just those restricted to the PC and ERC). When in doubt, contact the program chair.

\subsection{Optional Reviewer Suggestions}
Authors may optionally mark (non-conflicted) PC and ERC members that they believe could provide expert reviews for their submission. If authors believe there is insufficient expertise on the PC and ERC for the topic of their paper, they may suggest alternate reviewers. The program chair will use the authors' input at his discretion. We provide this opportunity for input mostly for papers on non-traditional and emerging topics.

\subsection{Concurrent Submissions and Workshops}
By submitting a manuscript to MICRO'16, the authors guarantee that the manuscript has not been previously published or accepted for publication in a substantially similar form in any conference, journal, or the archived proceedings of a workshop (e.g., in the ACM digital library) -- see exceptions below. The authors also guarantee that no paper that contains significant overlap with the contributions of the submitted paper will be under review for any other conference or journal or an archived proceedings of a workshop during the MICRO'16 review period. Violation of any of these conditions will lead to rejection. The only exceptions to the above rules are for the authors' own papers in (1) workshops without archived proceedings such as in the ACM digital library (or where the authors chose not to have their paper appear in the archived proceedings), or (2) venues such as IEEE CAL where there is an explicit policy that such publication does not preclude longer conference submissions. In all such cases, the submitted manuscript may ignore the above work to preserve author anonymity.
This information must, however, be provided on the submission form -- the PC chair will make this information available to reviewers if it becomes necessary to ensure a fair review. As always, if you are in doubt, it is best to contact the program chair. Finally, we also note that the ACM Plagiarism Policy ({\em http://www.acm.org/publications/policies/plagiarism\_policy}) covers a range of ethical issues concerning the misrepresentation of other works or one's own work. \section{Acknowledgements} This document is derived from previous conferences, in particular MICRO 2013 and ASPLOS 2015. We thank Christos Kozyrakis and Sandhya Dwarkadas for their inputs. \bibliographystyle{ieeetr} \section*{Acknowledgments} We thank the reviewers and our shepherd for their valuable suggestions. We thank the members of the SAFARI group for their feedback and the stimulating research environment they provide. Special thanks to Vivek Seshadri, Kathryn McKinley, Steve Keckler, Evgeny Bolotin, and Mike O'Connor for their feedback during various stages of this project. We acknowledge the support of our industrial partners: Facebook, Google, IBM, Intel, Microsoft, NVIDIA, Qualcomm, Samsung, and VMware. This research was partially supported by NSF (grant 1409723), the Intel Science and Technology Center for Cloud Computing, and the Semiconductor Research Corporation. \section{Significance and Long-Term Impact} In this section, we describe the significance and long-term impact of our MICRO 2016 work, Zorua, by delineating its novelty, what it can enable in future systems, and new research directions that it triggers. \subsection{Novelty} \noindent{\textbullet}~This is the first work that takes a holistic approach to decoupling a GPU application's resource specification from its physical on-chip resource allocation via the use of virtualization. 
We develop a comprehensive virtualization framework that provides \emph{controlled} and \emph{coordinated} virtualization of \emph{multiple} on-chip resources to maximize the effectiveness of virtualization. \vspace{5pt}% \noindent{\textbullet}~Making GPUs easy to program is critical for their widespread use, and also to achieve the high performance promised by the massively parallel architecture. A key limiting factor in GPU programming today is the burden placed on the programmer in finding a hardware resource specification that achieves very high performance. This is the first work to ease that burden without compromising performance by virtualizing the major hardware resources programmers are required to manage today.% \vspace{5pt}% \noindent{\textbullet}~Portability across GPU architectures is vital in environments such as cloud computing and data centers to achieve predictably good performance, \emph{irrespective} of the GPU generation the application is executing on. This is the first work to tackle the portability challenges that arise from the \highlight{programmer's management} of the fixed on-chip resources with a holistic resource virtualization strategy. \subsection{What Zorua Can Enable in Future Systems} GPUs have emerged as the dominant massively parallel architecture, used as the platform of choice for a wide range of parallel applications from machine learning to scientific simulation. However, there are a number of key challenges that limit the adoption of GPUs across broader classes of applications and environments, e.g., data centers, cloud computing, etc. Programmability and portability of GPU applications are two such challenges. But future GPUs will need to address several other challenges before truly becoming first-class compute engines. As we describe below, we believe that our work can help address some of these other challenges.
\vspace{1mm} \textbf{Multiprogramming in Virtualized Environments.} Zorua lends itself to easily addressing two key challenges in enabling multiprogramming in virtualized environments today: \emph{Fine-grained resource sharing across kernels:} \X manages the different resources independently and at a fine granularity, using a dynamic runtime system. Hence, Zorua can be extended to support fine-grained sharing and partitioning of resources across multiple kernels to enable efficient multiprogramming in GPUs. Zorua enables better resource utilization in these multiprogrammed environments, while providing the ability to control the partitioning of resources at runtime to provide QoS, fairness, etc., by leveraging the hardware runtime system. \highlight{Zorua can work synergistically with systems such as Mosaic~\cite{mosaic} and MASK~\cite{mask}, which enable efficient memory virtualization techniques for GPUs, to enable true full-system multi-kernel execution.} \emph{Preemptive multitasking:} Another key challenge in enabling true multiprogramming in GPUs is enabling rapid preemption of kernels~\cite{isca-2014-preemptive,simultaneous-sharing,chimera}. Context switching on GPUs incurs a very high latency and overhead, as a result of the large amount of register file/scratchpad state that needs to be saved before a new kernel can be executed. \X enables fine-grained management and virtualization of on-chip resources. It can be naturally extended to enable quick preemption of a task via intelligent management of the swap space and the mapping tables. It can also work synergistically with CABA~\cite{caba}, a framework for assist warp execution in GPUs, to provide flexible and efficient support for multitasking and context switching. \textbf{Support for Other Parallel Programming Paradigms.} The fixed static resource allocation for each thread in modern GPU architectures requires statically dictating the parallelism for the program throughout its execution.
Other forms of parallel execution that are \emph{dynamic} (e.g., CILK~\cite{cilk}) require more flexible allocation of resources at runtime, and are hence more challenging to enable on GPUs. Examples of this include \emph{nested parallelism}~\cite{nested}, where a kernel can dynamically spawn new kernels or thread blocks, and \emph{helper threads}~\cite{caba} to utilize idle resources at runtime to perform different optimizations or background tasks in parallel. Zorua makes it easy to enable these paradigms by providing on-demand dynamic allocation of resources. \textbf{Energy Efficiency, Scalability, and Reliability.} To support massive parallelism, on-chip resources are precious and critical. However, these resources \emph{cannot} grow arbitrarily large as GPUs continue to be area-limited and on-chip memory tends to be extremely power hungry and area intensive~\cite{energy-register,virtual-register,compiler-register,warped-register,virtual-thread,ltrf-sadrosadati-asplos18}, which are trends we believe will become increasingly important for the foreseeable future. Furthermore, complex thread schedulers that can select a thread for execution from an increasingly large thread pool are required. \X enables using smaller register files, scratchpad memory and less complex or fewer thread schedulers to save power and area while still retaining or improving parallelism. The indirection offered by \X, along with the dynamic management of resources, could also enable better reliability. The virtualization framework trivially allows portions of a resource that contain hard or soft faults to be remapped to other portions of the resource that do not contain faults, or to spare structures, thereby increasing the error tolerance of these resources. \subsection{New Research Directions Zorua Enables} Zorua opens up several new avenues for future research, which we briefly discuss here.
\textbf{Flexible Programming Models for GPUs and Heterogeneous Systems.} By providing a flexible but dynamically controlled view of the on-chip hardware resources, Zorua changes the abstraction of the on-chip resources that is offered to the programmer and software. This offers the opportunity to rethink resource management in GPUs from the ground up. One could envision more powerful resource allocation and better programmability with programming models that do \emph{not} require static resource specification, leaving the compiler/runtime system and the underlying virtualization framework to \highlight{\emph{completely} handle \emph{all}} forms of on-chip resource allocation, unconstrained by the fixed physical resources in a specific GPU, entirely at runtime. This is especially significant in future systems that are likely to support a wide range of compute engines and accelerators, making it important to be able to write high-level code that can be partitioned easily, efficiently, and at a fine granularity across any \highlight{set of accelerators}, without statically tuning any code segment to run efficiently on the GPU. \textbf{Virtualization-Aware Compilation and Auto-Tuning.} Zorua changes the contract between the hardware and software to provide a more powerful resource abstraction (in the software) that is \emph{flexible and dynamic}, by pushing more functionality to the hardware, which can more easily react to the runtime resource requirements of the program. We can re-imagine compilers and auto-tuners to be more intelligent, leveraging this new abstraction and, hence, the virtualization, to deliver more efficient and high-performing code optimizations that are \highlight{\emph{not} possible with the \emph{fixed} and \emph{static}} abstractions of today. They could, for example, \emph{leverage} the oversubscription and dynamic management that Zorua provides to tune the code to more aggressively use resources.
\textbf{Support for System-Level Tasks on GPUs.} As GPUs become increasingly general purpose, a key requirement is better integration with the CPU operating system, and with complex distributed software systems such as those employed for large-scale distributed machine learning~\cite{tensorflow, gaia} or graph processing\highlight{~\cite{graphlab, tesseract,pim-enabled}}. If GPUs are architected to be first-class compute engines, rather than the slave devices they are today, they can be programmed and utilized in the same manner as a modern CPU. This integration requires the GPU execution model to support system-level tasks like interrupts, exceptions, etc., and more generally provide support for access to distributed file systems, disk I/O, or network communication. Support for these tasks and execution models requires dynamic provisioning of resources for execution of system-level code. Zorua provides a building block to enable this. \textbf{Applicability to General Resource Management in Accelerators.} Zorua uses a program \emph{phase} as the granularity for managing resources. This allows handling resources across phases \emph{dynamically}, while leveraging \emph{static} information regarding resource requirements from the software by inserting annotations at phase boundaries. Future work could potentially investigate the applicability of the same approach to manage resources and parallelism in \emph{other} accelerators (\highlight{e.g., processing-in-memory accelerators~\cite{pim-enabled, tesseract, tom, impica,googlepim-asplos18, shaw1981non, boroumand2016pim, kim.bmc18, ambit, guo-wondp14, stone1970logic, zhang-2014, kogge.iccp94, patterson.ieeemicro97} or direct-memory access engines~\cite{rowclone,decoupled-dma, chang.hpca16}}) that require efficient dynamic management of large amounts of particularly critical resources.
\section{Related Work} \label{sec:related} To our knowledge, our MICRO 2016 paper~\cite{zorua} is the first work to propose a holistic framework to decouple a GPU application's resource specification from its physical on-chip resource allocation by virtualizing multiple on-chip resources. This enables the illusion of more resources than what physically exists to the programmer, while the hardware resources are managed at runtime by employing a swap space (in main memory), transparently to the programmer. We design a new hardware/software cooperative framework to effectively virtualize multiple on-chip GPU resources in a controlled and coordinated manner, thus enabling many benefits of virtualization in GPUs. We briefly discuss prior work related to different aspects of our proposal: \emph{(i)}~ virtualization of resources, \emph{(ii)}~ improving programming ease and portability, and \emph{(iii)}~ more efficient management of on-chip resources. \textbf{Virtualization of Resources.} \emph{Virtualization}~\cite{virtual-memory1,virtual-memory2,virtualization-1,virtualization-2} is a concept designed to provide the illusion, to the software and programmer, of more resources than what truly exists in physical hardware. It has been applied to the management of hardware resources in many different contexts ~\cite{virtual-memory1,virtual-memory2,virtualization-1,virtualization-2,vmware-osdi02,how-to-fake,pdp-10,ibm-360}, with virtual memory~\cite{virtual-memory1, virtual-memory2, multics, fotheringham.cacm61} being one of the oldest forms of virtualization that is commonly used in high-performance processors today. Abstraction of hardware resources and use of a level of indirection in their management leads to many benefits, including improved utilization, programmability, portability, isolation, protection, sharing, and oversubscription. In this work, we apply the general principle of virtualization to the management of multiple on-chip resources in modern GPUs. 
Virtualization of on-chip resources offers the opportunity to alleviate many different challenges in modern GPUs. However, in this context, effectively adding a level of indirection introduces new challenges, necessitating the design of a new virtualization strategy. There are two key challenges. First, we need to dynamically determine the \emph{extent} of the virtualization to reach an effective tradeoff between improved parallelism due to oversubscription and the latency/capacity overheads of swap space usage. Second, we need to coordinate the virtualization of \emph{multiple} latency-critical on-chip resources. To our knowledge, this is the first work to propose a holistic software-hardware cooperative approach to virtualizing multiple on-chip resources in a controlled and coordinated manner that addresses these challenges, enabling the different benefits provided by virtualization in modern GPUs. Prior works propose to virtualize a specific on-chip resource for specific benefits, mostly in the CPU context. For example, in CPUs, the concept of virtualized registers was first used in the IBM 360~\cite{ibm-360} and DEC PDP-10~\cite{pdp-10} architectures to allow logical registers to be mapped to either fast yet expensive physical registers, or slow and cheap memory. More recent works~\cite{how-to-fake,cpu-virt-regs-1,cpu-virt-regs-2} propose to virtualize registers to increase the effective register file size to much larger register counts.
This increases the number of thread contexts that can be supported in a multi-threaded processor~\cite{how-to-fake}, or reduces register spills and fills~\cite{cpu-virt-regs-1,cpu-virt-regs-2}.\ignore{ Virtual Local Stores~\cite{virtual-local-stores} is a scratchpad virtualization mechanism to map the scratchpad inside the hardware-managed cache and enable context-switching of the scratchpad state along with the rest of the process state.} Other works propose to virtualize on-chip resources in CPUs (e.g.,~\cite{virtual-local-stores,spills-fills-kills,hierarchical-scheduling-windows,twolevel-hierarchical-registerfile,virtual-physical-registers-hpca98}). In GPUs, Jeon et al.~\cite{virtual-register} propose to virtualize the register file by dynamically allocating and deallocating physical registers to enable more parallelism with smaller, more power-efficient physical register files. Concurrent with this work, Yoon et al.~\cite{virtual-thread} propose an approach to virtualize thread slots to increase thread-level parallelism. These works propose specific virtualization mechanisms for a single resource for specific benefits. None of these works provide a cohesive virtualization mechanism for \emph{multiple} on-chip GPU resources in a controlled and coordinated manner, which forms a key contribution of our MICRO 2016 work. \textbf{Enhancing Programming Ease and Portability.} There is a large body of work that aims to improve programmability and portability of modern GPU applications using software tools, such as auto-tuners~\cite{toward-autotuning,atune,maestro,parameter-profiler,autotuner1,autotuner-fft}, optimizing compilers~\cite{g-adapt,optimizing-compiler1,parameter-selection,porple,optimizing-compiler2,sponge}, and high-level programming languages and runtimes~\cite{cuda-lite,halide,hmpp,hicuda}. These tools tackle a multitude of optimization challenges, and have been demonstrated to be very effective in generating high-performance portable code.
They can also be used to tune the resource specification. However, there are several shortcomings in these approaches. First, these tools often require profiling runs~\cite{toward-autotuning,atune,maestro,porple,optimizing-compiler1,optimizing-compiler2} on the GPU to determine the best performing resource specifications. These runs have to be repeated for each new input set and GPU generation. Second, software-based approaches still require significant programmer effort to write code in a manner that can be exploited by these approaches to optimize the resource specifications.\ignore{ For example, auto-tuners require \emph{parameterization} of code, where the programmer is required to ensure correctness of the program for any of the possible specification that an auto-tuner optimizes. Optimizing compilers require programmers to write kernels to ensure that each thread block is sized as small as possible for the algorithm being implemented as the compiler has to conservatively preserve synchronization primitives within a thread block. Some high-level languages~\cite{cuda-lite,halide,hmpp,hicuda} and compilers~\cite{g-adapt} require annotations from the programmer or require the program to be written in such a way that the algorithm is decoupled from potential optimization schedules.} Third, selecting the best performing resource specifications statically using software tools is a challenging task in virtualized environments (e.g., cloud computing, data centers), where it is unclear which kernels may be run together on the same SM or where it is not known, a priori, which GPU generation the application may execute on. Finally, software tools assume a fixed amount of available resources. This leads to runtime underutilization due to static allocation of resources, which cannot be addressed by these tools. In contrast, the programmability and portability benefits provided by \X require no programmer effort in optimizing resource specifications. 
Furthermore, these auto-tuners and compilers can be used in conjunction with \X to further improve performance. \textbf{Efficient Resource Management.} Prior works aim to improve parallelism by increasing resource utilization using hardware-based~\cite{warp-level-divergence,shmem-multiplexing,unified-register,virtual-register,alternative-thread-block,register-mapping-patent,owl-asplos13,osp-isca13,largewarp,medic,rachata-isca,toggle-aware,decoupled-dma, usui.taco16}, software-based~\cite{shmem-multiplexing,stash, asplos-sree,automatic-placement,onchip-allocation,enabling_coordinated,fine-grain-hotpar}, and hardware-software cooperative~\cite{mask, mosaic, caba, bis, acs, marshaling, uba-joao-isca13, ltrf-sadrosadati-asplos18} approaches. Among these works, the closest to ours are~\cite{virtual-register,virtual-thread} (discussed earlier), \cite{shmem-multiplexing}, and \cite{warp-level-divergence}. These approaches propose efficient techniques to dynamically manage a single resource, and can be used along with \X to improve resource efficiency further. Yang et al.~\cite{shmem-multiplexing} aim to maximize utilization of the scratchpad with software techniques, and by dynamically allocating/deallocating scratchpad memory. Xiang et al.~\cite{warp-level-divergence} propose to improve resource utilization by scheduling threads at the finer granularity of a warp rather than a thread block. This approach can help alleviate performance cliffs, but not in the presence of synchronization or scratchpad memory, nor does it address the dynamic underutilization within a thread during runtime. We quantitatively compare to this approach in Section~\ref{sec:eval} and demonstrate \X's benefits over it. Other works leverage resource underutilization to improve energy efficiency~\cite{warped-register,energy-register,gebhart-hierarchical,compiler-register,virtual-register} or perform other useful work~\cite{caba,spareregister}. These works are complementary to \X. 
\section{Conclusion} We propose \X, a new framework that decouples the application resource specification from the allocation in the physical hardware resources (i.e., registers, scratchpad memory, and thread slots) in GPUs. \X encompasses a holistic virtualization strategy to effectively virtualize multiple latency-critical on-chip resources in a controlled and coordinated manner. We demonstrate that by providing the illusion of more resources than physically available, via dynamic management of resources and the judicious use of a swap space in main memory, \X enhances \emph{(i)}~ \emph{programming ease} (by reducing the performance penalty of suboptimal resource specification), \emph{(ii)}~ \emph{portability} (by reducing the impact of different hardware configurations), and \emph{(iii)}~ \emph{performance} for code with an optimized resource specification (by leveraging dynamic underutilization of resources). We conclude that \X is an effective, holistic virtualization framework for GPUs. We believe that the indirection provided by {\X}'s virtualization mechanism makes it a generic framework that can address other challenges in modern GPUs. For example, \X can enable fine-grained resource sharing and partitioning among multiple kernels/applications, as well as low-latency preemption of GPU programs. We hope that future work explores these promising directions, building on the insights and the framework developed in our MICRO 2016 paper. \ignore{We conclude that by decoupling the programmer's view of the resources from what is physically available, \X enhances programming ease, portability, and performance of GPU applications while paving the way for many other use cases that can leverage the fluid view of resources.} \section{Motivation: Key Challenges in \\ Modern GPUs} Modern Graphics Processing Units (GPUs) offer high performance and energy efficiency for many classes of applications by concurrently executing thousands of threads. 
In order to execute, each thread requires several major on-chip resources: \emph{(i)}~ registers, \emph{(ii)}~ scratchpad memory (if used in the program), and \emph{(iii)}~ a thread slot in the thread scheduler that keeps all the bookkeeping information required for execution. Today, these resources are {\em statically} allocated to threads based on several parameters{\textemdash}the number of threads per thread block, register usage per thread, and scratchpad usage per block. We refer to these static application parameters as the \emph{resource specification} of the application. This resource specification forms a critical component of modern GPU programming models (e.g., CUDA~\cite{cuda}, OpenCL~\cite{opencl}). The static allocation over a fixed set of hardware resources based on the software-specified resource specification creates a \emph{tight coupling} between the program and the physical hardware resources. As a result of this tight coupling, for each application, there are only a few optimized resource specifications that maximize resource utilization. Picking a suboptimal specification leads to underutilization of resources and hence, very often, performance degradation. This leads to three key difficulties related to obtaining good performance on modern GPUs: programming ease, portability, and performance degradation. \textbf{Programming Ease.} First, the burden falls upon the programmer to optimize the resource specification. For a naive programmer, this is a challenging task because, in addition to selecting a specification suited to an algorithm, the programmer needs to be aware of the details of the GPU architecture to fit the specification to the underlying hardware resources. This \emph{tuning} is easy to get wrong because there are \emph{many} highly suboptimal performance points in the specification space, and even a minor deviation from an optimized specification can lead to a drastic drop in performance due to lost parallelism. 
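The tight coupling described above can be made concrete with a small sketch: a thread block runs only if its statically specified registers, scratchpad, and thread slots all fit on an SM, so the tightest of the three limits governs parallelism. The per-SM limits and the specifications below are illustrative assumptions, not the parameters of any particular GPU.

```python
# Illustrative sketch: how a static resource specification bounds the number
# of thread blocks an SM can run concurrently. All limits are hypothetical.
REGS_PER_SM = 65536         # register file size (registers)
SCRATCHPAD_PER_SM = 49152   # scratchpad memory (bytes)
THREAD_SLOTS_PER_SM = 2048  # thread slots

def blocks_per_sm(threads_per_block, regs_per_thread, scratchpad_per_block):
    """Concurrent blocks per SM: the tightest of the three static limits."""
    by_regs = REGS_PER_SM // (regs_per_thread * threads_per_block)
    by_spad = (SCRATCHPAD_PER_SM // scratchpad_per_block
               if scratchpad_per_block else float('inf'))
    by_slots = THREAD_SLOTS_PER_SM // threads_per_block
    return min(by_regs, by_spad, by_slots)

# A one-register increase per thread can halve parallelism -- a "cliff":
print(blocks_per_sm(256, 128, 0))  # 2 blocks fit
print(blocks_per_sm(256, 129, 0))  # only 1 block fits
```

Under these assumed limits, a single extra register per thread drops the SM from two resident blocks to one, halving the available parallelism.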
We refer to such drops as \emph{performance cliffs}. Even a small change in one resource can result in a significant performance cliff, degrading performance by as much as 50\%. Figure~\ref{fig:mst-cliff} depicts multiple sizable cliffs in an example application when different resource specifications are used to run the program on a real modern GPU, the NVIDIA GTX 745.\footnote{Our MICRO 2016 paper~\cite{zorua} describes the experimental methodology for collecting these real system results.} \begin{figure}[t] \centering \includegraphics[width=0.45\textwidth]{figures/cliffs-3-camera.pdf} \caption{Performance cliffs in \emph{Minimum Spanning Tree} (\emph{MST}) when run on the NVIDIA GTX 745. Reproduced from~\cite{zorua}.} \label{fig:mst-cliff} \end{figure} \textbf{Portability.} Second, different GPUs have varying quantities of each of the resources. Hence, an optimized specification on one GPU may be highly suboptimal on another. This lack of \emph{portability} necessitates that the programmer \emph{re-tune} the resource specification of the application for \emph{every} new GPU generation. This problem is especially significant in virtualized environments, such as data centers, cloud computing, or compute clusters, where the same program may run on a wide range of GPU architectures. Figure~\ref{fig:port} depicts the 69\% performance loss when porting optimized code from the NVIDIA Kepler~\cite{kepler}/Maxwell~\cite{maxwell} architectures to the NVIDIA Fermi~\cite{fermi} architecture. \begin{figure}[h] \centering \includegraphics[width=0.40\textwidth]{figures/port-2-camera.pdf} \caption{Performance variation across different GPU generations from NVIDIA (Fermi, Kepler, and Maxwell) for \emph{Discrete Cosine Transform (DCT)}.
Reproduced from~\cite{zorua}.} \label{fig:port} \end{figure} \textbf{Performance.} Third, for a programmer who chooses to employ software optimization tools (e.g., auto-tuners~\cite{toward-autotuning,atune,maestro,parameter-profiler,autotuner1,autotuner-fft}) or manually tailor the program to fit the hardware, performance is still constrained by the \emph{fixed, static} resource specification. It is well known~\cite{virtual-register,compiler-register,shmem-multiplexing, caba, kayiran-pact16, largewarp} that the on-chip resource requirements of a GPU application vary throughout execution. Since the program (even after auto-tuning) has to {\em statically} specify its {\em worst-case} resource requirements, severe \emph{dynamic underutilization} of several GPU resources ensues~\cite{caba}, leading to suboptimal performance. \section{A Holistic Approach to\\ Resource Virtualization} To address these three challenges at the same time, we propose Zorua, a new framework that \emph{decouples} an application's resource specification from the available hardware resources by \emph{virtualizing} all three major resources (i.e., scratchpad memory, register file, and thread slots) in a holistic manner. This virtualization provides the illusion of more resources to the GPU programmer and software than physically available, and enables the runtime system and the hardware to {\em dynamically} manage multiple resources in a manner that is transparent to the programmer. \subsection{Key Concepts} The virtualization strategy used by \X is built upon two key concepts. First, to mitigate performance cliffs when we do not have enough physical resources, we \emph{oversubscribe} resources by a small amount at runtime, by leveraging their dynamic underutilization and maintaining a swap space (in main memory) for the extra resources required. Second, \X improves utilization by determining the runtime resource requirements of an application. 
It then allocates and deallocates resources dynamically, managing them \emph{(i)}~ \emph{independently} of each other to maximize each resource's utilization; and \emph{(ii)}~ in a \emph{coordinated} manner, to enable efficient execution of each thread with all its required resources available. Figure~\ref{fig:overview} depicts the high-level overview of the virtualization provided by \X. The \emph{virtual space} refers to the \emph{illusion} of the quantity of available resources. The \emph{physical space} refers to the \emph{actual} hardware resources (specific to the target GPU architecture), and the \emph{swap space} refers to the resources that do \emph{not} fit in the physical space and hence are \emph{spilled} to other physical locations. For the register file and scratchpad memory, the swap space is mapped to the global memory space in the memory hierarchy. For threads, only those that are mapped to the physical space are available for scheduling and execution at any given time. If a thread is mapped to the swap space, its state (e.g., the PC) is saved in memory. Resources in the virtual space can be freely re-mapped between the physical and swap spaces to maintain the illusion of the virtual space resources. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{figures/Overview-camera.pdf} \caption{High-level overview of \X. Reproduced from \cite{zorua}.} \label{fig:overview} \end{figure} \subsection{Challenges in Virtualization} Unfortunately, oversubscription means that latency-critical resources, such as registers and scratchpad, may be swapped to memory at the time of access, resulting in high overheads in performance and energy. This leads to two critical challenges in designing a framework to enable virtualization. 
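The remapping between the virtual, physical, and swap spaces described above can be sketched with a simple per-resource mapping table. The interface below is a hypothetical simplification for illustration, not Zorua's actual hardware design.

```python
# Minimal sketch of a per-resource mapping table: each virtual resource id
# maps either to an on-chip physical slot or to a swap-space address in
# global memory, and can be freely remapped at runtime. Hypothetical design.
class MappingTable:
    def __init__(self, physical_slots):
        self.free = list(range(physical_slots))   # unused physical slots
        self.table = {}  # virtual id -> ("physical", slot) or ("swap", addr)

    def allocate(self, vid, swap_addr):
        # Prefer the physical space; spill to swap when no slot is free.
        if self.free:
            self.table[vid] = ("physical", self.free.pop())
        else:
            self.table[vid] = ("swap", swap_addr)

    def release(self, vid):
        space, loc = self.table.pop(vid)
        if space == "physical":
            self.free.append(loc)   # the slot becomes available again

    def lookup(self, vid):
        return self.table[vid]

# Oversubscription: three virtual resources over two physical slots.
mt = MappingTable(physical_slots=2)
mt.allocate("r0", swap_addr=0x1000)
mt.allocate("r1", swap_addr=0x1040)
mt.allocate("r2", swap_addr=0x1080)
print(mt.lookup("r2")[0])   # swap: no physical slot was left
mt.release("r0")            # freeing r0 lets r2 be remapped on-chip
mt.release("r2")
mt.allocate("r2", swap_addr=0x1080)
print(mt.lookup("r2")[0])   # physical
```

An access that lands in the swap space is exactly the latency hazard just described, which is why the extent of oversubscription must be carefully bounded.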
The first challenge is to effectively determine the \emph{extent} of virtualization, i.e., by how much each resource appears to be larger than its real physical amount, such that we can \emph{minimize} oversubscription while still reaping its benefits. This is difficult as the resource requirements continually vary during runtime. The second challenge is to minimize accesses to the swap space. This requires \emph{coordination} in the virtualized management of \emph{multiple resources}, so that enough of each resource is available on-chip at the same time when needed. \subsection{Design Ideas} To solve these challenges, \X employs two key ideas. First, we leverage the software (the compiler) to provide annotations with information regarding the future resource requirements of each \emph{phase} of the application. This information enables the framework to make intelligent dynamic decisions ahead of time, with respect to both the extent of oversubscription and the allocation/deallocation of resources. Second, we use an adaptive runtime system to control the allocation of resources. This allows us to \emph{(i)}~ dynamically alter the extent of oversubscription; and \emph{(ii)}~ continuously coordinate the allocation of multiple on-chip resources and the mapping between their virtual and physical/swap spaces, depending on the varying runtime requirements of each thread. We briefly describe each design idea in turn. \subsubsection{Leveraging Software Annotations of Phase Characteristics} \label{sec:key_idea_phases} We observe that the runtime variation in resource requirements typically occurs at the granularity of \emph{phases} of a few tens of instructions. This variation occurs because different parts of kernels perform different operations that require different resources. For example, loops that primarily load/store data from/to scratchpad memory tend to be less register heavy.
Sections of code that perform specific computations (e.g., matrix transformation, graph manipulation) can either be register heavy or primarily operate out of scratchpad. Often, scratchpad memory is used for only short intervals~\cite{shmem-multiplexing}, e.g., when data exchange between threads is required, such as for a reduction operation. Figure~\ref{fig:phases} depicts a few example phases from the \emph{N-Queens Solver (NQU)}~\cite{NQU} kernel. \emph{NQU} is a scratchpad-heavy application, but it does not use the scratchpad at all during the initial computation phase. During its second phase, it performs its primary computation out of the scratchpad, using as much as 4224B. During its last phase, the scratchpad is used only for reducing results, which requires only 384B. There is also significant variation in the maximum number of live registers in the different phases, as shown in Figure~\ref{fig:phases}. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{figures/nqu-phases-camera.pdf} \caption{Example phases from \emph{N-Queens Solver (NQU)}. Reproduced from~\cite{zorua}.} \label{fig:phases} \end{figure} In order to capture both the resource requirements as well as their variation over time, we partition the program into a number of \emph{phases}. A phase is a sequence of instructions with sufficiently different resource requirements from adjacent phases.\footnote{We refer the reader to Section~4.6 of our MICRO 2016 paper~\cite{zorua} for specific details on how phases are identified.} Barrier or fence operations also indicate a change in requirements for a different reason{\textemdash}threads that are waiting at a barrier do not immediately require the thread slot that they are holding. We interpret barriers and fences as phase boundaries since they potentially alter the utilization of their thread slots. The compiler inserts special instructions called \emph{phase specifiers} to mark the start of a new phase.
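The NQU figures above (0B, 4224B, then 384B of scratchpad) make the cost of static worst-case allocation easy to quantify. The sketch below uses those figures with invented phase durations; it is an illustration, not measured data.

```python
# Contrast static worst-case allocation with per-phase allocation, using the
# NQU scratchpad needs quoted above. Phase durations are made up.
phases = [   # (duration in arbitrary time units, scratchpad bytes needed)
    (10, 0),     # initial computation: no scratchpad
    (10, 4224),  # primary computation out of scratchpad
    (10, 384),   # reduction of results
]

total_time = sum(d for d, _ in phases)
static_cost = max(need for _, need in phases) * total_time  # held throughout
dynamic_cost = sum(d * need for d, need in phases)          # freed per phase

print(static_cost)   # byte-time-units reserved under static allocation
print(dynamic_cost)  # byte-time-units under per-phase allocation
```

Under these assumed durations, nearly two thirds of the statically reserved scratchpad capacity would sit idle; per-phase allocation reclaims it for other threads.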
Each phase specifier contains information regarding the resource requirements of the next phase. Phase changes are shown as ``\texttt{.phasechange}'' pragmas in Figure~\ref{fig:phases}. A phase forms the basic unit for resource allocation and deallocation, as well as for making oversubscription decisions. It offers a finer granularity than an \emph{entire thread} to make such decisions. The phase specifiers provide information on the \emph{future resource usage} of the thread at a phase boundary. This enables \emph{(i)}~ preemptively controlling the extent of oversubscription at runtime, and \emph{(ii)}~ dynamically allocating and deallocating resources at phase boundaries to maximize utilization of the physical resources. \subsubsection{Control with an Adaptive Runtime System} \label{sec:key_idea_coordinator} Phase specifiers provide information to make oversubscription and allocation/deallocation decisions. However, we still need a way to make decisions on the extent of oversubscription and appropriately allocate resources at runtime. To this end, we use an adaptive runtime system, which we refer to as the \emph{coordinator}. Figure~\ref{fig:coordinator} presents an overview of the coordinator. \begin{figure}[h] \centering \includegraphics[width=0.45\textwidth,scale=0.8]{figures/coordinator-camera.pdf} \caption{Overview of the coordinator. Reproduced from~\cite{zorua}.} \label{fig:coordinator} \end{figure} The virtual space enables the illusion of a larger amount of each of the resources than what is physically available, to adapt to different application requirements. This illusion enables higher thread-level parallelism than what can be achieved with solely the fixed, physically available resources, by allowing more threads to execute concurrently. The size of the virtual space at a given time determines this parallelism, and those threads that are effectively executed in parallel are referred to as \emph{active threads}. 
All active threads have thread slots allocated to them in the virtual space (and hence can be executed), but some of them may \emph{not} be mapped to the physical space at any given time. As discussed previously, the resource requirements of each application continuously change during execution. To adapt to these runtime changes, the coordinator leverages information from the phase specifiers to make decisions on oversubscription. The coordinator makes these decisions at every phase boundary and thereby controls the size of the virtual space for each resource. \subsection{Zorua: An Overview} To address the challenges in virtualization by leveraging the above ideas, \X employs a software-hardware codesign that comprises three components: \emph{(i)}~ \textbf{\emph{The compiler}} annotates the program by adding special instructions (\emph{phase specifiers}) to partition it into \emph{phases} and to specify the resource needs of each phase of the application. \emph{(ii)}~ \textbf{\emph{The coordinator}}, a hardware-based adaptive runtime system, uses the compiler annotations to dynamically allocate/deallocate resources for each thread at phase boundaries. The coordinator plays the key role of continuously controlling the extent of the oversubscription at each phase boundary. \emph{(iii)}~ \textbf{\emph{Hardware virtualization support}} includes a mapping table for each resource to locate each virtual resource in either the physically available on-chip resources or the swap space in main memory, and the machinery to swap resources between the physical space and the swap space. Zorua has two key hardware components: \emph{(i)}~the \emph{coordinator} that contains queues to buffer the \emph{pending threads} and control logic to make oversubscription and resource management decisions, and \emph{(ii)}~\emph{resource mapping tables} to map each of the resources to their corresponding physical or swap spaces. 
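As a purely illustrative sketch of the second hardware component, the following Python fragment models a per-resource mapping table that places each virtual resource either in the physical on-chip space or in a swap space in main memory; all class and method names here are hypothetical and are not part of Zorua's actual hardware design.

```python
# Illustrative sketch (not Zorua's actual implementation) of a per-resource
# mapping table: each virtual resource entry maps to either the physical
# on-chip space or the swap space in main memory.

class MappingTable:
    def __init__(self, physical_capacity):
        self.entries = {}      # virtual_id -> ("physical", slot) or ("swap", addr)
        self.free_slots = list(range(physical_capacity))
        self.next_swap_addr = 0

    def allocate(self, virtual_id):
        """Map a virtual resource to a physical slot if one is free,
        otherwise spill it to the swap space."""
        if self.free_slots:
            self.entries[virtual_id] = ("physical", self.free_slots.pop())
        else:
            self.entries[virtual_id] = ("swap", self.next_swap_addr)
            self.next_swap_addr += 1

    def deallocate(self, virtual_id):
        space, loc = self.entries.pop(virtual_id)
        if space == "physical":
            self.free_slots.append(loc)   # physical slot becomes reusable

    def lookup(self, virtual_id):
        return self.entries[virtual_id]

# A coordinator oversubscribes by handing out more virtual IDs than
# physical slots; the table transparently spills the excess to swap.
table = MappingTable(physical_capacity=2)
for vid in range(3):   # 3 virtual resources, only 2 physical slots
    table.allocate(vid)
```

Deallocating a resource at a phase boundary frees its physical slot, which a subsequent allocation can then reuse instead of going to swap.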
Our MICRO 2016 paper~\cite{zorua} provides the detailed implementation of Zorua in Section~4. In particular, we describe several key issues, including how (1)~Zorua determines the amount of oversubscription for each resource (Section~4.4 of \cite{zorua}), (2)~Zorua virtualizes each resource (Section~4.5 of \cite{zorua}), and (3)~the compiler identifies each phase (Section~4.6 of \cite{zorua}). \section{Results} \label{sec:eval} In this section, we evaluate the effectiveness of \X in improving programming ease, portability, and performance. Our detailed experimental methodology is described in Section~5 of our MICRO 2016 paper~\cite{zorua}. More results are provided in Section~6 of \cite{zorua}. \subsection{Effect on Performance Variation and Cliffs} \label{sec:eval:var} We first examine how \X alleviates the high variation in performance by reducing the impact of resource specifications on resource utilization. Figure~\ref{fig:performance_range} summarizes the range in performance across a wide range of resource specifications (indicating an undesirable dependence on the specification), for the baseline architecture, WLM (which allocates resources at the finer granularity of a warp~\cite{warp-level-divergence}), and Zorua for a representative set of applications, using a Tukey box plot~\cite{mcgill1978variations}. The boxes in the box plot represent the range between the first quartile (25\%) and the third quartile (75\%). The whiskers extending from the boxes represent the maximum and minimum points of the distribution, or 1.5$\times$ the length of the box, whichever is smaller. Any points that lie more than 1.5$\times$ the box length beyond the box are considered to be outliers~\cite{mcgill1978variations}, and are plotted as individual points. The line in the middle of the box represents the median, while the ``X'' represents the average. We make two major observations from Figure~\ref{fig:performance_range}. 
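The Tukey whisker and outlier rule described above can be sketched as follows; the data values are illustrative, not taken from our experiments.

```python
# Sketch of the Tukey box-plot rule: the box spans the first to third
# quartile, whiskers extend at most 1.5x the box length beyond the box,
# and points beyond the whiskers are outliers.
from statistics import quantiles

def tukey_summary(data):
    q1, median, q3 = quantiles(data, n=4)        # quartiles (25%, 50%, 75%)
    iqr = q3 - q1                                # box length
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    in_range = [x for x in data if lo_fence <= x <= hi_fence]
    outliers = [x for x in data if x < lo_fence or x > hi_fence]
    # Whiskers stop at the most extreme non-outlier points.
    return {"box": (q1, q3), "median": median,
            "whiskers": (min(in_range), max(in_range)), "outliers": outliers}

summary = tukey_summary([1, 2, 3, 4, 5, 6, 7, 8, 9, 100])
```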
\begin{figure}[h] \centering \includegraphics[width=0.47\textwidth]{figures/performance-camera-2.pdf} \vspace{-2mm} \caption{Normalized performance distribution. Reproduced from~\cite{zorua}.} \label{fig:performance_range} \end{figure} First, we find that \X significantly reduces the \emph{performance range} across all evaluated resource specifications. Averaged across all of our applications, the worst resource specification for Baseline achieves 96.6\% lower performance than the best performing resource specification. For WLM~\cite{warp-level-divergence}, this performance range reduces only slightly, to 88.3\%. With \X, the performance range drops significantly, to 48.2\%. We see drops in the performance range for \emph{all} applications except \emph{SSSP}. With \emph{SSSP}, the range is already small to begin with (23.8\% in Baseline), and \X exploits the dynamic underutilization, which improves performance but also adds a small amount of variation. Second, while \X reduces the performance range, it also preserves or improves performance of the best performing points. As we examine in more detail in Section~\ref{sec:eval:perf}, the reduction in performance range occurs as a result of improved performance mainly at the lower end of the distribution. \begin{figure*}[ht!] \centering \begin{subfigure}[t]{0.297\linewidth} \centering \includegraphics[width=1.0\textwidth]{figures/dct-cliffs-camera.pdf} \vspace{-4mm} \caption{\emph{DCT}} \label{fig:performance_cliff_result_dct} \end{subfigure}\qquad \begin{subfigure}[t]{0.28\linewidth} \centering \includegraphics[width=1.0\textwidth]{figures/mst-cliffs-camera.pdf} \vspace{-4mm} \caption{\emph{MST}} \label{fig:performance_cliff_result_mst} \end{subfigure}\qquad \begin{subfigure}[t]{0.28\linewidth} \centering \includegraphics[width=1.0\textwidth]{figures/nqu-cliffs-camera.pdf} \vspace{-4mm} \caption{\emph{NQU}} \label{fig:performance_cliff_result_nqu} \end{subfigure} \caption{Effect on performance cliffs. 
Reproduced from~\cite{zorua}.} \label{fig:performance_cliff_result} \end{figure*} To gain insight into how \X reduces the performance range and improves performance for the worst performing points, we analyze how it reduces performance cliffs. We study the tradeoff between resource specification and execution time for three representative applications: \emph{DCT} (Figure~\ref{fig:performance_cliff_result_dct}), \emph{MST} (Figure~\ref{fig:performance_cliff_result_mst}), and \emph{NQU} (Figure~\ref{fig:performance_cliff_result_nqu}). For all three figures, we normalize execution time to the \emph{best} execution time under Baseline. We make two observations from the figures. First, \X successfully mitigates the performance cliffs that occur in Baseline. For example, \emph{DCT} and \emph{MST} are both sensitive to the thread block size, as shown in Figures~\ref{fig:performance_cliff_result_dct} and~\ref{fig:performance_cliff_result_mst}, respectively. We have circled the locations at which cliffs exist in Baseline. Unlike Baseline, \X maintains more steady execution times across the number of threads per block, employing oversubscription to overcome the loss in parallelism due to insufficient on-chip resources. We see similar results across all of our applications. Second, we observe that while WLM~\cite{warp-level-divergence} can reduce some of the cliffs by mitigating the impact of large block sizes, many cliffs still exist under WLM (e.g., \emph{NQU} in Figure~\ref{fig:performance_cliff_result_nqu}). This cliff in \emph{NQU} occurs as a result of insufficient scratchpad memory, which cannot be handled by warp-level management. Similarly, the cliffs for \emph{MST} (Figure~\ref{fig:performance_cliff_result_mst}) also persist with WLM because \emph{MST} has a lot of barrier operations, and the additional warps scheduled by WLM ultimately stall, waiting for other warps within the same block to acquire resources. 
We find that, with oversubscription, \X is able to smooth out those cliffs that WLM is unable to eliminate. \subsection{Effect on Performance} \label{sec:eval:perf} As Figure~\ref{fig:performance_range} shows, \X either retains or improves the best performing point for each application, compared to the Baseline. \X improves the best performing point for each application by 12.8\% on average, and by as much as 27.8\% (for \emph{DCT}). This improvement comes from the improved parallelism obtained by exploiting the dynamic underutilization of resources, which exists \emph{even for optimized specifications}. Applications such as \emph{SP} and \emph{SLA} have little dynamic underutilization, and hence do not show any performance improvement. \emph{NQU} \emph{does} have significant dynamic underutilization, but \X does not significantly improve the best performing point as the overhead of oversubscription outweighs the benefit, and \X dynamically chooses \emph{not} to oversubscribe. We conclude that even for many specifications that are \emph{optimized} to fit the underlying hardware resources, \X is able to further improve performance. We also note that, in addition to reducing performance variation and improving performance for optimized points, \X improves performance by \emph{25.2\% on average} for all resource specifications across all evaluated applications. \subsection{Effect on Portability} \label{sec:eval:port} Performance cliffs often behave differently across different GPU architectures, and can significantly shift the best performing resource specification point. We study how \X can ease the burden of performance tuning if an application has already been tuned for one GPU model, and is later ported to another GPU. To understand this, we define a new metric, \emph{porting performance loss}, that quantifies the performance impact of porting an application without re-tuning it.
To calculate this, we first normalize the execution time of each specification point to the execution time of the best performing specification point. We then pick a source GPU architecture (i.e., the architecture that the GPU was tuned for) and a target GPU architecture (i.e., the architecture that the code will run on), and find the point-to-point drop in performance (when the code is executed on the target GPU) for all points whose performance on the source GPU comes within 5\% of the performance at the best performing specification point.\footnote{We include any point within 5\% of the best performance as there are often multiple points close to the best point, and the programmer may choose any of them.} Figure~\ref{fig:portability_result_overall} shows the \emph{maximum} porting performance loss for each application, across any two pairings of our three simulated GPU architectures (NVIDIA Fermi, Kepler, and Maxwell). We find that \X greatly reduces the maximum porting performance loss that occurs under both Baseline and WLM for all but one of our applications. On average, the maximum porting performance loss is 52.7\% for Baseline, 51.0\% for WLM, and only 23.9\% for \X. \begin{figure}[h] \centering \includegraphics[width=0.48\textwidth]{figures/portability-camera.pdf} \caption{Maximum porting performance loss. Reproduced from~\cite{zorua}.} \label{fig:portability_result_overall} \end{figure} Notably, \X delivers significant improvements in portability for applications that previously suffered greatly when ported to another GPU, such as \emph{DCT} and \emph{MST}. For both of these applications, the performance variation differs so much between GPU architectures that, despite tuning the application on the source GPU to be within 5\% of the best achievable performance, their performance on the target GPU is often more than twice as slow as the best achievable performance on the target platform. 
\X significantly lowers this porting performance loss down to 28.1\% for \emph{DCT} and 36.1\% for \emph{MST}. We also observe that for \emph{BH}, \X actually slightly increases the porting performance loss with respect to the Baseline. This is because for Baseline, there are only two points that perform within the 5\% margin for our metric, whereas with \X, we have five points that fall in that range. Despite this, the increase in porting performance loss for \emph{BH} is low, deviating only 7.0\% from the best performance. We conclude that \X enhances portability of applications by reducing the impact of a change in the hardware resources for a given resource specification. For applications that have already been tuned on one platform, \X significantly lowers the penalty of not re-tuning for another platform, allowing programmers to save development time.
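The porting performance loss metric defined above can be sketched as follows; the specification names and execution times are hypothetical, chosen only to illustrate the computation.

```python
# Sketch of the porting performance loss metric: the maximum performance
# drop on the target GPU over all specification points that come within
# `margin` (5%) of the best-performing point on the source GPU.

def porting_performance_loss(src_times, tgt_times, margin=0.05):
    # Normalize: performance of a point = best execution time / its time.
    best_src = min(src_times.values())
    best_tgt = min(tgt_times.values())
    near_best = [s for s, t in src_times.items()
                 if best_src / t >= 1.0 - margin]   # within 5% of best on source
    drops = [1.0 - best_tgt / tgt_times[s] for s in near_best]
    return max(drops)

src = {"specA": 10.0, "specB": 10.3, "specC": 15.0}  # times tuned on source GPU
tgt = {"specA": 20.0, "specB": 12.0, "specC": 12.0}  # times on target GPU
loss = porting_performance_loss(src, tgt)  # specA: perf drops to 0.6, loss 0.4
```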
\section{Introduction} Driverless cars, fruit-picking robots and automated guided vehicles in a warehouse are examples of autonomous motion systems that are up-and-coming in industry. In all these applications, the computation of a collision-free trajectory is essential. Computing motion trajectories that satisfy collision-avoidance constraints has been the topic of substantial research, resulting in a variety of methods, including the potential field method \citep{ge2002dynamic, montiel2015path} and methods using velocity obstacles \citep{guy2009clearpath}. Another popular approach relies on constructing a graph by discretizing the geometric domain, using Voronoi diagrams \citep{takahashi1989motion} or a simple grid of square elements, and performing a subsequent graph search. These graph searches are typically performed using Dijkstra's algorithm or one of its variants with additional heuristics, such as the popular A* \citep{hart1968formal}. Grid-based motion planning problems can also be solved by wavefront planners, such as D* \citep{stentz1994optimal} and its variants. The main drawback of graph search methods is that they usually do not consider kinematic constraints of the motion vehicle, which is a problem especially for nonholonomic vehicles. Recently, optimization-based strategies for solving motion planning problems, such as Model Predictive Control (MPC), are also becoming more popular. MPC is a control strategy in which an optimal control problem is solved at every time instant \citep{rawlings2009model}. Of the resulting optimal control sequence, only the first input is applied to the plant, and the procedure is repeated. A significant advantage of MPC is its ability to take into account constraints on the inputs and states, such as collision-avoidance constraints. One of the main challenges of the practical application of MPC is the strict real-time constraint for solving the optimal control problems.
High sampling frequencies are typically required for the system to be able to respond appropriately to disturbances and changes in the environment. Moreover, solvers often have to run on resource-constrained embedded hardware. The traditional solvers for numerical optimization, Sequential Quadratic Programming (SQP) and Interior Point (IP) methods, are not very suitable for this purpose as they require the costly operation of solving a linear system of equations at every iteration. In contrast, first-order methods do not require this operation and often involve only simple steps. This explains their increasing popularity for solving MPC problems \citep{richter2012computational,patrinos2014accelerated,jerez2014embedded}. Furthermore, the optimization algorithm and the problem form, in particular the constraints, are generally linked. For example, most SQP solvers assume that the linear independence constraint qualification (LICQ) is satisfied. Often only simple obstacle shapes, such as circular \citep{wang2014synthesis} and rectangular obstacles are considered. Another approach is based on the separating hyperplane theorem \citep{boyd2004convex}, and allows for the separation of a convex motion system and convex obstacles, or between convex motion systems, as illustrated by \citep{debrouwere2013time} and \citep{mercy2017spline}. Recently, \cite{embedded} have proposed a novel constraint formulation to incorporate general obstacle shapes, described as the intersection of a set of nonlinear inequalities, in the optimization problem. This paper embeds the obstacle constraint formulation presented by \citet{embedded} in a penalty method framework to calculate a trajectory while satisfying collision-avoidance constraints. The penalty parameters allow for a trade-off between the optimality of the trajectory and the extent to which the obstacle constraints may be violated. Virtual enlargements ensure that this trade-off results in a trajectory that avoids all real obstacles.
Moreover, the application of the penalty method lowers the likelihood of getting stuck in local optima due to obstacles, as the trajectory is gradually formed around the obstacle. In addition, some heuristics are developed for dealing further with these local optima. The resulting optimization problems are solved using the proximal averaged Newton-type method for optimal control (PANOC), as proposed in \citep{stella2017simple}. As this method combines projected gradient and limited-memory quasi-Newton steps, its implementation is simple and it can achieve a fast rate of convergence. An automatic differentiation toolbox, CasADi \citep{Andersson2018}, is used to efficiently compute the value of the objective function and its derivative. The proposed algorithm is validated for a set of obstacle configurations and for different vehicle models. It is shown to be successful in avoiding obstacles of arbitrary shape, as long as they can be described by the intersection of a set of nonlinear inequalities. In addition, our algorithm is benchmarked against state-of-the-art SQP and IP methods, and is found to outperform these methods both in terms of runtime and robustness. This paper is organized as follows: Section II describes the obstacle constraint formulation introduced by \cite{embedded}. In addition, the resulting optimization problem is presented. Section III discusses the methodology for solving this problem, consisting of necessary reformulations, the first-order algorithm PANOC, the quadratic penalty method and several heuristics. Section IV shows and discusses the numerical simulation results. Section V draws the main conclusions of the paper. \section{PROBLEM FORMULATION} An obstacle constraint formulation that can deal with general obstacle shapes was first introduced by \cite{embedded}. This constraint formulation is an essential part of the optimization problem considered in this paper.
Another element of the problem is the kinematics of the motion system for which trajectories are calculated. Both these elements are analyzed in the subsections below and finally incorporated in a nonlinear model predictive control (NMPC) framework. \subsection{Obstacle constraint formulation} In this work, we consider obstacle shapes that can be defined as the intersection of a set of $m$ nonlinear inequalities: \begin{equation} O = \{z \in {\rm I\!R}^{n_z} : h_i(z)>0, \hspace{0.1cm} i=1,...,m\}. \end{equation} Here, $z$ denotes the position vector, $n_z$ the number of dimensions, and $h_i : {\rm I\!R}^{n_z} \rightarrow {\rm I\!R}$ are continuously differentiable functions with Lipschitz continuous gradients. In the remainder of this paper the considered geometry will be two-dimensional, thus $n_z = 2$ and $z = (x,y)$. To take into account moving obstacles, time-dependent functions $h_i(z,t)$ can also be used in this formulation. However, this paper will only consider the static case. The obstacle avoidance constraint ($z \not\in O$) can then be written as follows: \begin{equation} \exists i \in \{1,\ldots,m\} : \hspace{0.5cm} h_i(z) \leq 0. \end{equation} In other words, at least one of the inequalities defining the obstacle must be violated. This condition can be rewritten as the following equality constraint: \begin{equation} \label{eq:Obstcost} \psi(z):=\displaystyle\prod ^{m}_{i=1} [h_i(z)]_+ = 0, \end{equation} where the operator $[h_i(z)]_+$ is defined as $\max(h_i(z),0)$. The obstacle avoidance constraint (\ref{eq:Obstcost}) is linked to vertical complementarity constraints \citep{scheel2000mathematical}, as it can be rewritten as \begin{equation} \mathrm{min}([h_1(z)]_+^2,..., [h_m(z)]_+^2) = 0. \end{equation} Here, the terms are squared to obtain continuously differentiable functions. In general, the linear independence constraint qualification (LICQ) is not satisfied for such constraints.
In our case, for example, the gradient of the rewritten obstacle avoidance constraint is zero at and outside the obstacle boundary. Numerous obstacles can be described using the above formulation. For example, any polyhedral set can be cast as a set of affine constraints \begin{equation*} O = \{z \in {\rm I\!R}^{n_z} : b_i - a_i^\intercal z>0, \hspace{0.1cm} i=1,...,m\}. \end{equation*} Also non-convex polytopes, such as a cross-shaped obstacle, can often be constructed as the union of a set of intersecting convex polygons. Other obstacle shapes that can be considered are balls and ellipsoids, given by \begin{equation*} O = \{z \in {\rm I\!R}^{n_z} : 1-(z-c)^\intercal E(z-c)>0\}. \end{equation*} Furthermore, sections of discs can be described as the combination of an outer radius constraint, an inner radius constraint, and a separating hyperplane (affine) constraint. For example, a half-disc obstacle is shown in Figure \ref{fig:IllusHoldAndIntPoint}. In addition, a set of polynomial functions $h_i(z)$ can be used to define a more general semi-algebraic set. Finally, other functions, such as trigonometric functions, are also possible, as long as they are continuously differentiable. \subsection{Vehicle models} \label{subsec:Vehicle models} The problem under consideration is the real-time computation of the optimal trajectory for a motion system. This motion system can be a robot, a satellite, a car, etc. For the remainder of this paper, this system will be called \lq vehicle\rq, as the example models discussed below will be of the vehicle type. The vehicle is described by a state vector $q$ denoting its position and orientation. In this paper, the considered geometry is two-dimensional. The state therefore has three components: position components $x$ and $y$, and a heading angle $\theta$.
The vehicle is steered by control inputs $u$, and its system dynamics are governed by nonlinear ordinary differential equations $\dot{q} = f(q, u)$. Two vehicle models will be used in the numerical validation of the proposed methodology, cf. Section \ref{sec: Numerical Simulations}. The first vehicle model is the simple bicycle model \citep{rajamani2011vehicle}, where slip of the wheels is neglected. A bicycle is controlled by two control inputs, the velocity $v$ and the steering angle of the front wheel(s) $\delta_f$. The corresponding equations of motion are \begin{align} \label{eq:Bicycle} \dot{x} &= v\cdot \mathrm{cos}(\theta) \nonumber \\ \dot{y} &= v\cdot \mathrm{sin}(\theta) \\ \dot{\theta} &= \frac{v}{L} \mathrm{tan}(\delta_f). \nonumber \end{align} Here, $L$ is the distance between the centers of mass of the wheels of the bicycle. The second nonlinear vehicle model considered in this paper is a simplified trailer model \citep{embedded}. Again, slip of the wheels is neglected. This model's inputs are the velocity references $u_x$ and $u_y$ of the towing vehicle. These references are tracked by a low-level velocity controller. The equations of motion of the trailer model are \begin{align} \label{eq:Trailer} \dot{x} &= u_x + L\mathrm{sin}(\theta)\cdot\dot{\theta} \nonumber \\ \dot{y} &= u_y - L\mathrm{cos}(\theta)\cdot\dot{\theta} \\ \dot{\theta} &= \frac{1}{L} (u_y \mathrm{cos}(\theta) - u_x \mathrm{sin}(\theta)). \nonumber \end{align} Here, $L$ is the distance between the center of mass of the trailer vehicle and the fulcrum connecting to the towing vehicle.
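The bicycle kinematics above, discretized with a classical fourth-order explicit Runge-Kutta step, can be sketched as follows; the wheelbase $L$, inputs, and step size are illustrative values.

```python
import math

def bicycle_rhs(q, u, L=1.0):
    # q = (x, y, theta); u = (v, delta_f); L is the wheelbase (illustrative).
    x, y, theta = q
    v, delta_f = u
    return (v * math.cos(theta),
            v * math.sin(theta),
            (v / L) * math.tan(delta_f))

def rk4_step(f, q, u, dt):
    # One step of the classical fourth-order explicit Runge-Kutta scheme.
    k1 = f(q, u)
    k2 = f(tuple(qi + 0.5 * dt * ki for qi, ki in zip(q, k1)), u)
    k3 = f(tuple(qi + 0.5 * dt * ki for qi, ki in zip(q, k2)), u)
    k4 = f(tuple(qi + dt * ki for qi, ki in zip(q, k3)), u)
    return tuple(qi + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for qi, a, b, c, d in zip(q, k1, k2, k3, k4))

# Driving straight (delta_f = 0) at v = 1 for 1 s advances x by ~1.
q = (0.0, 0.0, 0.0)
for _ in range(10):
    q = rk4_step(bicycle_rhs, q, (1.0, 0.0), dt=0.1)
```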
\subsection{NMPC formulation} \label{subsec:NMPC formulation} The continuous-time dynamics describing the motion of the system are discretized using a nonlinear integrator, in this case a fourth-order explicit Runge-Kutta method, resulting in the discrete-time representation \begin{equation} q_{k+1} = \varPhi_k(q_k, u_k). \end{equation} The NMPC problem is the following: \begin{align} \label{eq:Original Problem} {\textrm{\textbf{minimize }}}\hspace{0.2cm}& \ell_N(q_N) + \displaystyle\sum^{N-1}_{k=0}\ell_k(q_k,u_k), \\ {\textrm{\textbf{subject to }}}\hspace{0.2cm}&q_{k+1} = \varPhi_k(q_k, u_k), \hspace{0.1cm} k=0,...,N-1, \\ & \psi_{i}(z_k) = 0, \hspace{0.1cm} i=1,...,N_O, \hspace{0.1cm} k=1,...,N, \\ &u_k \in U, \hspace{0.1cm} k=0,...,N-1, \end{align} where $N$ denotes the horizon length and $N_O$ the number of obstacles. The obstacle cost functions $\psi_i$ are defined as in (\ref{eq:Obstcost}), and the position $z_k$ is a subvector of the state vector $q_k$. The stage costs are quadratic functions expressing the distance of the state and input variables to the reference state and input: \begin{equation*} \ell_k(q_k,u_k) = (q_k-q_{\mathrm{ref}})^\intercal Q_k(q_k-q_{\mathrm{ref}}) + (u_k-u_{\mathrm{ref}})^\intercal R_k(u_k-u_{\mathrm{ref}}), \end{equation*} with the terminal cost \begin{equation*} \ell_N = (q_N-q_{\mathrm{ref}})^\intercal Q_N(q_N-q_{\mathrm{ref}}). \end{equation*} The matrices $Q_N$, $Q_k$ and $R_k$ are positive (semi-)definite matrices. A set of input constraints $U$ on which it is easy to project, can straightforwardly be accounted for by the PANOC algorithm. Typical constraints of this type are box constraints of the form $U = \{u \in {\rm I\!R}^{n_u} : u_\mathrm{min} \leq u \leq u_\mathrm{max}\}$.
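A minimal sketch of the quadratic stage cost and the box projection mentioned above; the weight matrices, dimensions, and bounds are illustrative.

```python
import numpy as np

def stage_cost(q, u, q_ref, u_ref, Q, R):
    # ell_k(q_k, u_k) = (q-q_ref)^T Q (q-q_ref) + (u-u_ref)^T R (u-u_ref)
    dq, du = q - q_ref, u - u_ref
    return dq @ Q @ dq + du @ R @ du

def project_box(u, u_min, u_max):
    # Euclidean projection onto the box U = {u : u_min <= u <= u_max},
    # the only operation needed to handle such input constraints.
    return np.clip(u, u_min, u_max)

# Illustrative values: 3 states, 2 inputs.
Q, R = np.eye(3), 0.1 * np.eye(2)
cost = stage_cost(np.array([1.0, 0.0, 0.0]), np.array([0.5, 0.0]),
                  np.zeros(3), np.zeros(2), Q, R)   # 1.0 + 0.1 * 0.25
```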
\section{METHODOLOGY} This section discusses the method employed for solving problem (\ref{eq:Problem}), which consists of four parts: (i) a reformulation of the optimization problem itself; (ii) an optimization algorithm for solving the problem for a fixed value of the penalty parameters; (iii) a penalty method algorithm, which allows for an adequate trade-off between the least-squares objective and the obstacle cost; and (iv) heuristics that facilitate convergence to a trajectory that both reaches the destination and avoids all obstacles. \subsection{NMPC reformulation} \label{subsec:NMPC reformulation} Two transformations are applied to the optimization problem before we can introduce a first-order algorithm to solve it. First, the equality constraints representing obstacle avoidance in problem (\ref{eq:Original Problem}) are replaced by appropriate penalty functions in the objective, also known as soft constraints. Given the formulation of the obstacle cost function (\ref{eq:Obstcost}), it is straightforward to construct a quadratic penalty function, $\widetilde{\psi}(z_k) = \frac{1}{2}\mu_k \psi(z_k)^2$, with penalty factors $\mu_k$. This obstacle penalty function has the advantage of being continuously differentiable, in contrast to an exact penalty formulation of these constraints. It is also better conditioned than higher order penalties, with gradient \begin{equation} \nabla \widetilde{\psi}(z_k) = \mu_k\displaystyle\sum^{m}_{i=1} [h_i(z_k)]_+ \displaystyle\prod_{j\neq i} [h_j(z_k)]_+^2 \nabla h_i(z_k). \end{equation} Note that \[ \nabla ([w]_+^2) = 2 [w]_+ \nabla ([w]_+) = 2 [w]_+ \nabla w. \] Second, the state vectors are eliminated from the optimization problem by integrating the nonlinear kinematic equations of the motion system \begin{equation} F_{k+1}(u) = \varPhi_k(F_k(u), u_k), \end{equation} with $F_0(u) = q_0$.
This is the so-called single-shooting formulation, where the control inputs are the only remaining decision variables, and the initial state vector is a parameter. The resulting optimization problem then becomes \begin{equation} \label{eq:Problem} \underset{u \in U^N}{\textrm{\textbf{minimize }}} \ell(u), \end{equation} where the objective function is given by \begin{equation} \label{eq:Objective} \ell(u) = \ell_N(F_N(u)) + \displaystyle\sum^{N-1}_{k=0}\ell_k(F_k(u),u_k) + \frac{1}{2} \displaystyle\sum^{N}_{k=1} \displaystyle\sum^{N_O}_{i=1} \mu_{ik} {\psi}^2_i(F_k(u)), \end{equation} and $U^N=\underbrace{U\times\cdots\times U}_{N\ \textrm{times}}$. \subsection{PANOC algorithm} For solving problem (\ref{eq:Problem}) with a fixed value for the penalty parameters $\mu_{ik}$, we employ the recently introduced proximal averaged Newton-type method for optimal control \citep{stella2017simple}. This algorithm, presented in Alg. \ref{alg:PANOC}, achieves a fast convergence by combining proximal gradient and limited memory quasi-Newton (L-BFGS) steps. In this manner, curvature information of the optimization problem is incorporated without calculating second-order derivatives. A set of input constraints $U^N$ can straightforwardly be taken into account via the projection step, step \ref{lst:line:ubar}, in the iterative scheme. \begin{algorithm}[H] \caption{PANOC algorithm for problem (\ref{eq:Problem})}\label{alg:PANOC} \begin{algorithmic}[1] \Statex \textbf{Input:} $L_\ell > 0$, $\gamma \in (0,\frac{1}{L_\ell})$, $\sigma \in (0, \frac{\gamma}{2} (1-\gamma\frac{L_\ell}{2}))$, $u^0 \in {\rm I\!R}^{n}$, $\tau > 0$, L-BFGS memory $m$.
\For{$k = 0,1,2,\ldots$} \State $\bar{u}^k \leftarrow \Pi_{U^N} (u^k - \gamma \nabla \ell(u^k))$ \label{lst:line:ubar} \State $r^k \leftarrow \frac{u^k - \bar{u}^k}{\gamma}$ \label{lst:line:r} \If {$\|r^k\|_{\infty} < \tau$} \State {\textbf{stop} with solution $\bar{u}^k$.} \EndIf \State $d^k = -H_k r^k$ using L-BFGS \State $u^{k+1} \leftarrow u^k - (1 - \alpha_k) \gamma r^k + \alpha_k d^k$, with $\alpha_k$ the largest in $\{ \frac{1}{2^i}: i \in {\rm I\!N} \}$ such that \begin{equation} \varphi_\gamma(u^{k+1}) \leq \varphi_\gamma(u^k) - \sigma\|r^k\|^2 \end{equation} \EndFor \end{algorithmic} \end{algorithm} In this algorithm, $L_\ell$ denotes the Lipschitz constant of the objective function, $\ell$. If this is not known a priori, as is often the case in practice, the PANOC algorithm can also run with a Lipschitz estimate which is then updated between iterations, by adding another step after step \ref{lst:line:r} \begin{algorithmic}[0] \State {\footnotesize \ref{lst:line:r}bis:} \textbf{if} {$\ell(\bar{u}^k) > \ell(u^k) - \gamma \nabla \ell(u^k)^\intercal r^k + \frac{L_\ell}{2} \|\gamma r^k\|^2$} \textbf{then} \State \hspace{1.0cm} $\gamma \leftarrow \frac{\gamma}{2}$, $L_\ell \leftarrow 2 L_\ell$, $\sigma \leftarrow \frac{\sigma}{2}$, go to step \ref{lst:line:ubar}. \end{algorithmic} Two auxiliary functions appear in this scheme. The first is the fixed-point residual operator \begin{equation} R_\gamma(u) = \frac{1}{\gamma} \left(u - \Pi_{U^N}(u - \gamma \nabla \ell(u))\right). \end{equation} The second is the forward-backward envelope, first introduced by \citet{patrinos2013proximal}, which can be computed as \begin{equation} \varphi_\gamma(u) = \ell(u) - \frac{\gamma}{2}\|\nabla\ell(u)\|^2 + \frac{1}{2\gamma}\mathrm{dist}_{U^N}^2(u - \gamma \nabla\ell(u)). \end{equation} For a more in-depth discussion on the properties of PANOC, the reader is referred to \citep{stella2017simple}.
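As an illustration, the following sketch implements only the forward-backward core of Algorithm 1: the projection step, the fixed-point residual, the termination test, and the Lipschitz backtracking of step 2bis. The L-BFGS acceleration is omitted (equivalently, $\alpha_k = 0$ throughout), so this is a simplified variant rather than the full PANOC method; the test problem is a toy box-constrained quadratic.

```python
import numpy as np

def forward_backward(ell, grad, u0, u_min, u_max, L=1.0, tol=1e-6, max_iter=10000):
    # Simplified PANOC: projected-gradient (forward-backward) iterations with
    # the Lipschitz backtracking step; the L-BFGS direction is omitted.
    u = np.asarray(u0, dtype=float)
    gamma = 0.95 / L
    for _ in range(max_iter):
        g = grad(u)
        while True:
            u_bar = np.clip(u - gamma * g, u_min, u_max)  # projection onto the box
            r = (u - u_bar) / gamma                       # fixed-point residual
            # Backtracking: halve gamma, double L while the descent test fails.
            lhs = ell(u_bar)
            rhs = ell(u) - gamma * (g @ r) + 0.5 * L * ((gamma * r) @ (gamma * r))
            if lhs <= rhs + 1e-12:
                break
            gamma *= 0.5
            L *= 2.0
        if np.max(np.abs(r)) < tol:                       # termination test
            return u_bar
        u = u_bar                                         # pure forward-backward update
    return u

# Box-constrained quadratic: the minimizer is the projection of c onto [0, 1]^3.
c = np.array([2.0, 0.5, -1.0])
u_star = forward_backward(lambda u: (u - c) @ (u - c),
                          lambda u: 2.0 * (u - c),
                          np.zeros(3), 0.0, 1.0)
```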
\subsection{Penalty method algorithm} Given the definition of the obstacle constraint function (\ref{eq:Obstcost}), the obstacles are completely avoided when $\psi_O(z) = 0$ holds for every point $z$ of the trajectory and for all obstacles. However, when the optimization problem is solved with the objective as defined in (\ref{eq:Objective}), the solution will likely exhibit a trade-off between low stage costs and low obstacle costs. This trade-off depends on the values of the penalty factors. In order to enforce the obstacle constraints to within an acceptable predefined tolerance, we employ a penalty method, as shown in Alg. \ref{alg:Penalty}. Here, it is made explicit that the objective function is parametrized by the penalty factors, hence the notation $\ell(u, \mu)$. Outer iterations are denoted by subscripts, inner iterations by superscripts, and $u$ and $\mu$ represent the vectors of control inputs and penalty factors, respectively. \begin{algorithm}[H] \caption{Penalty method for problem (\ref{eq:Problem})}\label{alg:Penalty} \begin{algorithmic}[1] \Statex \textbf{Input:} $u^0_0 \in {\rm I\!R}^{n}, \mu_0\in{\rm I\!R}^{N_ON}, \eta_* > 0, \tau_*>0, \{\tau_k\} \rightarrow \tau_*, \omega>1 $ \For{$k = 0,1,2,\ldots$} \State Minimize $\ell(u,\mu_k)$ with starting point $u^0_k$, using PANOC with termination criterion $\|R_\gamma (u)\|_{\infty} < \tau_k$, to find $u^*_k$. \If {$\|R_\gamma (u^*_k)\|_{\infty} \leq \tau_* \land \|\psi(u^*_k)\| \leq \eta_*$} \State {\textbf{stop} with solution $u_k^*$.} \EndIf \State $\mu_{k+1} \leftarrow \omega\mu_k$ \State $u_{k+1}^0 \leftarrow u_k^*$ \label{lst:line:warm starting} \EndFor \end{algorithmic} \end{algorithm} In the penalty method, the penalty factors are raised until the solver converges to a solution for which the norm of the obstacle cost function is below a certain tolerance $\eta_*$.
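The outer loop of Alg. 2 can be sketched on a one-dimensional toy problem; here the inner PANOC solve is replaced by a closed-form minimizer of the penalized objective, which happens to exist for this example. The toy constraint $\psi(u)=\max(0,u-1)$ and all function names are illustrative, not from the paper.

```python
# Quadratic penalty loop: minimize (u - 2)^2 subject to psi(u) = 0 (i.e. u <= 1).

def psi(u):
    # obstacle-style constraint function: zero iff the point is feasible
    return max(0.0, u - 1.0)

def inner_solve(mu):
    # argmin_u (u - 2)^2 + (mu / 2) * psi(u)^2, in closed form for this toy problem:
    # for u > 1 the stationarity condition 2(u - 2) + mu (u - 1) = 0
    # gives u = (4 + mu) / (2 + mu), which is always > 1.
    u = (4.0 + mu) / (2.0 + mu)
    return u if u > 1.0 else 1.0

def penalty_method(mu0=1.0, omega=10.0, eta_star=1e-2, max_outer=20):
    mu = mu0
    for _ in range(max_outer):
        u = inner_solve(mu)
        if abs(psi(u)) <= eta_star:   # constraint met to tolerance: done
            return u, mu
        mu *= omega                   # otherwise raise the penalty factor
    return u, mu
```

Since $\psi(u_\mu)=2/(2+\mu)$ here, the loop terminates once $\mu$ is large enough, mirroring how Alg. 2 drives the constraint violation below $\eta_*$ with finite penalty factors.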
The quadratic penalty method is well known to be exact ($\eta_* = 0$) only in the limit of infinite penalty factors \citep{nocedal2006numerical}. Therefore, a strictly positive tolerance is chosen, in our algorithm typically of the order of $10^{-2}$. Virtual enlargements of the obstacles complement this formulation, so that the real obstacles can in fact be completely avoided, even though the constraint tolerance is strictly positive. Such enlargements also allow the formulation to be used for a vehicle of finite width. The penalty update factor $\omega$ is used to increase the penalty factors at each outer iteration. In practice, appropriate values of this factor lie between $2$ and $10$, and there is always a trade-off in choosing this value: low values make the successive optimization problems more similar and thus easier to warm-start, but more problems have to be solved in order to converge to a feasible solution. High values, in contrast, render the consecutive optimization problems more difficult, but fewer of them are needed. We observed that for our motion planning problem the optimization problems do not suffer from a high penalty update factor, so $\omega$ is here chosen to be $10$. In addition, after every update of the penalty factors, the solver is warm-started with the solution from the previous iteration, step \ref{lst:line:warm starting}. After every MPC step, it is common practice to warm-start the next optimal control problem, shifting the vector of control inputs by one time instant and adding an initial guess, often the zero vector, for the last time instant. Similarly, the penalty factors are shifted, and a vector of ones is added for the last time instant. An illustration of the penalty method applied to a problem with a crescent obstacle is shown in Figure \ref{fig:IllusPenaltyMethod}.
The trajectories ranging from blue to green correspond to increasing penalty factors, which determine the balance between a feasible trajectory and one that is optimal for the least squares objective. The enlarged obstacle can never be completely avoided, but for high enough penalty factors, the original obstacle is avoided, as illustrated by the final two green trajectories. The combination of a virtual enlargement of the obstacle and finite values for the penalty factors is therefore indeed successful. Figure \ref{fig:IllusPenaltyMethod} also demonstrates that successive trajectories are usually similar in shape even though the penalty factors are updated somewhat aggressively in this paper. Therefore, warm-starting aids tremendously in the convergence of the penalty method. The application of the penalty method may have an additional benefit for obstacle avoidance problems, also illustrated by Figure \ref{fig:IllusPenaltyMethod}. Assuming some obstacle is blocking the shortest path from start to destination, the initial trajectory calculated with low penalty factors is very likely to arrive at the destination while violating obstacle avoidance constraints. Subsequent iterations with higher and higher penalty factors tend to push the trajectory to the edge of the obstacle, while remaining connected to the destination. In contrast, solving the problem only once with a high value for the penalty factors can impede convergence to a trajectory that reaches the destination, as the vehicle is more likely to get stuck behind an obstacle, as shown in Figure \ref{fig:IllusPenaltyMethod2}. Using the penalty method for the problems considered in this paper therefore aids in avoiding the local minimum. 
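The shift-and-append warm start between consecutive MPC steps, described above, can be sketched as follows. The array shapes (one row per time instant, one column per input or obstacle) and the helper name are assumptions for illustration, not from the paper.

```python
import numpy as np

def shift_warm_start(u, mu):
    # shift controls and penalty factors one time instant ahead:
    # pad the controls with a zero row and the penalty factors with a row of ones
    u_next = np.vstack([u[1:], np.zeros((1, u.shape[1]))])
    mu_next = np.vstack([mu[1:], np.ones((1, mu.shape[1]))])
    return u_next, mu_next
```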
\begin{figure} \centering \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\textwidth]{includes/IllusPenaltyMethod.eps} \caption{Penalty method.} \label{fig:IllusPenaltyMethod} \end{subfigure} \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\textwidth]{includes/IllusPenaltyMethod2.eps} \caption{Fixed (high) penalty factor.} \label{fig:IllusPenaltyMethod2} \end{subfigure} \caption{\small Illustration of the penalty method. The enlarged obstacle is defined by $O = \{(x,y):y>x^2, y< 1+x^2/2\}$.} \label{fig:IllusPenaltyMethod1and2} \end{figure} \subsection{Heuristics} Optimization problem (\ref{eq:Problem}) is a general nonlinear, non-convex problem. The obstacles render the solution space non-convex, so local minima often exist near obstacles. An example of this is illustrated in Figure \ref{fig:local minimum}. To aid convergence to a feasible trajectory that reaches the destination, three additional heuristics are applied to the algorithm: (i) the penalty factors are capped at an appropriate value; (ii) the vehicle is stopped if the obstacle costs do not satisfy the specified tolerance $\eta_*$; and (iii) whenever the vehicle remains in place for more than one time instant, it is guided to an intermediate destination before continuing on towards the final destination. Below, the rationale for each of these heuristics is explained. Large penalty factors render the problem ill-conditioned, which impedes fast convergence of the method. In particular, PANOC uses an estimate of the Lipschitz constant of the objective function to determine the step size, and the Lipschitz constants of the obstacle cost penalties scale linearly with the penalty factors. Therefore, these factors are capped at a reasonably low value, for example $10^4$. With this cap, however, the algorithm is not guaranteed to find a solution for which the obstacle costs are sufficiently low.
In order to address this issue, the following heuristic is applied: if the obstacle cost does not fall below the prescribed tolerance within the next three time steps, the vehicle performs an emergency stop. This strategy prevents collisions with obstacles, but makes the vehicle more likely to get stuck. Figure \ref{fig:hold} illustrates this strategy for a half-disc-shaped obstacle. Starting from the green square, the vehicle moves to the blue square in seven MPC steps. There, the calculated trajectory threatens to violate the obstacle constraint within the next three time steps, because the corresponding penalty factors are too low. Instead of following this trajectory, the vehicle is stopped. Finally, to assist the vehicle in circumventing all obstacles, another heuristic is implemented. If the vehicle is stuck behind an obstacle, the reference state is temporarily replaced by an intermediate destination. The intention is to guide the vehicle around the obstacle. A good intermediate destination is easy to reach from both the point where the vehicle was previously stuck and the final destination. It will usually be close to a corner or edge point of the obstacle. This principle is also illustrated in Figure \ref{fig:IllusHoldAndIntPoint}. Appropriate intermediate destinations lie near the black diamonds. \begin{figure} \centering \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\textwidth]{includes/IllusHoldAndIntPoint2.eps} \caption{High penalty factors cap.} \label{fig:local minimum} \end{subfigure} \begin{subfigure}[b]{0.49\columnwidth} \includegraphics[width=\textwidth]{includes/IllusHoldAndIntPoint.eps} \caption{Low penalty factors cap.} \label{fig:hold} \end{subfigure} \caption{\small Illustration of the local minimum behind the obstacle, the hold-in-place heuristic and the choice of intermediate points. The half-disc-shaped obstacle is defined by $O = \{(x,y): x^2+y^2 > 1, x^2+y^2 < 4, x>0\}$.
} \label{fig:IllusHoldAndIntPoint} \end{figure} To avoid getting stuck in a local optimum near an obstacle, a suitable set of intermediate destinations must be available, and an appropriate choice from this set of points is necessary. The user may provide such a set, based on knowledge of the obstacle definitions (and therefore the locations of their corners). This approach, however, is hard to justify in an automated setup. In order to generate suitable intermediate points automatically, variants of Dijkstra's algorithm can be used to perform a simple graph search. This paper utilizes the A* search algorithm \citep{hart1968formal}, because of its simplicity and efficiency. The worst-case complexity of this algorithm, when a consistent heuristic cost is used, is $O(N)$ \citep{martelli1977complexity}, with $N$ the number of nodes in the graph. The heuristic cost used here is the Euclidean distance between a node and the goal node, which is indeed consistent. Figure \ref{fig:GraphSearchIllustration} shows that the graph search returns a feasible trajectory from the current point to the destination. Intermediate points can be extracted from this trajectory by moving through it and recording the points at which the left-right or up-down direction switches. In an unobstructed space, a graph search would find a straight path. Hence, direction changes stem from the presence of an obstacle, and the points at which they occur are close to the corners of the obstacle. These points are therefore suitable candidates for intermediate destinations. In the simulations below, this approach is used and each point of the set of intermediate destinations is visited in turn. When this set is exhausted, the reference state is reset to the original destination. \begin{figure} \begin{center} \includegraphics[width=5.4cm]{includes/GraphSearchIllustration.eps} \caption{\small Illustration of the graph search, represented by the magenta triangles.
At both of the black triangles the direction changes (first left-right, then up-down), so these comprise the set of intermediate destinations. The black line denotes the final trajectory.} \label{fig:GraphSearchIllustration} \end{center} \end{figure} \section{NUMERICAL SIMULATIONS} \label{sec: Numerical Simulations} The proposed methodology is illustrated and analyzed by means of numerical simulations for a wide variety of obstacle configurations and two different vehicle models. All simulations were performed on a notebook with an Intel(R) Core(TM) i7-7600U CPU @ 2.80GHz x 2 and 16 GB of memory. Figure \ref{Fig:simulationsTrailer} displays the first set of obstacle configurations, overcome by a vehicle with a trailer. The kinematics of the trailer model, given by (\ref{eq:Trailer}), are discretized using an explicit fourth-order Runge-Kutta method. The distance between the center of mass of the trailer and the fulcrum of the towing vehicle is $L = 0.5$m. The optimal control problems are solved with sampling time $t_s = 30$ms for Fig. \ref{fig:Crescent} and $t_s = 200$ms for Fig. \ref{fig:Labyrinth}, and horizon length $N = 50$. The inputs are constrained by box constraints $-4\mathrm{m/s} \leq u_x,u_y \leq 4\mathrm{m/s}$ at every time instant. The penalty factors are capped at $10^4$. It is usually difficult for a first-order method, such as PANOC, to find a solution to a strict tolerance, say $10^{-6}$. Therefore, the tolerance $\tau_*$ is set to $10^{-3}$. We observed in simulations that the closed-loop performance is not impacted by this choice, and that a stricter tolerance would be unnecessary. Figure \ref{Fig:simulationsTrailerGraphSearch} displays two more obstacle configurations, for which the graph search heuristic proved necessary to find a trajectory that reached the destination.
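The extraction of intermediate destinations from the graph-search trajectory, described in the heuristics section, can be sketched as follows. The helper assumes a 4-connected grid path (only horizontal or vertical moves) and is illustrative, not code from the paper.

```python
def intermediate_points(path):
    """path: list of (x, y) grid nodes from start to destination."""
    points = []
    prev_dir = None
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        # classify each move as horizontal or vertical
        direction = 'h' if y0 == y1 else 'v'
        if prev_dir is not None and direction != prev_dir:
            points.append((x0, y0))   # direction switch: likely near an obstacle corner
        prev_dir = direction
    return points
```

On a straight, unobstructed path this returns an empty list, consistent with the observation that direction changes stem only from obstacles.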
\begin{figure} \centering \begin{subfigure}[t]{0.47\columnwidth} \includegraphics[width=\textwidth]{includes/CrescentTra.eps} \caption{Enlarged obstacle defined as $\{(x,y):y>x^2, y< 1+x^2/2\}$.} \label{fig:Crescent} \end{subfigure} \quad \begin{subfigure}[t]{0.47\columnwidth} \includegraphics[width=\textwidth]{includes/LabyrinthTra.eps} \caption{A more complex obstacle configuration using rectangular obstacles.} \label{fig:Labyrinth} \end{subfigure} \caption{\small Two obstacle configurations using the trailer model. The conventions for start, destination and obstacles are the same as those in Figure \ref{fig:IllusPenaltyMethod}, and the black lines denote the trajectories that were found in each case.} \label{Fig:simulationsTrailer} \end{figure} \begin{figure} \centering \begin{subfigure}[t]{.47\columnwidth} \centering \includegraphics[width=\linewidth]{includes/CrossTra.eps} \caption{Cross-shaped obstacle as the combination of two rectangles.} \end{subfigure} \quad \begin{subfigure}[t]{.47\columnwidth} \centering \includegraphics[width=\linewidth]{includes/RackTra2.eps} \caption{Rack-shaped obstacle, defined by $O = \{(x,y): y < \sin(2\pi x - \pi/2) + 2, y > 0, 0 < x < 3 \}$.} \end{subfigure} \caption{\small Illustration of the graph search heuristic for two obstacle configurations using the trailer model.} \label{Fig:simulationsTrailerGraphSearch} \end{figure} Similarly, Figure \ref{Fig:simulationsBicycle} displays two obstacle configurations overcome by a vehicle modeled as a bicycle. The relevant kinematics are given by (\ref{eq:Bicycle}). The optimal control problem for these simulations is constructed with the same parameters as before, with the following exceptions: the sampling time is $t_s = 50$ms, and the input constraints are $-0.1 \mathrm{m/s} \leq v \leq 4 \mathrm{m/s}$ and $-\frac{\pi}{3} \leq \delta_f \leq \frac{\pi}{3}$ at every time instant.
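The explicit fourth-order Runge-Kutta discretization used to obtain the discrete dynamics can be sketched generically as follows. The kinematic bicycle right-hand side below is a standard textbook form with wheelbase $L$, shown only for illustration; the paper's exact models are (\ref{eq:Trailer}) and (\ref{eq:Bicycle}).

```python
import numpy as np

def rk4_step(f, x, u, ts):
    # one explicit fourth-order Runge-Kutta step of x_dot = f(x, u) over ts seconds
    k1 = f(x, u)
    k2 = f(x + ts / 2.0 * k1, u)
    k3 = f(x + ts / 2.0 * k2, u)
    k4 = f(x + ts * k3, u)
    return x + ts / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def bicycle(x, u, L=0.5):
    # illustrative kinematic bicycle: state (px, py, theta), input (v, delta_f)
    px, py, theta = x
    v, delta = u
    return np.array([v * np.cos(theta),
                     v * np.sin(theta),
                     v * np.tan(delta) / L])
```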
The figures in this section show the versatility of the proposed approach for constructing and solving collision-avoidance problems. \begin{figure} \centering \begin{subfigure}{.47\columnwidth} \centering \includegraphics[width=\linewidth]{includes/CorridorsBic.eps} \caption{Three polyhedral obstacles that form corridors.} \label{fig:Corridors} \end{subfigure} \quad \begin{subfigure}{.47\columnwidth} \centering \includegraphics[width=\linewidth]{includes/TwoCircOneRectBic.eps} \caption{Configuration of one rectangular and two circular obstacles.} \label{fig:TwoCircOneRect} \end{subfigure} \caption{\small Two obstacle configurations using the bicycle model.} \label{Fig:simulationsBicycle} \end{figure} Figure \ref{fig:RuntimeComparison} compares the runtime of the proposed methodology with that of state-of-the-art SQP \citep[SNOPT]{gill2005snopt} and IP \citep[IPOPT]{wachter2006implementation} solvers. In these solvers, the obstacle avoidance constraint is incorporated as an inequality constraint, $\psi^2(z) \leq \eta_*^2$. The penalty method algorithm using PANOC clearly outperforms the other solvers, by approximately two orders of magnitude. \begin{figure} \begin{center} \includegraphics[width=6.4cm]{includes/CompareRuntimeTCORB.eps} \caption{\small Runtime comparison of three solvers: PANOC, SNOPT and IPOPT. This comparison is for the problem of Fig. \ref{fig:TwoCircOneRect}, with initial point $x_0 = (0, 0, 0)$. All solvers were warm-started with their previous solution after each MPC step. 
The tolerance for all solvers was set to $10^{-3}$.} \label{fig:RuntimeComparison} \end{center} \end{figure} \begin{table}[!ht] \begin{center} \caption{\small Comparison of closed-loop costs.}\label{tb:closed-loop cost} \begin{tabular}{|l|c|c|c|} \hline \backslashbox{Example}{Solver} & PANOC & IPOPT & SNOPT \\ \hline \hline Figure \ref{fig:TwoCircOneRect}, $x_0 = (0,0,0)$ & 3.56 & 4.02 & 3.54 \\ \hline Figure \ref{fig:Crescent}, $x_0 = (0, 0.3, \pi)$ & 20.78 & 49.43 & 47.50\\ \hline Figure \ref{fig:Corridors}, $x_0 = (-1, 1, 0)$ & 17.65 & 19.62 & 26.17\\ \hline Figure \ref{fig:Corridors}, $x_0 = (-1, 3, 0)$ & 13.21 & 12.76 & 33.48\\ \hline \end{tabular} \end{center} \end{table} The proposed approach is not only faster, but also more adept at finding high quality solutions than the other state-of-the-art solvers. This is illustrated in Table \ref{tb:closed-loop cost}, which lists the closed-loop costs for different problem scenarios. Sometimes, the solvers all converged to the same trajectory, such as for Fig. \ref{fig:TwoCircOneRect}. This was, however, not always the case. For example, IPOPT and SNOPT were both temporarily stuck in the local minimum behind the crescent-shaped obstacle, Fig. \ref{fig:Crescent}, whereas our approach found the optimal trajectory in the first optimal control problem. In other cases, such as the corridors example, Fig. \ref{fig:Corridors}, different paths were taken. Table \ref{tb:closed-loop cost} shows that for these cases, the proposed methodology found trajectories with closed-loop costs as good as or better than the other solvers. \section{Conclusion} This paper presents a penalty method framework for solving optimal control problems with collision-avoidance constraints that typically arise in motion planning problems. The application of the penalty method, coupled with virtual enlargements, allows for the avoidance of obstacles of complex geometry. 
It also aids convergence to a trajectory that reaches the destination, by gradually finding a path around the obstacles as the penalty factors are successively increased. In addition, several heuristics are employed, which have been observed to improve convergence to a feasible trajectory that reaches the destination. The resulting optimization problems are solved with PANOC, a first-order method with low runtime. Numerical simulations with nonlinear vehicle dynamics show the versatility of the proposed approach in solving motion planning problems with general obstacle avoidance constraints. In the limited number of cases considered, the proposed algorithm outperforms state-of-the-art SQP and IP solvers in both runtime and robustness. \small
\section{Introduction and results} Recently, remarkable progress has been made on interacting particle processes in the KPZ universality class \cite{Corwin2012, QS2015, Sa2016}. In spite of the nonlinear nature of their interactions, the limiting distributions and correlations of certain quantities have been identified for many models. Among them is the $q$-totally asymmetric simple exclusion process ($q$-TASEP), in which the $i$th particle hops to the right neighboring site on $\mathbb{Z}$ with rate $a_i(1-q^{\rm gap})$, where $0\le q<1$, $0<a_i\leq 1$, and the gap is the number of empty sites before the particle ahead. More precisely, the rules of the $q$-TASEP with $N$ particles are as follows. Let $x_i(t),~i=1,2,\cdots,N$ be the position of the particle labeled $i$ at time $t\in \mathbb{R}_{\ge 0}$. Double occupancy of a site is prohibited and we impose the order $x_1(t)>x_2(t)>\cdots >x_N(t)$. Each particle hops to the right neighboring site, with the rate of the $i$-th particle given by \begin{align} a_i(1-q^{x_{i-1}(t)-x_i(t)-1}), \label{11} \end{align} $1\leq i\leq N$, with $x_{0}(t)=\infty$ by convention. Note that $x_{i-1}(t)-x_i(t)-1$ is the gap (of empty sites) between the $i$-th particle and the $(i-1)$-th particle ahead of it, so that the hopping rate (\ref{11}) depends on the configuration of the particles. As the gap becomes large, so does the rate, which approaches $a_i$ as the gap tends to infinity. On the other hand, if two particles are adjacent, the rate is zero, representing the exclusion interaction among particles. In the special case $q=0$, the rate~\eqref{11} is always $a_i$ and does not depend on the gap. This is nothing but the TASEP (with particle-dependent hopping rate $a_i$).
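The dynamics just described can be simulated directly. The following Gillespie-type sketch (illustrative only, not from the cited literature) draws an exponential waiting time from the total rate and picks the hopping particle proportionally to its rate $a_i(1-q^{\mathrm{gap}_i})$, with the first particle always hopping at rate $a_1$.

```python
import random

def simulate_qtasep(x0, a, q, t_max, seed=0):
    """x0: initial positions with x0[0] > x0[1] > ...; returns positions at t_max."""
    rng = random.Random(seed)
    x = list(x0)
    t = 0.0
    while True:
        # rate of particle i: a_i * (1 - q^{gap_i}); x_0 = +infinity, so rate a_1 for i = 0
        rates = [a[0]] + [a[i] * (1.0 - q ** (x[i - 1] - x[i] - 1))
                          for i in range(1, len(x))]
        total = sum(rates)
        t += rng.expovariate(total)            # exponential waiting time
        if t > t_max:
            return x
        u = rng.random() * total               # pick a particle proportionally to its rate
        i = 0
        while u > rates[i]:
            u -= rates[i]
            i += 1
        x[i] += 1                              # the chosen particle hops one site right
```

A particle whose gap is zero has rate $a_i(1-q^0)=0$, so the exclusion rule is enforced automatically.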
The $q$-TASEP was introduced in~\cite{BC2014} as a marginal dynamics associated with the $q$-Whittaker process (though the dynamics of the gaps, called the $q$-boson totally asymmetric zero range process, had been introduced earlier in \cite{SW1998}) and has become one of the standard models for studying KPZ universality. Various generalized models have been introduced and studied since then, see for instance \cite{Po2013, CP2015,BP2016p}. We are interested in the fluctuation properties of $x_N(t)$, the position of the $N$th particle. For the TASEP, with $q=0$, the limiting distribution of $x_N(t)$ has been identified for various initial conditions. First, for the step initial condition, given by \begin{align} x_i(0)=-i,~i=1,2,\cdots,N, \label{step} \end{align} it was shown in~\cite{Johansson2000} that the limiting distribution is the GUE Tracy-Widom distribution~\cite{TW1994}. Various other cases have since been studied, such as random initial conditions \cite{BR2000, IS2004, FS2006}, the periodic initial condition~\cite{Sasamoto2005,BFPS2007}, and more general fixed initial conditions \cite{MQR2017p, FO2017p}. All these results for the TASEP (and for its cousin models like the polynuclear growth model) have been obtained by using its connection to determinantal point processes, which are related to random matrix theory~\cite{Mehta2004,Forrester2010} and provide a unified approach. For general $q$, the same method does not work because a direct connection to a determinantal point process has not been found (see, however, a few works in this direction \cite{IS2016, Borodin2016p,BO2017}). But a few methods have been devised and successfully applied to the $q$-TASEP and its generalizations, yielding many results. A standard approach for general $q$ has been to first write down a formula for the $q$-deformed moments and then form their generating function.
For example, for the $q$-TASEP with the step initial condition (\ref{step}), the $n$-th $q$-deformed moment can be written as the multiple integral \begin{equation} \langle q^{n(x_N(t)+N)} \rangle = \frac{(-1)^nq^{\frac12 n(n-1)}}{(2\pi i)^n}\int \prod_{1\leq j<k\leq n} \frac{z_j-z_k}{z_j-qz_k} \prod_{j=1}^n \left(\prod_{m=1}^N\frac{a_m}{a_m-z_j}\right) e^{(q-1)tz_j}\frac{dz_j}{z_j}, \label{mom_step} \end{equation} where the contour for $z_j$ contains $\{ qz_k\}_{k>j}$ and the $a_m$'s but not 0. This type of formula can be obtained by using either Macdonald operators \cite{BC2014} or duality \cite{BCS2014}. Next we form the generating function of these moments. By using the $q$-binomial theorem, we find \begin{align} \sum_{n=0}^{\infty}\frac{\zeta^n}{(q;q)_n} \left\langle q^{n(x_N(t)+N)} \right\rangle = \left\langle \frac{1}{(\zeta q^{x_N(t)+N};q)_{\infty}}\right\rangle . \label{qLapgen} \end{align} The right-hand side is nothing but the $q$-Laplace transform of the pdf of $x_N(t)+N$, which should contain the full information about the statistics of $x_N(t)$. In fact it turns out that the $q$-Laplace transform can be written as a Fredholm determinant, and one can establish the Tracy-Widom law from this formula \cite{BC2014, BCS2014}. However, most results obtained with this approach have been restricted to the step initial condition (\ref{step}) due to a few technical difficulties. Let us consider, as a generalization of the step initial condition (\ref{step}), the following random initial condition, \begin{align} -1-x_1(0)=X_1,~x_{i-1}(0)-x_{i}(0)-1=X_i \text{~for}~i=2,\cdots,N, \label{rand} \end{align} where $X_1,\cdots,X_N$ are independent $q$-Poisson random variables defined for $\alpha\in [0,a_k)$ by \begin{align} \text{Prob}(X_k=n)=(\alpha/a_k;q)_{\infty}\frac{(\alpha/a_k)^n}{(q;q)_n}, \label{16} \end{align} where $k\in \{1,2,\cdots,N\}$ and $n\in\mathbb{Z}_{\ge0}$.
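The weights in~\eqref{16} sum to one by the $q$-binomial theorem $\sum_{n\ge0} z^n/(q;q)_n = 1/(z;q)_\infty$ for $|z|<1$, applied with $z=\alpha/a_k$. A small numerical sanity check (illustrative only; the helper names are not from the paper):

```python
def q_pochhammer(z, q, terms=200):
    """(z; q)_terms, approximating (z; q)_inf for 0 <= q < 1 when terms is large."""
    p = 1.0
    for k in range(terms):
        p *= 1.0 - z * q ** k
    return p

def q_poisson_pmf(n, z, q):
    """Prob(X = n) for the q-Poisson distribution with parameter z = alpha / a_k."""
    qqn = 1.0
    for k in range(1, n + 1):
        qqn *= 1.0 - q ** k        # (q; q)_n
    return q_pochhammer(z, q) * z ** n / qqn

# the truncated sum over n should be numerically 1
total = sum(q_poisson_pmf(n, 0.3, 0.5) for n in range(80))
```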
Note that the initial condition~\eqref{rand} means that the gap between the $i$-th and $(i-1)$-th particles is distributed as the $q$-Poisson random variable with parameter $\alpha/a_i$~\eqref{16}, except for $i=1$, in which case $X_1$ describes the gap between the particle and the origin. Note that, when $\alpha=0$, the right-hand side of~\eqref{16} becomes $\delta_{n,0}$, so that~\eqref{rand} reduces to the step initial condition~\eqref{step}. It is known that the $q$-TASEP with infinitely many particles and with $a_i=1$ is stationary if the gaps between consecutive particles are independent $q$-Poisson random variables with parameter $\alpha$. For this reason we call the initial condition (\ref{rand}) the half-stationary initial condition. By setting $a_i=1,i\geq 2$ and considering the $a_1\to \alpha$ limit, one can study the stationary $q$-TASEP based on the analysis for the half-stationary initial condition \cite{IS2017p}. In~\cite{BCS2014}, a nested contour integral representation of the $n$-th moment for this random initial condition was given, which is reproduced here: \begin{equation} \langle q^{n(x_N(t)+N)} \rangle = \frac{(-1)^nq^{\frac12 n(n-1)}}{(2\pi i)^n}\int \prod_{1\leq j<k\leq n} \frac{z_j-z_k}{z_j-qz_k} \prod_{j=1}^n \left(\prod_{m=1}^N\frac{a_m}{a_m-z_j}\right) e^{(q-1)tz_j}\frac{dz_j}{z_j-\alpha/q}, \label{mom_rand} \end{equation} where the contour for $z_j$ contains $\{ qz_k\}_{k>j}$ and the $a_m$'s but not $\alpha/q$. (Note, for this initial data, that $\langle\cdot\rangle$ indicates the expectation with respect to both the initial condition and the $q$-TASEP dynamics.) This looks very similar to (\ref{mom_step}), but there is an important difference. As can be understood by considering the $t=0$ case of this formula, the moment is finite only for small values of $n$, satisfying $\alpha q^{-n}<\max_{m=1,\ldots , N}{a}_m$, except for the step initial condition $\alpha=0$, and the higher moments diverge.
Hence one cannot use (\ref{qLapgen}) to calculate the $q$-Laplace transform. On the other hand, one has, by definition, \begin{equation} \left\langle \frac{1}{(\zeta q^{x_N(t)+N};q)_\infty} \right\rangle = \sum_{l\in\mathbb{Z}} \frac{1}{(\zeta q^l;q)_{\infty}} \mathbb{P}[x_N(t)+N=l], \label{qLapdef} \end{equation} for $\zeta\neq q^n,n\in\mathbb{Z}$ and, since $\lim_{l\to-\infty}(\zeta q^l;q)_{\infty}=\infty, \lim_{l\to\infty}(\zeta q^l;q)_{\infty}=1$, this $q$-Laplace transform is finite for the random initial condition (\ref{rand}) as well. One therefore has to find a way to calculate the $q$-Laplace transform without using the $q$-moments. In \cite{IS2017p}, we overcame this difficulty and found a Fredholm determinant representation for the $q$-Laplace transform by using Ramanujan's summation formula and the Cauchy determinant for theta functions (also known as the Frobenius determinant). In this note, we study the same problem, presenting a somewhat different approach that does not rely on these tools, and obtain a different Fredholm determinant representation of the $q$-Laplace transform~\eqref{qLapgen} for the $q$-TASEP with the half-stationary initial condition~\eqref{rand}. Our main result is the following. \begin{theorem} For the $q$-TASEP with the random initial condition (\ref{rand}),(\ref{16}) we have \begin{align} &\left\langle\frac{1}{(\zeta q^{x_N(t)+N};q)_{\infty}}\right\rangle = \det\left( 1 + K_{\zeta} \right)_{L^2(C_a)}, \label{qLapFr} \end{align} where $\zeta \neq q^n, n\in\mathbb{Z}$, $C_a$ is a contour around the $a_i$'s, and the kernel is given by \begin{align} K_{\zeta}(w_1,w_2) = \frac{-1}{2\pi i} \int_{i\mathbb{R}+\epsilon} ds \frac{(-\zeta)^s\pi}{\sin\pi s} \frac {e^{(q^{s}w_1-w_2)t}} {q^{s}w_1-w_2} \frac{(\alpha/w_1;q)_{\infty}}{(\alpha/(q^s w_1);q)_{\infty}} \prod_{m=1}^N \frac {(q^{s}w_1/a_m;q)_{\infty}} {(w_1/a_m;q)_{\infty}}.
\label{c121} \end{align} \end{theorem} Once this type of formula is found, there is a standard way to study the limiting distribution. In particular, by setting $a_i=1,i\geq 2$, taking the $a_1\to \alpha$ limit and applying arguments similar to those in \cite{IS2017p}, one can study the stationary $q$-TASEP and reproduce the result there that the limiting distribution of a particle is given by the Baik-Rains distribution. Compared to the kernel found in \cite{IS2017p}, the above kernel is much closer to the one in \cite{BC2014,BCS2014}. But we stress again that the method of finding a kernel through the $q$-moments in \cite{BC2014,BCS2014} does not work for the random initial condition (\ref{rand}). In part, our approach is similar to the one in~\cite{BCR2013} for the log-Gamma and the O\rq{}Connell-Yor polymers (for the case corresponding to the step initial condition). For the ASEP, a somewhat different approach was employed to study the stationary case in \cite{Aggarwal2016p}. There the author uses the fact that the ASEP can be obtained as a limit of the higher spin vertex model, because at the level of the higher spin vertex model everything is discrete and all moments are finite. However, we emphasize that our approach, initiated in \cite{IS2017p} for the $q$-TASEP and generalized to the higher spin vertex model in \cite{IMuS2018p}, has the advantage that all models in the hierarchy can be treated directly in the same manner, without relying on a limiting procedure. It would be an interesting question to clarify the interrelationships between the various approaches and representations. This note is organized as follows. In Sec.~\ref{sec:dist}, we recall a few facts from \cite{IS2017p}. We introduce a two-sided version of the $q$-Whittaker measure and give a multiple integral formula for the distribution of the position of a particle of the $q$-TASEP with the half-stationary initial condition~\eqref{rand}.
In Sec.~\ref{sec:qLap}, we explain a way to calculate the $q$-Laplace transform~\eqref{qLapgen} and present a multiple integral formula for it. In Sec.~\ref{sec:det}, we obtain a Fredholm determinant representation and show that it is equivalent to~\eqref{qLapFr}. {\bf Notation}: In order to make some formulas look better, we replace $a_i$ by $q^{a_i}$ and $\alpha$ by $q^{\alpha}$ in the following sections. \section{$q$-Whittaker measure and distribution of a particle position} \label{sec:dist} First we recall a few facts from \cite{IS2017p}. For $N\in\mathbb{Z}_{>0}$, let $\Lambda_N$ be the set of signatures (or generalized integer partitions) defined by $\Lambda_N=\{\lambda=(\lambda_1,\cdots,\lambda_N)\in\mathbb{Z}^N \mid \lambda_1\ge\cdots\ge\lambda_N\}$. Note that, unlike in the case of usual partitions, each $\lambda_j$ can be a negative integer. Let $a_i\geq 0,\alpha_i\geq 0,1\leq i\leq N$ and $q^c=(q^{c_1},\ldots,q^{c_N})$ for $c=a,\alpha$.
We introduce a measure on $\Lambda_N$, \begin{align} W_t(\lambda) =\frac{P_\lambda(q^a)Q_\lambda(q^\alpha,t)}{\Pi (q^a;q^\alpha,t)}, \label{ptn} \end{align} where $P_{\lambda},Q_{\lambda}$ are the $q$-Whittaker functions defined by \begin{align} P_\lambda(q^a) &= \sum_{\substack{\lambda_i^{(j)}\in\mathbb{Z}, 1\leq i\leq j\leq N-1\\ \lambda_{i+1}^{(j+1)} \leq \lambda_i^{(j)} \leq \lambda_i^{(j+1)}\\ \lambda_i^{(N)}=\lambda_i} } \prod_{j=1}^N \prod_{i=1}^N q^{a_j {\lambda^{(j)}_i}}\cdot \prod_{i=1}^{N-1}\frac{ q^{-a_j \lambda^{(j-1)}_i}(q;q)_{\lambda_i^{(j)}-\lambda_{i+1}^{(j)}}} {(q;q)_{\lambda_i^{(j)}-\lambda_i^{(j-1)}}(q;q)_{\lambda_i^{(j-1)}-\lambda_{i+1}^{(j)}}}, \label{P}\\ Q_\lambda(q^\alpha,t) &= \prod_{i=1}^{N-1}(q^{\lambda_i-\lambda_{i+1}+1};q)_{\infty} \int_{\mathbb{T}^N}\prod_{i=1}^N\frac{dz_i}{z_i}\cdot P_{\lambda}(1/z) \Pi\left(z;q^\alpha,t\right)m_N^q\left(z\right) \label{Q} \end{align} with \begin{equation} m_N^q(z)=\frac{1}{(2\pi i)^NN!}\prod_{1\le i<j\le N}(z_i/z_j;q)_{\infty}(z_j/z_i;q)_{\infty} \label{qsk} \end{equation} and \begin{equation} \Pi\left(q^a;q^\alpha,t\right) = \prod_{i,j=1}^N\frac{1}{(q^{\alpha_i-a_j};q)_{\infty}}\cdot\prod_{j=1}^Ne^{q^{a_j}t} . \label{Pi} \end{equation} The measure (\ref{ptn}) is called the two-sided $q$-Whittaker measure. Let $P_t(\lambda_N)$ denote the marginal distribution of $\lambda_N$ under the above two-sided $q$-Whittaker measure, $P_t(\lambda_N)= \sum_{\lambda_i,1\leq i\leq N-1} W_t(\lambda)$. By rewriting the Cauchy identity for the $q$-Whittaker functions, we found the following multiple integral formula \cite{IS2017p}.
\begin{proposition}\label{p8} The marginal distribution of $\lambda_N$ under the two-sided $q$-Whittaker measure (\ref{ptn}) is given by \begin{align} P_t(\lambda_N)=(q;q)_{\infty}^{N-1} \int_{\mathbb{T}^N}\prod_{j=1}^N \frac{dz_j}{z_j} \cdot \left( \frac{q^{A}}{z_1\cdots z_N} \right)^{\lambda_N} m^q_N(z) \frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac {\left(q^{A}/z_1\cdots z_N;q\right)_{\infty}} {\prod_{i,j=1}^N(q^{a_i}/z_j;q)_{\infty}} \label{p82} \end{align} with $A=\sum_{j=1}^N a_j$. \end{proposition} \noindent Note that on the right hand side the dependence on $\lambda_N$ appears only as the power of a factor in the integrand. This is useful for further calculations. As shown in \cite{IS2017p}, we have, with $\alpha_1=\alpha, \alpha_i\to\infty, i\geq 2$, \begin{equation} \mathbb{P}[x_N(t)+N=\lambda_N] = P_t(\lambda_N), \label{xNlambda} \end{equation} that is, the probability $\mathbb{P}[x_N(t)+N=\lambda_N],\lambda_N\in\mathbb{Z}$ that the position of the $N$-th particle in $q$-TASEP with the half-stationary initial condition (\ref{rand}) is $\lambda_N$ is the same as the marginal distribution of $\lambda_N$ for the two-sided $q$-Whittaker measure with $\alpha_1=\alpha, \alpha_i\to\infty, i\geq 2$. In the sequel we focus on the study of $P_t(\lambda_N)$ for general $a_i$'s and $\alpha_i$'s. \section{$q$-Laplace transform} \label{sec:qLap} In this section we consider the $q$-Laplace transform of $P_t(\lambda_N)$. Our strategy is to evaluate it directly without using the $q$-moments~\eqref{mom_rand}. First we obtain the following integral representation.
\begin{proposition}\label{p6} For $\epsilon>0$ and $\zeta\in\mathbb{C}\setminus\mathbb{R}_+$, we have \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle &= -(q;q)_{\infty}^{N-2} \int_{i\mathbb{R} -\epsilon}ds \frac{\pi}{\sin \pi(s-A)} (-\zeta)^{s-A}(q^{s-A+1};q)_{\infty}\left(q^{A-s};q\right)_{\infty} \notag \\ &~~~~\times\int_{\mathbb{T}^{N-1}}\prod_{j=2}^N\frac{dw_j}{w_j} \cdot m^q_N(w)\frac{\Pi(w;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \prod_{i,j=1}^N\frac{1}{\left(\frac{q^{a_i}}{w_j};q\right)_{\infty}}, \label{p60} \end{align} where $m_N^q(w)$, $\Pi(w;q^{\alpha},t)$ and $A$ are defined in~\eqref{qsk},~\eqref{Pi} and below~\eqref{p82}, respectively, and $w_1$ is defined by \begin{align} w_1 = \begin{cases} q^s,& \text{for~} N=1,\\ q^{s}/w_2\cdots w_N,& \text{for~} N\ge 2. \end{cases} \label{p900} \end{align} \end{proposition} \smallskip \noindent {\bf Proof.} In this proof we assume $|\zeta|<1$. Once (\ref{p60}) is proved for $|\zeta|<1$, the extension to $\zeta\in\mathbb{C}\setminus\mathbb{R}_+$ follows easily by analytic continuation. We start by writing the LHS of~\eqref{p60} as \begin{align} \sum_{\lambda_N=-\infty}^{\infty} \frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}P_t\left(\lambda_N\right), \label{p91} \end{align} where $P_t\left(\lambda_N\right)$ is given by~\eqref{p82}. (This is the same as (\ref{qLapdef}) by (\ref{xNlambda}).) Next we separate the sum into two parts, $0\le \lambda_N$ (the positive part) and $\lambda_N\le -1$ (the negative part). The calculation of the positive part is straightforward. Since we assumed $|\zeta|<1$, an application of the $q$-binomial theorem shows that it can be written as \begin{equation} \sum_{n=0}^{\infty}\int_{\mathbb{T}^N} \frac{\zeta^n}{(q;q)_n} \prod_{j=1}^N dz_j \cdot \frac{m^q_N(z_1,\cdots,z_N)}{z_1\cdots z_N-q^{A+n}} \frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}(q;q)_{\infty}^{N-1}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}}.
\label{p65} \end{equation} For the negative part, we will show that it is written as \begin{align} &- \sum_{n=0}^{\infty}\int_{\mathbb{T}^N} \prod_{j=1}^N dz_j \cdot \frac{\zeta^n}{(q;q)_n} \frac{m^q_N(z_1,\cdots,z_N)}{z_1\cdots z_N-q^{A+n}} \frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}(q;q)_{\infty}^{N-1}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}} \notag\\ &-(q;q)_{\infty}^{N-2} \int_{i\mathbb{R} -\epsilon}ds \frac{\pi}{\sin \pi(s-A)} (-\zeta)^{s-A}(q^{s-A+1};q)_{\infty}\left(q^{A-s};q\right)_{\infty} \notag \\ &\hspace{3.5cm}\times\int_{\mathbb{T}^{N-1}}\prod_{j=2}^N\frac{dw_j}{w_j} \cdot m^q_N(w)\frac{\Pi(w;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \prod_{i,j=1}^N\frac{1}{\left(\frac{q^{a_i}}{w_j};q\right)_{\infty}}, \label{p62} \end{align} where $w_1$ in the second term is defined by~\eqref{p900}. Combining \eqref{p91}--\eqref{p62}, we obtain our desired result~\eqref{p60}. Note that the $q$-binomial theorem is not applicable to the negative part since $|\zeta q^{\lambda_N}|$ becomes larger than one as $\lambda_N\to -\infty$. Let us introduce $L\in\mathbb{Z}_{> 0}$ and $x\in\mathbb{C}$, and consider the quantity \begin{align} \sum_{\ell=-L}^{-1} \frac{1}{(\zeta x^{-\ell};q)_{\infty}} P_t\left(\ell\right). \label{p610} \end{align} As a function of $x$, this is analytic for $|x|<1$. Below we will extend this region of analyticity in $x$, $|x|<1$, to one which includes $x=1/q$ and then take the limit $L\rightarrow\infty$. Note that the case $x=q^{-1}$ with this limit corresponds to the LHS of~\eqref{p62}. Note that, when $|x|<1$, the $q$-binomial theorem is applicable for arbitrary fixed $L$ since $|\zeta x^L|<1$ is satisfied.
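As a numerical sanity check (not part of the argument), the special case of the $q$-binomial theorem used for the positive part, $\sum_{n\ge 0} z^n/(q;q)_n = 1/(z;q)_{\infty}$ for $|z|<1$, can be tested in a few lines of pure Python; the truncation orders below are chosen ad hoc.

```python
# Check Euler's q-exponential identity: sum_{n>=0} z^n/(q;q)_n = 1/(z;q)_infty
# for |z| < 1, 0 < q < 1.  Truncation orders are ad hoc.

def qpoch(a, q, n):
    """Finite q-Pochhammer symbol (a;q)_n = prod_{k=0}^{n-1} (1 - a q^k)."""
    p = 1.0
    for k in range(n):
        p *= 1.0 - a * q**k
    return p

q, z = 0.4, 0.3
series = sum(z**n / qpoch(q, q, n) for n in range(120))   # LHS, truncated
product = 1.0 / qpoch(z, q, 400)                          # RHS, (z;q)_infty truncated
```

Both sides agree to machine precision for these parameter values; the geometric decay of the summand makes the truncation error negligible.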
Hence \eqref{p610} can easily be written as \begin{align} \sum_{n=0}^\infty \frac{\zeta^n}{(q;q)_n}\int_{\mathbb{T}^N}\prod_{j=1}^N\frac{dz_j}{z_j} \cdot \sum_{\ell=1}^L\left(\frac{x^n z_1\cdots z_N}{q^A}\right)^{\ell} \cdot m^q_N(z)\frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}(q;q)_{\infty}^{N-1}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}}. \label{p611} \end{align} Then we rewrite the sum over $n$ as a contour integral using the residue theorem: \begin{align} \sum_{n=0}^\infty \frac{\zeta^n x^{\ell n}}{(q;q)_n} = \frac{-1}{2 \pi i}\int_{i\mathbb{R}-\epsilon}du \frac{\pi}{\sin\pi u} \frac{(-\zeta x^{\ell})^u(q^{1+u};q)_{\infty}}{(q;q)_{\infty}}, \end{align} where we set the branch cut of the function $z^u$ along $\mathbb{R}_-$ (see Lemma 3.20 and Step 2 in the proof of Theorem 3.18 in~\cite{BC2014} for a similar identity). We find \begin{align} &\sum_{\ell=-L}^{-1} \frac{1}{(\zeta x^{-\ell};q)_{\infty}} P_t\left(\ell\right) = \int_{i\mathbb{R} -\epsilon}\frac{du}{2\pi i} \frac{\pi}{\sin \pi u} \frac{(-\zeta)^u(q^{u+1};q)_{\infty}}{(q;q)_{\infty}} \notag\\ &\times \int_{\mathbb{T}^N}\prod_{j=1}^N \frac{dz_j}{z_j} \cdot \sum_{\ell=1}^L\left(\frac{x^u z_1\cdots z_N}{q^A}\right)^{\ell} \cdot m^q_N(z)\frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}(q;q)_{\infty}^{N-1}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}} \label{p612} \end{align} with $\epsilon>0$. Now we consider the analytic continuation in $x$ on both sides of~\eqref{p612}. Let $\mu, \theta\in(-\pi,\pi)$ be the arguments of $-\zeta$ and $x$, respectively, i.e., $-\zeta=|\zeta|e^{i\mu},~x=|x|e^{i\theta}$. The LHS is analytic for $x\in\mathbb{C}\setminus\Omega$ where \begin{align} \Omega= \left\{ \frac{e^{i(\mu+\pi)/\ell}}{|\zeta|q^n} ;1\le\ell\le L,~n\in\mathbb{N} \right\} . \end{align} Thus we see that by analytic continuation one can set $x=1/q$ since, for $\mu\in (-\pi,\pi)$, one finds $1/q\notin\Omega$.
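The contour-integral representation of the $q$-exponential sum used in this step can also be checked numerically. The sketch below evaluates the integral along $-\epsilon+i\mathbb{R}$ by a crude Riemann sum (truncated at $|{\rm Im}\,u|=8$); the parameter values are arbitrary, with $z>0$ standing in for $-\zeta x^{\ell}$.

```python
import cmath, math

def qpoch(a, q, n=200):
    """Truncated q-Pochhammer (a;q)_n, complex-safe."""
    p = 1.0 + 0.0j
    for k in range(n):
        p *= 1.0 - a * q**k
    return p

q, z = 0.4, 0.15          # z plays the role of -zeta*x^ell, taken positive here
series = sum((-z)**n / qpoch(q, q, n) for n in range(100))

# -(1/2 pi i) int_{-eps+iR} du [pi/sin(pi u)] z^u (q^{1+u};q)_inf / (q;q)_inf
eps, Y, h = 0.5, 8.0, 0.005
qq_inf = qpoch(q, q)
total = 0.0 + 0.0j
steps = int(2 * Y / h)
for k in range(steps + 1):
    u = -eps + 1j * (-Y + k * h)
    zu = cmath.exp(u * math.log(z))               # principal branch; z > 0
    total += (math.pi / cmath.sin(math.pi * u)) * zu \
             * qpoch(q * cmath.exp(u * math.log(q)), q) / qq_inf * 1j * h
integral = -total / (2j * math.pi)
```

Closing the contour to the right picks up the poles of $\pi/\sin\pi u$ at $u=0,1,2,\cdots$ with residues $(-1)^n$, which reproduces the series term by term; the quadrature confirms this.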
On the RHS of~\eqref{p612}, let us focus on the integral \begin{align} \int_{i\mathbb{R}-\epsilon}\frac{du}{2\pi i} \frac{\pi}{\sin\pi u} (-\zeta x^{\ell})^u(q^{u+1};q)_{\infty} \label{p615} \end{align} for $\ell=1,2,\cdots,L$. Considering the fact that $|q^{ix}|=1$ for $x\in\mathbb{R}$ and \begin{align} \lim_{x\rightarrow\pm \infty}\frac{e^{\pi |x|}}{|2 \sin(\pi i x)|}=1, \end{align} we find that~\eqref{p615} is analytic as a function of $x$ if \begin{align} \lim_{y\rightarrow\pm\infty} \frac{(-\zeta x^\ell)^{iy}}{e^{\pi|y|}}=0 \Leftrightarrow \frac{-\pi-\mu}{\ell}<\theta<\frac{\pi-\mu}{\ell} \label{p616} \end{align} for $\ell=1,2,\cdots,L$. Note that the strictest case, $\ell=L$, \begin{align} \frac{-\pi-\mu}{L}<\theta<\frac{\pi-\mu}{L} \label{p617} \end{align} includes the vicinity of $\theta=0$ for any fixed $\mu\in (-\pi,\pi)$. Thus under the condition~\eqref{p617}, we can extend the region of analyticity from $|x|<1$ to a larger region including $\mathbb{R}_+$ and in particular we can set $x=1/q$. Thus setting $x=1/q$ and summing over $\ell$, we get \begin{align} &\sum_{\ell=-L}^{-1} \frac{1}{(\zeta q^{\ell};q)_{\infty}} P_t\left(\ell\right) =(q;q)_{\infty}^{N-2} \int_{i\mathbb{R} -\epsilon}\frac{du}{2\pi i} \frac{\pi}{\sin \pi u} (-\zeta)^u(q^{u+1};q)_{\infty} \notag\\ &\times \int_{\mathbb{T}^N}\prod_{j=1}^N dz_j \cdot \frac{1-(z_1\cdots z_N/q^{A+u})^L}{z_1\cdots z_N-q^{A+u}} m^q_N(z)\frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}}. \label{p612a} \end{align} Note that at this stage we can take $\epsilon$ to be an arbitrary positive real value. Here we set it to be $\epsilon_A$ such that $\epsilon_A>A$, which leads to $|z_1\cdots z_N/q^{A+u}|<1$.
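The exponential decay of $1/\sin(\pi u)$ along vertical lines, which drives the analyticity condition above, rests on the elementary asymptotics $|2\sin(\pi i y)|=2\sinh(\pi|y|)\sim e^{\pi|y|}$; a quick numerical confirmation:

```python
import math

# ratio e^{pi y} / (2 sinh(pi y)) = 1/(1 - e^{-2 pi y}) -> 1 as y -> infinity
ratios = [math.exp(math.pi * y) / (2.0 * math.sinh(math.pi * y))
          for y in (2.0, 4.0, 8.0)]
```

The ratios approach $1$ from above at the rate $e^{-2\pi y}$.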
Then we can take the $L\rightarrow\infty$ limit and have \begin{align} \sum_{\ell=-\infty}^{-1} \frac{1}{(\zeta q^{\ell};q)_{\infty}} P_t\left(\ell\right) &=(q;q)_{\infty}^{N-2} \int_{i\mathbb{R} -\epsilon_A}\frac{du}{2\pi i} \frac{\pi}{\sin \pi u} (-\zeta)^u(q^{u+1};q)_{\infty} \notag\\ &\times \int_{\mathbb{T}^N}\prod_{j=1}^N dz_j \cdot \frac{1}{z_1\cdots z_N-q^{A+u}} \cdot m^q_N(z)\frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}}. \end{align} For later use, we change the variables $w_1=z_1\cdots z_N$ and $w_j=z_j$ for $j\ge 2$. We have \begin{align} &\sum_{\ell=-\infty}^{-1} \frac{1}{(\zeta q^{\ell};q)_{\infty}} P_t\left(\ell\right) \notag\\ &=(q;q)_{\infty}^{N-2} \int_{i\mathbb{R} -\epsilon_A}\frac{du}{2\pi i} \frac{\pi}{\sin \pi u} (-\zeta)^u(q^{u+1};q)_{\infty} \int_{\mathbb{T}^{N-1}}\prod_{j=2}^N\frac{dw_j}{w_j} \cdot \int_\mathbb{T} dw_1\frac{ C(w;a,\alpha)}{w_1-q^{A+u}}, \end{align} where \begin{align} C(w;a,\alpha)= \left. m^q_N(z)\frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}} \right|_{z_1=w_1/w_2\cdots w_N,~z_j=w_j~\text{for}~j\ge 2}. \end{align} We change the contour $\mathbb{T}$ of $w_1$ to $\mathbb{T}_A$ such that the contour encloses the pole $w_1=q^{A+u}$. 
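The contour deformation just performed picks up the residue at $w_1=q^{A+u}$ precisely when the pole lies between the two contours. The mechanism is the elementary fact that $\frac{1}{2\pi i}\oint_{|w|=r}\frac{f(w)}{w-p}\,dw$ equals $f(p)$ if $|p|<r$ and $0$ otherwise; a small numerical illustration with an arbitrary analytic $f$:

```python
import cmath, math

def cauchy_integral(f, p, r, n=4000):
    """(1/2 pi i) * integral over |w| = r of f(w)/(w - p) dw, via a Riemann sum."""
    total = 0.0 + 0.0j
    for k in range(n):
        w = r * cmath.exp(2j * math.pi * k / n)
        total += f(w) / (w - p) * 1j * w * (2 * math.pi / n)
    return total / (2j * math.pi)

f = cmath.exp               # any function analytic in the disc
p = 0.5 + 0.2j              # pole location, |p| ~ 0.54

inside = cauchy_integral(f, p, r=1.5)   # pole enclosed: equals f(p)
outside = cauchy_integral(f, p, r=0.3)  # pole not enclosed: integral vanishes
```

The trapezoidal rule on a circle converges spectrally fast for analytic integrands, so both results are accurate to machine precision.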
We have \begin{align} &~\sum_{\ell=-\infty}^{-1} \frac{1}{(\zeta q^{\ell};q)_{\infty}} P_t\left(\ell\right) \notag\\ &=(q;q)_{\infty}^{N-2} \int_{i\mathbb{R} -\epsilon_A}\frac{du}{2\pi i} \frac{\pi}{\sin \pi u} (-\zeta)^u(q^{u+1};q)_{\infty} \int_{\mathbb{T}^{N-1}}\prod_{j=2}^N\frac{dw_j}{w_j} \cdot \int_{\mathbb{T}_A} dw_1\frac{C(w;a,\alpha)}{w_1-q^{A+u}} \notag\\ &-(q;q)_{\infty}^{N-2} \int_{i\mathbb{R} -\epsilon_A}\frac{du}{2\pi i} \frac{\pi}{\sin \pi u} (-\zeta)^u(q^{u+1};q)_{\infty} \int_{\mathbb{T}^{N-1}}\prod_{j=2}^N\frac{dw_j}{w_j} \cdot \underset{w_1=q^{A+u}}{\text{Res}} \frac{C(w;a,\alpha)}{w_1-q^{A+u}}. \label{p618} \end{align} Replacing the $u$-integral by the sum of the residues at $u=0,1,2,\cdots$ and changing the variables $w_j$ to $z_j$, we find that the first term of~\eqref{p618} becomes \begin{align} -\sum_{n=0}^{\infty}\int_{\mathbb{T}^N} \prod_{j=1}^N dz_j \cdot \frac{\zeta^n}{(q;q)_n} \frac{m^q_N(z_1,\cdots,z_N)}{z_1\cdots z_N-q^{A+n}} \frac{\Pi(z;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \cdot \frac{\left(\frac{q^A}{z_1\cdots z_N};q\right)_{\infty}(q;q)_{\infty}^{N-1}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{z_j};q)_{\infty}}, \end{align} which is exactly equal to the first term in~\eqref{p62} and cancels~\eqref{p65}. We easily find that the second term can be written as \begin{align} -(q;q)_{\infty}^{N-2}\int_{i\mathbb{R} -\epsilon_A}\frac{du}{2\pi i} \frac{\pi}{\sin \pi u} (-\zeta)^u(q^{u+1};q)_{\infty} \int_{\mathbb{T}^{N-1}}\prod_{j=2}^N\frac{dw_j}{w_j} \cdot m^q_N(w)\frac{\Pi(w;q^\alpha,t)}{\Pi(q^a;q^\alpha,t)} \frac{\left(q^{-u};q\right)_{\infty}}{\prod_{i,j=1}^N(\frac{q^{a_i}}{w_j};q)_{\infty}} \end{align} with $w_1=q^{A+u}/w_2\cdots w_N$. Shifting $u$ as $s=u+A$, we arrive at the second term in~\eqref{p62}. \qed Now we have another integral representation, which is more useful for our purpose.
\begin{proposition} \label{p10} For $\epsilon>0$ and $\zeta\in\mathbb{C}\setminus\mathbb{R}_+$, we have \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle &= \frac{(-\pi)^N}{(2\pi i)^NN!} \int_{(i\mathbb{R}-\epsilon)^N} \prod_{j=1}^N \frac{ds_j}{q^{(N-1)a_j}(q;q)_{\infty}} \frac{(-\zeta)^{s_j}}{(-\zeta)^{a_j}} \cdot \frac {\Pi(q^s;q^\alpha,t)} {\Pi(q^a;q^\alpha,t)} \cdot \prod_{i,j=1}^N\frac{(q^{s_i-a_j+1};q)_{\infty}}{\sin\pi(s_i-a_j)}\notag\\ & \hspace{1.4cm} \times \prod_{1\le i<j\le N}\frac{\sin\pi(s_j-s_i)\sin\pi(a_j-a_i)(q^{s_j}-q^{s_i})(q^{a_j}-q^{a_i})} {(q^{a_i-a_j};q)_{\infty}(q^{a_j-a_i};q)_{\infty}}. \label{p100} \end{align} \end{proposition} \smallskip \noindent {\bf Proof.} As in the proof of Proposition~\ref{p6}, we assume $|\zeta|<1$ in this proof. When $|\zeta|<1$, we find that both~\eqref{p60} and~\eqref{p100} can be evaluated as sums of residues in the right half plane since both integrands vanish as $\Re s\rightarrow\infty$. We see that the integrand in~\eqref{p60} has poles at the following points: \begin{align} &s=\alpha_k+\frac{\log\prod_{j=2}^Nw_j}{\log q}+n_1+\frac{2\pi i\ell}{\log q},~ a_k+\frac{\log\prod_{j=2}^Nw_j}{\log q}+n_1+\frac{2\pi i\ell}{\log q},~ \notag\\ &w_j=q^{\alpha_k+n_j},~q^{a_k+n_j},~j=2,3,\cdots,N \label{p101} \end{align} for $k=1,2,\cdots,N$, $n_1,\cdots,n_N\in\mathbb{N}$ and $\ell\in\mathbb{Z}$. Using $w_1$ defined in~\eqref{p900} in place of $s$, \eqref{p101} can be written in a more compact form, \begin{align} w_l=q^{\alpha_k+n_l},~q^{a_k+n_l},~l=1,\cdots,N, \label{p1002} \end{align} for $k=1,2,\cdots,N$ and $n_l\in\mathbb{N}$. However, we find that when at least one pair of $w_j$\rq{}s shares a common $a_k$ or $\alpha_k$, such choices give no contribution: e.g.
in both cases \begin{align} (w_j,w_k)=(q^{a_m+n_j},~q^{a_m+n_k}),~(q^{\alpha_m+n_j},~q^{\alpha_m+n_k}) \end{align} for some $j,k$ and $m$, we find that the factor $\left(w_j/w_k;q\right)_\infty\left(w_k/w_j;q\right)_\infty$ in $m_N^q(w)$ in the integrand of~\eqref{p60} vanishes, \begin{align} \left(w_j/w_k;q\right)_\infty\left(w_k/w_j;q\right)_\infty =\left(q^{n_j-n_k};q\right)_\infty\left(q^{n_k-n_j};q\right)_\infty =0. \end{align} Thus we need to consider only the poles where all $w_j$, $j=1,2,\cdots,N$, have distinct $a_k$\rq{}s or $\alpha_k$\rq{}s. Furthermore, the contribution of a pole does not change under exchanging the poles since the integrand in~\eqref{p60} is symmetric under the exchange of the $w_i$\rq{}s. Thus we find that~\eqref{p60} can be represented as follows: \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle = N!\sum_{N_1=0}^{N} &\sum_{1\le j_1<\cdots<j_{N_1}\le N} \sum_{1\le j_{N_1+1}<\cdots<j_{N}\le N} \notag \\ &\sum_{n_1,\cdots,n_N=0}^{\infty} \sum_{\ell\in\mathbb{Z}} f(a_{j_1},\cdots,a_{j_{N_1}},\alpha_{j_{N_1+1}},\cdots,\alpha_{j_N},{\bf n},\ell,N_1), \label{p1004} \end{align} where ${\bf n}=(n_1,\cdots,n_N)$, and $f(a_{j_1},\cdots,a_{j_{N_1}},\alpha_{j_{N_1+1}},\cdots,\alpha_{j_N},{\bf n},\ell,N_1)$ denotes the residue at the poles \begin{align} w_k= \begin{cases} q^{a_{j_k}+n_k},&~\text{for}~ k=1,\cdots,N_1, \\ q^{\alpha_{j_k}+n_k},&~\text{for}~ k=N_1+1,\cdots,N. \end{cases} \end{align} Note that in this case the corresponding pole for $s$ is at \begin{align} s=\sum_{k=1}^{N_1}a_{j_k}+\sum_{k=N_1+1}^N\alpha_{j_k}+\sum_{j=1}^Nn_j+\frac{2\pi i\ell}{\log q}. \end{align} Similarly~\eqref{p100} has poles at \begin{align} s_j=a_k+n_{j},~\alpha_k+n_j+\frac{2\pi i\ell_j}{\log q}, \end{align} where $j,k=1,2,\cdots,N$, $n_j\in \mathbb{Z}_{\ge 0}$ and $\ell_j\in\mathbb{Z}$. We easily find that when at least one pair of $s_j$, $j=1,2,\cdots,N$, shares a common $a_k$ or $\alpha_k$, they give no contribution: e.g.
when $(s_i,s_j)=(a_k+n_i,~a_k+n_j)$, the factor $\sin \pi (s_j-s_i)$ in the numerator in~\eqref{p100} becomes \begin{align} \sin \pi (s_j-s_i)=\sin\pi (n_j-n_i)=0 . \end{align} Similarly, we find that the contribution from $(s_{j_1},s_{j_2})=(\alpha_k+n_{j_1}+ 2\pi i\ell_{j_1}/\log q,~\alpha_k+n_{j_2}+2\pi i\ell_{j_2}/\log q)$ cancels that from $(s_{j_1},s_{j_2})=(\alpha_k+n_{j_1}+ 2\pi i\ell_{j_2}/\log q,~\alpha_k+n_{j_2}+ 2\pi i\ell_{j_1}/\log q)$, since in the former case the same factor $\sin \pi (s_{j_2}-s_{j_1})$ produces \begin{align} \sin\pi (s_{j_2}-s_{j_1})=(-1)^{n_{j_2}-n_{j_1}}\sin\left(\frac{2\pi^2 i (\ell_{j_2}-\ell_{j_1})}{\log q}\right) \end{align} and in the latter one it gives the same quantity with the opposite sign. Furthermore, the residue at the poles with distinct $a_k$\rq{}s or $\alpha_k$\rq{}s does not change under the exchange of the poles since the integrand in~\eqref{p100} is symmetric under the exchange of the $s_j$\rq{}s. Thus we arrive at the following expression for~\eqref{p100}: \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle &= N!\sum_{N_1=0}^{N} \sum_{1\le j_1<\cdots<j_{N_1}\le N} \sum_{1\le j_{N_1+1}<\cdots<j_{N}\le N} \notag \\ &~\times\sum_{n_1,\cdots,n_N=0}^{\infty} \sum_{\ell_{N_1+1},\cdots,\ell_N\in\mathbb{Z}} g(a_{j_1},\cdots,a_{j_{N_1}},\alpha_{j_{N_1+1}},\cdots,\alpha_{j_N},{\bf n},{\boldsymbol\ell},N_1), \label{p1003} \end{align} where ${\bf n}=(n_1,\cdots,n_N)$, ${\boldsymbol\ell}=(\ell_{N_1+1},\cdots,\ell_N)$, and $ g(a_{j_1},\cdots,a_{j_{N_1}},\alpha_{j_{N_1+1}},\cdots,\alpha_{j_N},{\bf n},{\boldsymbol\ell},N_1) $ represents the residue at the poles \begin{align} s_k= \begin{cases} a_{j_k}+n_k,& k=1,2,\cdots,N_1, \\ \alpha_{j_k}+n_{k}+\frac{2\pi i\ell_k}{\log q}, & k=N_1+1,\cdots,N.
\end{cases} \end{align} In Appendix~\ref{a}, we will prove \begin{align} f(a_{j_1},\cdots,a_{j_{N_1}},\alpha_{j_{N_1+1}},\cdots,\alpha_{j_N},{\bf n},\ell,N_1) =\hspace{-5mm} \sum_{\substack{\ell_{N_1+1},\cdots,\ell_N\in\mathbb{Z}\\ \ell_{N_1+1}+\cdots+\ell_N=\ell }} \hspace{-2mm} g(a_{j_1},\cdots,a_{j_{N_1}},\alpha_{j_{N_1+1}},\cdots,\alpha_{j_N},{\bf n},{\boldsymbol\ell},N_1), \label{p105} \end{align} from which the equivalence of the two expressions~\eqref{p1004} and~\eqref{p1003} immediately follows. \qed \section{Fredholm determinant formulas} \label{sec:det} In this section, we obtain a Fredholm determinant representation for the $q$-Laplace transform $\left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle$. For this purpose we use Proposition~\ref{p10} and the following rational and trigonometric versions of the Cauchy identities, \begin{align} & \frac{ \prod_{1\le i<j\le N} (q^{a_i}-q^{a_j}) (q^{s_j}-q^{s_i}) } { \prod_{i,j=1}^N (q^{s_i}-q^{a_j}) } = \det \left( \frac{1} {q^{s_i}-q^{a_j}} \right)_{i,j=1}^N, \label{41} \\ & \frac{ \prod_{1\le i<j\le N} \sin\pi (a_i-a_j) \sin\pi(s_j-s_i) } { \prod_{i,j=1}^N \sin\pi(s_i-a_j) } = \det \left( \frac{1} {\sin\pi(s_i-a_j)} \right)_{i,j=1}^N. \label{42} \end{align} For a more general discussion of Cauchy determinants, see for instance~\cite{KN2003}. \begin{theorem} For the two-sided $q$-Whittaker measure (\ref{ptn}), we have, for $\zeta\neq q^n,n\in\mathbb{Z}$, \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle = \det\left(1-fK\right)_{L^2(\mathbb{R})}.
\label{t110} \end{align} Here the kernel is defined by \begin{align} &f(x)=\frac{-\zeta}{-\zeta+e^x}, \label{t1101} \\ &K(x_1,x_2) =\sum_{k=0}^{N-1}\phi_k(x_1;a,\alpha,t)\psi_{k}(x_2;a,\alpha,t), \label{t1102} \end{align} and the functions $\phi_k(x_1;a,\alpha,t)$ and $\psi_k(x_2;a,\alpha,t)$ are given as \begin{align} &\phi_k(x_1;a,\alpha,t) = \sqrt{q^{a_{k+1}}-q^{\alpha_{k+1}}} \int_D\frac{\log q~dv}{2\pi i} \frac{e^{-v x_1-q^v t}} {q^{(N-1)v}(q^{v}-q^{a_{k+1}})} \prod_{\ell=1}^k \frac{q^v-q^{\alpha_\ell}} {q^{v}-q^{a_\ell}} \cdot \prod_{m=1}^N \frac{(q^{\alpha_m-v+1};q)_{\infty}} {(q^{v-a_m+1};q)_{\infty}}, \label{t1103} \\ &\psi_k(x_2;a,\alpha,t) = \sqrt{q^{a_{k+1}}-q^{\alpha_{k+1}}} \int_{i\mathbb{R}}\frac{ds}{2\pi i} \frac{e^{s x_2+q^s t-\delta |s|}q^{Ns}} {q^{s}-q^{\alpha_{k+1}}} \prod_{\ell=1}^k \frac{q^s-q^{a_\ell}} {q^s-q^{\alpha_\ell}} \cdot \prod_{m=1}^N \frac{(q^{s-a_m+1};q)_{\infty}} {(q^{\alpha_m-s+1};q)_{\infty}}, \label{t1104} \end{align} where in~\eqref{t1104} the convergence factor $\delta=0+$ is introduced and in~\eqref{t1103} $D$ denotes the contour enclosing $a_j,~j=1,\cdots,N$, positively. \end{theorem} \smallskip \noindent {\bf Proof.} We will show that the $N$-fold integral representation~\eqref{p100} can be rewritten as~\eqref{t110}. The following calculations are similar to those in~\cite{BCR2013} for the O'Connell--Yor and log-Gamma polymer models.
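As a sanity check (not needed for the proof), the rational and trigonometric Cauchy determinant evaluations~\eqref{41} and~\eqref{42} can be tested numerically; the pure-Python sketch below does so for $N=3$ with arbitrarily chosen sample points.

```python
import math

def det3(M):
    """Determinant of a 3x3 matrix given as nested lists."""
    return (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
          - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
          + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))

q = 0.5
a = [0.10, 0.70, 1.30]
s = [0.40, 1.00, 1.90]
qa = [q**x for x in a]
qs = [q**x for x in s]

# rational version (41)
lhs_rat = det3([[1.0 / (qs[i] - qa[j]) for j in range(3)] for i in range(3)])
num = den = 1.0
for i in range(3):
    for j in range(3):
        den *= qs[i] - qa[j]
        if i < j:
            num *= (qa[i] - qa[j]) * (qs[j] - qs[i])
rhs_rat = num / den

# trigonometric version (42)
sinpi = lambda x: math.sin(math.pi * x)
lhs_trig = det3([[1.0 / sinpi(s[i] - a[j]) for j in range(3)] for i in range(3)])
num_t = den_t = 1.0
for i in range(3):
    for j in range(3):
        den_t *= sinpi(s[i] - a[j])
        if i < j:
            num_t *= sinpi(a[i] - a[j]) * sinpi(s[j] - s[i])
rhs_trig = num_t / den_t
```

The trigonometric version follows from the rational one via the substitution $q^{s_i}\to e^{2\pi i s_i}$, which is why the same product structure appears in both.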
Note that the factor in~\eqref{p100} \begin{align} \prod_{i,j=1}^N\frac{(q^{s_i-a_j+1};q)_{\infty}}{\sin\pi(s_i-a_j)} \prod_{1\le i<j\le N}\frac{\sin\pi(s_j-s_i)\sin\pi(a_j-a_i)(q^{s_j}-q^{s_i})(q^{a_j}-q^{a_i})} {(q^{a_i-a_j};q)_{\infty}(q^{a_j-a_i};q)_{\infty}} \end{align} can be written as \begin{align} &(-1)^N \prod^N_{ \substack{ i,j=1 \\ i\neq j } } \frac{1}{(q^{a_i-a_j};q)_{\infty}} \prod_{i,j=1}^N (q^{s_i-a_j};q)_{\infty} \cdot \prod_{j=1}^Nq^{Na_j} \notag \\ &\hspace{3cm}\times \frac{ \prod_{1\le i<j\le N} \sin\pi (a_i-a_j) \sin\pi(s_j-s_i) } { \prod_{i,j=1}^N \sin\pi(s_i-a_j) } \cdot \frac{ \prod_{1\le i<j\le N} (q^{a_i}-q^{a_j}) (q^{s_j}-q^{s_i}) } { \prod_{i,j=1}^N (q^{s_i}-q^{a_j}) } \notag\\ &= (-1)^N \prod^N_{ \substack{ i,j=1 \\ i\neq j } } \frac{1}{(q^{a_i-a_j};q)_{\infty}} \prod_{i,j=1}^N (q^{s_i-a_j};q)_{\infty} \cdot \prod_{j=1}^Nq^{Na_j} \notag \\ &\hspace{5cm}\times \det \left( \frac{1} {q^{s_i}-q^{a_j}} \right)_{i,j=1}^N \cdot \det \left( \frac{1} {\sin\pi(s_i-a_j)} \right)_{i,j=1}^N, \label{t111} \end{align} where in the last expression we used~\eqref{41} and~\eqref{42}. Substituting this into~\eqref{p100}, we get \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda^{(N)}_N};q)_{\infty}}\right\rangle &= \frac{1}{(2i)^NN!} \int_{(i\mathbb{R}-\epsilon)^N} \prod_{j=1}^N \frac{ds_j q^{a_j}}{(q;q)_{\infty}} \frac{(-\zeta)^{s_j}\Pi(q^{s_j};q^{\alpha},t)}{(-\zeta)^{a_j}\Pi(q^{a_j};q^\alpha,t)} \cdot \frac {\prod_{i,j=1}^N(q^{s_i-a_j};q)_{\infty}} {\prod_{\substack{i,j=1\\ i\neq j}}^N (q^{a_j-a_i};q)_{\infty}} \notag \\ &\hspace{1cm} \times \det \left( \frac{1} {q^{s_i}-q^{a_j}} \right)_{i,j=1}^N \cdot \det \left( \frac{1} {\sin\pi(s_i-a_j)} \right)_{i,j=1}^N. \label{t112} \end{align} Once we get this type of determinantal formula, it is straightforward to reach the Fredholm determinant.
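The passage from the double-determinant formula~\eqref{t112} to a single determinant rests on the Andr\'eief (continuous Cauchy--Binet) identity, $\frac{1}{N!}\int \det[f_i(x_j)]\det[g_i(x_j)]\,\prod_j dx_j = \det[\int f_i(x)g_j(x)\,dx]$. A small numerical illustration for $N=2$ on $[0,1]$, with hypothetical test functions and a midpoint quadrature:

```python
# Andreief identity, N = 2: (1/2!) int det[f_i(x_j)] det[g_i(x_j)] dx1 dx2
#                           = det[ int f_i(x) g_j(x) dx ].
# Test functions f = (1, x), g = (x, x^2); exact value of both sides is 1/72.
f = [lambda x: 1.0, lambda x: x]
g = [lambda x: x,   lambda x: x * x]

n = 400
xs = [(k + 0.5) / n for k in range(n)]   # midpoint grid on [0, 1]
h = 1.0 / n

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

lhs = 0.0
for x1 in xs:
    for x2 in xs:
        df = det2([[f[0](x1), f[0](x2)], [f[1](x1), f[1](x2)]])
        dg = det2([[g[0](x1), g[0](x2)], [g[1](x1), g[1](x2)]])
        lhs += df * dg * h * h
lhs /= 2.0   # the 1/N! prefactor

rhs = det2([[sum(f[i](x) * g[j](x) * h for x in xs) for j in range(2)]
            for i in range(2)])
```

Here $\det[f_i(x_j)] = x_2-x_1$ and $\det[g_i(x_j)] = x_1x_2(x_2-x_1)$, so the double integral can also be done by hand, giving $1/72$ on both sides.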
Noting that the Andr{\'e}ief identity is applicable to~\eqref{t112}, we get a single determinant expression of rank $N$, \begin{align} &\left\langle\frac{1}{(\zeta q^{\lambda^{(N)}_N};q)_{\infty}}\right\rangle \notag\\ &= \det\left( \frac{1}{2\pi i} \int_{i\mathbb{R}-\epsilon} \frac{ds q^{a_j}}{(q;q)_{\infty}} \frac{(-\zeta)^{s}\Pi(q^s;q^\alpha,t)}{(-\zeta)^{a_j}\Pi(q^{a_j};q^\alpha,t)} \cdot \frac{\prod_{m=1}^N(q^{s-a_m};q)_{\infty}}{\prod_{\substack{m=1\\ m\neq j}}^N (q^{a_j-a_m};q)_{\infty}} \frac{q^{a_i}} {q^s-q^{a_i}} \frac{\pi}{\sin\pi(s-a_j)} \right)_{i,j=1}^N. \label{t113} \end{align} Now we shift the contours from $i\mathbb{R}-\epsilon$ to $i\mathbb{R}+\epsilon_A$ where $a_j<\epsilon_A<a_j+1$, $j=1,\cdots,N$. Noting \begin{align} &\frac{1}{2\pi i} \int_{i\mathbb{R}-\epsilon} \frac{ds q^{a_j}}{(q;q)_{\infty}} \frac{(-\zeta)^{s}\Pi(q^s;q^\alpha,t)}{(-\zeta)^{a_j}\Pi(q^{a_j};q^{\alpha},t)} \cdot \frac{\prod_{m=1}^N(q^{s-a_m};q)_{\infty}}{\prod_{\substack{m=1\\ m\neq j}}^N (q^{a_j-a_m};q)_{\infty}} \frac{q^{a_i}} {q^s-q^{a_i}} \frac{\pi}{\sin\pi(s-a_j)} \notag \\ &=\delta_{i,j} +\frac{1}{2\pi i} \int_{i\mathbb{R}+\epsilon_A} \frac{ds q^{a_j}}{(q;q)_{\infty}} \frac{(-\zeta)^{s}\Pi(q^s;q^\alpha,t)}{(-\zeta)^{a_j}\Pi(q^{a_j};q^\alpha,t)} \cdot \frac{\prod_{m=1}^N(q^{s-a_m};q)_{\infty}}{\prod_{\substack{m=1\\ m\neq j}}^N (q^{a_j-a_m};q)_{\infty}} \frac{q^{a_i}} {q^s-q^{a_i}} \frac{\pi}{\sin\pi(s-a_j)}, \label{t114} \end{align} and using the relation \begin{align} \frac {\pi (-\zeta)^y} {\sin\pi y} = \int_{-\infty}^{\infty} dx \frac {-\zeta e^{xy}} {-\zeta+e^x}, \label{t115} \end{align} for $0<y<1$, we have \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda^{(N)}_N};q)_{\infty}}\right\rangle = \det\left( \delta_{i,j} + \int_{-\infty}^{\infty}dx A(i,x)B(x,j) \right)_{i,j=1}^N, \label{t116} \end{align} where \begin{align} & A(i,x)= \int_{i\mathbb{R}+\epsilon_A} \frac{ds}{2\pi i} \frac{e^{sx-\delta |s|}\Pi(q^s;q^\alpha,t)q^{a_i}}{q^s-q^{a_i}} \prod_{m=1}^N
(q^{s-a_m};q)_{\infty}, \label{t117} \\ & B(x,j)= \frac{e^{-a_j x}}{(q;q)_{\infty}\Pi(q^{a_j};q^\alpha,t)} \prod_{\substack{m=1\\ m\neq j}}^N \frac{1}{(q^{a_j-a_m};q)_{\infty}} \cdot \frac{-\zeta}{-\zeta+e^x}. \label{t118} \end{align} Here in~\eqref{t117}, we introduced the convergence factor $\delta=0+$. Using the identity $\det(1+AB)=\det(1+BA)$, we see that this is equal to \begin{align} \det\left( \delta_{i,j} + \int_{-\infty}^{\infty}dx A(i,x)B(x,j) \right)_{i,j=1}^N = \det\left(1+BA\right)_{L^2(\mathbb{R})}, \label{t119} \end{align} where the RHS is the Fredholm determinant on $L^2(\mathbb{R})$ with the kernel \begin{align} &(BA)(x_1,x_2) \notag \\ &= \frac{-\zeta}{-\zeta+e^{x_1}} \sum_{j=1}^N \int_{i\mathbb{R}+\epsilon_A} \frac{ds}{2\pi i} \frac{e^{(q^s-q^{a_j})t+sx_2-a_jx_1-\delta |s|}}{(q;q)_{\infty}} \prod_{m=1}^N\frac{(q^{\alpha_m-a_j};q)_{\infty}}{(q^{\alpha_m-s};q)_{\infty}} \cdot \frac {\prod_{m=1}^N(q^{s-a_m};q)_{\infty}} {\prod_{\substack{m=1\\m\neq j}}^N(q^{a_j-a_m};q)_{\infty}} \cdot \frac{q^{a_j}}{q^s-q^{a_j}}. \end{align} Rewriting the summation over $j=1,\cdots,N$ as the contour integral enclosing $a_j,~j=1,\cdots,N$ positively (this contour is denoted by $D$), we get \begin{align} &~~ \frac{\zeta\log q}{-\zeta+e^{x_1}} \int_D \frac{dv}{2\pi i} \int_{i\mathbb{R}+\epsilon_A} \frac{ds}{2\pi i} e^{(q^s-q^v)t+sx_2-vx_1-\delta |s|} \prod_{m=1}^N \frac {(q^{\alpha_m-v};q)_{\infty}(q^{s-a_m};q)_{\infty}} {(q^{\alpha_m-s};q)_{\infty}(q^{v-a_m};q)_{\infty}} \cdot \frac{q^{v}}{q^s-q^{v}} \notag \\ &= \frac{\zeta\log q}{-\zeta+e^{x_1}} \int_D \frac{dv}{2\pi i} \int_{i\mathbb{R}+\epsilon_A} \frac{ds}{2\pi i} e^{(q^s-q^v)t+sx_2-vx_1-\delta|s|} q^{N(s-v)+v} \prod_{m=1}^N \frac {(q^{\alpha_m-v+1};q)_{\infty}(q^{s-a_m+1};q)_{\infty}} {(q^{\alpha_m-s+1};q)_{\infty}(q^{v-a_m+1};q)_{\infty}} \notag \\ & \hspace{7cm} \times \left( \prod_{m=1}^N \frac {q^v-q^{\alpha_m}}{q^s-q^{a_m}} \frac {q^s-q^{\alpha_m}}{q^v-q^{a_m}} -1 \right) \frac{1}{q^s-q^{v}}.
\label{t1110} \end{align} Here, in the equality, we inserted ``$-1$'' in the parentheses; this term gives no contribution to the integral. Substituting the relation \begin{align} \left( \frac {q^v-q^{\alpha_m}}{q^s-q^{a_m}} \frac {q^s-q^{\alpha_m}}{q^v-q^{a_m}} -1 \right) \frac{1}{q^s-q^{v}} = \sum_{k=0}^{N-1} \frac {q^{a_{k+1}}-q^{\alpha_{k+1}}} {(q^s-q^{\alpha_{k+1}})(q^v-q^{a_{k+1}})} \prod_{\ell=1}^k \frac {(q^s-q^{a_\ell})(q^v-q^{\alpha_\ell})} {(q^s-q^{\alpha_\ell})(q^v-q^{a_\ell})} \label{t1111} \end{align} into~\eqref{t1110}, we find $(BA)(x_1,x_2)=-f(x_1)K(x_1,x_2)$ where $f(x)$ and $K(x_1,x_2)$ are defined by~\eqref{t1101} and~\eqref{t1102} respectively. From this and~\eqref{t116},~\eqref{t119}, we obtain our desired expression~\eqref{t110}. \qed \vspace{3mm} Our representation~\eqref{t110} can be rewritten in a form closer to that of Theorem 3.18 of~\cite{BC2014}. \begin{corollary} \begin{align} &\left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle = \det\left( 1 + K_{\zeta} \right)_{L^2(C_a)}, \label{c120} \end{align} where \begin{align} K_{\zeta}(v_1,v_2) = \frac{-1}{2\pi i} \int_{i\mathbb{R}+\epsilon} ds \frac{(-\zeta)^s\pi}{\sin\pi s} \frac {e^{(q^{v_1+s}-q^{v_1})t}} {q^{s+v_1}-q^{v_2}} \prod_{m=1}^N \frac {(q^{s+v_1-a_m};q)_{\infty}(q^{\alpha_m-v_1};q)_{\infty}} {(q^{v_1-a_m};q)_{\infty}(q^{\alpha_m-s-v_1};q)_{\infty}}. \label{c121d} \end{align} \end{corollary} \noindent {\bf Remark.} With $\alpha_1=\alpha$ and $\alpha_m\rightarrow\infty,~m=2,3,\cdots$, the factor $\prod_{m=2}^N (q^{\alpha_m-v_1};q)_\infty/(q^{\alpha_m-s-v_1};q)_\infty$ becomes unity and~\eqref{c121d} is reduced to the kernel of our Theorem 1 under $q^{\alpha}\to\alpha$ and the change of variables $q^{v_i}=w_i,i=1,2$.
\smallskip \noindent {\bf Proof.} Returning to the LHS of~\eqref{t1110}, we write it as $\int_D\frac{dv}{2\pi i} C(x_2,v)D(v,x_1)$ where \begin{align} &C(x,v)= \frac{\log q}{2\pi i} \int_{i\mathbb{R}+\epsilon_A}ds \frac{e^{s x+q^s t-\delta |s|}}{q^s-q^{v}} \prod_{m=1}^N \frac {(q^{s-a_m};q)_{\infty}} {(q^{\alpha_m-s};q)_{\infty}}, \label{c1202} \\ & D(v,x)= \frac{\zeta}{-\zeta+e^x} e^{-v x-q^v t} \prod_{m=1}^N \frac {(q^{\alpha_m-v};q)_{\infty}} {(q^{v-a_m};q)_{\infty}}. \label{c1203} \end{align} From these equations with~\eqref{t116} and~\eqref{t119}, we have \begin{align} \left\langle\frac{1}{(\zeta q^{\lambda_N};q)_{\infty}}\right\rangle = \det(1+ CD)_{L^2(\mathbb{R})}=\det(1+DC)_{L^2(C_a)}. \label{c1204} \end{align} One can easily check that $(DC)(v_1,v_2)=K_{\zeta}(v_1,v_2)$ as given in~\eqref{c121d}. \qed
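The cyclic exchange $\det(1+CD)=\det(1+DC)$ used in~\eqref{c1204} (and earlier in~\eqref{t119}) is the trace-class analogue of Sylvester's determinant identity $\det(1+AB)=\det(1+BA)$; a finite-dimensional sketch in pure Python, with arbitrarily chosen rectangular matrices:

```python
# Sylvester's identity: det(I_n + A B) = det(I_m + B A) for A (n x m), B (m x n).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def det(M):
    """Determinant by Laplace expansion (fine for tiny matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    return sum((-1)**j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(n))

def plus_id(M):
    return [[M[i][j] + (1 if i == j else 0) for j in range(len(M))]
            for i in range(len(M))]

A = [[0.2, -1.0, 0.5], [1.1, 0.3, -0.7]]    # 2 x 3
B = [[0.4, 0.9], [-0.2, 0.6], [1.3, -0.5]]  # 3 x 2
d1 = det(plus_id(matmul(A, B)))             # 2 x 2 determinant
d2 = det(plus_id(matmul(B, A)))             # 3 x 3 determinant
```

Despite the two determinants having different ranks, their values coincide, which is exactly what allows trading the $L^2(\mathbb{R})$ determinant for one on the contour $C_a$.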
\section{Introduction} The last two decades have seen major activity in the study of magnetoelectric multiferroics, an exciting class of materials that exhibit ferroelectric polarization alongside magnetic order. Interest in these materials largely stems from the possibility of controlling one order using the stimulus that usually controls the other, offering great potential for development of novel multifunctional devices \cite{spaldin2005multiferroics,spaldin2010MF_past_present}. Among single-phase multiferroics, interesting candidates for future technological applications are those in which ferroelectricity is induced by inversion-symmetry-breaking magnetic order (multiferroics of type II) \cite{cross1978magnetoferroelectricity}, since their ferroelectric (magnetic) properties can be easily tuned by applied magnetic (electric) field \cite{cheong2007multiferroics,kimura2003magnetic}. Type II multiferroics are usually frustrated magnets in which competing exchange interactions give rise to several magnetic phases with similar energies. As a result, transitions between them can be driven by control parameters such as chemical or hydrostatic pressure, epitaxial strain, or even by ultrashort light pulses \cite{johnson2012CuO,Bothschafter2017TbMnO3}, offering multiple routes to manipulating and controlling their properties \cite{fiebig2016multiferroics}. The orthorhombic \textit{R}MnO$_3$ compounds (o-\textit{R}MnO$_3$), in which \textit{R} is a rare-earth cation or Y, are prototypical representatives of type II multiferroics. It was discovered in 2003 \cite{kimura2003magnetic} that in bulk o-TbMnO$_3$ the establishment of an incommensurate spiral magnetic order \cite{kenzelmann2005tbmno3} gives rise to a spontaneous electric polarization whose direction and magnitude can be manipulated by an external magnetic field.
This effect, however, occurred at quite low temperatures and the measured values of the electric polarization were relatively small compared to those of conventional ferroelectrics. Nevertheless, this discovery stimulated experimental and theoretical studies aiming to understand and improve the multiferroic properties of systems with frustrated magnetic orders. In particular, it was theoretically predicted that E-type antiferromagnetic order (E-AFM), which was observed in early neutron diffraction measurements in o-HoMnO$_3$ \cite{munoz2001homno3} and expected to be a magnetic ground state in other o-\textit{R}MnO$_3$ with small \textit{R}, may induce an electric polarization at least one order of magnitude higher than that of spiral-order systems \cite{sergienko2006ferroelectricity,picozzi2007dual}. The experimental verification of this prediction, however, gave contradictory results. On one hand, the predicted polarization values have not yet been measured experimentally for bulk o-\textit{R}MnO$_3$ \cite{feng2010homno3,lorenz2007homno3_ymno3,chai2012rmno3_polar}. Moreover, magnetic orders different from E-AFM were reported for $R$=Ho, Er, Y \cite{lee2011mechanism,ye2007incommensurate} and there is still no agreement about the type of these orders, the mechanisms of their establishment, or the directions and magnitudes of the electric polarizations which they induce. On the other hand, increased polarization values were observed in structurally modified o-\textit{R}MnO$_3$ samples. Indeed, it has been shown that the spiral order in o-TbMnO$_3$ can evolve to E-AFM under isotropic pressure \cite{makarova2011pressure} and this evolution significantly enhances the electric polarization in this system \cite{aoyama2014tbmno3}. Variations of the magnetic modulation vector and enhancement of electric polarization were also observed in epitaxially strained films of o-$R$MnO$_3$ \cite{wadati2012origin,shimamoto2016multiferroic,shimamoto2017phase_diagram}. 
The microscopic origin of such an evolution of the magnetic order in strained samples, as well as the difference in magnitude of the electric polarization between bulk and strained samples of o-$R$MnO$_3$, however, is still not understood. \begin{figure} \centering \includegraphics[width=0.5\textwidth]{rmno3_structure.eps} \caption{\label{fig:rmno3_structure} Crystal structure of o-$R$MnO$_3$. Green spheres indicate $R^{3+}$ cations, purple spheres Mn$^{3+}$ cations, and red spheres O$^{2-}$ anions. (a) shows the view in the $bc$ plane, (b) in the $ab$ plane (only Mn and O ions are shown).} \end{figure} In this work, we present a systematic study of the relationship between the crystal lattice and the magnetism in o-\textit{R}MnO$_3$ using X-ray diffraction measurements, density functional theory (DFT) and Monte Carlo (MC) simulations. We focus specifically on how the microscopic exchange interactions can be varied by controlling the crystal lattice using chemical pressure or epitaxial strain and how these variations affect the long-range magnetic order. First, we employ non-resonant and resonant X-ray diffraction measurements to determine the lattice parameters and magnetic modulation vectors, respectively, in a set of epitaxially strained o-\textit{R}MnO$_3$ films. We show that the magnetic modulation vectors in highly strained films can differ significantly from those in bulk samples and relaxed films having the same \textit{R} cation. Next, we use DFT to calculate the internal coordinates for the experimentally reported crystal structures of several o-\textit{R}MnO$_3$ bulk samples (from the literature) and films (both from the literature and our new measurements) and analyze how the internal lattice parameters evolve across the series. We find that chemical pressure affects primarily the Mn-O-Mn bond angles, while epitaxial strain is accommodated by changes in the Mn-O bond lengths.
To study the magnetism, we employ a model Hamiltonian which includes Heisenberg, biquadratic and four-spin ring exchange couplings, as well as Dzyaloshinskii-Moriya interactions (DMI) and single-ion anisotropy (SIA). We extract the exchange couplings and anisotropy constants by mapping the results of DFT calculations onto this model Hamiltonian and analyze how they are affected by the structural variations in bulk samples and films of o-\textit{R}MnO$_3$. We show that variations of the Mn-O-Mn bond angles caused by chemical pressure have a strong effect on the in-plane nearest-neighboring (NN) Heisenberg exchange while all the other couplings stay almost constant with changing \textit{R}. In turn, changes in the Mn-O bond lengths caused by epitaxial strain affect both in-plane and inter-plane NN Heisenberg couplings as well as next-nearest-neighboring (NNN) couplings and higher order exchanges. Then we use the calculated exchanges and anisotropies in a series of Monte Carlo simulations to determine the magnetic ground states and corresponding magnetic modulation vectors $q_b$. The latter are then compared to experimental values reported in the literature and obtained in this work through resonant X-ray diffraction measurements. We show that for most bulk and strained systems our model Hamiltonian and calculated couplings reproduce well the experimentally reported values of $q_b$. Moreover, we find that unconventional H-AFM and I-AFM orders can be stabilized in the strained films of o-LuMnO$_3$. Finally, we discuss the nature of the ferroelectricity that is induced in bulk and strained o-\textit{R}MnO$_3$ by the magnetic phases obtained in our MC simulations. This article is organized as follows: In Sec. \ref{sec:structure} we describe the crystal structure and its relation to microscopic exchange interactions in o-\textit{R}MnO$_3$. 
In Sec.\ \ref{sec:phase_diagram} we introduce the magnetoelectric phase diagram of bulk o-\textit{R}MnO$_3$ and summarize the literature data on studies of the magnetic and ferroelectric properties of o-\textit{R}MnO$_3$ under hydrostatic pressure and epitaxial strain. In Sec.\ \ref{sec:experiment} we present the details of the experimental procedure and the results of our X-ray diffraction measurements. Then, in Sec.\ \ref{sec:computations} we introduce the magnetic model Hamiltonian which is used to describe the magnetism in o-\textit{R}MnO$_3$ and summarize the details of our DFT and MC simulations. In Sec.\ \ref{sec:results:structure} we present the results of our theoretical study of the evolution of internal lattice parameters in bulk and films of o-\textit{R}MnO$_3$. In Sec.\ \ref{sec:couplings} we present the calculated exchange couplings and anisotropies for all considered o-\textit{R}MnO$_3$ samples. In Sec.\ \ref{sec:montecarlo} we show which magnetic phases are stabilized in MC simulations using the calculated exchange coupling and anisotropy constants for the considered o-\textit{R}MnO$_3$ samples. In Sec.\ \ref{sec:polarization} we present the electric polarizations calculated for several representative bulk and strained o-\textit{R}MnO$_3$ using the magnetic phases obtained in our MC simulations. Finally, in Sec.\ \ref{sec:summary} we summarize the main results of our investigation. \section{Background and motivation} \subsection{Crystal structure and exchange interactions in o-$R$MnO$_3$} \label{sec:structure} The o-\textit{R}MnO$_3$ have $Pbnm$ (\#62) symmetry and differ from the perfect cubic perovskites by the presence of Jahn-Teller (JT) distortions of the MnO$_6$ octahedra \cite{kanamori1960JT} and GdFeO$_3$-type (GFO) tiltings of these octahedra \cite{woodward1997tiltings} (see Fig.\ \ref{fig:rmno3_structure}). 
The JT distortions lift the degeneracy of the singly occupied majority spin $e_g$ states of the Mn$^{3+}$ ions (3$d^4$: $t^3_{2g}e^1_g$). The resulting occupied $e_g$ state on each Mn site $i$ can be represented as a linear combination of $\vert d_{z^2} \rangle$ and $\vert d_{x^2-y^2} \rangle$ orbitals: \begin{equation} \label{eq:orb_mix_state} \vert \phi_i \rangle=\cos \Big(\frac{\theta_i}{2} \Big) \vert d_{z^2} \rangle+\sin \Big(\frac{\theta_i}{2} \Big) \vert d_{x^2-y^2} \rangle, \end{equation} where $\theta_i$ is the orbital mixing angle, which is determined by the balance between the energy of the orbital-lattice interaction and the elastic energy \cite{khomskii1973orbital_ordering}. A simple estimate of $\theta_i$ can be made using the following formula \cite{khomskii2005orbitals}: \begin{equation} \theta_i=\arctan\left(\frac{\sqrt{3}\left(l-s\right)}{2m-l-s}\right), \label{eq:orb_mixing} \end{equation} where $s$, $m$ and $l$ are the lengths of short, medium and long Mn-O bonds in the MnO$_6$ octahedron. The cooperative character of the JT distortions leads to an ordering of the occupied $e_g$ orbitals with $\theta_i=-\theta_j$ on the NN Mn sites $i$ and $j$ within the $ab$ planes, and $\theta_i=\theta_j$ along the $c$ direction. The GFO distortion in o-\textit{R}MnO$_3$ reduces the unit cell volume and so is larger for \textit{R} cations with smaller radii. This distortion reduces the Mn-O-Mn bond angles and decreases the lengths of the O(1)-O(2) bridges within the $ab$ planes (see Fig.\ \ref{fig:rmno3_structure} (b)). \begin{figure} \centering \includegraphics[width=0.45\textwidth]{orbitals.eps} \caption{\label{fig:orbitals} $d$-$p$-$d$ superexchange paths within the $ab$ planes in o-$R$MnO$_3$. (a) $e_g$-$p_\sigma$-$e_g$ superexchange paths.
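Eq.\ \ref{eq:orb_mixing} is easy to evaluate numerically. The short sketch below (with illustrative, not measured, bond lengths) does so, using a two-argument arctangent so that the angle falls in the correct quadrant when $2m-l-s<0$:

```python
import math

def orbital_mixing_angle(s: float, m: float, l: float) -> float:
    """Estimate the e_g orbital mixing angle (in degrees) from the short,
    medium and long Mn-O bond lengths of a MnO6 octahedron, following
    theta = arctan(sqrt(3)(l - s) / (2m - l - s)).
    atan2 resolves the quadrant when the denominator is negative."""
    return math.degrees(math.atan2(math.sqrt(3) * (l - s), 2 * m - l - s))

# Illustrative (not measured) bond lengths, in Angstrom:
theta = orbital_mixing_angle(s=1.90, m=1.95, l=2.20)
print(round(theta, 1))  # about 111 degrees for these values
```

Note that a bare $\arctan$ would return a negative angle here, since $2m-l-s<0$ for these bond lengths; the quadrant convention matters when comparing mixing angles across compounds.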
The cooperative JT distortion of MnO$_6$ octahedra favors the ordering of the $e_g$ orbitals such that an occupied $e_g$ orbital (colored) on one Mn site overlaps with an empty $e_g$ orbital (white) on the neighboring Mn site via the $p_\sigma$ state of oxygen. (b) $t_{2g}$-$p_\pi$-$t_{2g}$ superexchange paths.} \end{figure} The magnetic ground state in o-\textit{R}MnO$_3$ is defined by the network of competing exchange couplings between the spins on NN and NNN Mn sites. Each superexchange interaction contains contributions from both $e_g$ and $t_{2g}$ orbitals mediated by the $p$ states of O anions. The crystal structure plays an important role in defining the relative strength of NN and NNN exchange couplings as well as the contributions from $e_g$ and $t_{2g}$ states to each coupling. Indeed, for interactions within the $ab$ planes, the presence of the $e_g$ orbital ordering described above leads to superexchange between an occupied $e_g$ orbital on one Mn site with an empty $e_g$ orbital on the NN Mn site through the $p_\sigma$ states of O (see Fig.\ \ref{fig:orbitals} (a)). This favors ferromagnetic (FM) coupling between the $e_g$ spins according to the Goodenough-Kanamori-Anderson (GKA) rules \cite{goodenough1955superexchange,kanamori1959superexchange,anderson1959superexchange}. The $t_{2g}$ states, in turn, form covalent bonds with the $p_\pi$ states of O anions (see Fig.\ \ref{fig:orbitals} (b)) and electron transfer along the path $t_{2g}$-$p_\pi$-$t_{2g}$ favors antiferromagnetic (AFM) coupling of the $t_{2g}$ spins. Thus $e_g$ and $t_{2g}$ contributions compete with each other within the $ab$ planes. In general, the $e_g$-$p_\sigma$-$e_g$ contribution is expected to be larger than that of the $t_{2g}$-$p_\pi$-$t_{2g}$ in absolute values, because the $e_g$ states are directed towards the $p_\sigma$ states of O, which provides a stronger overlap between them and, therefore, stronger coupling. 
Thus the resulting NN exchange within the $ab$ planes is expected to be FM. Nevertheless, the relative strengths of $e_g$-$p_\sigma$-$e_g$ and $t_{2g}$-$p_\pi$-$t_{2g}$ contributions can be changed by varying the bond angles and bond lengths (the amplitudes of the GFO and JT distortions, respectively), which can be achieved by hydrostatic or chemical pressure, or epitaxial strain. In fact, the change in the Mn-O bond lengths affects the overlap integral between the orbitals participating in the superexchange, and should modify both $e_g$ and $t_{2g}$ contributions, which weaken as the bond lengths increase and strengthen as they decrease. One has to keep in mind that variation of the Mn-O bond lengths can also change the mixing of the two $e_g$ states on each Mn site (in other words, the orbital mixing angle), which can in turn affect the $e_g$-$p_\sigma$-$e_g$ interaction. The variation of Mn-O-Mn bond angles is expected to influence the $e_g$-$p_\sigma$-$e_g$ coupling significantly due to the geometry of this bond (the coupling weakens as the angle decreases and vice versa), while the $t_{2g}$-$p_\pi$-$t_{2g}$ should be less affected due to the isotropic character of $t_{2g}$ orbitals within the $ab$ planes. Changes in the GFO distortion can also modify the NNN exchange interactions along the $b$ direction due to variation of the lengths of the O(1)-O(2) bridges (see Fig.\ \ref{fig:rmno3_structure} (b)). Indeed, an increasing GFO distortion brings oxygens O(1) and O(2) closer to each other, which enhances the hybridization between their $p$ orbitals and leads to stronger coupling. Along the $c$ direction the interactions occur between empty $e_g$ states mediated by $p_\sigma$ orbitals and between singly occupied $t_{2g}$ states mediated by $p_\pi$ orbitals; both are antiferromagnetic according to the GKA rules. Therefore, in this case $e_g$ and $t_{2g}$ contributions reinforce each other \cite{zhou2006rmno3}.
\subsection{Experimental phase diagram of bulk o-\textit{R}MnO$_3$} \label{sec:phase_diagram} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{PD_v05.eps} \caption{\label{fig:PD_2_options} Phase diagram of bulk o-\textit{R}MnO$_3$, following Ref.\ \onlinecite{ishiwata2010perovskite}. Borders are drawn based on magnetic susceptibility and electric polarization measurements conducted on powders. Labels at the top of the image indicate the \textit{R} ion associated with the radii indicated on the lower horizontal axis. Increased orange shading indicates higher predicted electric polarization.} \end{figure} The interplay between lattice and spin degrees of freedom described in Sec.\ \ref{sec:structure} manifests in the magnetic phase diagram of bulk o-\textit{R}MnO$_3$ (see Fig.\ \ref{fig:PD_2_options}), which was experimentally established through multiple studies of magnetic order in these materials. At low temperatures, o-\textit{R}MnO$_3$ with larger $R$ ion species ($R$=La,...,Gd) exhibit an A-type AFM ordered ground state (A-AFM, modulation vector $q_b=0$) \cite{Wollan1955ABC} which is favored by their orbital ordering. Decreasing the radius of the $R$ cation increases the GFO distortion, which leads to an evolution of the magnetic order, initially to incommensurate (IC) spiral structures ($R$=Tb, Dy) and then to E-type AFM order (E-AFM, $q_b=\nicefrac{1}{2}$; $R$=Tm, Yb and Lu). Conflicting reports exist for the intermediate radii of Ho and Y. For o-HoMnO$_3$ both E-AFM order \cite{munoz2001homno3} and incommensurate order with $q_b\approx0.4$ \cite{brinks2001crystal}, identified as a sinusoidal spin density wave, have been reported. For o-YMnO$_3$ an IC \textit{ac} spiral ($q_b=0.078$) and sinusoidal spin density wave ($q_b=0.435$) \cite{munoz2002ymno3} have both been observed, and E-AFM order has been reported based on a study of the structural modulation at low temperatures. 
Lastly, an incommensurate magnetic structure has also been reported for o-ErMnO$_3$ \cite{ye2007incommensurate}, with a propagation vector ($q_b=0.433$) similar to those of o-HoMnO$_3$ and o-YMnO$_3$, but the magnetic structure was not specified. In Ref.\ \onlinecite{ishiwata2010perovskite} this phase was discussed in terms of coexisting spiral and E-AFM phases. In our recent theoretical study based on DFT and Monte Carlo simulations for o-HoMnO$_3$ and o-ErMnO$_3$, we demonstrated that this IC magnetic phase is likely a ``w-spiral'' order \cite{fedorova2018fourspin}. Since all the aforementioned magnetic phases can be described by the modulation vector $\mathbf{q}$=$(0,q_b,1)$, the evolution of magnetism across the series of bulk o-$R$MnO$_3$ can be represented as a variation of $q_b$ with decreasing radius of the $R$ cation ($r_R$). In Fig.\ \ref{fig:q_Vs_R} we summarize the literature values of the modulation vectors $q_b$ reported for bulk o-$R$MnO$_3$ (single crystals and powders) \cite{brinks2001crystal,munoz2002ymno3,ye2007incommensurate,pomjakushin2009evidence,okamoto2008neutron,yamasaki2007mixtures,Yamasaki2008mixtures,OFlynn2011SmMnO3} shown as red open circles versus $r_R$. One can see that $q_b$ varies systematically with $r_R$, from $q_b=0$ for the A-AFM phase to $q_b=\nicefrac{1}{2}$ for the E-AFM phase. The two conflicting values for $R$ = Ho and Y are also shown. \begin{figure} \centering \includegraphics[width=0.48\textwidth]{q_Vs_R_v04_with_two_colors_v03.eps} \caption{\label{fig:q_Vs_R} Magnetic modulation vector $q_b$ as a function of the radius of the \textit{R} ion. Literature data for bulk powder samples of o-\textit{R}MnO$_3$ are presented as empty circles. Films (literature data from Refs. \onlinecite{wadati2012origin,windsor2014multiferroic,windsor2015interplay} and our new measurements) are indicated by filled circles. The dashed line indicates the trend for bulk materials, and the green solid line that for relaxed films.
The blue line shows the discrepancy between bulk samples and strained films with the same $R$ ion. $q_b$ is in reciprocal lattice units.} \end{figure} The microscopic mechanism that drives this evolution of magnetism in bulk o-\textit{R}MnO$_3$ is still debated. It was initially considered in terms of competing Heisenberg exchange interactions between NN and NNN Mn$^{3+}$ spins within the $ab$ planes: the relative strengths of these interactions are directly affected by the increasing GFO distortion (decreasing Mn-O-Mn bond angles) \cite{zhou2006rmno3,kimura2003distorted}. Ref.\ \onlinecite{mochizuki2011theory} suggested that an increase of the GFO distortion primarily affects the NNN in-plane coupling $J_b$ (see Fig.\ \ref{fig:exchanges}), and that this causes the evolution of magnetism. However, recent theoretical studies have demonstrated that this effect mainly reduces the NN coupling $J_{ab}$, and that magnetic order evolves because the effect of other couplings (such as NNN Heisenberg and higher order exchanges) becomes more pronounced \cite{fedorova2015biquadratic,zhang2018rmno3}. In Ref.\ \onlinecite{solovyev2009j3nn} the importance of the interaction between third-nearest-neighboring Mn spins within the $ab$ planes ($J_{3nn}$ in Fig.\ \ref{fig:exchanges}) was demonstrated. Moreover, it was shown that biquadratic exchange interactions play a key role in stabilizing E-AFM over spiral order \cite{kaplan2009biquadratic,hayden2010biquadratic}. We recently found that inter-plane four-spin ring exchange interactions ($K_c$ in Fig. \ \ref{fig:exchanges}) are crucial to explain the establishment of the w-spiral state in o-HoMnO$_3$ and o-ErMnO$_3$, as well as two unconventional commensurate magnetic phases (so-called H-AFM and I-AFM) which can, in principle, form in these systems \cite{fedorova2018fourspin}. Understanding the interplay between ferroelectricity and magnetism has been a central motivator for studying this family of materials. 
IC spiral and E-AFM orders break inversion symmetry and induce an electric polarization in o-$R$MnO$_3$, making them type II multiferroics. For the spiral orders, the electric polarization is usually treated as an effect arising from spin-orbit coupling and is explained in terms of the spin-current model \cite{katsuraPRLknb_model} and/or antisymmetric exchange striction \cite{sergienko2006DMI}. Since spin-orbit coupling is weak in o-$R$MnO$_3$, the resulting electric polarization is relatively small ($P\approx0.1$ $\mu$C/cm$^2$) \cite{kimura2003magnetic,kimura2005magnetoelectric}. In systems with E-AFM order, the proposed mechanism for the magnetically induced electric polarization is symmetric exchange striction. It was theoretically predicted by Sergienko \textit{et al.} \cite{sergienko2006ferroelectricity} that this mechanism should provide significantly enhanced polarization values ($P\approx0.5-12$ $\mu$C/cm$^2$) compared to systems with spiral order. This prediction was confirmed later by Berry phase calculations for o-HoMnO$_3$ ($P\approx 6$ $\mu$C/cm$^2$) \cite{picozzi2007dual}. However, to the best of our knowledge, the predicted polarization values for E-AFM order have not been experimentally detected in bulk o-$R$MnO$_3$; the largest $P$ values were reported for o-LuMnO$_3$ and o-YMnO$_3$, reaching 0.17 and 0.24 $\mu$C/cm$^2$, respectively \cite{chai2012rmno3_polar,okuyama2011magnetically}, which is at least an order of magnitude smaller than $P$ obtained from first principles. This contradiction between theory and experiment is still not fully understood. Moreover, measurements of $P$ in o-$R$MnO$_3$ with $R$=Ho, Er and Y gave contradictory results. For example, in Ref. \onlinecite{lorenz2007homno3_ymno3} $P \approx 0.009$ $\mu$C/cm$^2$ was observed in o-HoMnO$_3$ along the $a$ axis, while Ref.\ \onlinecite{lee2011mechanism} reported $P \approx 0.15$ $\mu$C/cm$^2$ along the $c$ direction. 
In both cases the importance of the Ho$^{3+}$ $f$-electron moments in inducing $P$ was underlined, since $P$ demonstrated a drastic increase only below their ordering temperature. Ref. \onlinecite{lorenz2007homno3_ymno3} reported $P \approx 0.025$ $\mu$C/cm$^2$ for o-YMnO$_3$. The origin of this value is not yet understood since neither the reported sinusoidal spin density wave nor the $ac$ spiral are expected to produce an electric polarization according to the mechanisms of magnetically induced ferroelectricity, described above. This also cannot be explained by an ordering of $R^{3+}$ moments as Y has an empty $f$-shell. In o-ErMnO$_3$ no sizable polarization was measured by Ye \textit{et al.} \cite{ye2007incommensurate}, while Ishiwata \textit{et al.} reported $P\approx0.06$ $\mu$C/cm$^2$ for this compound \cite{ishiwata2010perovskite}. Recent experimental studies have demonstrated that structural modifications due to hydrostatic pressure or epitaxial strain can stabilize magnetic phases in o-$R$MnO$_3$ that are different from those that are stable in unperturbed bulk samples. Moreover, the electric polarization in such structurally modified samples can be significantly larger than in bulk samples. For example, the magnetic order in o-TbMnO$_3$ evolves under high pressure from a spiral to the E-AFM state \cite{makarova2011pressure,aoyama2014tbmno3} which produces $P\approx1$ $\mu$C/cm$^2$. It was recently demonstrated that films of o-$R$MnO$_3$ with $R$=Gd,...,Lu epitaxially grown on YAlO$_3$ yield electric polarizations of up to 1 $\mu$C/cm$^2$ (for o-TbMnO$_3$ $P$ of up to 2 $\mu$C/cm$^2$ was measured), suggesting that the E-AFM phase is likely stabilized \cite{shimamoto2017phase_diagram}. Strain was also found to affect or even tune the magnetic modulation vector \cite{windsor2014multiferroic}. This effect was detailed in a recent study on o-HoMnO$_3$ films. 
A strained film of o-HoMnO$_3$ (32 nm [010]-oriented film grown on YAlO$_3$ substrate) was shown to possess a magnetic modulation vector of $q_b\approx0.49$, while a relaxed film (120 nm) had $q_b\approx0.42$, which is close to that of bulk o-HoMnO$_3$ \cite{brinks2001crystal}. Both films showed enhanced polarization values compared to bulk o-HoMnO$_3$ \cite{shimamoto2016HoMnO3}. In spite of these advances in structural manipulation, the underlying mechanisms behind the evolution of magnetic order in the strained samples and the enhancement of the polarization remain to be understood. From all the literature data summarized above, it is clear that both magnetism and ferroelectricity in o-$R$MnO$_3$ can be manipulated by structural variations, such as hydrostatic or chemical pressure or epitaxial strain. Since the ferroelectricity in these materials is governed by magnetism, an understanding of the relationship between the crystal lattice and magnetic orders is of primary importance for potential optimization of their multiferroic properties. \section{Experiments on crystalline films} \label{sec:experiment} \begin{figure} \centering \includegraphics[trim={0cm 2.5cm 0cm 0cm},width=0.6\textwidth]{Raw_Data_v03.eps} \caption{\label{fig:raw_data} Magnetic intensity from reciprocal space scans along the [010] direction, taken using resonant X-ray diffraction at the Mn $L_3$ edge. Data are from [010]-oriented films of orthorhombic TbMnO$_3$, HoMnO$_3$ and LuMnO$_3$. $q_b$ is in reciprocal lattice units.} \end{figure} As a basis for studying the effects of epitaxial strain on the relationship between the lattice and magnetic order, we measured the lattice parameters and magnetic modulation vectors of a selection of epitaxially grown films. These were grown by pulsed laser deposition using stoichiometric ceramic targets of the corresponding hexagonal \textit{R}MnO$_3$ materials. Further growth details are found in Ref.\ \onlinecite{windsor2014multiferroic}.
A full list of films discussed here is available in Table I of the Supplemental materials. Non-resonant X-ray diffraction (XRD) was employed to measure lattice constants to high precision using the Surface Diffraction end station of the Materials Science beam line of the Swiss Light Source (SLS) \cite{Willmott2013MS}. The lattice constants were determined by collecting precise motor positions of several reflections and computing the best fit to a UB matrix of an orthorhombic crystal. The photon energies used were all between 8 and 10 keV. Diffracted intensities were collected using a Pilatus 100K detector \cite{Broennimann2006Pilatus} mounted on the detector arm. In both experiments samples were mounted on the cold head of a Janis flow cryostat. The measured lattice parameters for all considered o-\textit{R}MnO$_3$ films are summarized in Table I of the Supplemental materials. Resonant X-ray diffraction (RXD) experiments were conducted to probe antiferromagnetic order. These were done using the RESOXS UHV diffraction end station \cite{Staub2008RESOXS} at the SIM beam line \cite{Flechsig2010SIM} of the SLS. Photon energies used correspond to the Mn $L_3$ absorption edge using $\pi$-polarized incident light (electric field in the scattering plane). Data were taken at 10 K. Scattered intensities were collected using an IRD AXUV100 photodiode. Scans were conducted along the [010] direction of reciprocal space, following the (0,$q_b$,0) magnetic reflection. This reflection provides a direct and unequivocal measure of the modulation parameter $q_b$. In Fig.\ \ref{fig:raw_data} we present as an example the scans for [010]-oriented o-TbMnO$_3$ (150 nm), o-HoMnO$_3$ (120 nm) and o-LuMnO$_3$ (104 nm) films. 
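The orthorhombic part of this fitting procedure can be illustrated with a short sketch (synthetic reflection data, not our measurements; the actual UB-matrix refinement additionally fits the sample orientation). For an orthorhombic lattice, $1/d^2=h^2/a^2+k^2/b^2+l^2/c^2$, which is linear in $(1/a^2,1/b^2,1/c^2)$, so the lattice constants follow from a linear least-squares fit over several reflections:

```python
import numpy as np

def fit_orthorhombic(hkls, d_spacings):
    """Least-squares fit of orthorhombic lattice constants a, b, c from
    (h, k, l) indices and measured d-spacings, using
    1/d^2 = h^2/a^2 + k^2/b^2 + l^2/c^2 (linear in 1/a^2, 1/b^2, 1/c^2)."""
    H = np.asarray(hkls, dtype=float) ** 2          # rows: (h^2, k^2, l^2)
    y = 1.0 / np.asarray(d_spacings, dtype=float) ** 2
    x, *_ = np.linalg.lstsq(H, y, rcond=None)       # x = (1/a^2, 1/b^2, 1/c^2)
    return 1.0 / np.sqrt(x)

# Synthetic reflections generated from a = 5.2, b = 5.6, c = 7.3 Angstrom:
a, b, c = 5.2, 5.6, 7.3
hkls = [(2, 0, 0), (0, 2, 0), (0, 0, 2), (1, 1, 0), (0, 1, 1), (1, 0, 1)]
d = [1.0 / np.sqrt((h / a) ** 2 + (k / b) ** 2 + (l / c) ** 2) for h, k, l in hkls]
print(fit_orthorhombic(hkls, d))  # recovers approximately [5.2, 5.6, 7.3]
```

With noisy measured $d$-spacings the same overdetermined fit averages the errors over all reflections, which is why several reflections were collected for each film.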
In Figure \ref{fig:q_Vs_R} we present our measured $q_b$ values for o-$R$MnO$_3$ films with different $R$ ions and different levels of strain (see details in Table I of Supplemental materials) alongside the literature values for bulk samples and additional literature values for films. Two notable observations can be made. First, despite having the same $R$ ion, relaxed films follow the gradual trend of the bulk samples, while highly strained films tend towards locking to the commensurate $q_b = \nicefrac{1}{2}$ value. Second, for the lower $r_R$ values (Tm, Yb, and Lu), the bulk o-$R$MnO$_3$ samples have $q_b = \nicefrac{1}{2}$, but relaxed films do not reach this value, and instead show a gradual evolution of $q_b$. These film-bulk discrepancies support the idea that small variations in the crystal lattice have a strong effect on the position of a material in the magnetic phase diagram and serve as a motivation for our theoretical study of the relationship between $q_b$ and the crystal lattice in o-$R$MnO$_3$. \begin{figure} \centering \includegraphics[width=0.43\textwidth]{exchanges.eps} \caption{\label{fig:exchanges} Heisenberg ($J_c$, $J_{ab}$, $J_a$, $J_b$, $J_{diag}$, $J_{3nn}$), biquadratic ($B_c$ and $B_{ab}$) and four-spin ring ($K_c$ and $K_{ab}$) exchange interactions considered in the model Hamiltonian of Eq.\ \ref{eq:fullHam}. 
The 40-atom o-$R$MnO$_3$ supercell (1$\times$2$\times$1 of the 20-atom unit cell) containing 8 Mn ions (purple spheres, the lighter spheres indicate Mn ions in neighboring cells) is shown ($R$ and O ions are not shown).} \end{figure} \section{Computational details} \label{sec:computations} \subsection{Spin model Hamiltonian} \label{sec:hamiltonian} In order to accurately describe the complex magnetic phase diagram of the o-$R$MnO$_3$ series (see Sec.\ \ref{sec:phase_diagram}), we employ the following spin model Hamiltonian: \begin{equation} \label{eq:fullHam} H=H_{Heis}+H_{BQ}+H_{4sp}+H_{SIA}+H_{DM}, \end{equation} where \begin{equation} H_{Heis}=\sum_{<i,j>}J_{ij}(\mathbf{S}_i\cdot\mathbf{S}_j), \label{eq:HeisHam} \end{equation} \begin{equation} H_{BQ}=\sum_{<i,j>}B_{ij}(\mathbf{S}_i\cdot\mathbf{S}_j)^2, \label{eq:biqHam} \end{equation} \begin{eqnarray} H_{4sp}=\sum_{<i,j,k,l>}K_{ijkl}\left[ \left(\mathbf{S}_i\cdot\mathbf{S}_j\right)\left(\mathbf{S}_k\cdot\mathbf{S}_l\right) \right. \nonumber \\ + \left. \left(\mathbf{S}_i\cdot\mathbf{S}_l\right)\left(\mathbf{S}_k\cdot\mathbf{S}_j\right) - \left(\mathbf{S}_i\cdot\mathbf{S}_k\right)\left(\mathbf{S}_j\cdot\mathbf{S}_l\right)\right], \label{eq:4bodyHam} \end{eqnarray} \begin{equation} H_{SIA}=A\sum_{i}S^2_{i,b}, \label{eq:siaHam} \end{equation} \begin{equation} H_{DM}=\sum_{<i,j>}\mathbf{D}_{ij}\cdot[\mathbf{S}_i\times\mathbf{S}_j]. \label{eq:dmHam} \end{equation} The first term, $H_{Heis}$ (Eq.\ \ref{eq:HeisHam}), is a Heisenberg Hamiltonian, where $J_{ij}$ are exchange interactions between spins $\mathbf{S}_i$ and $\mathbf{S}_j$ on Mn sites $i$ and $j$, respectively. An $H_{Heis}$ including only AFM $J_c$ and $J_{b}$ and FM $J_{ab}$ (see Fig.\ \ref{fig:exchanges}) can explain the establishment of the A-AFM and spiral orders, the latter occurring if the NNN $J_b$ is large enough to compete with NN $J_{ab}$.
We extend our model by also including the second NN couplings along the $c$ direction ($J_{diag}$, see Fig.\ \ref{fig:exchanges}) and second ($J_a$) and third NN exchanges ($J_{3nn}$) within the $ab$ planes. Further neighbor couplings are not taken into account since we showed in our previous work that they are negligible in comparison with those mentioned above \cite{fedorova2015biquadratic}. The second term, $H_{BQ}$ (Eq.\ \ref{eq:biqHam}), describes the biquadratic exchange interactions between spins $\mathbf{S}_i$ and $\mathbf{S}_j$. It has been demonstrated that the biquadratic couplings between NN spins within the $ab$ planes, $B_{ab}$, are crucial for the establishment of E-AFM order \cite{kaplan2009biquadratic,hayden2010biquadratic}. In this work we consider NN biquadratic couplings, both within the $ab$ planes ($B_{ab}$) and along the $c$ direction ($B_c$) (see Fig.\ \ref{fig:exchanges}). The third term, $H_{4sp}$ (Eq.\ \ref{eq:4bodyHam}), corresponds to the four-spin ring exchange couplings, which arise from consecutive electron hoppings between the NN Mn ions forming four-site plaquettes. We recently showed that the energies of different magnetic orders calculated using DFT for several o-\textit{R}MnO$_3$ cannot be accurately fitted to the isotropic spin Hamiltonian including only Heisenberg and biquadratic exchanges, and the four-spin ring terms need to be included to provide an accurate description of the magnetism \cite{fedorova2015biquadratic}. Moreover, we found that the presence of strong inter-plane four-spin ring exchange $K_c$ can stabilize several exotic magnetic orders in o-\textit{R}MnO$_3$ such as incommensurate w-spiral and commensurate H-AFM and I-AFM (see Ref.\ \onlinecite{fedorova2018fourspin} for details). Here we include in the analysis the four-spin interactions in two types of plaquettes: those within the $ab$ planes ($K_{ab}$) as well as inter-plane ($K_c$) plaquettes (Fig.\ \ref{fig:exchanges}).
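The role of the biquadratic term can be made concrete with a toy evaluator of a drastically simplified version of Eqs.\ \ref{eq:HeisHam} and \ref{eq:biqHam}: a one-dimensional classical chain with only NN couplings (the coupling values below are illustrative and are not our calculated constants). With AFM $J>0$ and $B<0$, the collinear AFM chain is favored over a $90^\circ$ spiral, because a negative biquadratic coupling penalizes non-collinear spin arrangements:

```python
import numpy as np

def chain_energy(spins, J, B):
    """Classical energy per site of a 1D spin chain with periodic boundary
    conditions, summing NN Heisenberg J * (S_i . S_j) and biquadratic
    B * (S_i . S_j)^2 terms over all bonds."""
    dots = np.sum(spins * np.roll(spins, -1, axis=0), axis=1)  # S_i . S_{i+1}
    return np.mean(J * dots + B * dots**2)

n = 60
# Collinear up-down (AFM) chain vs a 90-degree spiral in the xy plane:
afm = np.array([[0.0, 0.0, (-1.0)**i] for i in range(n)])
angles = np.pi / 2 * np.arange(n)
spiral = np.column_stack([np.cos(angles), np.sin(angles), np.zeros(n)])

J, B = 1.0, -0.4  # illustrative AFM Heisenberg + negative biquadratic coupling
print(chain_energy(afm, J, B))     # -1.4: the collinear chain gains from both terms
print(chain_energy(spiral, J, B))  # ~0: the 90-degree spiral gains from neither
```

In the full model of Eq.\ \ref{eq:fullHam} the four-spin, anisotropy and DM terms enter this balance as well; the toy above only isolates the Heisenberg-biquadratic competition that favors collinear phases such as E-AFM.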
The fourth term, $H_{SIA}$ (Eq.\ \ref{eq:siaHam}), is a single-ion anisotropy, which sets the magnetic easy axis along the $b$ direction. The fifth term, $H_{DM}$ (Eq.\ \ref{eq:dmHam}), describes the Dzyaloshinskii-Moriya interactions. We consider DM vectors, $\mathbf{D}_{ij}$, which are defined both for Mn-O-Mn bonds along the $c$ direction and within the $ab$ planes \cite{mochizuki2009microscopic,solovyev1996lamno3}. As shown in Ref.\ \onlinecite{solovyev1996lamno3}, due to the symmetry of o-$R$MnO$_3$ crystals, their DM vectors can be described using five parameters: $\alpha_{ab}$, $\beta_{ab}$ and $\gamma_{ab}$ for the in-plane DM interactions ($\mathbf{D}_{ij}^{ab}$) and $\alpha_{c}$ and $\beta_c$ for the inter-plane ones ($\mathbf{D}_{ij}^c$) (see Fig.\ 3 in Ref. \onlinecite{solovyev1996lamno3}). The $\alpha_c$ components of the $\mathbf{D}_{ij}^c$ vectors favor a canting of the Mn spins from the $b$ axis towards the $c$ axis \cite{mochizuki2009microscopic,solovyev1996lamno3}, which was experimentally observed for several o-\textit{R}MnO$_3$ \cite{matsumoto1970lamno3, mukherjee2017lumno3}. The $\gamma_{ab}$ components of the $\mathbf{D}_{ij}^{ab}$ vectors can favor stabilization of the $ab$ spiral instead of the $bc$ spiral \cite{mochizuki2009microscopic}. In this work we consider only $\alpha_{c}$ and $\gamma_{ab}$ and neglect all other components of the DM vectors. \subsection{First-principles calculations} All density functional calculations are performed using the Vienna \textit{Ab initio} Simulation Package (VASP) based on the projector augmented-wave (PAW) method of DFT \cite{kresseVasp}.
We employ the generalized gradient approximation with Hubbard $U$ correction (GGA+$U$) for the exchange-correlation potential in the form of Perdew, Burke and Ernzerhof (PBE) revised for solids (PBEsol)\cite{perdew2008Pbesol} as it gives better agreement between theoretically optimized and experimental lattice parameters for the considered systems in comparison with the standard PBE \cite{perdew1996pbe}. The parameter of the on-site Coulomb repulsion for the Mn $d$ states is set to $U$=1 eV and the on-site exchange interaction to $J_H$=0 eV since these values give reasonable sizes of the band gaps and correct magnetic ground states for many o-\textit{R}MnO$_3$. The $f$ states of the rare-earth elements are treated as core states. The cutoff energy for the plane wave basis set is 600 eV. All the calculations using the 20 atom unit cells (structural relaxations, calculations of the biquadratic couplings, DMI and anisotropy constants) are performed with a $\Gamma$-centered 7$\times$7$\times$5 k-point mesh. For the 80 atom (2$\times$2$\times$1) supercells (calculations of the Heisenberg and four-spin ring exchanges) we use a $\Gamma$-centered 3$\times$3$\times$5 k-point mesh and for 80-atom 1$\times$2$\times$2 supercells (calculations of electric polarizations) we use a $\Gamma$-centered 7$\times$3$\times$2 k-point mesh. For the lattice optimizations the structures are considered to be relaxed if the Hellmann-Feynman forces acting on the atoms are smaller than $10^{-4}$ eV/{\AA} and, when the volume is allowed to relax, the components of the stress tensor are smaller than 0.1 kbar. All the structural relaxations are performed with A-AFM order imposed. Spin-orbit coupling is included only in the calculations of the DMI and SIA. \subsection{Monte Carlo simulations} Monte Carlo simulations performed in this work are based on the Metropolis algorithm \cite{metropolis1953montecarlo} combined with overrelaxation moves \cite{creutz1987overrelaxation}. 
We employ the replica exchange technique \cite{swendsen1986replicas,earl2005partemp}, which is efficient at finding the global energy minimum in systems with many local energy minima, as is the case for frustrated spin systems with many competing interactions. For each compound we simulate in parallel $M$=200 replicas, each at a different temperature. The range of temperatures is defined as $T_k$=$T_0/\alpha^k$, where $T_0$=0.005 meV is the temperature of interest, $k$=1...$M-1$ and $\alpha$=0.962 (this value makes the maximal temperature $T_{M-1}$ larger than the strongest exchange interactions in the considered systems). We consider unit cells containing two Mn atoms (in the following we call this the MC unit cell) - Mn$_1$ (0,0.5,0) and Mn$_2$ (0.5,1,0) - and perform simulations for different system sizes (12$\times$40$\times$12 and 4$\times$100$\times$4 MC unit cells). We apply periodic boundary conditions in all directions and double-check the results using open boundary conditions along the $b$ direction (and periodic along the $a$ and $c$ axes) to ensure that the modulation vectors of the obtained magnetic structures are not affected by the choice of boundary conditions. We also perform calculations starting from different types of magnetic order - A-AFM, E-AFM, H-AFM (see our recent work, Ref.\ \onlinecite{fedorova2018fourspin}, for the details about the latter state) and random orientation - as an additional check that the results are not affected by the starting configurations and the systems are not trapped in a local energy minimum. 
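The geometric temperature ladder described above can be written down directly. The sketch below uses the values quoted in the text ($T_0$=0.005 meV, $\alpha$=0.962, $M$=200) and assumes the standard Metropolis acceptance rule for swapping the configurations of two neighboring replicas; it is an illustration of the setup, not the production code.

```python
import math

T0, alpha, M = 0.005, 0.962, 200           # meV, ladder ratio, replica count
temps = [T0 / alpha**k for k in range(M)]  # k = 0 is the temperature of interest

def swap_probability(E_i, T_i, E_j, T_j):
    # Metropolis acceptance for exchanging the configurations of replicas
    # at temperatures T_i and T_j with instantaneous energies E_i and E_j
    return min(1.0, math.exp((1.0 / T_i - 1.0 / T_j) * (E_i - E_j)))
```

With these parameters the hottest replica sits near 11 meV, above the strongest calculated exchange couplings, so the high-temperature replicas can cross energy barriers freely and feed decorrelated configurations down the ladder.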
\section{Calculated effects of chemical pressure and epitaxial strain on the crystal lattice} \label{sec:results:structure} \begin{figure*} \includegraphics[width=0.95\linewidth,trim=0cm 0cm 0cm 0cm]{structure_vs_R_pluslegend.eps} \\ \caption{\label{fig:str_vs_R}Theoretically optimized structural parameters of strained films and bulk o-$R$MnO$_3$ versus the radius of the $R$ cation: (a) and (b) give the Mn-O-Mn bond angles within the $ab$ planes (IP angle) and along the $c$ direction (OP angle), respectively; (c), (d) and (e) show short ($s$), medium ($m$) and long ($l$) Mn-O bond lengths of the MnO$_6$ octahedra, respectively; (f) shows the lengths of the O(1)-O(2) bridges (see Fig.\ \ref{fig:rmno3_structure} (b)) within the $ab$ planes. Bulk samples are shown by empty circles, strained films by filled circles. For LuMnO$_3$ the triangles denote calculations for hypothetical films which are compressively strained in the $ac$ plane by the same amount but in the opposite direction as the experimentally measured tensile strained films. LuMnO$_3$ 26 nm film and the corresponding inverse case are highlighted in gray, 104 nm film and the inverse case in black. Compressive strain within the $ac$ planes of the o-$R$MnO$_3$ films is shown by the violet color, tensile strain by the blue color. The dashed lines connecting the data points for bulk o-$R$MnO$_3$ are guides to the eye.} \end{figure*} First we calculate how the o-\textit{R}MnO$_3$ crystal structure evolves under chemical pressure and epitaxial strain. We start by considering bulk o-\textit{R}MnO$_3$ and analyze how the internal lattice parameters vary with the radius of the \textit{R} cation. 
For this purpose we consider several representatives of the o-\textit{R}MnO$_3$ series (namely \textit{R}=Gd, Tb, Ho, Er, Yb and Lu) and fully optimize their lattice parameters and internal coordinates using DFT, with the experimentally reported structures as the starting point \cite{mori2002lnmno3,alonso2000evolution,munoz2001homno3,ye2007incommensurate,tachibana2007jahn,okamoto2008neutron}. This allows us to make a direct comparison between our findings for bulk samples and for strained films, for which the internal coordinates are not readily measurable. In Fig.\ \ref{fig:str_vs_R} we present the obtained lengths of the short ($s$), medium ($m$) and long ($l$) Mn-O bonds within the MnO$_6$ octahedra, the O(1)-O(2) distances (see Fig.\ \ref{fig:rmno3_structure} (b)) as well as the Mn-O-Mn bond angles within the $ab$ planes (IP angle) and along the $c$ axis (OP angle) versus the $R$ radius. The exact values for all the optimized lattice parameters together with the experimentally reported values are summarized in Table II of the Supplemental materials. From Fig.\ \ref{fig:str_vs_R} (a) and (b) one can see that in bulk o-$R$MnO$_3$ the volume reduction due to the decrease in the radius of the $R$ cation is almost fully accommodated by reducing the Mn-O-Mn bond angles within the $ab$ planes and along the $c$ direction. As a secondary effect, the O(1)-O(2) distances also decrease as the $R$ radius decreases (Fig.\ \ref{fig:str_vs_R} (f)). In turn, the $s$ and $m$ Mn-O bond lengths (Figs.\ \ref{fig:str_vs_R} (c) and (d), respectively) are almost constant across the series of the bulk samples, and the $l$ bonds decrease slightly from Gd to Lu (Fig.\ \ref{fig:str_vs_R} (e)). This is in agreement with experimental data in the literature \cite{zhou2006rmno3} as well as with previous theoretical reports \cite{yamauchi2008rmno3,zhang2018rmno3}. In the next step, we investigate the effects of strain on the crystal structure of o-$R$MnO$_3$. 
For this we consider a set of [010]-oriented o-\textit{R}MnO$_3$ films (with the same \textit{R} as in the bulk samples described previously in this section) grown epitaxially on YAlO$_3$ substrates. The experimental lattice parameters of the o-GdMnO$_3$ and o-TbMnO$_3$ films are taken from Refs.\ \onlinecite{shimamoto2017phase_diagram,shimamoto2017tbmno3} and for the other films we use values measured in this work. The lattice mismatch between the film and the substrate results in either compressive or tensile strain in the $ac$ planes: For o-GdMnO$_3$ and o-TbMnO$_3$ films the $a$ and $c$ lattice constants are strongly compressed compared to the corresponding bulk values, which in turn leads to an increase in $b$; in o-YbMnO$_3$ and the two o-LuMnO$_3$ films (26 nm and 104 nm) the effect is opposite - $a$ and $c$ are increased and $b$ is reduced; in o-ErMnO$_3$ the $a$ lattice constant is compressed, while $b$ and $c$ are increased. For comparison we also consider an o-HoMnO$_3$ film grown on a NdGaO$_3$ substrate, for which the $a$ and $c$ lattice constants of the film are extended and $b$ is significantly reduced. To simulate the epitaxially strained films, we constrain the lengths $l_i^{str}$ ($i=a,c$) of the $a$ and $c$ lattice constants to the values: \begin{equation} l_i^{str}=(1+\epsilon_i)l_i^{bulk}, \end{equation} where $l_{i}^{bulk}$ is the corresponding lattice parameter of the relaxed bulk crystal structure described above and $\epsilon_i$ is the experimental strain applied to the $i$th lattice constant. Then we use DFT to optimize the length of the $b$ lattice parameter, which is perpendicular to the substrate, and the ionic positions. In Fig. 
\ref{fig:str_vs_R} we present the Mn-O-Mn bond angles, Mn-O bond lengths and O(1)-O(2) distances of the optimized strained crystal structures (together with the corresponding parameters for the bulk structures) versus the radius of the $R$ cation (all lattice parameter values are summarized in Table III of the Supplemental materials). When we compare each bulk sample with its corresponding strained film(s), we see that applying strain (both compressive and tensile) affects mostly the $m$ and $l$ Mn-O bonds of the MnO$_6$ octahedra, while the $s$ bonds as well as both the IP and OP Mn-O-Mn bond angles remain almost unchanged between bulk and strained samples. For the $m$ and $l$ bonds, compressive and tensile strains clearly have opposite effects: in the former case ($R$=Gd, Tb) $m$ is reduced and $l$ is increased (due to the increase in the $b$ lattice constant), and vice versa in the latter case ($R$=Yb, Lu). Next, to check whether the effect of compressive strain can be different in systems with small unit cell volume, we simulate two hypothetical films of o-LuMnO$_3$ in which we artificially compress the $a$ and $c$ axes of the fully optimized bulk crystal structure by the same amount as they are expanded in the experimentally studied tensile strained o-LuMnO$_3$ films (26 nm and 104 nm) described above. These hypothetical films will be called inv26 and inv104, respectively, in the following. The obtained lattice parameters for the inv26 and inv104 o-LuMnO$_3$ films are also shown in Fig.\ \ref{fig:str_vs_R}. One can see that the $m$ and $l$ bonds indeed follow the same trend, although with larger amplitude, as in the compressively strained films with larger unit cell volumes (the o-GdMnO$_3$ and o-TbMnO$_3$ films). In this case, however, the inter-plane Mn-O-Mn bond angles are also reduced from their bulk values. 
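The epitaxial constraint used above to set up the film calculations is a one-line rescaling of the bulk lattice constants. The sketch below applies it with made-up numbers (a hypothetical 5.8 {\AA} bulk constant under 2\% in-plane compression), purely to illustrate the sign convention of $\epsilon_i$.

```python
def strained_length(l_bulk, epsilon):
    # l_i^str = (1 + eps_i) * l_i^bulk; eps < 0 is compressive, eps > 0 tensile
    return (1.0 + epsilon) * l_bulk

# Hypothetical illustrative values, not measured ones: a 5.8 A bulk lattice
# constant compressed by 2% in the plane of the substrate
a_film = strained_length(5.8, -0.02)
```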
\begin{figure} \includegraphics[width=0.8\linewidth,trim=0.5cm 0cm 0cm 0cm]{theta_vs_R_pluslegend.eps} \\ \caption{\label{fig:theta_vs_R}Orbital mixing angles $\theta$ in strained films and bulk o-$R$MnO$_3$ versus the radius of the $R$ cation $r_R$. Bulk samples are shown by empty circles, experimentally measured strained films by filled circles. For o-LuMnO$_3$ the triangles denote calculations for hypothetical films which are compressively strained in the $ac$ plane by the same amount but in the opposite direction as the experimentally measured tensile strained films. The 26 nm o-LuMnO$_3$ film and the corresponding inverse case are highlighted in gray, the 104 nm film and the inverse case in black. Compressive strain within the $ac$ planes of the o-$R$MnO$_3$ films is shown by the violet color, tensile strain by the blue color. The dashed lines connecting the data points for bulk o-$R$MnO$_3$ are guides to the eye.} \end{figure} To understand how these lattice variations affect the orbital ordering in o-$R$MnO$_3$, we estimate the orbital mixing angles $\theta$ using Eq.\ \ref{eq:orb_mixing} and our optimized values of $s$, $m$ and $l$ Mn-O bond lengths for all considered bulk samples and films of o-$R$MnO$_3$. The calculated $\theta$ are presented in Fig.\ \ref{fig:theta_vs_R}. One can clearly see that, since the Mn-O bond lengths are almost constant across the series of bulk samples, $\theta$ also shows only small variations. By applying strain, however, the orbital mixing angles can be significantly changed with respect to the corresponding bulk values. For example, for bulk LuMnO$_3$ $\theta$$\approx$112$^o$ and, according to Eq.\ \ref{eq:orb_mix_state}, the occupied $e_g$ orbitals on neighboring Mn sites $i$ and $j$ within the $ab$ planes have a character close to either $|3x^2-r^2\rangle$ (on site $i$) or $|3y^2-r^2\rangle$ (on site $j$). 
For the 26 nm film of LuMnO$_3$, however, $\theta$ is significantly reduced (97$^o$), which affects the character of the occupied orbitals, with the weight of the $|3z^2-r^2\rangle$ state increasing and that of $|x^2-y^2\rangle$ decreasing (see Eq.\ \ref{eq:orb_mix_state}). Thus we see that chemical pressure and epitaxial strain are accommodated by the crystal structure of o-\textit{R}MnO$_3$ in different ways. In particular, the former leads to a change in the Mn-O-Mn bond angles (GFO distortion), while the latter affects mostly the Mn-O bond lengths (JT distortion), in opposite ways for the compressive and tensile cases. Variation of the JT distortion in o-\textit{R}MnO$_3$ films changes their orbital ordering compared to the bulk samples. Since the magnetism in o-\textit{R}MnO$_3$ is closely related to the magnitudes of the JT and GFO distortions (as described in detail in Sec.\ \ref{sec:structure}), the fact that chemical pressure and epitaxial strain affect these distortions differently can be key to understanding the distinct magnetic (and, therefore, ferroelectric) properties of bulk samples and strained films of o-\textit{R}MnO$_3$. \section{Calculated effects of chemical pressure and epitaxial strain on the magnetism} \subsection{Microscopic exchange interactions} \label{sec:couplings} In order to develop better insight into how these structural variations due to chemical pressure and epitaxial strain affect the magnetic properties of o-\textit{R}MnO$_3$, we analyze their effects on the microscopic exchange interactions. We extract all the considered Heisenberg, biquadratic and four-spin ring exchanges as well as the parameters of DMI and SIA (see Sec.\ \ref{sec:hamiltonian} and Fig.\ \ref{fig:exchanges}) by mapping the DFT energies of different magnetic configurations calculated for all the studied bulk o-\textit{R}MnO$_3$ (\textit{R}=Gd, Tb, Ho, Er, Yb, Lu) onto the model Hamiltonian of Eq.\ \ref{eq:fullHam}. 
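The mapping just described amounts to solving a linear system: each imposed collinear spin configuration contributes one equation relating its DFT total energy to bond-count-weighted sums of the couplings. The sketch below demonstrates the idea on a toy model with two couplings; the bond counts and coupling values are invented for illustration and do not correspond to the actual o-$R$MnO$_3$ exchange topology.

```python
import numpy as np

# Each row: one imposed collinear configuration. Columns: a constant (for E0)
# and the bond sums sum_<ij> s_i s_j (s = +/-1) entering two toy couplings.
# These counts are purely illustrative, not the real o-RMnO3 bond topology.
A = np.array([
    [1.0,  4.0,  8.0],   # "FM"-like
    [1.0, -4.0,  8.0],   # "A-AFM"-like
    [1.0,  4.0, -8.0],   # "C-AFM"-like
    [1.0, -4.0, -8.0],   # "G-AFM"-like
])

params_true = np.array([-10.0, 1.2, -3.4])  # E0 and two couplings (made up)
E_dft = A @ params_true                     # stands in for DFT total energies

# Least-squares fit recovers the constant and the couplings from the energies
params_fit, *_ = np.linalg.lstsq(A, E_dft, rcond=None)
```

With more configurations than couplings the system is overdetermined, and the residual of the fit gives a first indication of how well the chosen Hamiltonian describes the DFT energetics.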
The methods which we use to extract the Heisenberg, biquadratic and four-spin ring exchanges are described in detail in our previous work (see Sec.\ IVB of Ref.\ \onlinecite{fedorova2015biquadratic}), while for calculations of DMI and SIA we employ the approach proposed in Sec II C of Ref.\ \onlinecite{xiang2011whangbo}. We show the extracted couplings $J_c$, $J_{ab}$, $J_b$, $J_{3nn}$, $B_{ab}$ and $K_c$ versus the radius of the $R$ cations in Fig.\ \ref{fig:JsvsR} (plots for the other coupling constants are presented in Fig.\ 1 of the Supplemental materials and the exact values of all the extracted couplings are summarized in Table IV of the Supplemental materials.) \begin{figure*} \includegraphics[width=1\linewidth,trim=0cm 0cm 0cm 0cm]{exchanges_vs_R_pluslegend.eps} \caption{\label{fig:JsvsR} Calculated exchange coupling constants versus the radius of the $R$ cation: (a)-(d) show the Heisenberg exchanges $J_c$, $J_{ab}$, $J_b$ and $J_{3nn}$, respectively; (e) shows the biquadratic in-plane exchanges $B_{ab}$ and (f) the four-spin ring couplings $K_c$. Bulk samples are indicated by the empty circles, experimentally measured strained films by the filled circles. For o-LuMnO$_3$ the triangles denote the hypothetical films which are compressively strained in the $ac$ plane by the same amount but opposite direction as experimentally measured tensile strained films. The o-LuMnO$_3$ 26 nm film and the corresponding inverse case are highlighted in gray, the 104 nm and inv104 nm cases in black. Compressive strain within the $ac$ planes of the o-$R$MnO$_3$ films is shown by the violet color, tensile strain by the blue color. 
The dashed line connecting the data points for the bulk o-$R$MnO$_3$ is used to guide the eye.} \end{figure*} One can see that decreasing the radius of the $R$ cation from Gd to Lu in bulk o-$R$MnO$_3$ (resulting in the reduction of the Mn-O-Mn bond angles) leads to a drastic decrease in the absolute value of the FM NN Heisenberg exchange $J_{ab}$, from -7.04 meV in o-GdMnO$_3$ to -2.20 meV in o-LuMnO$_3$ (Fig.\ \ref{fig:JsvsR} (b)). In contrast, all the other couplings remain almost constant across the series. The drop in $J_{ab}$ can be explained by the significant reduction of the FM contribution from the $e_g$-$p_\sigma$-$e_g$ superexchange, which is strongly dependent on the Mn-O-Mn bond angles. In contrast, the AFM $t_{2g}$-$p_\pi$-$t_{2g}$ contribution remains unchanged since it is much less affected by the variation of the bond angles due to the geometry of the participating orbitals. The latter also explains why the inter-plane NN Heisenberg couplings $J_c$ are nearly the same for all the considered bulk samples of o-\textit{R}MnO$_3$, as these couplings are mostly determined by the $t_{2g}$-$p_\pi$-$t_{2g}$ superexchange. There is also a small effect on the NNN Heisenberg coupling $J_b$ (see the inset in Fig.\ \ref{fig:JsvsR} (c)), which increases with decreasing \textit{R} radius. This occurs because of the decrease in the distance between the ions O(1) and O(2) shown in Fig.\ \ref{fig:rmno3_structure} (b), which results in a larger overlap between their $p$ orbitals along the Mn-O(1)-O(2)-Mn superexchange path. We can conclude that the evolution of the magnetic order in bulk o-\textit{R}MnO$_3$ is mostly due to the reduction of $J_{ab}$, because the effect of the other couplings (NNN Heisenberg, higher order couplings and anisotropic terms) becomes more pronounced when the strong FM NN exchange is reduced. 
Next, we perform similar calculations of the exchange coupling and anisotropy constants for the films of o-\textit{R}MnO$_3$ to determine how they are influenced by the structural variations caused by epitaxial strain. The resulting couplings are presented in Fig.\ \ref{fig:JsvsR}; see also Fig.\ 1 and Table V of the Supplemental materials. As we showed in the previous section, the application of strain affects the Mn-O bond lengths, whereas the Mn-O-Mn bond angles in most cases change only slightly from their values in the corresponding bulk samples. First we consider four films which are compressively strained within the $ac$ plane (o-GdMnO$_3$, o-TbMnO$_3$, o-LuMnO$_3$ inv26 and inv104) and for which the $b$ lattice constants are expanded, resulting in a reduction of the $m$ and an increase in the $l$ Mn-O bond lengths compared to the bulk samples. As one can see from Figs.\ \ref{fig:str_vs_R} (d) and \ref{fig:JsvsR} (a), the decrease in $m$ by 0.02 - 0.04 {\AA} produces a significant increase in the coupling $J_c$ (for example, by 3.46 meV for a thin film of o-GdMnO$_3$ relative to the corresponding bulk sample). This can be explained by the increased overlap between the $d$ orbitals of Mn and the $p$ states of O participating in the superexchange. The increase in $l$ (by 0.02-0.04 {\AA}, Fig.\ \ref{fig:str_vs_R} (e)), in turn, results in a drastic reduction in the absolute value of the $J_{ab}$ coupling relative to the bulk samples for o-GdMnO$_3$ and o-TbMnO$_3$, and for the o-LuMnO$_3$ films this coupling even changes sign from FM to AFM (see Fig.\ \ref{fig:JsvsR} (b)). The latter likely occurs because the AFM contribution from the $t_{2g}$ states starts to dominate over the FM $e_g$ contribution. The increase in the NNN coupling $J_b$ (see Fig.\ \ref{fig:JsvsR} (c)) originates from the reduction of the O(1)-O(2) distance, which is a secondary effect of the increase in $l$. 
Interestingly, the higher order couplings ($K_c$ and $B_{ab}$) are affected by the variation of the bond lengths, while they show almost no dependence on the bond angles (see insets in Figs.\ \ref{fig:JsvsR} (e) and (f)). For the tensile strained films, the variation of the couplings is opposite to the case of compressive strain. For example, in the o-LuMnO$_3$ 26 nm film, $J_c$ is reduced to almost 0 meV due to the increase in the $m$ Mn-O bond lengths, while the decrease in $l$ increases $J_{ab}$ in absolute value to -8.7 meV, which is even stronger than the same coupling in bulk o-GdMnO$_3$. The four-spin ring inter-plane exchange $K_c$ increases with tensile strain and starts to compete with the weak $J_c$. Thus we demonstrated that the microscopic exchange interactions in o-\textit{R}MnO$_3$ evolve differently under chemical pressure and epitaxial strain. Specifically, the substitution of a smaller \textit{R} in bulk o-\textit{R}MnO$_3$ results in an increased GFO distortion and leads to the reduction of the NN in-plane Heisenberg coupling $J_{ab}$ and a slight increase in the NNN coupling $J_b$, while all the other couplings are almost constant across the series of the bulk samples. On the other hand, the change in the Mn-O bond lengths caused by epitaxial strain strongly affects both the in-plane and inter-plane NN Heisenberg exchanges, and leads to a smaller variation of the other coupling constants (NNN Heisenberg, biquadratic and four-spin ring exchanges). The changes are clearly different for compressive and tensile strain. The evolution of each coupling depends on whether the structure and, consequently, the Mn-O bond lengths are expanded or reduced in the relevant direction. \subsection{Monte Carlo simulations} \label{sec:montecarlo} In the next step we perform a series of Monte Carlo simulations using the calculated exchange couplings and anisotropy constants for bulk samples and strained films of o-\textit{R}MnO$_3$ to determine their ground state magnetic phases. 
This also serves as a check of how well the model Hamiltonian of Eq.\ \ref{eq:fullHam} reproduces the experimentally measured magnetism in these systems. \begin{figure} \centering \includegraphics[width=0.43\textwidth]{mc_results_pluslegend.eps} \caption{\label{fig:mc_results} Experimentally determined and calculated modulation vectors of the ground state magnetic phases in bulk and strained o-\textit{R}MnO$_3$. (a) shows $q_b$ for bulk o-\textit{R}MnO$_3$, (b) for the films of o-\textit{R}MnO$_3$. Black circles indicate experimentally determined (exp.) $q_b$, purple circles calculated $q_b$ (MC). The gray circle in (b) indicates the experimentally measured $q_b$ in the 26 nm film of LuMnO$_3$, and the green circle shows the calculated $q_b$ for this film; the $q_b$ values for the 104 nm LuMnO$_3$ film are shown with the usual black (measured) or purple (calculated with MC) circles (note that the purple circle at $q_b$=0.5 is obscured by the green circle). $q_b$ is in reciprocal lattice units.} \end{figure} \begin{table}[t] \caption{DFT energies per spin (in meV) (relative to the energy of the E-AFM order) calculated for bulk and strained o-LuMnO$_3$ (26 nm and 104 nm films) imposing E-AFM, H-AFM and I-AFM orders.} \begin{tabular}{p{36pt}p{65pt}p{65pt}p{65pt}} \hline \hline & \centering{E-AFM} & \centering{H-AFM} & \centering{I-AFM} \tabularnewline \hline \centering{bulk} & \centering{0} & \centering{1.75} & \centering{0.93} \tabularnewline \centering{104 nm} & \centering{0} & \centering{-0.68} & \centering{-1.78} \tabularnewline \centering{26 nm} & \centering{0} & \centering{-3.56} & \centering{-5.18} \tabularnewline \hline \hline \end{tabular} \label{table:DFT_en} \end{table} First, we consider bulk o-\textit{R}MnO$_3$ and determine the magnetic ground states for the systems with $R$=Gd, Tb, Ho, Er, Yb and Lu using the exchange coupling and anisotropy constants listed in Table IV of the Supplemental materials. 
Since the methods which we use to calculate these constants allow an uncertainty in their values of up to $\pm$10-25\% (see our previous work, Ref.\ \onlinecite{fedorova2018fourspin}, for details), we take the lower boundary of this uncertainty range and check whether the experimentally observed magnetic ground states can be reproduced for all systems within this range of parameters. For that purpose we perform for each compound a set of MC simulations in which one of the exchange couplings ($J_c$, $J_{ab}$, $J_a$, $J_{diag}$, $J_b$, $J_{3nn}$, $K_{ab}$, $K_c$, $B_{ab}$, $B_c$, $\gamma_{ab}$, $\gamma_c$) or the anisotropy ($A$) listed in Table IV of the Supplemental materials is varied by $\pm$10\% while all the others are kept fixed to their listed values. In these simulations the system size is 4$\times$100$\times$4 MC unit cells. For each compound, the lowest energy state obtained in the MC simulations with the unvaried couplings and anisotropy constants is used as a starting configuration. We determine the types of the obtained magnetic phases by calculating the order parameters (for A-AFM, E-AFM and H-AFM orders) and magnetic structure factors along different directions in reciprocal space; the positions of the peaks in the magnetic structure factors give the modulation vectors of the resulting magnetic phases. In Fig.\ \ref{fig:mc_results} (a) we present the modulation vectors $q_b$ of the minimum energy phases obtained in our MC simulations for bulk o-\textit{R}MnO$_3$ together with the experimentally reported values. We find that for o-\textit{R}MnO$_3$ with \textit{R}=Gd, Ho, Er, Yb, Lu our model Hamiltonian (Eq.\ \ref{eq:fullHam}) and the calculated couplings reproduce well the experimentally reported $q_b$ values. 
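The extraction of modulation vectors from the peaks of the magnetic structure factor can be sketched for a one-dimensional cut through reciprocal space. The helper below is an illustrative stand-in for the full three-dimensional analysis; it recovers the known pitch of a synthetic $bc$ cycloid, built here with $q_b$=0.28 (the bulk o-TbMnO$_3$ value) as an example.

```python
import numpy as np

def modulation_vector(spins):
    # Peak of the magnetic structure factor S(q) = sum_a |FT[S^a]|^2 along
    # one direction; q is returned in reciprocal lattice units
    n = len(spins)
    ft = np.fft.fft(spins, axis=0)         # component-wise FFT over the sites
    sq = np.sum(np.abs(ft) ** 2, axis=1)   # structure factor on the grid q = k/n
    sq[0] = 0.0                            # discard the uniform (q = 0) part
    k = int(np.argmax(sq[: n // 2 + 1]))   # restrict to q in [0, 0.5]
    return k / n

# Synthetic bc cycloid with q_b = 0.28 on a chain of 100 sites
n, qb = 100, 0.28
phases = 2.0 * np.pi * qb * np.arange(n)
spins = np.stack([np.zeros(n), np.cos(phases), np.sin(phases)], axis=1)
```

Applied to the MC configurations, the same peak search distinguishes, e.g., a commensurate $q_b$=0.5 H-AFM state from a nearby incommensurate phase.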
For o-TbMnO$_3$ we obtain a spiral order with $q_b=0.2$ as the lowest energy state (the experimental value is $q_b=0.28$) using periodic boundary conditions in all directions, while with open boundary conditions along the $b$ axis we obtain $q_b=0.22$. Interestingly, for $R$=Tb, Ho and Er several magnetic phases can be stabilized by varying the exchange couplings by $\pm$10\% of their values listed in Table IV of the Supplemental materials. This behavior is likely due to a competition between exchange interactions in these compounds (almost all calculated couplings are relatively strong), resulting in multiple low-energy magnetic states with very close energies. The favoring of one state over another in the real samples may occur due to different synthesis conditions resulting in slightly different lattice parameters. For example, in o-TbMnO$_3$ samples, both A-AFM order and an incommensurate cycloidal spiral can be the lowest energy states. In o-HoMnO$_3$, in turn, a cycloidal spiral, w-spiral and E-AFM orders can be readily stabilized. The latter two can be the magnetic ground states in o-ErMnO$_3$ as well (see our previous work for the details, Ref.\ \onlinecite{fedorova2018fourspin}). This can explain the contradictory experimental reports of the magnetic and ferroelectric properties of the o-\textit{R}MnO$_3$ that are on the border between spiral and E-AFM phases in the magnetic phase diagram described in Sec.\ \ref{sec:phase_diagram}. \begin{table}[b] \caption{Electric polarizations (in $\mu$C/cm$^2$) calculated for bulk and strained GdMnO$_3$, ErMnO$_3$ and LuMnO$_3$ imposing E-AFM, H-AFM and I-AFM orders. 
The value of $P$ corresponding to the ground-state magnetic phase is in bold font.} \begin{tabular}{p{36pt}p{65pt}p{65pt}p{65pt}} \hline \hline & \centering{E-AFM} & \centering{H-AFM} & \centering{I-AFM} \tabularnewline \hline & \multicolumn{3}{c}{GdMnO$_3$} \tabularnewline \hline \centering{bulk} & \centering{4.17 $||a$} & \centering{0.08 $||c$} & \centering{0.11 $||a$} \tabularnewline \centering{10 nm} & \centering{\textbf{3.16} $||a$} & \centering{0.31 $||c$} & \centering{0.17 $||a$} \tabularnewline \hline & \multicolumn{3}{c}{ErMnO$_3$} \tabularnewline \hline \centering{bulk} & \centering{4.06 $||a$} & \centering{0.35 $||c$} & \centering{0.12 $||a$} \tabularnewline \centering{30 nm} & \centering{\textbf{3.85} $||a$} & \centering{0.36 $||c$} & \centering{0.17 $||a$} \tabularnewline \hline & \multicolumn{3}{c}{LuMnO$_3$} \tabularnewline \hline \centering{26 nm} & \centering{5.17 $||a$} & \centering{0.18 $||c$} & \centering{\textbf{0.77} $||a$} \tabularnewline \centering{104 nm} & \centering{4.60 $||a$} & \centering{0.16 $||c$} & \centering{\textbf{0.45} $||a$} \tabularnewline \centering{bulk} & \centering{\textbf{4.09} $||a$} & \centering{0.40 $||c$} & \centering{0.19 $||a$} \tabularnewline \centering{inv104} & \centering{3.54 $||a$} & \centering{0.10 $||c$} & \centering{0.05 $||a$} \tabularnewline \centering{inv26} & \centering{3.22 $||a$} & \centering{0.06 $||c$} & \centering{0.26 $||a$} \tabularnewline \hline \hline \end{tabular} \label{table:polarization} \end{table} Next, we perform the same analysis for the strained films of o-\textit{R}MnO$_3$. The modulation vectors of the ground state magnetic phases obtained in our MC simulations and the corresponding experimental values are presented in Fig.\ \ref{fig:mc_results} (b). 
We find that for the o-GdMnO$_3$, o-TbMnO$_3$ and o-ErMnO$_3$ films the lowest energy magnetic phase is E-AFM with the spins slightly canted away from the $b$ axis, which agrees with the experiments \cite{shimamoto2017phase_diagram,shimamoto2017tbmno3}. For the o-LuMnO$_3$ 104 nm and 26 nm films, the experimentally reported $q_b$ values are 0.486 and 0.479, respectively. In our MC simulations we observe magnetic phases with similar incommensurate $q_b$ values for these films; however, we find these phases to be metastable. For the 104 nm film the calculated lowest energy state is H-AFM order with $q_b$=0.5. Note that H-AFM is degenerate with I-AFM order with $\mathbf{q}$=(0,0.5,0.5) within the framework of the model Hamiltonian of Eq.\ \ref{eq:fullHam}; however, the latter state does not give the experimentally observed peak in the magnetic structure factor at (0,$q_b$,0) (see Ref.\ \onlinecite{fedorova2018fourspin}). For the 26 nm film both the H-AFM (or I-AFM) and A-AFM states can be stabilized in the simulations. The presence of H-AFM (or I-AFM) order in these films of o-LuMnO$_3$ is interesting, since one would rather expect A-AFM order to be established in o-$R$MnO$_3$ with such a strong NN in-plane Heisenberg exchange $J_{ab}$ (-5.48 and -8.68 meV, respectively). H-AFM (or I-AFM) order is enabled by the drastic suppression of the NN inter-plane Heisenberg coupling $J_c$ combined with an increased inter-plane four-spin ring interaction $K_c$, which favors this order. The only sample for which we did not reach agreement with experiment is the o-HoMnO$_3$ film. The experimental value of $q_b$=0.413 was not found even in the range of the couplings of $\pm$30\% of the values presented in Table V of the Supplemental materials. We believe that this is due to an experimental limitation. The low homogeneity of this sample likely causes inconsistencies between the RXD and XRD experiments, as they may probe slightly different positions and volumes of the sample. 
To double-check the results of our MC simulations for the films of o-LuMnO$_3$ (26 nm and 104 nm), in which the unconventional H-AFM or I-AFM orders were obtained as the lowest energy states, and to clarify whether one of these states might be favored in these systems by, for example, exchange striction or another distortion of the electronic density, we perform the following analysis: We construct a 1$\times$2$\times$2 supercell for each film (the theoretically optimized unit cell is doubled along the \textit{b} and \textit{c} directions) and optimize the ionic positions within this supercell imposing E-AFM, H-AFM and I-AFM orders in turn. Then we calculate the energies of these relaxed structures with their corresponding magnetic orders imposed. The results are presented in Table \ref{table:DFT_en}. For comparison, the corresponding energies calculated for bulk o-LuMnO$_3$ are also presented. One can see that E-AFM order is the lowest energy state for bulk o-LuMnO$_3$. Tensile strain along the \textit{a} and \textit{c} directions favors the establishment of I-AFM order in both the 104 nm and 26 nm films. Thus we showed that MC simulations based on the model Hamiltonian of Eq.\ \ref{eq:fullHam} and the exchange couplings calculated using DFT accurately reproduce the experimentally determined magnetic phase diagram of both bulk and strained o-\textit{R}MnO$_3$. We find that, in those bulk o-\textit{R}MnO$_3$ that lie near the boundary between the IC spiral and E-AFM phases, different magnetic orders can be stabilized by small variations ($\pm$10\%) of the exchange interactions. In real materials, such small variations could arise from slightly different lattice constants due to different synthesis conditions and/or the presence of defects. This could explain the contradictory values reported for the measured magnetism and ferroelectricity in these materials. Our simulations also confirmed that E-AFM can be stabilized in o-GdMnO$_3$, o-TbMnO$_3$ and o-ErMnO$_3$ by epitaxial strain. 
Finally, we discovered an unconventional I-AFM order, which is degenerate with H-AFM order in the MC simulations but lower in energy in DFT, in the 26 nm and 104 nm films of o-LuMnO$_3$. This order is enabled by the increased inter-plane four-spin ring exchange interactions $K_c$ and the drastically reduced inter-plane NN Heisenberg couplings $J_c$ caused by the longer $m$ Mn-O bonds. \section{Electric polarization in bulk and strained \lowercase{o}-RM\lowercase{n}O$_3$} \label{sec:polarization} In order to understand how chemical pressure and epitaxial strain affect the electric polarization in o-\textit{R}MnO$_3$, and to check whether the presence of the magnetic phases which were obtained in our MC simulations can resolve the contradictions in the reported measured values of $P$, we perform the following analysis: We consider bulk and strained films of o-GdMnO$_3$, o-ErMnO$_3$ and o-LuMnO$_3$ (for the latter both the experimentally studied and the hypothetical inv26 and inv104 films). For each system we construct 1$\times$2$\times$2 supercells by doubling the theoretically optimized unit cells along the \textit{b} and \textit{c} axes, and relax the ionic positions within these supercells imposing E-AFM, I-AFM and H-AFM orders. Then we perform Berry phase calculations using these relaxed structures with the corresponding magnetic orders imposed. The obtained polarizations, with the supercell in which the positions were relaxed with A-AFM order taken as the reference high-symmetry structure, are summarized in Table \ref{table:polarization}. One can see that $P$ calculated for bulk o-\textit{R}MnO$_3$ with E-AFM order imposed is almost unaffected by the size of the \textit{R} ion. All the values are at least an order of magnitude higher than those measured experimentally, in agreement with previous theoretical reports \cite{yamauchi2008rmno3,zhang2018rmno3}. (Note that for \textit{R} larger than Gd, Refs. 
\onlinecite{yamauchi2008rmno3,zhang2018rmno3} reported an enhancement in \textit{P} with \textit{R}.) Compressive strain along the \textit{a} and \textit{c} axes reduces \textit{P} in the films of E-AFM o-GdMnO$_3$ and in the inv104 and inv26 hypothetical films of o-LuMnO$_3$ by up to 1 $\mu$C/cm$^2$. Tensile strain along the same direction, in turn, increases $P$; for example, for the 26 nm film of o-LuMnO$_3$, $P$ increases by more than 1 $\mu$C/cm$^2$. Our calculated values are inconsistent with the recent experimental study of $P$ in the series of o-\textit{R}MnO$_3$ thin films, in which $P\approx1$ $\mu$C/cm$^2$ along the \textit{a} axis was reported for all \textit{R}=Gd,...,Lu except Tb, for which $P\approx2$ $\mu$C/cm$^2$ was reported \cite{shimamoto2017phase_diagram}. Our calculated $P$ values induced by H-AFM order are aligned along the $c$ axis and their amplitudes are at least an order of magnitude smaller than those induced by E-AFM order. While the absolute values of $P$ are less affected by structural modifications than in the E-AFM case, the fractional changes depend on strain just as strongly. This direction of \textit{P} has, to our knowledge, been experimentally observed only in systems with spiral magnetic orders (o-TbMnO$_3$ \cite{kimura2003magnetic,kimura2005magnetoelectric}, o-DyMnO$_3$ \cite{kimura2005magnetoelectric}, o-Eu$_{1-x}$Y$_{x}$MnO$_3$ \cite{hemberger2007multiferroic} and o-Gd$_{1-x}$Tb$_x$MnO$_3$ \cite{Yamasaki2008mixtures}), in bulk samples of o-HoMnO$_3$ \cite{lee2011mechanism} with incommensurate order ($q_b\approx$0.4) and in weakly strained films of o-YMnO$_3$ \cite{fina2010ymno3Pc}. In earlier work, Ref. \onlinecite{fedorova2018fourspin}, we showed that $P||c$ in o-HoMnO$_3$ can be explained by the presence of w-spiral order.
I-AFM order induces a small polarization along the \textit{a} axis, with the value of $P$=0.77 $\mu$C/cm$^2$ that we obtain for the 26 nm o-LuMnO$_3$ film being close to the experimentally measured value of $P\approx$ 1 $\mu$C/cm$^2$ \cite{shimamoto2017phase_diagram}. The I-AFM phase, however, is the ground state only in the 26 nm and 104 nm o-LuMnO$_3$ films according to our MC and DFT calculations. In conclusion, in spite of the fact that our DFT calculations correctly capture the various magnetic orderings in o-\textit{R}MnO$_3$ films and bulk samples, we are not able to reproduce the experimentally reported ferroelectric polarizations in many cases. The wide spread in the reported values of ferroelectric polarizations in different samples of o-\textit{R}MnO$_3$, the consistently low values for E-AFM bulk crystals, as well as the similar values across the series of o-\textit{R}MnO$_3$ films remain unexplained. \section{Summary and conclusions} \label{sec:summary} In summary, we studied the effects of chemical pressure and epitaxial strain on the crystal structure and multiferroic orders of the o-$R$MnO$_3$ series using X-ray diffraction measurement techniques (XRD and RXD), first-principles calculations and Monte Carlo simulations. In our RXD measurements we observed that the magnetic modulation vectors $q_b$ measured for o-\textit{R}MnO$_3$ films can differ significantly from those of bulk samples. To clarify the origin of this difference we used DFT to determine how the lattice parameters evolve in the o-$R$MnO$_3$ series, for both bulk and thin-film samples. We then studied the effect of these lattice variations on the microscopic exchange interactions. We found that reducing the radius of the $R$ cation in bulk o-$R$MnO$_3$ leads to decreasing Mn-O-Mn bond angles within the $ab$ planes and along the $c$ axis, while the Mn-O bond lengths stay almost constant throughout the series, with only the $l$ bonds decreasing slightly. 
In contrast, strain primarily affects the Mn-O bond lengths relative to the corresponding bulk samples, with bond angles varying under strain only in the samples with the smallest unit cell volumes. Next, we showed that reduction of the Mn-O-Mn bond angles due to decreasing $R$-cation radius in bulk o-$R$MnO$_3$ leads to a significant decrease in the absolute value of NN Heisenberg in-plane exchange $J_{ab}$ (see Fig.\ \ref{fig:exchanges}) and a small increase in the NNN Heisenberg coupling $J_b$. All other couplings and anisotropies remain almost constant with respect to $R$ radius. From this finding we concluded that the evolution of the magnetic order across the bulk series is dominated by the reduction in $J_{ab}$, which makes the effect of the other couplings, such as NNN Heisenberg couplings, biquadratic and four-spin ring exchanges, DMI and anisotropies, more pronounced. For films of o-\textit{R}MnO$_3$, we demonstrated that variation of the Mn-O bonds by applying strain can have a drastic effect on both in-plane and inter-plane NN Heisenberg couplings ($J_{ab}$ and $J_c$, respectively), and the magnitudes of the NNN Heisenberg couplings ($J_b$ and $J_{3nn}$) and of higher order exchanges (biquadratic and four-spin ring exchanges) can also be affected. Expansion and compression of the Mn-O bonds have opposite effects on the magnitudes of the exchange couplings. In our Monte Carlo simulations we determined the magnetic ground states of the model Hamiltonian of Eq.\ \ref{eq:fullHam} for bulk and strained o-\textit{R}MnO$_3$ using the extracted exchange coupling and anisotropy constants, and found that the calculated modulation vectors agree well with the available experimental data. We showed that in those bulk o-\textit{R}MnO$_3$ on the boundary between IC spiral and E-AFM states in the magnetic phase diagram (Fig.\ \ref{fig:PD_2_options}), different magnetic orders can be stabilized by small variations of the exchange couplings. 
This can explain the contradictory experimental reports of their magnetic and ferroelectric properties. For compressively strained o-GdMnO$_3$ and o-TbMnO$_3$ films we confirmed the reported evolution of the magnetic order to the E-AFM phase. This occurs due to a drastic reduction of the NN in-plane Heisenberg coupling $J_{ab}$ caused by the increasing length of the $l$ Mn-O bonds. For tensile-strained films of o-LuMnO$_3$ we found that suppression of the inter-plane Heisenberg coupling $J_c$ and an increase in the four-spin ring coupling $K_c$ can stabilize exotic magnetic orders such as H-AFM or I-AFM, with I-AFM having the lower DFT energy. Finally, we used DFT to analyze how the electric polarization would evolve in bulk and strained o-\textit{R}MnO$_3$ if it were induced by one of the magnetic phases which we obtained in our MC simulations. The values of $P$ calculated on imposing E-AFM order were significantly larger than the experimentally measured values for both bulk and films of o-\textit{R}MnO$_3$, and in the latter case are highly strain dependent, increasing with tensile strain along the $a$ and $c$ directions and decreasing with compressive strain. This behavior, however, has not been observed experimentally: the measured $P$ values are similar for both compressively and tensile-strained films. We find that the $P$ values calculated with I-AFM order imposed are closest to those measured experimentally. However, in our MC and DFT calculations I-AFM is the lowest energy phase only in the tensile-strained films of o-LuMnO$_3$. Therefore, our findings cannot fully resolve the puzzling behavior of $P$ in o-\textit{R}MnO$_3$. \section{Acknowledgements} Experiments were performed at the X11MA and X04SA beamlines at the Swiss Light Source, Paul Scherrer Institut, Villigen, Switzerland. We thank the X11MA and X04SA beamline staff for experimental support. The financial support of PSI and the Swiss National Science Foundation (SNSF) is gratefully acknowledged. Y.W.W. and M.R.
acknowledge support from SNSF Projects No. 137657 and No. CRSII2\_147606, respectively. Funding was also received from the SNSF's National Centers of Competence in Research, Molecular Ultrafast Science and Technology (NCCR MUST) and Materials’ Revolution: Computational Design and Discovery of Novel Materials (NCCR MARVEL). E.M.B. acknowledges funding from the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement No. 290605 (PSI-FELLOW/COFUND). A.A. acknowledges funding from the University of Fribourg. Financial support and CROSS funding to Yi Hu and Kenta Shimamoto from PSI are also acknowledged. N.S.F. and N.A.S. acknowledge the ERC Advanced Grant program (No.\ 291151) and ETH Z\"{u}rich for financial support. Computational resources were provided by ETH Z\"{u}rich and the Swiss National Supercomputing Centre (CSCS), project No. p504. We thank Andrea Scaramucci for providing the Monte Carlo code and for fruitful discussions. \bibliographystyle{apsrev}
\section{\label{sec:intro}INTRODUCTION} The field of cavity optomechanics, which deals with effects arising from the light-matter interaction between cavity modes and mechanical motion, has attracted considerable attention in recent years \cite{aspelmeyer2014cavity}. This is primarily owing to its possible future applications in quantum sensors, quantum information processing and solid-state implementations of quantum memory, apart from being a viable tool for testing the fundamentals of quantum mechanics \cite{aspelmeyer2012quantum, meystre2013short}. Following the early theories on cavity optomechanical cooling of mechanical resonators, recent progress in optomechanical experiments has enabled the realization of mechanical resonators close to the ground state \cite{metzger2004cavity, gigan2006self, arcizet2006radiation, kleckner2006sub, corbitt2007optical, schliesser2008resolved, thompson2008strong, wilson2009cavity, o2010quantum, chan2011laser, sarma2016ground}. This has opened up new avenues for quantum applications of optomechanical systems \cite{schmidt2012optomechanical, stannigel2010optomechanical, safavi2011proposal}. Recently, cavity optomechanical systems have been studied for their inherent nonlinear coupling as a route to photon blockade \cite{rabl2011photon, nunnenkamp2011single}. Photon blockade arises from anharmonicity in the energy eigenvalues of an optical mode, which can be introduced via nonlinear interactions. Due to the anharmonicity, resonant excitation of one photon prevents further photons from being excited simultaneously, giving rise to sub-Poissonian light. The early theories and experiments on photon blockade dealt with atom-coupled cavities \cite{birnbaum2005photon, dayan2008photon}, quantum dot-coupled cavity QED systems \cite{faraon2008coherent}, and cavities with Kerr-type nonlinearity \cite{imamoglu1997strongly}.
Since then, there have been several studies on photon blockade in optical waveguides \cite{chang2008crystallization}, coupled cavities \cite{hartmann2006strongly, greentree2006quantum, angelakis2007photon}, qubit-cavity systems \cite{miranowicz2014state}, circuit-QED \cite{lang2011observation, hoffman2011dispersive, liu2014blockade}, gain cavities \cite{zhou2018zero}, and multiphoton blockade in some systems \cite{miranowicz2013two, hovsepyan2014multiphoton, deng2015enhancement}. Numerous possible quantum device designs, such as single-photon transistors \cite{hong2008single}, quantum repeaters \cite{han2010quantum}, quantum gates \cite{wu2010implementation}, quantum-optical Josephson interferometers \cite{gerace2009quantum}, fermionization of photons \cite{carusotto2009fermionized}, and crystallization of polaritons \cite{hartmann2010polariton}, rely on the phenomenon of photon blockade. In fact, the generation of single photons plays a central role in light-based quantum computation and cryptography \cite{knill2001scheme,duan2001long,kimble2008quantum,o2009photonic}. Photon blockade in an optomechanical system was studied recently, where, due to the photon-phonon nonlinear interaction, the realization of antibunched sub-Poissonian light was predicted \cite{rabl2011photon}. Subsequently, photon blockade \cite{komar2013single, liao2013photon, wang2015tunable}, as well as phonon blockade \cite{liu2010qubit, didier2011detecting, miranowicz2016tunable}, have been studied in various optomechanical and nanomechanical systems. However, similar to cavity-QED systems, the realization of optomechanical photon blockade demands the strong-coupling condition, in which the single-photon optomechanical coupling is strong enough to overcome the system losses, in order to produce sufficient anharmonicity in the energy levels \cite{rabl2011photon}.
Reaching this strong-coupling regime is a long-sought-after goal in cavity optomechanics; to date, this requirement has been met in only a few realizations, such as cold atomic clouds in an optomechanical cavity \cite{brennecke2008cavity, gupta2007cavity}. Recently, another mechanism for photon blockade, which does not require the strong-coupling condition to hold, was proposed by Liew and Savona for coupled-polaritonic systems \cite{liew2010single}. This method, termed unconventional photon blockade, is based on quantum interference, in which strong photon antibunching is predicted with a nonlinearity much smaller than the decay rate of the photonic modes \cite{bamba2011origin}. It has since been studied in other systems, including coupled nonlinear photonic molecules \cite{bamba2011origin}, coupled cavities with Kerr-type nonlinearity \cite{shen2015exact, ferretti2013optimal, flayac2015all, shen2015tunable, flayac2016single, zou2018photon}, coupled optomechanical cavities \cite{xu2013antibunching, savona2013unconventional}, coupled quantum dot-cavity systems \cite{tang2015quantum}, bimodal cavities \cite{majumdar2012loss, zhang2014optimal}, weakly nonlinear photonic molecules \cite{xu2014strong}, Gaussian squeezed states \cite{lemonde2014antibunching}, and systems with second-order nonlinearity \cite{gerace2014unconventional, zhou2015unconventional, sarma2017quantum}. Recently, unconventional photon blockade has also been realized experimentally in a quantum dot-cavity system \cite{snijders2018single}. In this paper, we study photon correlations in a nonlinear optomechanical cavity containing two optical modes and one mechanical mode which are cross-coupled by a three-mode interaction. We show that even when the Kerr nonlinearities in the optical modes are weak, the three-mode interaction opens distinct two-photon excitation pathways which give rise to strong photon antibunching via unconventional photon blockade.
The remainder of the paper is organized as follows. In Sec. II, we describe the model and derive the optimal conditions for photon blockade. In Sec. III, we calculate the equal-time second-order correlation function as well as the two-time correlation function for the higher-frequency cavity mode and analyze the photon blockade characteristics. We also discuss the effects of temperature and pure-dephasing-induced decoherence. The results are summarized in Sec. IV. \section{\label{sec:model} Model and theory} We consider a nonlinear optomechanical system as shown in Fig.~\ref{upb1:fig1}(a), which contains two cavity modes with frequencies $\omega_1$ and $\omega_2$, and one mechanical mode with frequency $\omega_m$. The Hamiltonian of the system reads \begin{align} \label{upb1:eq1} \nonumber H_0\ =& \ \omega_1 a_1^\dagger a_1+ \omega_2 a_2^\dagger a_2+ \omega_m b^\dagger b + U a_1^\dagger a_1^\dagger a_1 a_1 \\ \nonumber & + U a_2^\dagger a_2^\dagger a_2 a_2 + g(a_1^\dagger a_2 + a_2^\dagger a_1)(b+b^\dagger)\\ & + \Omega_1(a_1^\dagger e^{-i\omega_l t}+a_1 e^{i\omega_l t}) + \Omega_2(a_2^\dagger e^{-i\omega_l t} + a_2 e^{i\omega_l t}), \end{align} where $a_1$ ($a_1^\dagger$), $a_2$ ($a_2^\dagger$), and $b$ ($b^\dagger$) are the annihilation (creation) operators for the two cavity modes with decay rates $\kappa_1$ and $\kappa_2$, and the mechanical mode with damping rate $\gamma$, respectively. Here, $U$ is the strength of the Kerr nonlinearity experienced by both optical modes. We assume the difference between the two cavity mode frequencies to be equal to the mechanical frequency, i.e.~$\omega_1 - \omega_2 = \omega_m$, so that the cavity modes can be cross-coupled by the optomechanical interaction \cite{chang2011slowing}. The coupling is characterized by the rate $g$ and is also proportional to the mechanical displacement. The last two terms in Eq.~\eqref{upb1:eq1} describe the driving input fields and their interaction with the two cavity modes.
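The operator structure of the Hamiltonian above can be checked with a small numerical sketch (plain NumPy on a truncated Fock space; the parameter values below are illustrative choices, not taken from this work). The undriven Hamiltonian is Hermitian, and since the optomechanical term only converts $a_1$ photons into $a_2$ photons and back, the total photon number $a_1^\dagger a_1 + a_2^\dagger a_2$ is conserved in the absence of the drives.

```python
import numpy as np

def destroy(n):
    """Annihilation operator on an n-level truncated Fock space."""
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N = 4  # Fock truncation per mode (illustrative)
I = np.eye(N)

# Three-mode tensor-product operators: a1 (x) a2 (x) b
a1 = np.kron(np.kron(destroy(N), I), I)
a2 = np.kron(np.kron(I, destroy(N)), I)
b  = np.kron(np.kron(I, I), destroy(N))

# Illustrative parameters, chosen to satisfy w1 - w2 = wm
w1, w2, wm, U, g = 2.0, 1.0, 1.0, 0.1, 0.05

n1, n2 = a1.conj().T @ a1, a2.conj().T @ a2
H0 = (w1 * n1 + w2 * n2 + wm * b.conj().T @ b
      + U * a1.conj().T @ a1.conj().T @ a1 @ a1
      + U * a2.conj().T @ a2.conj().T @ a2 @ a2
      + g * (a1.conj().T @ a2 + a2.conj().T @ a1) @ (b + b.conj().T))

# Hermiticity, and conservation of the total photon number n1 + n2
print(np.allclose(H0, H0.conj().T))                    # True
print(np.allclose(H0 @ (n1 + n2), (n1 + n2) @ H0))     # True
```

The same Kronecker-product construction carries over to the rotating-frame Hamiltonians derived next; only the coefficients change.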
For simplicity, we will assume that $\kappa_1 = \kappa_2 = \kappa$ for the rest of the paper. \begin{figure}[!] \centering \includegraphics [trim={0cm 0cm 0cm 0cm},width =0.9\linewidth]{fig1a.pdf} \includegraphics [trim={0cm 0cm 0cm 0cm},width =0.8\linewidth]{fig1b.pdf} \caption {(Color online) (a) Schematic diagram of the optomechanical cavity with two optical modes and a single mechanical mode; (b) the low-lying energy levels of the system for a weak drive and low temperature.} \label {upb1:fig1} \end{figure} In a rotating frame at the laser frequency, $\omega_l$, the Hamiltonian is transformed to \begin{align} \label{upb1:eq2} \nonumber H \ =& \ \Delta a_1^\dagger a_1 + (\Delta - \omega_m) a_2^\dagger a_2 + \omega_m b^\dagger b + U a_1^\dagger a_1^\dagger a_1 a_1 \\ \nonumber & + U a_2^\dagger a_2^\dagger a_2 a_2 + g(a_1^\dagger a_2 + a_2^\dagger a_1)(b + b^\dagger) \\ & + \Omega_1(a_1^\dagger +a_1) + \Omega_2(a_2^\dagger +a_2), \end{align} where $\Delta = \omega_1 - \omega_l$ is the detuning of the cavity mode $a_1$ from the laser drive. Now, we transform the Hamiltonian to a frame defined by the unitary transformation $\mathcal{U}= \exp[-i\omega_m t(b^\dagger b - a_2^\dagger a_2)]$ (not to be confused with the Kerr strength $U$). Assuming that the coupling rate is much smaller than the mechanical resonator frequency, i.e.~$\omega_m\gg g$, under a rotating-wave approximation, the transformed Hamiltonian is obtained as \begin{align} \label{upb1:eq3} \nonumber H\ =& \ \Delta (a_1^\dagger a_1+ a_2^\dagger a_2) + U a_1^\dagger a_1^\dagger a_1 a_1 + U a_2^\dagger a_2^\dagger a_2 a_2 \\ & + g(a_1^\dagger a_2 b+ a_1 a_2^\dagger b^\dagger)+ \Omega_1(a_1^\dagger +a_1). \end{align} This Hamiltonian indicates a three-mode interaction among the two optical modes and the mechanical mode, in which one photon in mode $a_1$ is annihilated to create one photon in mode $a_2$ and one phonon in the mechanical mode $b$.
In the reverse process, one photon in mode $a_2$ and one phonon in mode $b$ are annihilated to create one photon in mode $a_1$. In the following, we study the photon antibunching effect in mode $a_1$ arising as a result of this three-mode mixing Hamiltonian. Photon antibunching is studied by analyzing the normalized zero-time-delay second-order correlation function, given by \begin{align} \label{upb1:eq4} g_a^{(2)}(0)= \frac{\langle a_1^\dagger (t) a_1^\dagger (t) a_1(t) a_1(t)\rangle}{\langle a_1^\dagger (t) a_1(t)\rangle^2}. \end{align} This quantity characterizes the joint probability of detecting two photons at the same time, and can be calculated numerically from the Lindblad master equation. The master equation for the driven-dissipative system is given by \begin{align} \label{upb1:eq5} \dot{\rho}=i[\rho, H] + L_1(\rho) + L_2(\rho) + L_b(\rho), \end{align} where $L_1(\rho)= \frac{\kappa}{2}(2a_1 \rho a_1^\dagger - a_1^\dagger a_1 \rho -\rho a_1^\dagger a_1)$, $L_2(\rho)= \frac{\kappa}{2}(2a_2 \rho a_2^\dagger - a_2^\dagger a_2 \rho -\rho a_2^\dagger a_2)$, and $L_b(\rho)= \frac{\gamma}{2} (n_\textrm{th} +1) (2b \rho b^\dagger - b^\dagger b \rho -\rho b^\dagger b) + \frac{\gamma}{2} n_\textrm{th} (2b^\dagger \rho b - b b^\dagger \rho -\rho b b^\dagger)$ are the Liouvillian operators for the two optical modes and the mechanical mode, respectively. Here, $n_\textrm{th} = 1/[\exp(\hbar \omega_m/k_B T)-1]$ denotes the thermal phonon number in the mechanical mode at the bath temperature, $T$. The steady-state value of $g_a^{(2)}(0)$ can be found numerically by solving the master equation for the steady-state density matrix and evaluating $g_a^{(2)}(0)= \textrm{Tr} (\rho a_1^\dagger a_1^\dagger a_1 a_1)/[\textrm{Tr} (\rho a_1^\dagger a_1)]^2$. In addition to the master equation approach, optimal conditions for photon blockade can be determined in the following manner.
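As a numerical aside, the steady-state $g^{(2)}(0)$ just described can be obtained by vectorizing the Liouvillian and extracting its null space. The sketch below does this for a toy single-mode driven Kerr cavity rather than the full three-mode model (the same recipe applies there, with Kronecker products over all three modes); all parameter values are illustrative.

```python
import numpy as np

def destroy(n):
    """Annihilation operator on an n-level truncated Fock space."""
    return np.diag(np.sqrt(np.arange(1, n)), k=1)

N = 12                       # Fock truncation (ample for a weak drive)
a = destroy(N)
ad = a.conj().T
I = np.eye(N)
kappa, Delta, Omega = 1.0, 0.0, 0.05   # illustrative parameters

def g2_steady(U):
    """Steady-state g2(0) of a driven, damped Kerr cavity from the
    null space of the vectorized Liouvillian."""
    H = Delta * ad @ a + U * ad @ ad @ a @ a + Omega * (a + ad)
    n_op = ad @ a
    # Column-stacking vectorization: vec(A X B) = (B^T kron A) vec(X)
    L = (-1j * (np.kron(I, H) - np.kron(H.T, I))
         + kappa / 2 * (2 * np.kron(a.conj(), a)
                        - np.kron(I, n_op) - np.kron(n_op.T, I)))
    # The steady state is the eigenvector of L with eigenvalue zero
    w, v = np.linalg.eig(L)
    rho = v[:, np.argmin(np.abs(w))].reshape(N, N, order='F')
    rho = rho / np.trace(rho)   # normalize to unit trace
    n_mean = np.real(np.trace(rho @ n_op))
    return np.real(np.trace(rho @ ad @ ad @ a @ a)) / n_mean**2

print(g2_steady(0.0))    # linear cavity: coherent steady state, g2 ~ 1
print(g2_steady(10.0))   # strong Kerr: photon blockade, g2 << 1
```

For production work a sparse solver (e.g. the steady-state routines in QuTiP) scales far better than dense eigendecomposition, but the dense version makes the vectorization convention explicit.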
When the driving field is very weak in comparison to the Kerr nonlinearity and the temperature is very low, only the lower energy levels of the cavity and mechanical modes are occupied \cite{wang2015tunable}, as shown in Fig.~\ref{upb1:fig1}(b). Considering the allowed low-energy transitions given by the Hamiltonian in Eq.~\eqref{upb1:eq3}, the truncated state of the system is given by \cite{bamba2011origin} \begin{align} \label{upb1:eq6} \nonumber |\psi\rangle= C_{000}|000\rangle+C_{100}|100\rangle+C_{011}|011\rangle\\+C_{200}|200\rangle+C_{111}|111\rangle+C_{022}|022\rangle, \end{align} where the $C_{a_1a_2b}$ are the amplitudes of the quantum states, with corresponding occupation probabilities $|C_{a_1a_2b}|^2$. The values of the coefficients can be determined by solving the Schr\"{o}dinger equation, $i\frac{d|\psi\rangle}{dt}=H'|\psi\rangle$, where $H'$ is the non-Hermitian Hamiltonian containing the optical decay and mechanical damping terms \begin{align} \label{upb1:eq7} \nonumber H'\ =& \ (\Delta -i\frac{\kappa}{2})(a_1^\dagger a_1+ a_2^\dagger a_2)-i\frac{\gamma}{2}b^\dagger b + U a_1^\dagger a_1^\dagger a_1 a_1\\ & + U a_2^\dagger a_2^\dagger a_2 a_2 + g(a_1^\dagger a_2 b+ a_1 a_2^\dagger b^\dagger)+ \Omega_1(a_1^\dagger +a_1).
\end{align} For a weak drive, a set of equations for the coefficients is obtained from the Schr\"{o}dinger equation \begin{align} \label{upb1:eq8} \nonumber i\frac{\partial C_{100}}{\partial t}\ =& \ \left(\Delta-i\frac{\kappa}{2}\right) C_{100}+gC_{011}+\Omega(C_{000}+\sqrt{2}C_{200}),\\ \nonumber i\frac{\partial C_{011}}{\partial t}\ =& \ \left(\Delta-i\frac{\kappa+\gamma}{2}\right) C_{011}+gC_{100}+\Omega C_{111},\\ \nonumber i\frac{\partial C_{200}}{\partial t}\ =& \ 2\left(\Delta + U -i\frac{\kappa}{2}\right) C_{200}+\sqrt{2}gC_{111}+\sqrt{2}\Omega C_{100},\\ \nonumber i\frac{\partial C_{111}}{\partial t}\ =& \ \left[2\left(\Delta-i\frac{\kappa}{2}\right)-i\frac{\gamma}{2}\right] C_{111}+g(2 C_{022} + \sqrt{2}C_{200})\\ \nonumber & +\Omega C_{011},\\ i\frac{\partial C_{022}}{\partial t}\ =& \ 2\left(\Delta + U - i\frac{\kappa+\gamma}{2}\right) C_{022} + 2gC_{111}. \end{align} Under the weak driving assumption, one can consider that $\{C_{111}, C_{022}, C_{200}\} \ll \{C_{100}, C_{011}\} \ll C_{000}$. Now, considering $\gamma \ll \kappa$ for typical optomechanical systems and substituting $C_{200} = 0$, the optimal parameters for complete photon antibunching in mode $a_1$ are given by (see Appendix): \begin{align} \label{upb1:eq9} \nonumber \Delta_\textrm{opt}\ =& \ \pm \frac{1}{2} \sqrt{2 \sqrt{g^2 (5 g^2 + 2 \kappa^2)} - 4 g^2 - \kappa^2},\\ U_\textrm{opt}\ =& \ \frac{\Delta(4 \Delta^2 + 2 g^2 +5 \kappa^2)}{2(2 g^2 - \kappa^2)}. \end{align} These optimal conditions correspond to the situation where the different transition paths leading to two-photon excitation in mode $a_1$ interfere destructively, as shown in Fig.~\ref{upb1:fig1}(b). \begin{figure} \centering \includegraphics[width=\linewidth]{fig2a.pdf} \includegraphics[width=\linewidth]{fig2b.pdf} \caption{(Color online) (a) Plot of the second-order correlation function $g_a^{(2)}(0)$ at $T = 0$ as a function of the normalized detuning $\Delta/\kappa$ for different values of $g/\kappa$.
The nonlinearity is considered to be $U_\textrm{opt}/\kappa = 0.98$, $0.71$, $0.69$, and $0.74$ for the normalized values of coupling $g/\kappa = 1$, $1.5$, $2$ and $2.5$, respectively. (b) Plot of the two-time correlation function, $g_a^{(2)}(\tau)$. The values of $U$ are the same as in (a), and the detunings are set to $\Delta_\textrm{opt}/\kappa = 0.27$, $0.47$, $0.66$, and $0.84$, respectively.} \label{upb1:fig2} \end{figure} \section{\label{sec:results} Results} \begin{figure*}[!] \centering \includegraphics[width=0.45\linewidth]{fig3a.pdf} \includegraphics[width=0.45\linewidth]{fig3b.pdf} \includegraphics[width=0.45\linewidth]{fig3c.pdf} \includegraphics[width=0.45\linewidth]{fig3d.pdf} \caption{(Color online) Contour plots showing the variation of $\log_{10}{g_a^{(2)}(0)}$ as functions of $\Delta/\kappa$ and $U/\kappa$, for different values of $g/\kappa$: $g/\kappa = 1$ in (a), $g/\kappa = 1.5$ in (b), $g/\kappa = 2$ in (c), and $g/\kappa = 2.5$ in (d).} \label{upb1:fig3} \end{figure*} \noindent In Fig.~\ref{upb1:fig2}(a), we show $g_a^{(2)}(0)$ as a function of $\Delta/\kappa$ for different moderate values of $g$. The value of $U$ is set to $U_\textrm{opt}$ in each case. For the values of $g$ considered in the plot, the optimal parameters from Eq.~\eqref{upb1:eq9} are obtained as: for $g/\kappa = 1$, $\Delta_\textrm{opt}/\kappa = 0.27$ and $U_\textrm{opt}/\kappa = 0.98$; for $g/\kappa = 1.5$, $\Delta_\textrm{opt}/\kappa = 0.47$ and $U_\textrm{opt}/\kappa = 0.71$; for $g/\kappa = 2$, $\Delta_\textrm{opt}/\kappa = 0.66$ and $U_\textrm{opt}/\kappa = 0.69$; for $g/\kappa = 2.5$, $\Delta_\textrm{opt}/\kappa = 0.84$ and $U_\textrm{opt}/\kappa = 0.74$. It can be observed from Fig.~\ref{upb1:fig2}(a) that, as predicted from the optimal conditions calculated analytically, $g_a^{(2)}(0)$ shows a strong antibunching effect at the optimal values of $\Delta/\kappa$.
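The optimal parameters quoted above follow directly from Eq.~\eqref{upb1:eq9} (positive branch); a minimal numerical check reproduces them to within rounding:

```python
import numpy as np

def optimal_params(g, kappa=1.0):
    """Optimal detuning and Kerr strength from Eq. (9), positive branch."""
    delta = 0.5 * np.sqrt(2.0 * np.sqrt(g**2 * (5.0 * g**2 + 2.0 * kappa**2))
                          - 4.0 * g**2 - kappa**2)
    U = delta * (4.0 * delta**2 + 2.0 * g**2 + 5.0 * kappa**2) \
        / (2.0 * (2.0 * g**2 - kappa**2))
    return delta, U

for g in (1.0, 1.5, 2.0, 2.5):
    d, U = optimal_params(g)
    print(f"g/kappa = {g}: Delta_opt/kappa = {d:.2f}, U_opt/kappa = {U:.2f}")
```

The printed pairs agree, to two decimal places, with the values listed for Fig.~\ref{upb1:fig2}.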
Fig.~\ref{upb1:fig2}(b) demonstrates the two-time second-order correlation function $g_a^{(2)}(\tau)$, which is calculated as: \begin{align} \label{upb1:eq10} g_a^{(2)}(\tau)= \frac{\langle a_1^\dagger (t) a_1^\dagger (t+\tau) a_1(t+\tau) a_1(t)\rangle}{\langle a_1^\dagger (t) a_1(t)\rangle^2}. \end{align} This quantity is proportional to the joint probability of detecting one photon at time $t + \tau$, given that another photon was detected at time $t$ at the same position \cite{knight2005introductory}. The plots show $g_a^{(2)}(\tau)$ under the optimal conditions for different values of $g$. We can observe that at $\tau=0$, $g_a^{(2)}(0) \approx 0$, and for other delay times $g_a^{(2)}(\tau)>g_a^{(2)}(0)$. Therefore, it clearly demonstrates that the emitted photons are antibunched and sub-Poissonian in nature. From Figs.~\ref{upb1:fig2}(a) and (b), one can observe that for values of $U$ falling in the weak-coupling regime, i.e.~$U < \kappa$, photon blockade can be realized owing to the quantum-interference-inducing interaction, as verified by the optimal parameters. \begin{figure}[b] \centering \includegraphics[width=\linewidth]{fig4.pdf} \caption{(Color online) Plot showing the effect of environmental temperature on photon blockade characteristics.} \label{upb1:fig4} \end{figure} In order to visualize the photon blockade effects more clearly, we show contour plots of $g_a^{(2)}(0)$ in Fig.~\ref{upb1:fig3}, as functions of the normalized detuning, $\Delta/\kappa$, and the normalized nonlinear strength, $U/\kappa$. In Figs.~\ref{upb1:fig3}(a)-(d), the values of $g/\kappa$ are: $g/\kappa = 1$ in (a), $g/\kappa = 1.5$ in (b), $g/\kappa = 2$ in (c), and $g/\kappa = 2.5$ in (d). The plots show that strong photon antibunching occurs exactly at the values predicted from the analytical calculations in Eq.~\eqref{upb1:eq9}. Next, we study the influence of the environmental phonon population on the photon blockade characteristics.
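The thermal occupation $n_\textrm{th}$ that enters $L_b$ is the Bose-Einstein factor defined below Eq.~\eqref{upb1:eq5}. A minimal sketch (the 1 GHz resonator frequency and the temperatures below are illustrative choices, not values from this work):

```python
import numpy as np

def n_thermal(omega_m, T, hbar=1.054571817e-34, kB=1.380649e-23):
    """Bose-Einstein occupation n_th = 1/(exp(hbar*omega_m/(kB*T)) - 1)."""
    return 1.0 / np.expm1(hbar * omega_m / (kB * T))

# Hypothetical 1 GHz mechanical resonator at cryogenic and room temperature
omega_m = 2 * np.pi * 1e9                # rad/s
for T in (0.01, 0.05, 300.0):            # kelvin
    print(f"T = {T} K: n_th = {n_thermal(omega_m, T):.3g}")
```

Using `expm1` avoids loss of precision at high temperature, where $\hbar\omega_m/k_BT \ll 1$.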
In Fig.~\ref{upb1:fig4}, we demonstrate $g_a^{(2)}(0)$ as a function of the bath phonon number, $n_{\rm{th}}$. For $g/\kappa=1$, $g_a^{(2)}(0)$ reaches $1$ at $n_{\rm{th}} \approx 0.5$, whereas, for $g/\kappa = 1.5$, $2$ and $2.5$, $g_a^{(2)}(0) \leq 1$ up to $n_{\rm{th}} \approx 0.25$. Therefore, it is evident that the environmental thermal phonon population has an undesirable effect on the observation of photon blockade. \begin{figure} \centering \includegraphics[trim={0 0 0 0},width=\linewidth]{fig5.pdf} \caption{(Color online) Effect of pure dephasing. The black solid line represents $\gamma_p/\kappa = 0$, red dashed line is for $\gamma_p/\kappa = 0.001$ and the blue dash-dotted line denotes $\gamma_p/\kappa = 0.01$.} \label{upb1:fig5} \end{figure} So far, our analysis has not considered decoherence induced by pure dephasing. Pure dephasing may arise from instability of the laser drive or from coupling of the cavity modes to other mechanical modes, and it can perturb the polarization, linewidth, transmittance, and photon statistics \cite{savona2013unconventional}. Therefore, in the following, we analyze the effect of pure dephasing on the antibunching properties of the cavity photons. The effects of pure dephasing can be modeled by adding another Lindblad term of the form $L_p(\rho) = \frac{\gamma_p}{2} \sum\limits_{j=1,2}{[2a_j^\dagger a_j \rho a_j^\dagger a_j - (a_j^\dagger a_j)^2 \rho - \rho (a_j^\dagger a_j)^2]}$ to the master equation, where $\gamma_p$ is the pure dephasing rate for the cavity modes. Figs.~\ref{upb1:fig5}(a)-(d) show the second-order correlation function $g_a^{(2)}(0)$ for different pure dephasing rates with different sets of optimized values. The values of $g$ are: $g/\kappa = 1$ in (a), $g/\kappa = 1.5$ in (b), $g/\kappa = 2$ in (c), and $g/\kappa = 2.5$ in (d).
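As an aside, the dephasing dissipator $L_p$ defined above preserves both the trace and the Hermiticity of the density matrix, which a quick numerical check confirms (random test state and illustrative $\gamma_p$, chosen only for the check):

```python
import numpy as np

def dephasing_dissipator(rho, n_op, gamma_p):
    """Single-mode pure-dephasing Lindblad term of the form given in the text."""
    return gamma_p / 2 * (2 * n_op @ rho @ n_op
                          - n_op @ n_op @ rho - rho @ n_op @ n_op)

# A random valid density matrix on a small truncated Fock space
rng = np.random.default_rng(0)
N = 5
A = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
rho = A @ A.conj().T
rho = rho / np.trace(rho)

n_op = np.diag(np.arange(N, dtype=float))   # photon-number operator
Lp = dephasing_dissipator(rho, n_op, gamma_p=0.01)

print(abs(np.trace(Lp)) < 1e-10)        # trace-preserving: True
print(np.allclose(Lp, Lp.conj().T))     # Hermiticity-preserving: True
```

Because $\mathrm{Tr}(n\rho n) = \mathrm{Tr}(\rho n^2)$, the three terms cancel under the trace, so dephasing redistributes coherences without changing populations' normalization.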
The black solid line represents $\gamma_p/\kappa = 0$, the red dashed line $\gamma_p/\kappa = 0.001$, and the blue dash-dotted line $\gamma_p/\kappa = 0.01$. As the pure dephasing rate increases, $g_a^{(2)}(0)$ increases near the optimal detuning. For higher pure-dephasing rates, e.g.~$\gamma_p = 0.01 \kappa$, $g_a^{(2)}(0)$ approaches classical Poissonian statistics. \section{\label{sec:conclusion} CONCLUSION} In conclusion, we analyzed the photon statistics, in terms of the second-order correlation function, of a weakly driven optomechanical system in which two optical modes and one mechanical mode interact via three-mode mixing. Due to this coupling, additional two-photon excitation pathways are created in the higher-frequency optical mode, which can be exploited to obtain the desired photon blockade characteristics in the system via quantum interference. We derived the optimal parameters required for strong photon blockade by solving the non-Hermitian Schr\"{o}dinger equation containing the damping and decay in the system. The numerical calculations of the second-order correlation function obtained from solving the master equation agree with the analytical calculations. We observe that even when the Kerr-type nonlinearity is weak, photon blockade is possible in the system under the optimal conditions corresponding to the fulfillment of the quantum-interference condition. \setcounter{secnumdepth}{0} \section{ACKNOWLEDGMENTS} B.~Sarma gratefully acknowledges a research fellowship from MHRD, Govt. of India.\\
\section{Experiment} \label{sec:experiment} In this section, we demonstrate the effectiveness of the proposed generative model on a synthetic dataset as well as on well-known datasets, where the number of links can be significantly reduced compared to the state of the art. \subsection{Experimental Settings} \label{subsec:initialization} To illustrate the method, we consider two cases for $p(\mathbf{x}|z^m)$: a mixture of Gaussians ($m_K > 1$) and a single Gaussian distribution ($m_K = 1$). To initialize the model parameters, we first randomly select the mean vectors by K-means++~\cite{arthur2007k}, which is similar to the Gonzalez algorithm~\cite{gonzalez1985clustering} without being completely greedy. Afterward, we assign every observed data point to its nearest initial mean, from which the initial covariance matrix for each class is computed. We initially assume equally probable classes, with the mixing parameters set to $1/M$. When $m_K > 1$ (i.e., multiple clusters per class), we initialize the parameters of the $k$th cluster in the $m$th class using the aforementioned strategy, but only on the data points that have been assigned to the $m$th class after the above initialization. To mimic user preferences and assess the performance of the proposed model as a function of the number of available relations, pairwise relations are created by randomly selecting a pair of observed data points and using the knowledge of the distributions. If the points are assigned to the same cluster based on their ground-truth labeling, we move them to the {\em must-link} set; otherwise, to the {\em cannot-link} set. We perform 100 trials for all experiments. Each trial is constructed by random initialization of the model parameters and random pairwise relations.
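The constraint-generation procedure described above can be sketched as follows (an illustrative NumPy sketch; the two-class label vector and the pair count are arbitrary choices, not experimental settings from this work):

```python
import numpy as np

def sample_pairwise_relations(labels, n_pairs, rng):
    """Draw random pairs of points and sort them into must-link /
    cannot-link sets according to the ground-truth labels."""
    must_link, cannot_link = [], []
    n = len(labels)
    while len(must_link) + len(cannot_link) < n_pairs:
        i, j = rng.choice(n, size=2, replace=False)
        if labels[i] == labels[j]:
            must_link.append((i, j))
        else:
            cannot_link.append((i, j))
    return must_link, cannot_link

rng = np.random.default_rng(42)
labels = rng.integers(0, 2, size=100)    # hypothetical two-class ground truth
ml, cl = sample_pairwise_relations(labels, 20, rng)
print(len(ml) + len(cl))                 # -> 20
```

Each trial would call this with a fresh random seed, matching the random pairwise relations described above.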
We compare the proposed model, \textit{generative model with pairwise relation} ({\bf GM-PR}), to the unconstrained {\bf GMM}, unconstrained {\bf spectral clustering (SC)}, and four other state-of-the-art algorithms: 1) {\bf GMM-EC}: GMM with the equivalence constraint~\cite{shental2004computing}, 2) {\bf EM-PC}: EM with the posterior constraint~\cite{conf/nips/GracaGT07}; it is worth mentioning that {\bf EM-PC} works only with {\em cannot-links}, 3) {\bf SSKK}: constrained kernel K-means~\cite{kulis2009semi}, and 4) {\bf CSC}\footnote{https://github.com/gnaixgnaw/CSP}: flexible constrained spectral clustering~\cite{wang2010flexible}. For SC, SSKK, and CSC, the similarity matrix is computed with the RBF kernel, whose parameter is set to the average squared distance between all pairs of data points. We use {\em purity}~\cite{manning2008introduction} for performance evaluation, which is a scalar value ranging from $0$ to $1$, where $1$ is best. Purity is computed as follows: each class $m$ is assigned to its most frequent ground truth label $g(m)$; purity is then measured by counting the number of correctly assigned observed data points in every ground truth class and dividing by the total number of observed data points. Each data point is assigned according to the highest posterior probability. \subsection{Results: Single Gaussian Distribution ($m_K=1$)} In this section, we demonstrate the performance of the proposed model using a single Gaussian distribution on standard binary and multiclass problems. \label{subsec:G} \subsubsection{Synthetic Data} \label{subsubsec:G:toy} We start off by evaluating the performance of {\bf GM-PR}, which uses a single Gaussian distribution for $p(\mathbf{x}|z^m)$, on synthetic data. We generate a two-cluster toy example to mimic the example in Figure~{\ref{fig:wrongModel}}, which is motivated by \cite{zhu2006semi}. The correct decision boundary should be the horizontal line along the x-axis. 
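The purity measure described above can be sketched in a few lines (our own minimal implementation, shown only to make the definition concrete):

```python
from collections import Counter

def purity(pred, truth):
    """For each predicted class, count the points carrying its most
    frequent ground-truth label; purity is the fraction of such
    correctly assigned points over all data points."""
    correct = 0
    for m in set(pred):
        members = [g for p, g in zip(pred, truth) if p == m]
        correct += Counter(members).most_common(1)[0][1]
    return correct / len(pred)
```

A perfect clustering (up to class relabeling) attains purity 1, while merging two ground-truth classes into one predicted class lowers the score.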
Figure~\ref{fig:toy}(a) shows the generated data with the initial means. Figure~\ref{fig:toy}(b) shows the clustering result obtained from an unconstrained {\bf GMM}. Figure~\ref{fig:toy}(c) shows that the proposed \textbf{GM-PR} can learn the desired model with only two must-link relations and two cannot-link relations. Figure~\ref{fig:toy}(d) shows that the proposed \textbf{GM-PR} can learn the desired model with only two must-links. Figure~\ref{fig:toy}(e) shows that the proposed \textbf{GM-PR} can learn the desired model with only two cannot-links. This experiment illustrates the advantage of the proposed method, which performs well with only must-links or only cannot-links. This advantage distinguishes the proposed model from previous works~\cite{shental2004computing,law2005model}. \begin{figure*}[!t] \twoAcrossLabels{./fig/wrongModelOri-eps-converted-to.pdf}{./fig/wrongModelUnsupervised-eps-converted-to.pdf} {(a)}{(b)} \threeAcrossLabels{./fig/wrongModelEMMPD-eps-converted-to.pdf}{./fig/wrongModelEMMPD_MLonly-eps-converted-to.pdf}{./fig/wrongModelEMMPD_CLonly-eps-converted-to.pdf} {(c)}{(d)}{(e)} \vspace{-5pt} \caption [Result of application-specific model synthetic data] { {\bf Application-specific model synthetic data}: (a) Original data with the initial two means marked by x. Results are represented as follows: (b) {\bf GMM}, (c) {\bf GM-PR} using two {\em must-links} (solid line) and two {\em cannot-links} (dashed line), (d) {\bf GM-PR} using only two must-links, and (e) {\bf GM-PR} using only two cannot-links. The saturation of the red/green points represents the value of the soft label. 
} \label{fig:toy} \vspace{-5pt} \end{figure*} \subsubsection{UCI Repository and Handwritten Digits} \label{subsubsec:G:RealData} In this section, we report performance on three real datasets: 1) the {\bf Haberman's survival}\footnote{https://archive.ics.uci.edu/ml/datasets.html} dataset contains 306 instances, 3 attributes, and 2 classes; 2) the {\bf MNIST}\footnote{http://yann.lecun.com/exdb/mnist/} database contains images of handwritten digits; we used the test set, which contains 10000 examples, 784 attributes, and 10 classes~\cite{lecun1998gradient}; and 3) the {\bf Thyroid}\footnote{http://www.raetschlab.org/Members/raetsch/benchmark} dataset contains 215 instances, 5 attributes, and 2 classes. We demonstrate the performance of {\bf GM-PR} on two binary clustering tasks, Haberman and Thyroid, and two multiclass problems, digits 1, 2, 3 and digits 4, 5, 6, 7. For ease of visualization, we work with only the leading two principal components of MNIST obtained by principal component analysis (PCA). Figure~\ref{fig:MNISTEX} shows the two-dimensional inputs, color-coded by class label. Figure~\ref{fig:GMMECBOTH} shows that {\bf GM-PR} significantly outperforms {\bf GMM-EC} on all datasets regardless of the number of available links. Moreover, Figure~\ref{fig:GMMECML} shows that {\bf GM-PR} performs well even when only the {\em must-links} are available. Compared to {\bf EM-PC}, which uses only the {\em cannot-links}, Figure~\ref{fig:EMPC} shows that the performance of {\bf GM-PR} is always greater than or comparable to that of {\bf EM-PC}. Figure~\ref{fig:EMPC} also shows that the performance of {\bf EM-PC} decreases when the number of classes increases. The cannot-links in {\bf GM-PR}, on the other hand, contribute to the model whether the problem is binary or multiclass. 
Notice that all the experiments indicate that {\bf GM-PR} has a lower variance over 100 random initializations, which implies that {\bf GM-PR} is stable regardless of the number of available pairwise links. \begin{figure*}[!t] \center \begin{minipage}[b]{0.1\linewidth} \includegraphics[width=2.1cm,height=1.4cm]{./fig/GMMEClegend-eps-converted-to.pdf} \end{minipage}% \begin{minipage}[t]{\linewidth} \twoAcrossLabels{./fig/harbermanGMMEC-eps-converted-to.pdf}{./fig/thyroidGMMEC-eps-converted-to.pdf} {(a) Haberman}{(b) Thyroid} \twoAcrossLabels{./fig/digit123Both-eps-converted-to.pdf}{./fig/digit4678Both-eps-converted-to.pdf} {(c) digits 1, 2, and 3}{(d) digits 4, 5, 6, and 7} \end{minipage} \vspace{-5pt} \caption [Result of MNIST and UCI] { The performance of {\bf GM-PR} compared to {\bf GMM-EC}~\cite{shental2004computing} with different numbers of pairwise links on (a) Haberman, (b) Thyroid, (c) digits 1, 2, and 3, and (d) digits 4, 5, 6, and 7. } \label{fig:GMMECBOTH} \vspace{-5pt} \end{figure*} \begin{figure*} \twoAcrossLabels{./fig/digit123Visualization.png}{./fig/digit4567Visualization.png} {(a) digits 1, 2, and 3}{(b) digits 4, 5, 6, and 7} \centering \caption [Visualization of MNIST] { Digits 1, 2, and 3, and digits 4, 5, 6, and 7 visualized by the first two principal components of PCA. 
} \label{fig:MNISTEX} \end{figure*} \begin{figure*}[!t] \center \begin{minipage}[b]{0.1\linewidth} \includegraphics[width=2.1cm,height=1.4cm]{./fig/GMMEClegend-eps-converted-to.pdf} \end{minipage}% \begin{minipage}[t]{\linewidth} \twoAcrossLabels{./fig/harbermanGMMECML-eps-converted-to.pdf}{./fig/thyroidGMMECML-eps-converted-to.pdf} {(a) Haberman using only must-links}{(b) Thyroid using only must-links} \twoAcrossLabels{./fig/digit123ML-eps-converted-to.pdf}{./fig/digit4567ML-eps-converted-to.pdf} {(c) digits 1, 2, and 3}{(d) digits 4, 5, 6, and 7} \end{minipage} \vspace{-5pt} \caption [Result of MNIST and UCI with only must-link relations] { The performance of {\bf GM-PR} compared to {\bf GMM-EC}~\cite{shental2004computing} with different numbers of must-links on (a) Haberman, (b) Thyroid, (c) digits 1, 2, and 3, and (d) digits 4, 5, 6, and 7. } \label{fig:GMMECML} \vspace{-5pt} \end{figure*} \begin{figure*}[!t] \center \begin{minipage}[b]{0.1\linewidth} \includegraphics[width=2cm,height=1.4cm]{./fig/GMMECandEMPClegend-eps-converted-to.pdf} \end{minipage}% \begin{minipage}[t]{\linewidth} \twoAcrossLabels{./fig/harbermanGMMECandEMPC-eps-converted-to.pdf}{./fig/thyroidGMMECandEMPC-eps-converted-to.pdf} {(a) Haberman}{(b) Thyroid} \twoAcrossLabels{./fig/digit123GMMECandEMPC-eps-converted-to.pdf}{./fig/digit4567GMMECandEMPC-eps-converted-to.pdf} {(c) digits 1, 2, and 3}{(d) digits 4, 5, 6, and 7} \end{minipage} \vspace{-5pt} \caption [Result of MNIST and UCI with only cannot-link relations] { The performance of {\bf GM-PR} compared to {\bf GMM-EC}~\cite{shental2004computing} and {\bf EM-PC}~\cite{conf/nips/GracaGT07} with different numbers of cannot-links on (a) Haberman, (b) Thyroid, (c) digits 1, 2, and 3, and (d) digits 4, 5, 6, and 7. 
} \label{fig:EMPC} \vspace{-5pt} \end{figure*} \subsection{Results: Mixture of Gaussians ($m_K>1$)} In this section, we demonstrate the performance of the proposed model using a mixture of Gaussians on datasets that have a local manifold structure. \label{subsec:MG} \subsubsection{Synthetic Data: Two Moons Dataset} \label{subsec:MM:twoMoon} Data points in the two moons dataset lie on a moon-like manifold structure (Figure~\ref{fig:twomoons}(a)), which allows us to show the advantage of using a mixture of Gaussians instead of a single Gaussian distribution. Figure~\ref{fig:twomoons}(a) shows the data with the initial means for the {\bf GMM} and for {\bf GM-PR} using a single Gaussian. Figure~\ref{fig:twomoons}(b) shows the data with the initial means for {\bf GM-PR} using a mixture of Gaussians ($m_K=2$). Figure~\ref{fig:twomoons}(c) shows the clustering result obtained from the unconstrained {\bf GMM}, in which three points were assigned to the wrong class. Figure~\ref{fig:twomoons}(c) also shows that the performance of the {\bf GMM} relies on the parameter initialization. Figure~\ref{fig:twomoons}(d) shows that the proposed \textbf{GM-PR}, which uses a single cluster for each class, tries to learn the manifold structure via two must-link and two cannot-link relations; however, two points are still assigned to the incorrect class. Figure~\ref{fig:twomoons}(e) shows that \textbf{GM-PR} can trace the manifold structure using the same links as in (d) but with two clusters for each class. This experiment illustrates the advantage of the proposed model with a mixture of distributions, in which each cluster traces the local data structure and the mixture of clusters describes the global data structure. 
\begin{figure*}[!t] \twoAcrossLabels{./fig/twoMooniniGMMMean-eps-converted-to.pdf}{./fig/twoMooniniHGMMsMean-eps-converted-to.pdf} {(a)}{(b)} \threeAcrossLabels{./fig/twoMoonGMM-eps-converted-to.pdf}{./fig/twoMoonSSGMM-eps-converted-to.pdf}{./fig/twoMoonHGMMs-eps-converted-to.pdf} {(c)}{(d)}{(e)} \vspace{-5pt} \caption [Result of two moons synthetic data] { {\bf Two moons synthetic data}: (a) Original data with the initial two means marked by x. (b) Original data with initial means marked by triangles for class 1 and squares for class 2. Results are represented as follows: (c) {\bf GMM}, (d) {\bf GM-PR} using one cluster for each class and two {\em must-links} (solid line) and two {\em cannot-links} (dashed line), and (e) {\bf GM-PR} using two clusters for each class and the same links as in (d). } \label{fig:twomoons} \vspace{-5pt} \end{figure*} \subsubsection{COIL 20} \label{subsubsec:MM:RealData} In this section, we report performance on the COIL 20\footnote{http://www.cs.columbia.edu/CAVE/software/softlib/coil-20.php} dataset, which contains images of 20 objects; each object was placed on a turntable and rotated 360 degrees while a fixed camera captured it in different poses~(Figure~{\ref{fig:COIL20}}). The COIL 20 dataset contains 1440 instances and 1024 attributes. We set the number of clusters per class to $m_K=3$ by cross-validation. Previous studies have shown that the intrinsic dimension of many high-dimensional real-world datasets is often quite small ($d \leq 20$)~\cite{raginsky2005estimation,felsberg2009continuous}; therefore, each image is first projected onto a low-dimensional subspace (d = 10, 15, and 20). Figure~{\ref{fig:COIL20}} shows that {\bf GM-PR} provides higher purity values than {\bf SSKK} and {\bf CSC} with fewer links ($\leq 1000$) regardless of the subspace dimension. In these experiments, we found that the proposed model can outperform the graph-based methods with fewer links. 
\begin{figure*}[!t] \center \begin{minipage}[b]{0.1\linewidth} \includegraphics[width=1.25cm,height=1.25cm]{./fig/coil20legend-eps-converted-to.pdf} \end{minipage}% \begin{minipage}[t]{\linewidth} \twoAcrossLabels{./fig/coil20_d10_MLJ-eps-converted-to.pdf}{./fig/coil20_d15_MLJ-eps-converted-to.pdf} {(a) d = 10}{(b) d = 15} \twoAcrossLabels{./fig/coil20_d20_MLJ-eps-converted-to.pdf}{./fig/coil20.png} {(c) d = 20}{ COIL-20} \end{minipage} \vspace{-5pt} \caption [Result of COIL 20] { The performance of {\bf GM-PR} compared to {\bf SSKK}~\cite{kulis2009semi} and {\bf CSC}~\cite{wang2010flexible} with different numbers of cannot-links on COIL-20, projected onto low-dimensional subspaces: (a) d = 10, (b) d = 15, and (c) d = 20. } \label{fig:COIL20} \vspace{-5pt} \end{figure*} \subsection{Result: Sensitivity to Number of Clusters Per Class} \label{subsec:MM:sensitivity} Lastly, we demonstrate the performance of the proposed model for different values of $m_K$. First, we use the same MNIST data as in section~\ref{subsubsec:G:RealData}. In Figure~\ref{fig:MNISTEX}(a), digit 1 clearly lies on a moon-like structure. Accordingly, Figure~\ref{fig:MK}(a) shows that the performance for $m_K=2, 3$, or $4$ is better than for $m_K=1$ when the number of links is greater than 64. In Figure~\ref{fig:MNISTEX}(b), however, we observe hardly any manifold structure for digits 4, 5, 6, and 7. This observation also applies to the results in Figure~\ref{fig:MK}(b): the performances for $m_K = 1, 2, 3$, and $4$ are very similar to each other, i.e., increasing the value of $m_K$ does not help. Nevertheless, we notice that increasing $m_K$ does not hurt the performance of the model and might even enhance it depending on the dataset. 
\begin{figure*}[!t] \centering \begin{minipage}[b]{0.1\linewidth} \includegraphics[width=1cm,height=1.5cm]{./fig/digit123MulitVSSingleLegend-eps-converted-to.pdf} \end{minipage}% \begin{minipage}[t]{\linewidth} \twoAcrossLabels{./fig/digit123MulitVSSingle-eps-converted-to.pdf}{./fig/digit4567MulitVSSingle-eps-converted-to.pdf} {(a) digits 1, 2, and 3}{(b) digits 4, 5, 6, and 7} \end{minipage} \vspace{-5pt} \caption { The performance of {\bf GM-PR} using different values of $m_K$ on (a) digits 1, 2, and 3 and (b) digits 4, 5, 6, and 7. } \label{fig:MK} \vspace{-5pt} \end{figure*} \section{Clustering With Pairwise Relationships} \label{sec:FormulationSSC} The proposed model incorporates user input in the form of relations between pairs of points that are in the same class (\textit{must-link}) or in different classes (\textit{cannot-link}). The \textit{must-link} and \textit{cannot-link} relationships are a natural and practical choice since the user can guide the clustering without having a specific preconceived notion of classes. These pairwise relationships are typically not sufficiently dense or complete to build a full discriminative model, yet they may be helpful in discovering the underlying structure of the unlabeled data. Data points that have no user input are assumed to be independent random samples. The pairwise relationships give rise to an associated generative model with a joint distribution that reflects the nature of the user input. The parameters are estimated via a maximum likelihood (ML) formulation through an EM algorithm that discovers the global structure of the underlying distribution while reflecting the user-defined relations. Unlike previous works that include user input in a specific model (e.g., a GMM) through either hard constraints~\cite{shental2004computing} or soft penalties~\cite{lu2004semi}, in this work we propose an ML estimation based on a generative model, without ad hoc penalties. 
\subsection{Generative Models: Unsupervised Scenario} \label{subsec:MM} In this section, we first introduce generative models for an unsupervised scenario. Suppose the unconstrained generative model consists of $M$ classes. $\mathcal{X} = \{\mathbf{x}_n \in \mathbb{R}^d\}_{n=1}^N$ denotes the observed dataset without user input. Dataset $\mathcal{X}$ is associated with \textit{latent} set $\mathcal{Z} = \{\mathbf{z}_n\}_{n=1}^N$ where $\mathbf{z}_n = [z_n^1, ..., z_n^M]^T \in \{0,1\}^M$ with $z_n^m = 1$ if and only if the corresponding data point $\mathbf{x}_n$ was generated from the $m$th class, subject to $\sum_{m=1}^M z_n^m = 1$. Therefore, we can obtain the soft label for a data point $\mathbf{x}$ by estimating $p(z^m|\mathbf{x})$. The probability that a data point $\mathbf{x}$ is generated from a generative model with parameters $\boldsymbol\vartheta $ is \begin{align} \label{eq:GMU} p(\mathbf{x} | \boldsymbol\vartheta ) & = \sum_\mathbf{z} p(\mathbf{x} | \mathbf{z}, \boldsymbol\vartheta ) p(\mathbf{z}) \end{align} The likelihood of the observed data points governed by the model parameters is \begin{align} & \mathcal{L}(\mathcal{X}, \mathcal{Z}, \boldsymbol\vartheta ) := p(\mathcal{X}, \mathcal{Z}| \boldsymbol\vartheta ) = \prod_{m=1}^M \prod_{n \in [1,N]: z_n^m=1} p(\mathbf{x}_n) \label{eqn:GM:prodconstraint} \\ &= \prod_{m=1}^M \prod_{n=1}^N p(\mathbf{x}_n, z_n^m) = \prod_{m=1}^M \prod_{n=1}^N \bigg [ p(\mathbf{x}_n | z_n^m, \boldsymbol\vartheta ) p(z_n^m) \bigg ]^{z_n^m} \label{eqn:GM:jointprob} \end{align} where the condition on the product term in equation~(\ref{eqn:GM:prodconstraint}) is restricted to data points $\mathbf{x}_n$ generated from the $m$th class. The joint probability in equation~(\ref{eqn:GM:jointprob}) is expressed, using Bayes' rule, in terms of the conditional probability $p(\mathbf{x}_n | z_n^m,\boldsymbol\vartheta )$ and the $m$th class prior probability $p(z_n^m)$. 
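For concreteness, the soft label $p(z^m|\mathbf{x})$ can be computed by Bayes' rule from the class-conditional likelihoods and priors in equation~(\ref{eq:GMU}). A minimal sketch for univariate Gaussian class densities (function and variable names are ours; the experiments above use multivariate densities):

```python
import math

def soft_labels(x, means, variances, priors):
    """Posterior p(z^m | x) for scalar x under M univariate Gaussian
    classes: proportional to the class likelihood times the prior."""
    post = [p * math.exp(-(x - mu) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
            for mu, v, p in zip(means, variances, priors)]
    s = sum(post)
    return [q / s for q in post]
```

A point at the first class's mean, far from the second class's mean, receives nearly all of its posterior mass from the first class.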
In the rest of the formulation, to simplify the notation, we write $p(\mathbf{x}_n | z_n^m) = p(\mathbf{x}_n | z_n^m,\boldsymbol\vartheta )$. \subsection{Generative Model With Pairwise Relationships} \label{subsec:GMPR} \begin{figure*}[t!] \includegraphics[scale=0.18]{./fig/graphModelXZgray.png} \centering \caption[Graphical representation of the generative model with pairwise relationships] { The graphical representation of the proposed generative model with complete data-likelihood $p(\mathcal{X}, \mathcal{Z} | \mathcal{M}, \mathcal{C},\boldsymbol\vartheta )$. The $\mathcal{L}(\cdot)$ term comes from the standard generative model with independent samples. The $\mathcal{S}(\cdot)$ term shows that the must-link pair $\mathbf{x}_i$ and $\mathbf{x}_j$ shares a single latent variable $z_{\{ij\}}$. The $\mathcal{D}(\cdot)$ term shows the cannot-link pair $\mathbf{x}_a$ and $\mathbf{x}_b$, where the green dashed line indicates the joint probability of $z_a$ and $z_b$. } \label{fig:graphModel} \end{figure*} The definition of a pairwise relation in the proposed generative model is similar to that in the unsupervised case, yet such relations are propagated to the level of the latent variables. In particular, $\mathcal{M} = \{(i,j)\}$ denotes the set of must-link relations, where the pair $\mathbf{x}_i$ and $\mathbf{x}_j$ was generated from the same class; hence, the pair $(\mathbf{x}_i,\mathbf{x}_j)$ shares a single latent variable $\mathbf{z}_{\{ij\}}$. The same logic applies to the cannot-link relations, where $\mathcal{C}= \{(a,b)\}$ denotes the set of cannot-link relations encoding that $\mathbf{x}_a$ and $\mathbf{x}_b$ were generated from distinct classes; therefore, $\mathbf{z}_a \neq \mathbf{z}_b$. Including $\mathcal{M}$ and $\mathcal{C}$, the dataset is now expanded to $\mathcal{X} := \{ \mathbf{x}_1, \dots \mathbf{x}_N, \mathcal{M}, \mathcal{C} \}$. 
Thus, the \textit{modified} complete-data likelihood function $\mathcal{J}(\cdot)$ that would reflect user input is (refer to Figure \ref{fig:graphModel} for the graphical representation) \begin{align} \label{eq:LogJH} \mathcal{J}(\mathcal{X}, \mathcal{Z}, \mathcal{M}, \mathcal{C}, \boldsymbol\vartheta ) & := p(\mathcal{X}, \mathcal{Z} | \mathcal{M}, \mathcal{C}, \boldsymbol\vartheta ) \nonumber \\ & = \mathcal{L}(\mathcal{X}, \mathcal{Z}, \boldsymbol\vartheta )~ \mathcal{S}(\mathcal{X}, \mathcal{Z}, \mathcal{M}, \boldsymbol\vartheta )~ \mathcal{D}(\mathcal{X}, \mathcal{Z}, \mathcal{C}, \boldsymbol\vartheta ). \end{align} $\mathcal{S}(\cdot)$ and $\mathcal{D}(\cdot)$ are the likelihood of pairwise data points. The likelihood of the set of all pairs of must-link data points $\mathcal{S}$ is, therefore, \begin{align} \mathcal{S}(\mathcal{X},\mathcal{Z}, \mathcal{M}, \boldsymbol\vartheta ) & := p(\mathcal{X},\mathcal{Z} | \mathcal{M}, \boldsymbol\vartheta ) \nonumber \\ & = \prod_{m=1}^M \prod_{(i,j) \in \mathcal{M}} p(\mathbf{x}_i, \mathbf{x}_j, z_{\{ij\}}^m) \nonumber \\ & = \prod_{m=1}^M \prod_{(i,j) \in \mathcal{M}} \bigg [ p(\mathbf{x}_i| z_{\{ij\}}^m) p(\mathbf{x}_j | z_{\{ij\}}^m) p(z_{\{ij\}}^m) \bigg ]^{z_{\{ij\}}^m} \label{eq:Sterm} \end{align} The likelihood of the cannot-link data points explicitly reflects the fact that they are drawn from distinct classes. Therefore, the joint probability of the labeling vectors $\mathbf{z}_a$ and $\mathbf{z}_b$ for all $(a,b) \in \mathcal{C}$ is as follows: \begin{eqnarray} p(z_a^m,z_b^m) &:=& p(z_a^m|z_b^m)p(z_b^m) = p(z_b^m|z_a^m)p(z_a^m)~~ \label{eqn:jointCLbayes1} \\ &=& \left\{ \begin{array}{ll} \cfrac {p(z_a^m)^{z_a^m}p(z_b^m)^{z_b^m}}{1-\sum_{m'=1}^M p(z_a^{m'})^2} & z_a^m \neq z_b^m\\ 0 & z_a^m = z_b^m \label{eqn:GM:propjoint}\\ \end{array} \right. 
\\ & = & \cfrac {(1-z_a^m z_b^m) p(z_a^m)^{z_a^m}p(z_b^m)^{z_b^m}} {1-\sum_{m'=1}^M p(z_a^{m'})^2} \label{eq:joinCLbayes2} \end{eqnarray} The proposed joint distribution reflects the cannot-link constraints by assigning a zero joint probability of $\mathbf{x}_a$ and $\mathbf{x}_b$ being generated from the same class, and takes into account the effect of this relation on the normalization term of the joint distribution $p(z_a^m,z_b^m)$. As such, the cannot-link relations contribute to the posterior distribution as follows: \begin{align} \mathcal{D}(\mathcal{X}, \mathcal{Z}, \mathcal{C}, \boldsymbol\vartheta ) & := p(\mathcal{X}, \mathcal{Z} | \mathcal{C}, \boldsymbol\vartheta ) \nonumber \\ & = \prod_{m=1}^M \prod_{(a,b) \in \mathcal{C}} p(\mathbf{x}_a, \mathbf{x}_b, z_a^m, z_b^m) \nonumber \\ & = \prod_{m=1}^M \prod_{(a,b) \in \mathcal{C}} \Big [ p(\mathbf{x}_a| z_a^m) \Big ]^{z_a^m} \Big [ p( \mathbf{x}_b| z_b^m) \Big ]^{z_b^m} p(z_a^m,z_b^m) \label{eq:Dterm} \end{align} \subsection{Expectation Maximization With Pairwise Relationships} \label{subsec:GMEM} Given the joint distribution $p(\mathcal{X}, \mathcal{Z} | \mathcal{M}, \mathcal{C}, \boldsymbol\vartheta )$, the objective is to maximize the log-likelihood function $\log \mathcal{J}$ with respect to the parameters $\boldsymbol\vartheta $ of the generative process in a manner that would discover the global structure of the underlying distribution and reflect user input. This objective can be achieved using an EM algorithm. \subsubsection{E-Step} \label{subsubsec:Estep} In the E-step, we estimate the posterior of the latent variables using the current parameter values $\boldsymbol\vartheta ^{\text{old}}$. 
\begin{align} \label{eq:Qfunction} \mathcal{Q}(\boldsymbol\vartheta ,\boldsymbol\vartheta ^{\text{old}}) & = \mathbb{E}_{\mathcal{Z}}[ \log ~ \mathcal{J} ] \nonumber \\ & = \sum_{\mathcal{Z}} p(\mathcal{Z}|\mathcal{X},\mathcal{M},\mathcal{C},\boldsymbol\vartheta ^{\text{old}}) \log p(\mathcal{X}, \mathcal{Z} | \mathcal{M}, \mathcal{C}, \boldsymbol\vartheta ) \end{align} \underline{$\mathcal{L}(\cdot)$-term:} Taking the expectation of $\log \mathcal{L}$ with respect to the posterior distribution of $z_n^m$ and bearing in mind that the latent variable $\mathbf{z}$ is a binary variable, \begin{align} \label{eq:GM:Lterm} \mathbb{E}_{z_n^m | \mathbf{x}_n}[z_n^m] = \frac {p(\mathbf{x}_n | z_n^m)p(z_n^m)} {\sum_{m'=1}^M p(\mathbf{x}_n|z_n^{m'})p(z_n^{m'})} \end{align} \underline{$\mathcal{S}(\cdot)$-term:} Taking the expectation of $\log \mathcal{S}$ with respect to the must-link posterior distribution of $z^m_{\{ij\}}$ results in \begin{align} \label{eq:GM:Sterm} \mathbb{E}_{z^m_{\{ij\}} | \mathbf{x}_i, \mathbf{x}_j} & [z^m_{\{ij\}}] = \frac { p(\mathbf{x}_i|z^m_{\{ij\}}) p(\mathbf{x}_j|z^m_{\{ij\}}) p(z^m_{\{ij\}}) } { \sum_{m'=1}^M p(\mathbf{x}_i|z^{m'}_{\{ij\}}) p(\mathbf{x}_j|z^{m'}_{\{ij\}}) p(z^{m'}_{\{ij\}}) } \end{align} \underline{$\mathcal{D}(\cdot)$-term:} Because the proposed model does not allow $\mathbf{x}_a$ and $\mathbf{x}_b$ to be from the same class, the expectation of equation~(\ref{eq:joinCLbayes2}) in the $\log \mathcal{D}$ that both will have the same class assignment vanishes, which can be shown using Jensen's inequality as follows: \begin{align} \label{eq:GM:leq} \mathbb{E}_{z^m_a, z^m_b | \mathbf{x}_a, \mathbf{x}_b} [\log(1-z_a^m z_b^m)] & \leq \log \big( 1-\mathbb{E}_{z^m_a, z^m_b | \mathbf{x}_a, \mathbf{x}_b} [z_a^m z_b^m] \big) = \log(1-0) = 0 \end{align} Hence, we can set $\log(1-z_a^m z_b^m) = 0$ in equation~(\ref{eq:joinCLbayes2}). 
The expectation of the $\log \mathcal{D}$ term with respect to $z^m_a$ is \begin{align} \label{eq:GM:Dterm:z} \mathbb{E}_{z^m_a | \mathbf{x}_a, \mathbf{x}_b} [z_a^m] & = p(z_a^m| \mathbf{x}_a, \mathbf{x}_b) = \sum_{m'=1}^M p(z_a^m, z_b^{m'} | \mathbf{x}_a, \mathbf{x}_b) \nonumber \\ & = \frac { \sum_{m'=1}^{M} p(\mathbf{x}_a|z_a^m) p(\mathbf{x}_b|z_b^{m'}) p(z_a^m,z_b^{m'}) } { \sum_{m''=1}^{M} \sum_{m'''=1}^{M} p(\mathbf{x}_a|z_a^{m''}) p(\mathbf{x}_b|z_b^{m'''}) p(z_a^{m''},z_b^{m'''}) } . \end{align} In a like manner, we can write down the expectation of $z_b^m$. \subsubsection{M-Step} \label{subsubsec:Mstep} In the M-step, we update the parameters to $\boldsymbol\vartheta ^{\text{new}}$ by maximizing equation~(\ref{eq:Qfunction}) while fixing the posterior distributions estimated in the E-step. \begin{align} \label{eq:argmaxQ} \boldsymbol\vartheta ^{\text{new}} = \arg\max_{\boldsymbol\vartheta } ~~ \mathcal{Q}(\boldsymbol\vartheta ,\boldsymbol\vartheta ^{\text{old}}) \end{align} Different density models result in different update mechanisms for the respective model parameters. In the next subsection, we elaborate on an example of the proposed model to illustrate the M-step for the case of Gaussian mixture models. \subsection{Gaussian Mixture Model With Pairwise Relationships} \label{subsubsec:GMM} Consider employing a single distribution (e.g., a Gaussian distribution) for each class probability $p(\mathbf{x}|z^m)$. The proposed model then becomes the Gaussian mixture model (GMM) with pairwise relationships. The parameter of the GMM is $\boldsymbol\vartheta = \{\alpha_m, \boldsymbol\mu_m, \boldsymbol\Sigma_m \}_{m=1}^M$, such that $\alpha_m \in [0,1]$ is the mixing parameter for the {\em class} proportion subject to $\sum_{m=1}^M \alpha_m =1$ and $p(z^m) = \alpha_m$. $\boldsymbol\mu_m \in \mathbb{R}^d$ is the mean parameter, and $\boldsymbol\Sigma_m \in \mathbb{R}^{d \times d}$ is the covariance associated with the $m$th class. 
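The E-step responsibilities for the $\mathcal{S}(\cdot)$ and $\mathcal{D}(\cdot)$ terms derived above can be sketched as follows (a minimal illustration under our own naming; `lik[m]` stands for the class-conditional likelihood $p(\mathbf{x}|z^m)$ evaluated at a point). Note that the normalization term $1-\sum_{m'}p(z^{m'})^2$ of the cannot-link joint cancels in the posterior, so it is omitted:

```python
import numpy as np

def must_link_posterior(lik_i, lik_j, alpha):
    """Responsibility for a must-link pair: proportional to
    p(x_i|z^m) p(x_j|z^m) p(z^m), normalized over classes."""
    w = np.asarray(lik_i) * np.asarray(lik_j) * np.asarray(alpha)
    return w / w.sum()

def cannot_link_posterior(lik_a, lik_b, alpha):
    """Marginal responsibility p(z_a^m | x_a, x_b) for a cannot-link
    pair: the joint prior is alpha_m * alpha_m' off the diagonal and
    zero on it (same-class assignments are forbidden)."""
    alpha = np.asarray(alpha)
    joint = np.outer(alpha, alpha)
    np.fill_diagonal(joint, 0.0)            # forbid z_a = z_b
    w = np.outer(lik_a, lik_b) * joint      # p(x_a|z_a) p(x_b|z_b) p(z_a,z_b)
    return w.sum(axis=1) / w.sum()          # marginalize over z_b, normalize
```

For a cannot-link pair in which each point is well explained by a different class, the posterior concentrates on the opposite-class assignment, as the model intends.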
By taking the derivative of equation~(\ref{eq:Qfunction}) with respect to $\boldsymbol\mu_m$ and $\boldsymbol\Sigma_m$, we can get \begin{align} \label{eqn:mu} \boldsymbol\mu_m & = \bigg ( \sum_{n=1}^N \ell_n^m \mathbf{x}_n + \sum_{(i,j) \in \mathcal{M}} s_{ij}^m \big[ \mathbf{x}_i + \mathbf{x}_j \big] + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^m\mathbf{x}_a + d_{b}^m\mathbf{x}_b \big] \bigg ) \bigg / Z \\ \label{eqn:cov} \boldsymbol\Sigma_m & = \bigg ( \sum_{n=1}^N \ell_n^m \mathbf{S}_n^m + \sum_{(i,j) \in \mathcal{M}} s_{ij}^m \big[ \mathbf{S}_i^m + \mathbf{S}_j^m \big] + \sum_{(a,b) \in \mathcal{C}} \big [ d_{a}^m\mathbf{S}_a^m + d_{b}^m \mathbf{S}_b^m \big ] \bigg ) \bigg / Z \\ \label{eqn:normalization} Z & = \sum_{n=1}^N \ell_n^m + 2\sum_{(i,j) \in \mathcal{M}} s_{ij}^m + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^m + d_{b}^m \big] \end{align} where $\ell_n^m = p(z_n^m|\mathbf{x}_n)$, $s_{ij}^m = p(z_{\{ij\}}^m|\mathbf{x}_i, \mathbf{x}_j)$, $d_{a}^m = p(z_a^m|\mathbf{x}_a, \mathbf{x}_b)$ and the sample covariance $\mathbf{S}_n^m = (\mathbf{x}_n - \boldsymbol\mu_m)(\mathbf{x}_n - \boldsymbol\mu_m)^T$. Estimating the mixing parameters $\alpha_m$, on the other hand, entails the following constrained nonlinear optimization, which can be solved using sequential quadratic programming with Newton-Raphson steps \cite{fletcher2013practical,abramowitz1964handbook}. Let $\boldsymbol\alpha \in \mathbb{R}^M$ denote the vector of mixing parameters. 
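The mean update in equation~(\ref{eqn:mu}) can be sketched as follows (our own minimal implementation; `ell`, `s`, and `d` hold the E-step responsibilities $\ell_n^m$, $s_{ij}^m$, and $(d_a^m, d_b^m)$ defined above):

```python
import numpy as np

def update_mean(X, ell, ml_pairs, s, cl_pairs, d):
    """Responsibility-weighted mean for one class m: independent
    points, must-link pairs, and cannot-link endpoints all contribute;
    each must-link weight counts once per endpoint (hence the 2*w)."""
    num = sum(l * X[n] for n, l in enumerate(ell))
    Z = sum(ell)
    for (i, j), w in zip(ml_pairs, s):
        num = num + w * (X[i] + X[j])
        Z += 2 * w
    for (a, b), (da, db) in zip(cl_pairs, d):
        num = num + da * X[a] + db * X[b]
        Z += da + db
    return num / Z
```

With no pairwise links, this reduces to the standard GMM mean update, i.e., the responsibility-weighted average of the data points.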
Given the current estimate of the mean vectors and covariance matrices, the new estimate of the mixing parameters can be solved for using the optimization problem defined in (\ref{eqn:mixing_opt}), \begin{align} \label{eqn:mixing_opt} \boldsymbol\alpha^* &= \arg\min_{\boldsymbol\alpha} -\mathcal{Q}(\boldsymbol\vartheta ,\boldsymbol\vartheta ^{\text{old}}) \nonumber \\ \operatorname{s.t.}& ~~~~~ \mathbf{1}^T\boldsymbol\alpha - 1 = 0 ~~\operatorname{and}~~ \alpha_m \geq 0 ~~\forall m \in [1,M] \end{align} where the initialization can be obtained from the closed-form solution that results from discarding the nonlinear part, i.e., ignoring the normalization term $\log(1-\sum_{m'=1}^M \alpha_{m'}^2)$. The energy function is convex, and we have found that this iterative algorithm typically converges in three to five iterations and does not represent a significant computational burden. \subsubsection{Multiple Mixture Clusters Per Class} \label{subsec:multiclusters} In order to group data that lie on a subspace (e.g., a manifold structure) more explicitly, modeling each class with multiple clusters has been widely used in unsupervised clustering by representing the density model in a hierarchical structure~\cite{coviello2012variational,williams1999mcmc,vasconcelos1998learning,goldberger2004hierarchical,meila2001learning,jordan1994hierarchical,titsias2002mixture}. Because of its natural representation of data, the hierarchical structure can be built using either a top-down or a bottom-up approach: the former decomposes one cluster into several smaller clusters, whereas the latter starts by grouping several clusters into one cluster. The multiple-clusters-per-class strategy has also been proposed for the setting in which both labeled and unlabeled data are available~\cite{nigam2000text,liu2010gaussian,shen2012refining,he2011laplacian,xing2013multi,demiriz1999semi,dara2002clustering,goldberg2009multi}. 
However, previous works indicated that labeled data is unable to impact the final parameter estimation if the initial model assumption is incorrect~\cite{cozman2003semi,loog2014semi,yang2011effect,singh2009unlabeled}. Moreover, it is not clear how to employ these previous works with pairwise links instead of labeled data. In this section, we propose to use a generative mixture of Gaussian distributions for each class probability $p(\mathbf{x}|z^m)$. In this form, we use multiple clusters to model each class, which accommodates data lying on a manifold structure. Therefore, in addition to the latent variable set $\mathcal{Z}$, $\mathcal{X}$ is also associated with the \textit{latent} variable set $\mathcal{Y} = \{\mathbf{y}_n\}_{n=1}^N$ where $\mathbf{y}_n = [y_n^1, ..., y_n^{m_K}]^T \in \{0,1\}^{m_K}$ with $y_n^{m_k} = 1$ if and only if the corresponding data point $\mathbf{x}_n$ was generated from the $k$th cluster in the $m$th class, subject to $\sum_{m_k=1}^{m_K} y_n^{m_k} = 1$; $m_K$ is the number of clusters in the $m$th class. The parameter of the generative mixture model is $\boldsymbol\vartheta = \{\alpha_m, \boldsymbol\Theta_m\}_{m=1}^M$, where $\alpha_m$ is the mixing parameter for the {\em class} proportion and is the same as $\alpha_m$ in section~\ref{subsubsec:GMM}. The parameter of the $m$th class is $\boldsymbol\Theta_m = \{\pi_{m_k}, \Theta_{m_k}\}_{m_k=1}^{m_K}$ where $\Theta_{m_k} = \{\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}\}$, such that $\pi_{m_k} \in [0,1]$ is the mixing parameter for the {\em cluster} proportion subject to $\sum_{m_k=1}^{m_K} \pi_{m_k}=1$, $\boldsymbol\mu_{m_k} \in \mathbb{R}^d$ is the mean parameter, and $\boldsymbol\Sigma_{m_k} \in \mathbb{R}^{d \times d}$ is the covariance associated with the $k$th cluster in the $m$th class. 
The probability that an unsupervised data point $\mathbf{x}$ is generated from a generative mixture model given parameters $\boldsymbol\vartheta $ is \begin{align} & \mathcal{L}(\mathcal{X}, \mathcal{Y}, \mathcal{Z}, \boldsymbol\vartheta ) = \prod_{m=1}^M \prod_{m_k=1}^{m_K} \prod_{n=1}^N \bigg [ \Big [ p(\mathbf{x}_n | y_n^{m_k}) p(y_n^{m_k}|z_n^m) \Big ]^{y_n^{m_k}} p(z_n^m) \bigg ]^{z_n^m} \label{eqn:HGMM:jointprob} \end{align} where \begin{align} p(z_n^m) = \alpha_m ;~ p(y_n^{m_k} | z_n^m) = \pi_{m_k} ;~ p (\mathbf{x}_n| y_n^{m_k}) = \mathcal{N} (\mathbf{x}_n|\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) , \label{eq:HGMM:individual} \end{align} and $\mathcal{N} (\mathbf{x}_n|\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k})$ is the Gaussian distribution. The definition of equation~(\ref{eq:HGMM:individual}) can be used to describe the $\mathcal{S}(\cdot)$ in equation~(\ref{eq:Sterm}) and the $\mathcal{D}(\cdot)$ in equation~(\ref{eq:Dterm}). In the E-step, the posterior of latent variable $\mathcal{Z}$ can be estimated by marginalization of the $\mathcal{Y}$ directly. In the M-step, we update the parameters by maximizing equation~(\ref{eq:Qfunction}), which is similar to GMM case in section~\ref{subsubsec:GMM} (see the Appendix A for details). Last, if $m_K=1$, we have $\mathbf{y}_n = [y_n^1]$ and equation~(\ref{eqn:HGMM:jointprob}) becomes the GMM, i.e., one cluster/single Gaussian distribution per class. 
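As a 1-D sketch (our own illustration, not the paper's code), the E-step marginalization just described computes the joint posterior over (class, cluster) pairs for one point and sums out the clusters to obtain the class posterior; `alpha`, `pi`, `mu`, and `var` are assumed names for the model parameters.

```python
import math

def gauss_pdf(x, mu, var):
    """1-D Gaussian density N(x | mu, var)."""
    return math.exp(-0.5 * (x - mu) ** 2 / var) / math.sqrt(2 * math.pi * var)

def e_step_point(x, alpha, pi, mu, var):
    """Posterior responsibilities for one unsupervised point (sketch).

    Returns (joint, marginal):
      joint[m][k] = p(z^m, y^{mk} | x), proportional to
                    alpha_m * pi_{mk} * N(x | mu_{mk}, var_{mk});
      marginal[m] = p(z^m | x), the sum of joint over clusters k.
    """
    joint = [[alpha[m] * pi[m][k] * gauss_pdf(x, mu[m][k], var[m][k])
              for k in range(len(pi[m]))] for m in range(len(alpha))]
    norm = sum(sum(row) for row in joint)        # normalizer over (m, k)
    joint = [[v / norm for v in row] for row in joint]
    marginal = [sum(row) for row in joint]
    return joint, marginal
```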
\subsection*{Likelihood: Must-link Relationships} \label{subsec:appendix:MMML} The likelihood of the $\mathcal{S}(\cdot)$ is \begin{align} \label{eq:appendix:S} \mathcal{S}(\mathcal{X},\mathcal{Y},\mathcal{Z},\mathcal{M}, & \boldsymbol\vartheta ) := p(\mathcal{X},\mathcal{Y}, \mathcal{Z} | \mathcal{M}, \boldsymbol\vartheta ) \nonumber = \prod_{m=1}^M \prod_{m_k=1}^{m_K} \prod_{(i,j) \in \mathcal{M}} \bigg [ \\ & \alpha_m \Big [ \pi_{m_k} \mathcal{N} (\mathbf{x}_i |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \Big ]^{y_i^{m_k}} \Big [ \pi_{m_k} \mathcal{N} (\mathbf{x}_j |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \Big ]^{y_j^{m_k}} \bigg ]^{z_{\{ij\}}^m} . \end{align} \subsection*{Likelihood: Cannot-link Relationships} \label{subsec:appendix:MMCL} The likelihood of the $\mathcal{D}(\cdot)$ is \begin{eqnarray} p(z_a^m,z_b^m) &=& p(z_a^m|z_b^m)p(z_b^m) := p(z_b^m|z_a^m)p(z_a^m)~~ \label{eqn:jointbayes} \\ &=& \left\{ \begin{array}{ll} \cfrac {(\alpha_m)^{z_a^m}(\alpha_m)^{z_b^m}}{1-\sum_{m'=1}^M \alpha_{m'}^2} & z_a^m \neq z_b^m\\ 0 & z_a^m = z_b^m \label{eq:appendix:MM:propjoint}\\ \end{array} \right. . \end{eqnarray} and \begin{align} \label{eq:appendix:D} & \mathcal{D}(\mathcal{X},\mathcal{Y}, \mathcal{Z}, \mathcal{C},\boldsymbol\vartheta ) := p(\mathcal{X}, \mathcal{Y}, \mathcal{Z} | \mathcal{C}, \boldsymbol\vartheta ) \nonumber = \prod_{m=1}^M \prod_{m_k=1}^{m_K} \prod_{(a,b) \in \mathcal{C}} \\ & \Big [ \pi_{m_k} \mathcal{N} (\mathbf{x}_a |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \Big ]^{z_a^m~y_a^{m_k}} \Big [ \pi_{m_k} \mathcal{N} (\mathbf{x}_b |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \Big ]^{z_b^m~y_b^{m_k}} p(z_a^m,z_b^m) . 
\end{align} \subsection*{E-Step} \label{subsec:appendix:Estep} \subsection*{Unsupervised Scenario} \label{subsec:appendix:MM} The expectations for the $\mathcal{L}(\cdot)$ term are \begin{align} \label{eq:appendix:MM:Lterm:yz} \mathbb{E}_{z_n^m,y_n^{m_k} | \mathbf{x}_n}[z_n^m~y_n^{m_k}] & = p(z_n^m,y_n^{m_k} | \mathbf{x}_n) \\ \nonumber & = \frac {\alpha_m \pi_{m_k} \mathcal{N} (\mathbf{x}_n |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k})} {\sum_{m'=1}^M \sum_{m_k'=1}^{m_K} \alpha_{m'} \pi_{m_k'} \mathcal{N} (\mathbf{x}_n |\boldsymbol\mu_{m_k'}, \boldsymbol\Sigma_{m_k'})} , \end{align} and \begin{align} \label{eq:appendix:MM:Lterm:z} \mathbb{E}_{z_n^m | \mathbf{x}_n}[z_n^m] & = p(z_n^m | \mathbf{x}_n) \nonumber \\ & = \frac {\alpha_m \sum_{m_k=1}^{m_K} \pi_{m_k} \mathcal{N} (\mathbf{x}_n |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k})} {\sum_{m'=1}^M \sum_{m_k'=1}^{m_K} \alpha_{m'} \pi_{m_k'} \mathcal{N} (\mathbf{x}_n |\boldsymbol\mu_{m_k'}, \boldsymbol\Sigma_{m_k'})} . \end{align} \subsection*{Must-link Scenario} \label{subsec:appendix:ML} The expectations for the $\mathcal{S}(\cdot)$ term are \begin{align} \label{eq:appendix:MM:Sterm:zy} & \mathbb{E}_{z^m_{\{ij\}}, y_i^{m_k} | \mathbf{x}_i, \mathbf{x}_j} [z^m_{\{ij\}} y_i^{m_k}] = p(z^m_{\{ij\}},y_i^{m_k}|\mathbf{x}_i, \mathbf{x}_j) \nonumber \\ & = \frac { \alpha_m \pi_{m_k} \mathcal{N} (\mathbf{x}_i |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \sum_{m_{k'}=1}^{m_K} \pi_{m_{k'}} \mathcal{N} (\mathbf{x}_j |\boldsymbol\mu_{m_{k'}}, \boldsymbol\Sigma_{m_{k'}}) } { Z_{\mathcal{S}} } , \end{align} and \begin{align} \label{eq:appendix:MM:Sterm:z} & \mathbb{E}_{z^m_{\{ij\}} | \mathbf{x}_i, \mathbf{x}_j}[z^m_{\{ij\}}] = p(z^m_{\{ij\}} | \mathbf{x}_i, \mathbf{x}_j) \nonumber \\ & = \frac { \alpha_m \sum_{m_k=1}^{m_K} \sum_{m_{k'}=1}^{m_K} \pi_{m_k} \pi_{m_{k'}} \mathcal{N} (\mathbf{x}_i |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \mathcal{N} (\mathbf{x}_j |\boldsymbol\mu_{m_{k'}}, \boldsymbol\Sigma_{m_{k'}}) } { Z_{\mathcal{S}} } , \end{align} where
\begin{align} Z_{\mathcal{S}} = \sum_{m'=1}^M \sum_{{m'}_k=1}^{m_K} \sum_{{m'}_{k'}=1}^{m_K} \alpha_{m'} \pi_{{m'}_k} \pi_{{m'}_{k'}} \mathcal{N} (\mathbf{x}_i |\boldsymbol\mu_{{m'}_k}, \boldsymbol\Sigma_{{m'}_k}) \mathcal{N} (\mathbf{x}_j |\boldsymbol\mu_{{m'}_{k'}}, \boldsymbol\Sigma_{{m'}_{k'}}). \end{align} \subsection*{Cannot-link Scenario} \label{subsec:appendix:CL} The expectations for the $\mathcal{D}(\cdot)$ term are \begin{align} \label{eq:appendix:MM:Dterm:zy} & \mathbb{E}_{z^m_a, y_a^{m_k} | \mathbf{x}_a, \mathbf{x}_b} [z_a^m~y_a^{m_k}] = p(z_a^m, y_a^{m_k} | \mathbf{x}_a, \mathbf{x}_b) = \nonumber \\ & \frac { \pi_{m_k} \mathcal{N} (\mathbf{x}_a |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \sum_{m'=1}^{M} \sum_{{m'}_k=1}^{m_K} \pi_{{m'}_k} \mathcal{N} (\mathbf{x}_b |\boldsymbol\mu_{{m'}_k}, \boldsymbol\Sigma_{{m'}_k}) p(z_a^m,z_b^{m'}) } { Z_{\mathcal{D}} } , \end{align} and \begin{align} \label{eq:appendix:MM:Dterm:z} & \mathbb{E}_{z^m_a | \mathbf{x}_a, \mathbf{x}_b} [z_a^m] = p(z_a^m | \mathbf{x}_a, \mathbf{x}_b) = \nonumber \\ & \frac { \splitfrac { \sum_{m_k=1}^{m_K} \pi_{m_k} \mathcal{N} (\mathbf{x}_a |\boldsymbol\mu_{m_k}, \boldsymbol\Sigma_{m_k}) \sum_{m'=1}^{M} \sum_{{m'}_k=1}^{m_K} \pi_{{m'}_k} \mathcal{N} (\mathbf{x}_b |\boldsymbol\mu_{{m'}_k}, \boldsymbol\Sigma_{{m'}_k}) } { p(z_a^m,z_b^{m'}) } } { Z_{\mathcal{D}} } , \end{align} where \begin{align} Z_{\mathcal{D}} = \sum_{m''=1}^{M} \sum_{m'''=1}^{M} \sum_{{m''}_{k}=1}^{m_K} \sum_{{m'''}_{k}=1}^{m_K} \pi_{{m''}_{k}} \pi_{{m'''}_{k}} & \mathcal{N} (\mathbf{x}_a |\boldsymbol\mu_{{m''}_{k}}, \boldsymbol\Sigma_{{m''}_{k}}) \nonumber \\ & \mathcal{N} (\mathbf{x}_b |\boldsymbol\mu_{{m'''}_{k}}, \boldsymbol\Sigma_{{m'''}_{k}}) p(z_a^{m''},z_b^{m'''}) .
\end{align} \subsection*{M-Step} \label{subsec:appendix:Mstep} The mean and covariance in the $k$th cluster in the $m$th class are \begin{align} \label{eq:appendix:MM:mu} \boldsymbol\mu_{m_k} = \frac { \sum_{n} \ell_n^{m_k} \mathbf{x}_n + \sum_{(i,j) \in \mathcal{M}} \big[ s_i^{m_k} \mathbf{x}_i + s_j^{m_k} \mathbf{x}_j \big] + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^{m_k} \mathbf{x}_a + d_{b}^{m_k} \mathbf{x}_b \big] } { \sum_{n} \ell_n^{m_k} + \sum_{(i,j) \in \mathcal{M}} \big[ s_i^{m_k} + s_j^{m_k} \big] + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^{m_k} + d_{b}^{m_k} \big] } , \end{align} \begin{align} \label{eq:appendix:MM:sigma} \boldsymbol\Sigma_{m_k} = \frac { \splitfrac { \sum_{n} \ell_n^{m_k} \mathbf{S}_n^{m_k} + \sum_{(i,j) \in \mathcal{M}} \big[ s_i^{m_k} \mathbf{S}_i^{m_k} + s_j^{m_k} \mathbf{S}_j^{m_k} \big] + \sum_{(a,b) \in \mathcal{C}} \big[ } { d_{a}^{m_k} \mathbf{S}_a^{m_k} + d_{b}^{m_k} \mathbf{S}_b^{m_k} \big] } } { \sum_{n} \ell_n^{m_k} + \sum_{(i,j) \in \mathcal{M}} \big[ s_i^{m_k} + s_j^{m_k} \big] + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^{m_k} + d_{b}^{m_k} \big] } , \end{align} where \begin{align} \label{eq:appendis:MM:shortnotation} \ell_n^{m_k} &= p(z_n^m, y_n^{m_k}|\mathbf{x}_n), \nonumber \\ s_i^{m_k} &= p(z_{\{ij\}}^m, y_i^{m_k}|\mathbf{x}_i, \mathbf{x}_j), \nonumber \\ d_{a}^{m_k} &= p(z_a^m, y_a^{m_k}|\mathbf{x}_a, \mathbf{x}_b), \end{align} and \begin{align} \mathbf{S}_n^{m_k} = (\mathbf{x}_n - \boldsymbol\mu_{m_k})(\mathbf{x}_n - \boldsymbol\mu_{m_k})^T. \end{align} Because the cluster mixing parameters $\pi_{m_k}$ must sum to one, they can be determined using a Lagrange multiplier. We form \begin{align} \label{eq:appendix:MM:pi} \mathcal{Q}_{\mathcal{J}} + \lambda \bigg (\sum_{m_k=1}^{m_K} \pi_{m_k} - 1 \bigg ) \end{align} where $\lambda$ is the Lagrange multiplier.
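The mean, covariance, and cluster-proportion updates above all share the same weighted-average structure, with weights gathered from the unsupervised ($\ell$), must-link ($s$), and cannot-link ($d$) responsibilities. A minimal 1-D sketch (our own illustration; `contrib` is an assumed name for a flat list of (point, weight) contributions to one cluster):

```python
def m_step_cluster(contrib):
    """Weighted mean and variance for one cluster (1-D sketch).

    contrib: list of (x, w) pairs, where w is the responsibility of x for
    this cluster -- ell_n for unsupervised points, s_i/s_j for must-link
    endpoints, and d_a/d_b for cannot-link endpoints.
    """
    w_sum = sum(w for _, w in contrib)
    mu = sum(w * x for x, w in contrib) / w_sum
    var = sum(w * (x - mu) ** 2 for x, w in contrib) / w_sum
    return mu, var

def m_step_pi(weights_per_cluster):
    """Cluster mixing proportions within one class: the responsibility
    mass accumulated by each cluster, normalized to sum to one."""
    totals = [sum(ws) for ws in weights_per_cluster]
    z = sum(totals)
    return [t / z for t in totals]
```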
Taking the derivative of equation~(\ref{eq:appendix:MM:pi}) with respect to $\pi_{m_k}$ gives \begin{align} \label{eq:appendix:MM:pi2} \frac { \sum_{n=1}^N \ell_n^{m_k} + \sum_{(i,j) \in \mathcal{M}} \big[ s_i^{m_k} + s_j^{m_k} \big] + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^{m_k} + d_{b}^{m_k} \big] } { \pi_{m_k} } + \lambda = 0 . \end{align} By taking the derivative of equation~(\ref{eq:appendix:MM:pi}) with respect to $\lambda$ and setting it equal to zero, we recover $\sum_{m_k=1}^{m_K} \pi_{m_k}=1$ and use it to eliminate $\lambda$ in equation~(\ref{eq:appendix:MM:pi2}). The mixing parameter for the $k$th cluster in the $m$th mixture is given by \begin{align} \label{eq:HGMM:pi3} \pi_{m_k} = \frac { \sum_{n=1}^N \ell_n^{m_k} + \sum_{(i,j) \in \mathcal{M}} \big[ s_i^{m_k} + s_j^{m_k} \big] + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^{m_k} + d_{b}^{m_k} \big] } { \sum_{m_k=1}^{m_K} \Big ( \sum_{n=1}^N \ell_n^{m_k} + \sum_{(i,j) \in \mathcal{M}} \big[ s_i^{m_k} + s_j^{m_k} \big] + \sum_{(a,b) \in \mathcal{C}} \big[ d_{a}^{m_k} + d_{b}^{m_k} \big] \Big ) } . \end{align} Lastly, the class mixing parameters $\alpha_m$ are estimated as in equation~(\ref{eqn:mixing_opt}). \section{Introduction} Semi-supervised learning (SSL) has become a topic of significant recent interest in the context of applied machine learning, where per-class distributions are difficult to automatically separate due to limited sampling and/or limitations of the underlying mathematical model. Several applications, including content-based retrieval~\cite{yang2012multimedia}, email classification~\cite{kyriakopoulou2013impact}, gene function prediction~\cite{nguyen2012detecting}, and natural language processing~\cite{sirts2013minimally,le2014semi}, benefit from the availability of user-defined/application-specific knowledge in the presence of large amounts of complex unlabeled data, where labeled observations are often limited and expensive to acquire.
In general, SSL algorithms fall into two broad categories: \textit{classification} and \textit{clustering}. Semi-supervised classification aims to improve supervised classification when a small amount of labeled data is available alongside a large amount of unlabeled data~\cite{zhu2006semi,chapelle2006semi}. For example, in semi-supervised email classification, one may wish to classify a constantly growing set of email messages as spam/nonspam given a limited number of messages classified by the user~\cite{kyriakopoulou2013impact}. On the other hand, semi-supervised clustering (SSC), also known as {\it constrained clustering}~\cite{basu2008constrained}, aims to provide better performance for {\it unsupervised clustering} when user-based information about the \textit{relationships} within a small subset of the observations becomes available. Such relations would involve data points belonging to the same or different classes. For example, a language-specific grammar is necessary in cognitive science when individuals are attempting to learn a foreign language efficiently. Such a grammar provides rules for prepositions that can be considered as user-defined knowledge for improving the ability to learn a new language. To highlight the role of user-defined relationships for learning an application-specific data distribution, we consider the example in Figure \ref{fig:wrongModel}(a), which shows a maximum likelihood model estimate of a Gaussian mixture that is well supported by the data. However, an application may benefit from another good (but not optimal w.r.t. likelihood) solution, as in Figure~\ref{fig:wrongModel}(b), which is less supported by the data alone and cannot be obtained without some information in addition to the raw data points.
With a limited amount of \textit{labeled} data and a large amount of unlabeled data, it can be difficult to guide the learning algorithm in the application-specific direction~\cite{zhu2006semi,cozman2003semi,loog2014semi,yang2011effect}, because the performance of a generative model depends on the ratio of labeled to unlabeled data. In contrast, previous works have shown that SSC achieves the estimate in Figure~\ref{fig:wrongModel}(b), given the observed data and a small number of user-defined \textit{relationships} that would \textit{guide} the parameter estimation process toward a model~\cite{basu2008constrained} that is not only informed by the data, but also by this small amount of user input. This paper addresses the problem of incorporating such user-specific relations into a clustering problem in an effective, general, and reliable manner. \begin{figure} \center \scalebox{.8} { \twoAcrossLabels {./fig/example11.png}{./fig/example22.png} {(a) Mathematically Ideal Model} {(b) Application-Specific Model} } \vspace{-5pt} \caption [Generative model clustering example] { {\bf Generative model clustering example}: Because of finite sampling and modeling limitations, a distribution of points may give rise to optimal solutions that, depending on the model and the data, (a) are not well suited to the application and/or (b) are not consistent with the underlying generative model, which may require domain knowledge from a user. } \label{fig:wrongModel} \vspace{-5pt} \end{figure} Clustering data using a generative framework has several useful, important properties, including compact representations, parameter estimation for subsequent statistical analysis, and the ability to induce classifications of unseen data~\cite{zhu2005harmonic}. For the problem of estimating the parameters of generative models, the expectation-maximization (EM) algorithm~\cite{dempster1977maximum} is particularly effective.
The EM formulation is guaranteed to give maximum-likelihood (ML) estimates in the unimodal case and local maxima in likelihood otherwise. Therefore, EM formulations of parameter estimation that properly account for user input in the context of SSC are of interest and are one of the contributions of this paper. A flexible and efficient way to incorporate user input into SSC is in the form of \textit{relations} between observed data points, in order to define statistical relationships among observations (rather than explicit labeling, as would be done in classification). A typical example would be for a user to examine a small subset of data and decide that some pairs of points should be in different classes, referred to as a \textit{cannot-link} relation, and that other pairs of data points should be in the same class, i.e., \textit{must-link}. Using these basic primitives, one may build up more complex relationships among sets of points. The concept of pairwise links was first applied to centroid-based clustering approaches, for instance, in the form of \textit{constrained} K-means~\cite{wagstaff2001constrained}, where each observation is assigned to the nearest cluster in a manner that avoids violating constraints. Although some progress has been made in developing mechanisms for incorporating this type of user input into clustering algorithms, the need remains for a systematic, general framework that generalizes well from a limited amount of user knowledge. Most state-of-the-art techniques propose adding {\em hard constraints}~\cite{shental2004computing}, where data points that violate the constraints do not contribute (i.e., all pairwise constraints must be satisfied), or {\em soft penalties}~\cite{lu2004semi}, which penalize the clustering results based on the number of violated constraints. Both hard constraints and soft penalties can lead to a lack of generality and suboptimal solutions.
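For reference, the hard-constraint assignment step of constrained K-means mentioned above can be sketched as follows. This is a simplified 1-D version in the spirit of Wagstaff et al., not their published implementation; the function and parameter names are our own.

```python
def violates(i, c, assign, must, cannot):
    """True if putting point i in cluster c breaks a must-link or
    cannot-link constraint with a point assigned earlier in the pass."""
    for a, b in must:
        j = b if a == i else (a if b == i else None)
        if j is not None and assign[j] is not None and assign[j] != c:
            return True
    for a, b in cannot:
        j = b if a == i else (a if b == i else None)
        if j is not None and assign[j] == c:
            return True
    return False

def cop_assign(points, centroids, must=(), cannot=()):
    """One constrained assignment pass: each point takes the nearest
    centroid whose choice does not violate a constraint; None is kept
    where no feasible cluster exists (the hard-constraint failure case)."""
    assign = [None] * len(points)
    for i, x in enumerate(points):
        for c in sorted(range(len(centroids)), key=lambda c: abs(x - centroids[c])):
            if not violates(i, c, assign, must, cannot):
                assign[i] = c
                break
    return assign
```

Note how a single cannot-link pair can force a point away from its nearest centroid, which is exactly the "merely assigning points" behavior criticized in the text: the centroids themselves never see the user input.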
For instance, in constrained K-means, introducing constraints by merely assigning a relatively small number of points to appropriate centroids does not ensure that the models (centroids) adequately respond to this user input. In this paper, we propose a novel, generative approach for clustering with pairwise relations that incorporates these relations into the estimation process in a precise manner. The parameters are estimated by optimizing the data likelihood under the {\em assumption} that individual data points are either independent samples (as in the unsupervised case) or that they have a nontrivial joint distribution, which is determined by user input. The proposed model explicitly incorporates the pairwise relationship as a property of the generative model that guides the parameter estimation process to reflect user preferences and estimates the global structure of the underlying distribution. Moreover, the proposed model is represented as a probability distribution that can take virtually any form. The results in this paper demonstrate that the proposed optimal strategy pays off, and that it outperforms the state-of-the-art on real-world datasets with significantly less user input. \section*{Appendix A. Mixture of Distributions} \input{formulationHGMMICML.tex} \bibliographystyle{spmpsci} \section{Related Work} \label{sec:relate} Semi-supervised clustering methods typically fall into one of two categories~\cite{basu2008constrained}: \textit{distance-based} methods and \textit{constraint-based} methods. The distance-based approaches combine conventional clustering algorithms with distance metrics that are designed to satisfy the information given by user input \cite{xing2002distance,bar2005learning,weinberger2005distance,cohn2003semi}. The metrics effectively embed the points into spaces where the distances between constrained points are made larger or smaller to reflect the user-specified relationships.
On the other hand, constraint-based algorithms incorporate the pairwise constraints into the clustering objective function, to either enforce the constraints or penalize their violation. For example, Wagstaff et al. proposed the constrained K-means algorithm, which enforces user input as hard constraints in a nonprobabilistic manner within the step of the algorithm that assigns points to classes~\cite{wagstaff2001constrained}. Basu et al. proposed a probabilistic framework based on a hidden Markov random field, with ad hoc soft penalties, which integrated metric learning with the constrained K-means approach, optimized by an EM-like algorithm~\cite{basu2004probabilistic}. This work can also be applied to a kernel feature space, as in \cite{kulis2009semi}. Allab and Benabdeslem adapted topological clustering to pairwise constraints using a self-organizing map in a deterministic manner~\cite{allab2011constraint}. Semi-supervised clustering methods with generative, parametric clustering approaches have also been augmented to accommodate user input. Lu and Leen proposed a penalized clustering algorithm using Gaussian mixture models (GMM) by incorporating the pairwise constraints as a prior distribution over the latent variable directly, resulting in a computationally challenging evaluation of the posterior~\cite{lu2004semi}. Such a penalization-based formulation results in a model with no clear generative interpretation and a stochastic expectation step that requires Gibbs sampling. Shental et al. proposed a GMM with equivalence constraints, which specify whether data points come from the same or different sources. However, for the {\em cannot-link} case, they used a Markov network to describe the dependence between a pair of latent variables and sought the optimal parameters by gradient ascent~\cite{shental2004computing}. Their results showed that the cannot-link relationship was unable to impact the final parameter estimation (i.e., such a relation was ineffective).
Further, they imposed user input as \textit{hard} constraints where data points that violate the constraints did not contribute to the parameter estimation process. A similar approach~\cite{law2005model} treats the constraint as an additional random variable, which increases the complexity of the optimization process. Further, their approach focused only on {\em must-link}. In this paper, we propose a novel solution for incorporating user-defined data relationships into clustering problems, so that cannot-link and must-link relations can be included in a unified framework and computed efficiently using an EM algorithm with very modest computational demands. Moreover, the proposed formulation is general in that it can 1) accommodate any kind of relation that can be expressed as a joint probability and 2) incorporate, in principle, any probability distribution (generative model). For GMMs, however, this formulation results in a particularly attractive algorithm that entails a closed-form solution for the mean and covariance and a relatively inexpensive, iterative, constrained, nonlinear optimization for the mixing parameters. Recently, EM-like algorithms for SSL (and clustering in particular) have received significant attention in natural language processing \cite{conf/nips/GracaGT07,mann2010generalized}. Graca et al. proposed an EM approach with a posterior constraint that incorporates the expected values of specially designed auxiliary functions of the latent variables to influence the posterior distribution to favor user input~\cite{conf/nips/GracaGT07}. Because of the lack of a probabilistic interpretation, the expectation step is not influenced by user input, and the results are not optimal. Unlike the generative approach, graph-based methods group the data points according to similarity and do not necessarily assume an underlying distribution.
Graph-based, semi-supervised clustering methods have been demonstrated to be promising when user input is available~\cite{yi2013semi,wang2010flexible,xiong2012spectral}. However, graph-based methods are not ideal classifiers when a new data point is presented, due to their transductive property, i.e., they do not learn a general rule from the specific training data~\cite{gammerman1998learning,zhu2005harmonic}. In order to classify a new data point, other than rebuilding the graph with the new data point, one likely solution is to build a separate inductive model on top of the output of the graph-based method (e.g., K-means or GMM); user input would need to be incorporated into this new model. The work in this paper is distinct from the aforementioned works in the following aspects: \begin{itemize} \item We present a {\em fully} generative approach, rather than a heuristic approach of imposing hard constraints or adding ad hoc penalties. \item The proposed generative model reflects user preferences while maintaining a probabilistic interpretation, which allows it to be generalized to take advantage of {\em alternative} density models or optimization algorithms. \item The proposed model clearly deals with the must-link {\em and} cannot-link cases in a unified framework and demonstrates that solutions using must-link and cannot-link together or independently are tractable and effective. \item Instead of pairwise constraints, the statistical interpretation of pairwise relationships allows the model estimation to converge to a distribution that follows user preferences with {\em less} domain knowledge. \item In the proposed algorithm, the parameter estimation is very similar to a standard EM in terms of ease of implementation and efficiency. \end{itemize}
\section{Introduction} \emph{Conflict-free colorings}, or CF-colorings for short, were introduced by Even~\emph{et~al.}\xspace~\cite{even-cf-03} and Smorodinsky~\cite{thesis-smorodinsky} to model frequency assignment to base stations in wireless networks. In the basic setting one is given a set~$S$ of objects in the plane---often disks are considered---and the goal is to assign a color to each object such that the following holds: for any point~$p$ in the plane such that the set~$S_p:=\{D\in S\mid p\in D\}$ of objects containing~$p$ is non-empty, $S_p$ must contain an object whose color is different from the colors of the other objects in~$S_p$. Even~\emph{et~al.}\xspace~proved, among other things, that any set of disks admits a CF-coloring with~$O(\log n)$ colors. This bound is tight in the worst case. Since then many different geometric variants of CF-colorings have been studied. For example, Har-Peled and Smorodinsky~\cite{harpeled-cf-05} generalized the result to objects with near-linear union complexity, while Even~\emph{et~al.}\xspace~\cite{even-cf-03} considered the dual version of the problem. See the survey by Smorodinsky~\cite{S-survey-10} for an overview. A restricted type of a CF-coloring is a \emph{unique-maximum} (\emph{UM}) \emph{coloring}, in which the colors are identified with integers, and the maximum color in the set~$S_p$ is required to be unique. Another type of coloring, often used as an intermediate step to obtain a CF-coloring, is \emph{non-monochromatic}~(\emph{NM}). In an NM-coloring---sometimes called \emph{a proper coloring}---we only require that, for any point~$p$ in the plane, if the set~$S_p$ contains at least two elements, not all of them have the same color. Smorodinsky \cite{smor-geomCF-06} showed that if an NM-coloring of~$k$ elements using~$\beta(k)$ colors exists for every~$k$, one can CF-color $n$ elements with~$O(\beta(n) \log n)$ colors. 
CF- or NM-coloring objects in $\reals^1$ is significantly easier than in the planar case. In $\reals^1$ the objects become intervals, assuming we require the objects to be connected, and a folklore result states that any set of intervals in $\reals^1$ can be CF-colored with three colors and NM-colored with two colors. (This is achieved by the \emph{chain methods}, which we describe below.) Thus, unlike in the planar case, the number of colors for a CF- or NM-coloring of intervals in $\reals^1$ does not depend on the number of intervals to be colored. \medskip We are interested in generalizations of this result to 1-dimensional spaces that have a more complex topology than~$\reals^1$. To this end we consider \emph{network spaces}: 1-dimensional spaces with the topology of an arbitrary graph. It is convenient to view a network space~$\mathcal{N}\xspace$ as being embedded in~$\reals^2$, although the embedding is actually immaterial. In this view the \emph{nodes} of~$\mathcal{N}\xspace$ are points in $\reals^2$, and the \emph{edges} are simple curves connecting pairs of nodes and otherwise disjoint. We let~$d\colon\mathcal{N}\xspace^2 \to \mathbb{R}\xspace_+$ denote the geodesic distance on $\mathcal{N}\xspace$. In other words, for two points $p,q\in \mathcal{N}\xspace$---these points may lie in the interior of an edge---we let $d(p,q)$ denote the minimum Euclidean length of any path connecting $p$ to $q$ in $\mathcal{N}\xspace$. We consider two special types of network spaces, \emph{tree spaces} and \emph{planar network spaces}, whose topology is that of a tree and a planar graph, respectively. The objective of our paper is to investigate the number of colors needed to CF- or NM-color a set~$\Objects$ of $n$ objects in a network space, where we consider various classes of connected objects. 
(Here CF- and NM-colorings are defined as above: in a CF-coloring, for any point $p\in \mathcal{N}\xspace$ the set $S_p := \{ o\in \mathcal{A}\xspace \mid p\in o \}$ of objects containing~$p$ should have an object with a unique color when it is non-empty, and in an NM-coloring the set $S_p$ should not be monochromatic when it consists of at least two objects.) In particular, we consider balls on $\mathcal{N}\xspace$---the~\emph{ball centered at~$p\in\mathcal{N}\xspace$ of radius~$r$} is defined as $B(p,r):= \{q\in\mathcal{N}\xspace \mid d(p,q)\leq r\}$--- and, for tree spaces, we also consider arbitrary connected subsets as objects. Note that, if the given network space is a single curve, then our setting, both for balls and for connected subspaces, reduces to coloring intervals in~$\reals^1$. The~main question we want to answer is: How does the maximum number of colors needed to NM- or CF-color a set $\Objects$ of objects in a network space depend on the complexity of the network space and of the objects to be colored? \mypara{Our results.} We assume without loss of generality that the nodes in our network space either have degree~1 or degree at least~3---there are no nodes of degree~2. Nodes of degree~1 are also called \emph{leaves}, and nodes of degree at least~3 are also called \emph{internal nodes}. We start by considering colorings on a tree space, which we denote by~$\TreeSpace$. Let~$\mathcal{A}\xspace$ be the set of $n$ objects that we wish to color, where each object $T\in \mathcal{A}\xspace$ is a connected subset of $\TreeSpace$. Note that each such object is itself also a tree. From now on we refer to the objects in~$\mathcal{A}\xspace$ as ``trees,'' and always use ``tree space'' when talking about~$\TreeSpace$. Observe that internal nodes of a tree are necessarily internal nodes of~$\TreeSpace$, but a tree leaf may lie in the interior of an edge of~$\TreeSpace$. 
We will investigate the CF- and NM-chromatic numbers of trees in a tree space as a function of the following parameters: \begin{itemize} \item $k$, the number of leaves of the tree space~$\TreeSpace$; \item $\ell$, the maximum number of leaves of any tree in $\Objects$; \item $n$, the number of objects in $\Objects$. \end{itemize} We define the CF-chromatic number~$\CFCN{tree}{trees} (k, \ell; n)$ as the minimum number of colors sufficient to CF-color any set $\mathcal{A}\xspace$ of $n$ trees of at most $\ell$ leaves each, in a tree space of at most~$k$ leaves. The NM-chromatic number~$\NMCN{tree}{trees} (k, \ell; n)$ is defined similarly. Rows~3 and~4 in Table~\ref{table:overview} give our bounds on these chromatic numbers. Notice that the upper bounds do not depend on $n$. In other words, any set of trees in a tree space can be colored with a number of colors that depends only on the complexity of the tree space~$\TreeSpace$ and of the trees in~$\Objects$. (Obviously the number of objects, $n$, is an upper bound on these chromatic numbers as well. To avoid cluttering the statements, we usually omit this trivial bound.)
\begin{table} {\small \begin{center} \begin{tabular}{l l l r r l} Space & Objects & \ Coloring & Upper Bound & Lower Bound & \ \ Reference \\[2pt] \hline \hline \\[-8pt] Line & Intervals & \ NM & $2$ & $2$ & \ \ Folklore \\[2pt] \hline \\[-8pt] Line & Intervals & \ CF & $3 $ & $3$ & \ \ Folklore \\[2pt] \hline \hline \\[-8pt] Tree & Trees & \ NM & $\min\left(\ell +1, 2 \sqrt{6k} \right)$ & $\min\left(\ell +1, \left\lfloor \frac{1+ \sqrt{1+8k}}{2} \right\rfloor \right)$ & \ \ Section \ref{sec:gen_subgraph_tree} \\[2pt] \hline \\[-8pt] Tree & Trees & \ CF & $O(\ell \log k)$ & $\left\lfloor \log_2 \min (k,n) \right\rfloor$ & \ \ Section \ref{sec:gen_subgraph_tree} \\[2pt] \hline \hline \\[-8pt] Tree & Balls & \ NM & $2$ & $2$ & \ \ Section \ref{sec:balls-on-trees}\\[2pt] \hline \\[-8pt] Tree & Balls & \ CF & $\lceil \log t \rceil +3 $ & $\lceil \log (t+1) \rceil $ & \ \ Section \ref{sec:balls-on-trees} \\[2pt] \hline \hline \\[-8pt] Planar & Balls & \ NM & $4$ & $4$ & \ \ Section \ref{sec:balls-on-networks}\\[2pt] \hline \\[-8pt] Planar & Balls & \ CF & $\lceil \log_{4/3} t \rceil +3 $ & $\lceil \log (t+1) \rceil $ & \ \ Section \ref{sec:balls-on-networks} \\[2pt] \hline \end{tabular} \end{center} \caption{Overview of our results. The folklore result for intervals on the line (that is, in $\reals^1$) is explained below.} \label{table:overview} } \end{table} We also study balls in tree spaces. Here it turns out to be more convenient to not use $k$ (the number of leaves) as the complexity measure of $\TreeSpace$, but \begin{itemize} \item $t$, the number of internal nodes of~$\TreeSpace$. \end{itemize} We are interested in the chromatic numbers~$\CFCN{tree}{balls} (t; n)$ and $\NMCN{tree}{balls} (t; n)$. Rows~5 and~6 of Table~\ref{table:overview} state our bounds for these chromatic numbers. After studying balls in tree spaces, we turn our attention to balls in planar network spaces. 
Our bounds on the corresponding chromatic numbers $\CFCN{planar}{balls} (t; n)$ and $\NMCN{planar}{balls} (t; n)$ are contained in rows~7 and~8 of Table~\ref{table:overview}. \mypara{Related results.} Above we considered CF- and NM-colorings in a geometric setting, but they can also be defined more abstractly. A CF-coloring on a hypergraph $\mathcal{H}=(V,E)$ is a coloring of the vertex set~$V$ such that, for every (non-empty) hyperedge $e\in E$, there is a vertex in $e$ whose color is different from that of the other vertices in~$e$. In an NM-coloring any hyperedge with at least two vertices should not be monochromatic. Smorodinsky's survey~\cite{S-survey-10} also gives an overview of results on CF-colorings in this abstract setting. The basic geometric version mentioned above---coloring objects in $\reals^2$ with respect to points---can be phrased in terms of hypergraphs by letting the objects be the node set $V$ and, for each point $p$ in the plane, creating a hyperedge $e:=S_p$. Another avenue for constructing a hypergraph~$\mathcal{H}$ to be colored is to start with a graph~$\mathcal{N}\xspace$, let the vertices of $\mathcal{H}$ be the nodes of $\mathcal{N}\xspace$ and create hyperedges for (the sets of vertices of) certain subgraphs of~$\mathcal{N}\xspace$. For example, Pach and Tardos~\cite{PT-cf-09} considered the case where hyperedges are all the node neighborhoods. For this case, Abel~\emph{et~al.}\xspace~\cite{abel-etal-17} recently showed that a planar graph can always be CF-colored with only three colors, if we allow some nodes to be uncolored. (Otherwise, we can use a dummy color, increasing the number of colors to four.) As another example, we let the hyperedges be induced by all the paths in the graph. This setting is equivalent to an older notion of \emph{node ranking} \cite{bod-vertexrank-94}, or \emph{ordered coloring}~\cite{katch-orderedcol-95}. Note that in the above results the goal is to color the nodes of a graph.
We, on the other hand, do not want to color nodes, but objects (connected subsets) in a network space (which has a graph topology, but is a geometric object). \mypara{Preliminaries: the chain methods.} We start by describing a folklore technique, called the \emph{chain method}, to color intervals in~$\reals^1$ in a non-monochromatic fashion using at most two colors. We order the intervals left-to-right by their left endpoints (in case of ties, we take the longest interval first) and color them in this order using the so-called \emph{active color}, which is defined as follows. We start with blue as the active color. We color the first interval, then change the active color to red. We then use the following procedure: we color the next interval $I$ in the ordering using the active color, then, if the right endpoint of $I$ is not contained in any other already colored interval, we change the active color from red to blue or from blue to red. To obtain a CF-coloring the chain method proceeds as follows. First, the interval with the leftmost left endpoint---in case of ties, the longest such interval---is colored blue. Next, the following procedure is repeated until we get stuck: Let $I$ be the interval colored last. Among all intervals whose left endpoint lies in~$I$ and that are not contained in it, color the one extending farthest to the right red (if $I$ is blue) or blue (if $I$ is red). This creates a chain of alternating blue and red intervals. Each remaining interval is now either completely covered by the already colored intervals, or it lies completely to the right of them. The former intervals are given a dummy color (grey), the latter intervals are colored by applying the above procedure again. \begin{lemma}\label{lem:chain-methods} There is an NM-coloring of intervals on a line using two colors, and a CF-coloring using three colors. \end{lemma} \begin{proof} We prove that the latter coloring is conflict-free; the proof for the NM-coloring is similar.
Consider a point~$p$ contained in an interval. It is clear that~$p$ is contained in either a red or a blue interval. We suppose without loss of generality that it is contained in a red interval~$I_0=[a_0,b_0]$. We show it is not contained in another red interval. Suppose for a contradiction that it is contained in another red interval~$I_1=[a_1,b_1]$ with~$a_1\geqslant a_0$. Then~$p$ must also be contained in a blue interval~$I_2=[a_2,b_2]$, with~$a_1 \geqslant a_2 \geqslant a_0$. Moreover, we have that~$b_2 < b_1$. Thus,~$I_1$ starts in~$I_0$ and extends farther than~$I_2$, hence should have been chosen to be colored blue instead of~$I_2$, which is a contradiction. Therefore,~$p$ is always contained in at most one red interval, and similarly, in at most one blue interval, and is always contained in a blue or in a red interval. Thus the coloring is conflict-free. \end{proof} \section{Trees on Tree Spaces}\label{sec:gen_subgraph_tree} \subsection{The upper bound} \mypara{Overview of the coloring procedure.} Let~$\mathcal{T}\xspace$ be a tree space with~$k$ leaves and let~$\mathcal{A}\xspace$ be a set of~$n$ trees in~$\TreeSpace$, each with at most~$\ell$ leaves. We describe an algorithm that NM-colors $\Objects$ in two phases: first, we select a subset~$\Core\subseteq \mathcal{A}\xspace$ of size at most~$6k-12$ and color it with at most~$\min\left(\ell +1, 2 \sqrt{6k} \right)$ colors. In the second phase we extend this coloring to the whole set~$\Objects$ without using new colors. An edge $e$ of $\TreeSpace$ is a \emph{leaf edge} if it is incident to a leaf; the remaining edges are \emph{internal}. We define $\Core\subseteq \mathcal{A}\xspace$ as the set of at most~$6k-12$ trees selected as follows.
For every pair $(e,v)$, where $e$ is an edge of~$\mathcal{T}\xspace$ and $v$ is an endpoint of $e$ that is not a leaf of~$\mathcal{T}\xspace$, we choose two trees containing~$v$ and extending the furthest into~$e$ (if they exist), that is, trees~$T$ of $\mathcal{A}\xspace$ containing~$v$ for which~$\mbox{length}(T\cap e)$ is maximal, and place them in~$\Objects(e,v)$. If two or more trees of $\mathcal{A}\xspace$ fully contain~$e$, then~$\Objects(e,v)$ contains two of them, chosen arbitrarily. If a tree contains an internal edge~$e$ fully, it may be chosen by both endpoints. We now define~$\Objects(e):=\Objects(e,u) \cup \Objects(e,v)$ for each internal edge~$e=uv$, $\Objects(e):= \Objects(e,v)$ for each leaf edge~$e=uv$ with non-leaf endpoint~$v$, and $\Core:= \bigcup \Objects(e)$, with the union taken over all edges~$e$ of~$\TreeSpace$. Then~$\Objects(e)$ contains at most four trees for any internal edge~$e$ and at most two trees for any leaf edge~$e$. If $\TreeSpace$ has at most $k$ leaves, it has at most~$k$ leaf edges and at most $k-3$ internal edges; recall that $\TreeSpace$ has no degree-two nodes. Thus $|\Core|\leq 6k-12$, as claimed. We first explain how to color~$\Core$. \mypara{Coloring~$\Core$.} We color~$\Core$ in two steps. Let~$T \in \Core$ be a tree. We define $E(T)$ to be the set of edges~$e$ of~$\TreeSpace$ with~$T\in \Objects(e)$. Firstly, if~$\ell > 2 \sqrt{6k}$ we select all subtrees~$T$ with~$|E(T)| \geqslant \sqrt{6k}$, and give each of them a unique color. Since $\sum_{e} |\Objects(e)| \leq 6k-12$, there are at most~$\sqrt{6k}-1$ such trees, so we use at most~$\sqrt{6k}-1$ colors. For each uncolored~$T\in\Core$, we create a new tree~$T'$, defined as the smallest tree containing~$\bigcup_{e\in E(T)} e\cap T$; see Fig.~\ref{fig:trimming-trees}. $T'$ has at most~$\ell':=\min(\ell, \sqrt{6k})$ leaves because $|E(T)|<\sqrt{6k}$. Define~$\Core':=\{T'\mid T\in \Core \}$. 
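To spell out the two counts used above: since $\TreeSpace$ has no degree-two nodes, its at most~$k$ leaf edges contribute at most two trees each and its at most~$k-3$ internal edges contribute at most four trees each, so
\[
  |\Core| \;\leqslant\; \sum_{e} |\Objects(e)| \;\leqslant\; 2k + 4(k-3) \;=\; 6k-12.
\]
Moreover, each tree~$T\in\Core$ is counted in~$|E(T)|$ of the sets~$\Objects(e)$, so the number of trees with~$|E(T)| \geqslant \sqrt{6k}$ is at most~$(6k-12)/\sqrt{6k} < \sqrt{6k}$, which is why at most~$\sqrt{6k}-1$ unique colors suffice in the first coloring step.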
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.655] \node at (0,0) (1) {}; \node at (1,2) (2) {}; \node at (-0.4,1.3) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (0.6,-1.9) (5) {}; \node at (-0.2,-1.8) (12) {}; \node at (1.3,-1.6) (13) {}; \node at (0.7,0.9) (6) {}; \node at (0.7,-0.3) (7) {}; \node at (-2,0.4) (8) {}; \node at (-2.9,-0.3) (14) {}; \node at (-2.7,1) (15) {}; \node at (-1.7,-1.5) (9) {}; \node at (-1.5,1.9) (10) {}; \node at (0.2,2.6) (11) {}; \draw[line width=1.3mm, red!60] (1.center) -- (0.3,-0.95); \draw[line width=1.3mm, red!60] (1.center) -- (4.center); \draw[line width=1.3mm, red!60] (1.center) -- (6.center) -- (2.center); \draw[line width=1.3mm, red!60] (3.center) -- (-0.95,1.6); \draw[line width=1.3mm, red!60] (4.center) -- (-1.6,-0.15); \draw[line width=1.3mm, red!60] (6.center) -- (3.center); \draw[thick] (4.center) -- (1.center) -- (5.center); \draw[thick] (1.center) -- (6.center) -- (2.center); \draw[thick] (8.center) -- (4.center) -- (9.center); \draw[thick] (11.center) -- (3.center) -- (10.center); \draw[thick] (6.center) -- (3.center); \draw[thick] (7.center) -- (1.center); \draw[thick] (12.center) -- (5.center) -- (13.center); \draw[thick] (15.center) -- (8.center) -- (14.center); \foreach \i in {1,...,15}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.655] \node at (0,0) (1) {}; \node at (1,2) (2) {}; \node at (-0.4,1.3) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (0.6,-1.9) (5) {}; \node at (-0.2,-1.8) (12) {}; \node at (1.3,-1.6) (13) {}; \node at (0.7,0.9) (6) {}; \node at (0.7,-0.3) (7) {}; \node at (-2,0.4) (8) {}; \node at (-2.9,-0.3) (14) {}; \node at (-2.7,1) (15) {}; \node at (-1.7,-1.5) (9) {}; \node at (-1.5,1.9) (10) {}; \node at (0.2,2.6) (11) {}; \draw[line width=1.3mm, red!60] (1.center) -- (0.3,-0.95); \draw[line width=1.3mm, red!60] (6.center) -- (3.center); \draw[line width=1.3mm, red!60] (4.center) -- (-1.6,-0.15); \draw[thick] (4.center) 
-- (1.center) -- (5.center); \draw[thick] (1.center) -- (6.center) -- (2.center); \draw[thick] (8.center) -- (4.center) -- (9.center); \draw[thick] (11.center) -- (3.center) -- (10.center); \draw[thick] (6.center) -- (3.center); \draw[thick] (7.center) -- (1.center); \draw[thick] (12.center) -- (5.center) -- (13.center); \draw[thick] (15.center) -- (8.center) -- (14.center); \foreach \i in {1,...,15}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.655] \node at (0,0) (1) {}; \node at (1,2) (2) {}; \node at (-0.4,1.3) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (0.6,-1.9) (5) {}; \node at (-0.2,-1.8) (12) {}; \node at (1.3,-1.6) (13) {}; \node at (0.7,0.9) (6) {}; \node at (0.7,-0.3) (7) {}; \node at (-2,0.4) (8) {}; \node at (-2.9,-0.3) (14) {}; \node at (-2.7,1) (15) {}; \node at (-1.7,-1.5) (9) {}; \node at (-1.5,1.9) (10) {}; \node at (0.2,2.6) (11) {}; \draw[line width=1.3mm, red!60] (1.center) -- (0.3,-0.95); \draw[line width=1.3mm, red!60] (1.center) -- (4.center); \draw[line width=1.3mm, red!60] (1.center) -- (6.center) -- (3.center); \draw[line width=1.3mm, red!60] (4.center) -- (-1.6,-0.15); \draw[thick] (4.center) -- (1.center) -- (5.center); \draw[thick] (1.center) -- (6.center) -- (2.center); \draw[thick] (8.center) -- (4.center) -- (9.center); \draw[thick] (11.center) -- (3.center) -- (10.center); \draw[thick] (6.center) -- (3.center); \draw[thick] (7.center) -- (1.center); \draw[thick] (12.center) -- (5.center) -- (13.center); \draw[thick] (15.center) -- (8.center) -- (14.center); \foreach \i in {1,...,15}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \end{center} \caption{The original tree~$T$ (left), the set~$\bigcup_{e\in E(T)} e\cap T$ (middle), and the new tree~$T'$ (right). }\label{fig:trimming-trees} \end{figure} The second step is to color~$\Core'$. We need the following lemma, which shows that an NM-coloring of~$\Core'$ carries over to~$\Core$. 
\begin{lemma}\label{lem:c'_to_c} Any NM-coloring of~$\Core'$ corresponds to an NM-coloring of~$\Core$, that is, if we give each tree~$T\in \Core$ the color of the corresponding tree $T'\in \Core'$ then we obtain an NM-coloring. \end{lemma} \begin{proof} Let~$q$ be a point on an edge~$e$ of~$\mathcal{T}\xspace$ contained in at least two trees of~$\Core$ (if no such trees exist, the coloring is trivially non-monochromatic at~$q$). Since~$q$ is contained in at least two trees of~$\Core$, it is also contained in two trees of~$\Objects(e)$. Call these trees $T_1$ and~$T_2$. Note that $T_1$ either receives a color in the first coloring step---namely, when $\ell> 2\sqrt{6k}$ and $|E(T_1)|\geq\sqrt{6k}$---or $T'_1\in\Core'$ contains~$q$, since $e\in E(T_1)$. A similar statement holds for~$T_2$. Since the colors used in the first step are unique and $\Core'$ is NM-colored, this implies that $T_1$ and $T_2$ have different colors. Hence,~$\Core$ is NM-colored. \end{proof} Next we show how to NM-color~$\Core'$. Fix an arbitrary internal node~$r$ of $\TreeSpace$ and treat $\TreeSpace$ as rooted at $r$. Our coloring procedure for~$\Core'$ maintains the following invariant: any path from~$r$ to a leaf~$v$ of~$\TreeSpace$ consists of three disjoint consecutive subpaths (some possibly empty), in this order, as illustrated in Fig.~\ref{fig:3-paths}: \begin{itemize} \item a \emph{non-monochromatic} subpath containing the root on which at least two trees are colored with at least two different colors, \item a \emph{singly-colored} subpath covered by exactly one colored tree, and \item an \emph{uncolored} subpath containing the leaf on which no tree is colored.
\end{itemize} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.655] \node at (0,0) (0) {}; \node at (1,-1) (1) {}; \node at (-1,-1) (2) {}; \node at (0,-1) (15) {}; \node at (-1.5,-2) (3) {}; \node at (-0.5,-2) (4) {}; \node at (0.5,-2) (5) {}; \node at (1.5,-2) (6) {}; \node at (1,-2) (13) {}; \node at (-1.8,-3) (7) {}; \node at (-1.5,-3) (8) {}; \node at (-1.2,-3) (14) {}; \node at (-0.1,-3) (9) {}; \node at (0.3,-3) (10) {}; \node at (0.7,-3) (11) {}; \node at (1.1,-3) (12) {}; \node at ([yshift=-3mm]11.center) {$v$}; \draw[line width=1.3mm, red!50] (0.center) -- (1.center) -- (5.center); \draw[line width=1.3mm, red!50] (0.center) -- (2.center) -- (3.center) -- (-1.65,-2.5); \draw[line width=1.3mm, red!50] (1.center) -- (1.25,-1.5); \draw[line width=1.3mm, blue!50] (4.center) -- ([xshift=-0.7mm,yshift=1.4mm]2.center) -- ([yshift=2mm]0.center) -- ([xshift=0.7mm,yshift=1.4mm]1.center) -- ([yshift=3.5mm]6.center); \foreach \i in {0,...,15}{ \draw[thick, fill=black ] (\i) circle (0.06); } \draw[thick] (0.center) -- (1.center); \draw[thick] (0.center) -- (2.center); \draw[thick] (0.center) -- (15.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (1.center) -- (6.center); \draw[thick] (2.center) -- (3.center); \draw[thick] (2.center) -- (4.center); \draw[thick] (1.center) -- (13.center); \draw[thick] (3.center) -- (7.center); \draw[thick] (3.center) -- (8.center); \draw[thick] (5.center) -- (9.center); \draw[thick] (5.center) -- (10.center); \draw[thick] (5.center) -- (11.center); \draw[thick] (5.center) -- (12.center); \draw[thick] (3.center) -- (14.center); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.655] \node at (0,0) (0) {}; \node at (1,-1) (1) {}; \node at (0.5,-2) (4) {}; \node at (0.7,-3) (10) {}; \node at (3.3,-0.2) {non-monochromatic}; \node at (2.8,-1.6) {singly-colored}; \node at (2.1,-2.5) {uncolored}; \node at ([yshift=-3mm]10.center) {$v$}; \draw[line width=1.3mm, red!50] (0.center) -- (1.center) -- (4.center); \draw[line 
width=1.3mm, blue!50] ([xshift=0.5mm,yshift=1.5mm]0.center) -- ([xshift=1.5mm,yshift=0.5mm]1.center); \foreach \i in {0,1,4,10}{ \draw[thick, fill=black ] (\i) circle (0.06); } \draw[thick] (0.center) -- (1.center) -- (4.center) -- (10.center); \end{tikzpicture} \end{center} \caption{A coloring of trees (left) and an illustration of the invariant for~$v$ (right). }\label{fig:3-paths} \end{figure} \begin{observation} Any set of trees containing~$r$ and satisfying the invariant described above is NM-colored if we disregard uncolored trees. \end{observation} We color the trees $T\in\Core'$ that contain~$r$ in an arbitrary order, using~$\ell'+1$ colors, as follows: for each leaf~$v$ of $T$, we follow the path from~$v$ to the root~$r$ to find a singly-colored part. Note that if we find a singly-colored part---by the invariant there is at most one such part on the path from $v$ to $r$---we cannot use that color for~$T$. Since $T$ has at most $\ell'$ leaves, this eliminates at most $\ell'$ colors. Hence, at least one color remains for~$T$. \begin{lemma}\label{lem:c'} The procedure described above maintains the invariant and colors all trees of~$\Core'$ containing~$r$ with at most~$\ell'+1$ colors. \end{lemma} \begin{proof} Suppose the invariant holds before the coloring of~$T$. Then we need to make sure the invariant still holds after~$T$ has been colored. Let~$w$ be a leaf of~$\TreeSpace$ and~$\pi_w$ the path from~$w$ to the root. Let~$v$ be the closest point to~$w$ in~$\pi_w \cap T$. Note that~$v$ always exists as~$r \in \pi_w \cap T$. Now let $\pi_v\subseteq \pi_w$ be the path from~$v$ to~$r$. It is obvious that~$\pi_w \cap T = \pi_v$. Then the part of~$\pi_v$ that was uncolored (if it was non-empty) now is singly-colored. The part that was singly-colored now becomes non-monochromatic, as we eliminated that color for~$T$. And the part that was already non-monochromatic stays so. Therefore the invariant is indeed maintained for~$\pi_w$, concluding the proof. 
\end{proof} Once all the trees containing~$r$ are colored we delete~$r$ from~$\TreeSpace$, that is, we consider the space $\TreeSpace\setminus\{r\}$, and we take the closures of the resulting connected components. This creates a number of subspaces such that each uncolored tree in $\Core'$ is contained in exactly one of them. Consider such a subspace $\TreeSpace'$ and let $r'$ be the neighbor of $r$ in $\TreeSpace'$. We now want to recursively color the uncolored trees in $\TreeSpace'$, taking~$r'$ as the root of $\TreeSpace'$. However, the invariant might not hold on the edge~$e$ from~$r'$ to the old root~$r$: since now~$r$ is considered a child of~$r'$, the order of the three parts might switch on~$e$---see Fig.~\ref{fig:invariant-order-switch}. Suppose this is the case, and let~$c_e$ be the color of the singly-colored part on the edge~$e$. (If the singly-colored part is empty, we can cut the tree between the non-monochromatic and the uncolored part and recurse immediately, which maintains the invariant.) Note also that, for the order to switch, the non-monochromatic part needs to end on~$e$, and therefore the only color used in any singly-colored part of the tree rooted at~$r'$ is~$c_e$. We overcome this problem by carefully choosing the order in which we color the trees containing~$r'$. Namely, we first color the tree~$T$ extending the farthest into~$e$. In this case, there is only one color forbidden, namely~$c_e$. We can therefore easily color~$T$. We can then trim the tree space $\TreeSpace'$ to remove any non-monochromatic and singly-colored part and hence restore the invariant and continue with the coloring.
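The color-selection step in this procedure is a pure pigeonhole argument, which the following Python sketch isolates (a hypothetical representation, not taken from the paper: `path_colors` lists, for each leaf-to-root path of the tree being colored, the color of the singly-colored part met on that path, or `None` if that part is empty):

```python
def choose_color(path_colors, max_leaves):
    """Pick a color for a tree with at most `max_leaves` leaves.

    Each leaf-to-root path forbids at most one color (the color of its
    singly-colored part), so among max_leaves + 1 colors one is free.
    """
    forbidden = {c for c in path_colors if c is not None}
    for c in range(max_leaves + 1):
        if c not in forbidden:
            return c
```

Since a tree in~$\Core'$ has at most~$\ell'$ leaves, at most~$\ell'$ colors are forbidden, so a palette of~$\ell'+1$ colors always contains a free one.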
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.655] \node at (0,0) (0) {}; \node at (1,-1) (1) {}; \node at (-1,-1) (2) {}; \node at (-0.3,-0.3) (2b) {}; \node at (-0.8,-0.8) (2c) {}; \node at (0,-1) (15) {}; \node at (-1.5,-2) (3) {}; \node at (-0.5,-2) (4) {}; \node at (0.5,-2) (5) {}; \node at (1.5,-2) (6) {}; \node at (1,-2) (13) {}; \node at (-1.8,-3) (7) {}; \node at (-1.5,-3) (8) {}; \node at (-1.2,-3) (14) {}; \node at (-0.1,-3) (9) {}; \node at (0.3,-3) (10) {}; \node at (0.7,-3) (11) {}; \node at (1.1,-3) (12) {}; \node at ([yshift=4mm]0.center) {$r$}; \node at ([xshift=-1mm, yshift=3mm]2.center) {$r'$}; \draw[line width=1.3mm, red!50] (0.center) -- (1.center) -- (5.center); \draw[line width=1.3mm, red!50] (2c.center) -- (0.center); \draw[line width=1.3mm, red!50] (1.center) -- (1.25,-1.5); \draw[line width=1.3mm, blue!50] ([xshift=-1mm,yshift=1mm]2b.center) -- ([yshift=2mm]0.center) -- ([xshift=0.7mm,yshift=1.4mm]1.center) -- ([yshift=3.5mm]6.center); \foreach \i in {0,...,15}{ \draw[thick, fill=black ] (\i) circle (0.06); } \draw[thick] (0.center) -- (1.center); \draw[thick] (0.center) -- (2.center); \draw[thick] (0.center) -- (15.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (1.center) -- (6.center); \draw[thick] (2.center) -- (3.center); \draw[thick] (2.center) -- (4.center); \draw[thick] (1.center) -- (13.center); \draw[thick] (3.center) -- (7.center); \draw[thick] (3.center) -- (8.center); \draw[thick] (5.center) -- (9.center); \draw[thick] (5.center) -- (10.center); \draw[thick] (5.center) -- (11.center); \draw[thick] (5.center) -- (12.center); \draw[thick] (3.center) -- (14.center); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.655] \node at (0,0) (0) {}; \node at (-1,-1) (2) {}; \node at (-0.3,-0.3) (2b) {}; \node at (-0.8,-0.8) (2c) {}; \node at (0,-1) (15) {}; \node at (-1.5,-2) (3) {}; \node at (-0.5,-2) (4) {}; \node at (-1.8,-3) (7) {}; \node at (-1.5,-3) (8) {}; \node at (-1.2,-3) (14) {}; \node at 
([yshift=4mm]0.center) {}; \node at ([xshift=-1mm, yshift=3mm]2.center) {$r'$}; \draw[line width=1.3mm, red!50] (2c.center) -- (0.center); \draw[line width=1.3mm, blue!50] ([xshift=-1mm,yshift=1mm]2b.center) -- ([xshift=-1mm, yshift=1mm]0.center); \foreach \i in {2,3,4,7,8,14}{ \draw[thick, fill=black ] (\i) circle (0.06); } \draw[thick] (0.center) -- (2.center); \draw[thick] (2.center) -- (3.center); \draw[thick] (2.center) -- (4.center); \draw[thick] (3.center) -- (7.center); \draw[thick] (3.center) -- (8.center); \draw[thick] (3.center) -- (14.center); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.655] \node at (0,0) (0) {}; \node at (-1,-1) (2) {}; \node at (-0.3,-0.3) (2b) {}; \node at (-0.8,-0.8) (2c) {}; \node at (0,-1) (15) {}; \node at (-1.5,-2) (3) {}; \node at (-0.5,-2) (4) {}; \node at (-1.8,-3) (7) {}; \node at (-1.5,-3) (8) {}; \node at (-1.2,-3) (14) {}; \node at ([yshift=4mm]0.center) {}; \node at ([xshift=-1mm, yshift=4mm]2.center) {$r'$}; \draw[line width=1.3mm, red!50] (2c.center) -- (0.center); \draw[line width=1.3mm, blue!50] ([xshift=-1mm,yshift=1mm]2b.center) -- ([xshift=-1mm, yshift=1mm]0.center); \draw[line width=1.3mm, blue!50] (4.center) -- ([xshift=-0.7mm,yshift=1.4mm]2.center) -- ([xshift=-2mm,yshift=0mm]2b.center); \foreach \i in {2,3,4,7,8,14}{ \draw[thick, fill=black ] (\i) circle (0.06); } \draw[thick] (0.center) -- (2.center); \draw[thick] (2.center) -- (3.center); \draw[thick] (2.center) -- (4.center); \draw[thick] (3.center) -- (7.center); \draw[thick] (3.center) -- (8.center); \draw[thick] (3.center) -- (14.center); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.655] \node at (0,0) (0) {}; \node at (-1,-1) (2) {}; \node at (-0.3,-0.3) (2b) {}; \node at (-0.8,-0.8) (2c) {}; \node at (0,-1) (15) {}; \node at (-1.5,-2) (3) {}; \node at (-0.5,-2) (4) {}; \node at (-1.8,-3) (7) {}; \node at (-1.5,-3) (8) {}; \node at (-1.2,-3) (14) {}; \node at ([yshift=4mm]0.center) {}; \node at ([xshift=-1mm, yshift=4mm]2.center) 
{$r'$}; \draw[line width=1.3mm, blue!50] (4.center) -- ([xshift=-0.7mm,yshift=1.4mm]2.center) -- ([xshift=-0.9mm,yshift=1.2mm]2c.center); \foreach \i in {2,3,4,7,8,14}{ \draw[thick, fill=black ] (\i) circle (0.06); } \draw[thick] (2c.center) -- (2.center); \draw[thick] (2.center) -- (3.center); \draw[thick] (2.center) -- (4.center); \draw[thick] (3.center) -- (7.center); \draw[thick] (3.center) -- (8.center); \draw[thick] (3.center) -- (14.center); \end{tikzpicture} \end{center} \caption{When recursing on the subspace rooted at~$r'$ (leftmost), the invariant does not hold anymore (middle left), as the parts are switched on the edge between~$r$ and~$r'$. To remedy this, we first color the tree extending the farthest into that edge (middle right), starting from~$r'$. We then trim the tree to fix the invariant (rightmost). } \label{fig:invariant-order-switch} \end{figure} \begin{lemma}\label{lem:c} $\Core$ admits an NM-coloring with~$\min (\ell+1, 2\sqrt{6k})$ colors. \end{lemma} \begin{proof} The fact that the procedure above produces an NM-coloring follows from Lemmas~\ref{lem:c'_to_c} and~\ref{lem:c'}. When $\ell>2\sqrt{6k}$ we use $\sqrt{6k}-1$ colors to deal with trees $T$ with $|E(T)|\geq\sqrt{6k}$ and $\ell'+1\leq \min(\ell,\sqrt{6k})+1\leq \sqrt{6k}+1$ colors for the other trees, giving $2\sqrt{6k}$ colors in total. When $\ell\leq 2\sqrt{6k}$ we do not treat the trees with $|E(T)|\geq\sqrt{6k}$ separately, so we just use $\ell'+1\leq \min(\ell,\sqrt{6k})+1\leq \ell+1$ colors. \end{proof} \mypara{Extending the coloring from~$\Core$ to~$\Objects$.} Let~$\Col\colon\Core \to \integers$ be an NM-coloring on~$\Core$. We extend the coloring to~$\Objects$ as follows. We start by coloring all trees in~$\Objects \setminus \Core$ containing an internal node of $\TreeSpace$ using an arbitrary color already used. We then treat all edges in an arbitrary order, coloring all trees contained in the edge as explained now.
Let~$e=rr'$ be an arbitrary edge of~$\TreeSpace$ and~$\Objects^*(e)$ be the set of uncolored trees contained in~$e$. A naive way to color~$\Objects^*(e)$ is to use the chain method with two new colors, the same two colors for every edge: the chains in any two edges~$e,e'$ trivially do not interact. However, we can avoid using two extra colors and re-use the colors from~$\Core$, as explained next. First, if~$\Col$ uses fewer than two colors, then each node of~$\TreeSpace$ is contained in at most one tree. We then forget the trivial coloring~$\Col$ and use the chain method from scratch on~$\Objects$. We start at an arbitrarily fixed leaf~$u$ of~$\TreeSpace$, and for any other leaf~$u'$, we consider the path between~$u$ and~$u'$ and use the chain method on the trees restricted to this path. Since for any node~$v$, at most one tree contains~$v$, no tree receives two different colors on two different paths. Moreover, the coloring is conflict-free, since any point in~$\TreeSpace$ is contained in a path from~$u$ to a certain leaf~$u'$. We may now suppose that~$\Col$ uses at least two colors. Let~$T_r\in \Objects(e,r)$ and~$T_{r'}\in \Objects(e,r')$ be the trees extending the farthest into~$e$ (arbitrarily chosen in case of a tie). Note that these trees might not exist. Also note that $T_r$ and $T_{r'}$ are not in~$\Objects^*(e)$. We define the following colors. \begin{itemize} \item Let~$c_r$ be the color of~$T_r$, if~$T_{r}$ exists, and an arbitrary color otherwise. \item Let~$c_{r'}$ be the color of~$T_{r'}$ if~$T_{r'}$ exists and~$\Col(T_{r'})\neq \Col(T_{r})$ (we consider this condition to hold when~$T_{r}$ does not exist), and an arbitrary color different from~$c_r$ otherwise. \end{itemize} We then do the following. \begin{enumerate} \item[(a)] If~$T_{r}$ fully contains~$e$, we color all trees in~$\Objects^*(e)$ using~$c_{r'}$.
\item[(b)] If~$T_{r'}$ fully contains~$e$, we color all trees in~$\Objects^*(e)$ using~$c_{r}$. \item[(c)] Otherwise, we use the chain method for NM-colorings using~$c_r$ and~$c_{r'}$ on~$\Objects^*(e) \cup \{T_{r}\} \cup \{T_{r'}\}$. We start from~$r$ with color~$c_r$ so that~$T_{r}$ is the first tree colored and keep its color. We then check if the color of~$T_{r'}$ changed. If so, let~$\Core_{r'}\subseteq \Core$ be the subset of trees contained in the subspace rooted at~$r'$ (including~$e$ but not~$r$) and excluding~$T_{r'}$. We exchange~$c_r$ and~$c_{r'}$ in~$\Core_{r'}$; see Fig. \ref{fig:case-no-intersection}. \end{enumerate} \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.65] \node at (0,0) (r) {}; \node at (3,-2) (r') {}; \draw[line width=1.3mm, red!60] (r.center) --++ (-0.4,-1.5); \draw[line width=1.3mm, red!60] (r.center) --++ (0.2,-0.75); \draw[line width=1.3mm, red!60] ([xshift=-0.2cm, yshift=-0.75cm]r.center) --++ (0.2,-0.75); \draw[line width=1.3mm, red!60] (r.center) --++ (0.75, -0.5); \draw[line width=1.3mm, blue!60] ([xshift=0.4cm, yshift=-0.1cm]r.center) --++ (1.5,-1); \draw[line width=1.3mm, red!60] ([xshift=0.95cm, yshift=-0.8cm]r.center) --++ (1.05,-0.7); \draw[line width=1.3mm, red!60] ([xshift=1.5cm, yshift=-1cm]r.center) --++ (1.5,-1); \draw[line width=1.3mm, red!60] (r'.center) --++ (0.4,-1.5); \draw[line width=1.3mm, red!60] (r'.center) --++ (-0.2,-0.75); \draw[line width=1.3mm, red!60] ([xshift=0.2cm, yshift=-0.75cm]r'.center) --++ (-0.2,-0.75); \draw[line width=1.3mm, blue!60] ([xshift=-0.4cm, yshift=-1.5cm]r'.center) --++ (0.2,-0.75); \draw[line width=1.3mm, blue!60] ([xshift=-0.4cm, yshift=-1.5cm]r'.center) --++ (-0.2,-0.75); \foreach \i in {r, r'}{ \draw (\i.center) --++ (-0.8, -3) --++ (1.6,0) -- cycle; \draw[thick, fill=black] (\i.center) circle (0.06); \node at ([xshift=3mm, yshift=3mm]\i.center) {$\i$}; } \draw[thick] (r.center) -- (r'.center); \end{tikzpicture} \qquad \begin{tikzpicture}[scale=0.65] \node at (0,0) (r) 
{}; \node at (3,-2) (r') {}; \draw[line width=1.3mm, red!60] (r.center) --++ (-0.4,-1.5); \draw[line width=1.3mm, red!60] (r.center) --++ (0.2,-0.75); \draw[line width=1.3mm, red!60] ([xshift=-0.2cm, yshift=-0.75cm]r.center) --++ (0.2,-0.75); \draw[line width=1.3mm, red!60] (r.center) --++ (0.75, -0.5); \draw[line width=1.3mm, blue!60] ([xshift=0.4cm, yshift=-0.1cm]r.center) --++ (1.5,-1); \draw[line width=1.3mm, red!60] ([xshift=0.95cm, yshift=-0.8cm]r.center) --++ (1.05,-0.7); \draw[line width=1.3mm, blue!60] ([xshift=1.5cm, yshift=-1cm]r.center) --++ (1.5,-1); \draw[line width=1.3mm, blue!60] (r'.center) --++ (0.4,-1.5); \draw[line width=1.3mm, blue!60] (r'.center) --++ (-0.2,-0.75); \draw[line width=1.3mm, blue!60] ([xshift=0.2cm, yshift=-0.75cm]r'.center) --++ (-0.2,-0.75); \draw[line width=1.3mm, red!60] ([xshift=-0.4cm, yshift=-1.5cm]r'.center) --++ (0.2,-0.75); \draw[line width=1.3mm, red!60] ([xshift=-0.4cm, yshift=-1.5cm]r'.center) --++ (-0.2,-0.75); \foreach \i in {r, r'}{ \draw (\i.center) --++ (-0.8, -3) --++ (1.6,0) -- cycle; \draw[thick, fill=black] (\i.center) circle (0.06); \node at ([xshift=3mm, yshift=3mm]\i.center) {$\i$}; } \draw[thick] (r.center) -- (r'.center); \end{tikzpicture} \end{center} \caption{If the color of~$T_{r'}$ changes with the chain method, we swap the labels of the old and new colors of~$T_{r'}$ in the subspace rooted at~${r'}$. }\label{fig:case-no-intersection} \end{figure} The following lemma proves the extended coloring is non-monochromatic. \begin{lemma}\label{lem:extending-coloring} Any NM-coloring~$c$ on~$\Core$ can be extended to~$\mathcal{A}\xspace$ without using any extra color if~$c$ uses two colors or more, and with two colors otherwise. \end{lemma} \begin{proof} Let $\Objects_1$ be the subset of trees in $\Objects\setminus \Core$ that contain an internal node of~$\TreeSpace$, and let $\Objects_2$ be the remaining trees in $\Objects\setminus \Core$. By Lemma~\ref{lem:c}, we have an NM-coloring on $\Core$. 
To prove that the method described above gives us an NM-coloring on~$\Core\cup\Objects_2$, we show that the following invariant holds each time an edge is colored: the coloring on~$\Core\cup\Objects_2$ is non-monochromatic when restricted to colored trees. It is clear that before the first edge is colored, the coloring is non-monochromatic, as at this point the only trees colored are exactly those in~$\Core$. We hence only have to show the invariant still holds after coloring an edge~$e=\{r,r'\}$. If we are in case~(a) or~(b), the invariant trivially holds. It remains to consider case~(c), in which we use the chain method on~$\Objects^*(e) \cup \{T_{r}\} \cup \{T_{r'}\}$; this immediately implies the coloring is non-monochromatic on~$e$. To prove it is also non-monochromatic elsewhere, let~$p\notin e$ be a point contained in at least two trees. We only have to show that the label swap we did on one side of~$e$ keeps the coloring non-monochromatic. The point~$p$ cannot be contained in one tree containing~$r$ and one tree containing~$r'$ at the same time, because no tree contains~$e$ fully. Therefore,~$p$ is contained in at least two trees on the same side of~$e$, and since the label swap permutes the colors uniformly on that side, these trees still have different colors. Furthermore, the trees in $\Objects_1$ received an arbitrary color already used. To prove that this gives an NM-coloring for $\Objects = \Core \cup \Objects_1\cup\Objects_2$, it suffices to prove that each tree $T\in\Objects_1$ is \emph{doubly-covered} by~$\Core$, that is, any point $q\in T$ is contained in at least two trees in~$\Core$. To this end, let $e$ be an edge such that $q\in e$. Then, since $T\not\in \Core$ and $T$ contains an endpoint $v$ of~$e$, the two trees in $\Objects(e,v)$ contain~$q$. Hence, $T$ is doubly-covered by~$\Core$, as claimed. \end{proof} \begin{theorem}\mbox{} \label{thm:main} \begin{enumerate} \item $\NMCN{tree}{trees} (k, \ell; n) \leq \min\left(\ell +1, 2 \sqrt{6k} \right)$.
\item $\CFCN{tree}{trees} (k, \ell; n)=O(\ell \log k)$. \end{enumerate} \end{theorem} \begin{proof} For the NM-coloring part of the theorem, we use Lemmas~\ref{lem:c} and~\ref{lem:extending-coloring}. For the second part, if $\ell > 2\sqrt{6k}$ we again reduce $\Core$ to $\Core'$ using at most $\sqrt{6k}-1$ colors. We then use the result by Smorodinsky~\cite{smor-geomCF-06} on the NM-coloring on $\Core'$ provided by Lemma~\ref{lem:c'}. Since this coloring uses at most $\ell' +1$ colors and $|\Core'|\leqslant 6k-12$, the CF-coloring uses $O(\ell \log k)$ colors. We then extend the coloring to $\Objects$ using techniques similar to those for the NM-coloring. This coloring uses $O(\sqrt{k} \log k)$ colors if $\ell > 2\sqrt{6k}$, which is in $O(\ell \log k)$, and directly $O(\ell \log k)$ colors otherwise. Note that a direct application of the result of Smorodinsky~\cite{smor-geomCF-06} would give an~$O(\ell \log n)$ bound instead. \end{proof} \subsection{The lower bound} We show a lower bound for the number of colors\footnote{From now on, we either identify colors with integers or we use actual colors (red, blue, etc.) in our descriptions, whichever is more convenient.} needed to NM-color a set of trees in a tree space. \begin{theorem}\label{thm:lb-nm} For all~$n, k$, and~$\ell$, there exist a tree space~$\mathcal{T}\xspace$ with~$k$ leaves and a set~$\Objects$ of at most~$n$ trees on~$\TreeSpace$, each with at most~$\ell$ leaves, such that any non-monochromatic coloring of~$\Objects$ uses at least~$\min \left(\ell+1, \left\lfloor \tfrac{1+ \sqrt{1+8k} }{2} \right\rfloor, n \right)$ colors. In other words, $$\NMCNtree{trees} (k, \ell; n)\geqslant \min \left(\ell+1, \left\lfloor \tfrac{1+ \sqrt{1+8k} }{2} \right\rfloor, n \right). $$ \end{theorem} \begin{proof} Let~$\mathcal{T}\xspace$ be a star with~$k$ leaves.
We construct the set~$\mathcal{A}\xspace$ of~$m$ trees such that, for each pair of trees~$T,T'\in \mathcal{A}\xspace$, there is a leaf of~$\mathcal{T}\xspace$ contained in~$T$ and~$T'$ and in no other tree from~$\Objects$. Consequently, each tree in~$\mathcal{A}\xspace$ must be assigned a distinct color. To this end, we define~$m:=\min(\ell+1,m',n)$, where $m':=\lfloor(1+ \sqrt{1+8k})/2 \rfloor$ is the largest integer such that~${m'\choose 2} \leqslant k $. Then, for every pair~$\{i,j\}$ with~$1\leqslant i<j \leqslant m$, we choose a distinct leaf of~$\mathcal{T}\xspace$ and associate it with~$\{i,j\}$. The total number of such pairs is~${m \choose 2} \leqslant {m' \choose 2} \leqslant k $, hence we can indeed associate a distinct leaf to each pair. Now let~$\mathcal{A}\xspace:= \{ T_1,\ldots, T_m\}$ be the set of trees defined as follows: for each~$i=1,\ldots,m$, the tree~$T_i$ is defined as the tree containing all the leaves associated with pairs~$\{i,j\}$ for some~$j\neq i$, i.e.,~$T_i$ is the union, for all~$j\neq i$, of edges from the root to a leaf associated with~$\{i,j\}$. Fig.~\ref{fig:star_lb} shows an example. \begin{figure} \begin{center} \begin{tikzpicture} \node at (0,0) (0) {}; \node at (0:1) (1) {}; \node at (60:1) (2) {}; \node at (120:1) (3) {}; \node at (180:1) (4) {}; \node at (240:1) (5) {}; \node at (300:1) (6) {}; \foreach \i in {1,...,3}{ \draw[line width=1.3mm, red!50] (0.center) -- (\i.center); } \foreach \i in {0,...,6}{ \draw[thick, fill=black] (\i) circle (0.06); } \foreach \i in {1,...,6}{ \draw[thick] (0.center) -- (\i.center); } \node at (0:1.7) {$\{1,2\}$}; \node at (60:1.7) {$\{1,3\}$}; \node at (120:1.7) {$\{1,4\}$}; \node at (180:1.7) {$\{2,3\}$}; \node at (240:1.7) {$\{2,4\}$}; \node at (300:1.7) {$\{3,4\}$}; \end{tikzpicture} \caption{An example of the non-monochromatic lower bound for~$k=6$, $\ell=3$, and~$n=4$. The tree~$T_1$ is drawn in red.
} \label{fig:star_lb} \end{center} \end{figure} We now have to prove that the construction is possible within the parameters. Recall that~$m\leqslant n$, so we indeed have at most~$n$ trees in~$\Objects$, and that~$m\leqslant m'$, where~$m'$ is chosen to ensure that~$k$ leaves are enough. We therefore only have to show that none of the trees~$T_1,\ldots, T_m$ has more than~$\ell$ leaves. However, the number of leaves of each tree~$T_i$ is at most~$m-1$, as we only create at most one leaf for~$T_i$ for each~$T_j$ with~$j\neq i$. Hence, since~$m\leqslant \ell +1$, each tree has at most~$\ell$ leaves. Thus, the construction does not violate the parameters. Finally, each tree needs a distinct color, and since there are~$m$ trees, the number of colors needed is~$m=\min (\ell+1, \lfloor \tfrac{1+ \sqrt{1+8k} }{2} \rfloor, n)$. \end{proof} Since any CF-coloring is also an NM-coloring, the lower bound in Theorem~\ref{thm:lb-nm} holds for CF-coloring as well. The next theorem gives a stronger lower bound for CF-coloring in the case $\ell=2$, that is, when the objects are paths. \begin{theorem} For all~$n$ and~$k$, there exist a tree space~$\mathcal{T}\xspace$ with~$k$ leaves and a set~$\Objects$ of at most~$n$ paths in~$\TreeSpace$ such that any conflict-free coloring of~$\Objects$ uses at least~$\lfloor\log_2 \min(k,n)\rfloor$ colors. In other words, $$\CFCNtree{paths} (k; n)\geqslant \lfloor\log_2 \min(k,n)\rfloor. $$ \end{theorem} \begin{proof} Let~$\mathcal{T}\xspace$ be a rooted complete binary tree of height~$h=\lfloor\log_2 \min(k,n)\rfloor$. Note that $\TreeSpace$ has at most $\min(k,n)$ leaves. For each leaf~$v$ of~$\mathcal{T}\xspace$, we define~$\pi_v$ to be the path from~$v$ to the root~$r$ of~$\mathcal{T}\xspace$. Our set~$\mathcal{A}\xspace$ of objects is now defined as~$\mathcal{A}\xspace:=\{ \pi_v \mid v \text{ a leaf of } \mathcal{T}\xspace \}$. (Trivially,~$|\Objects|\leqslant n$.)
Let~$c\colon\mathcal{A}\xspace \to \mathbb{N}\xspace $ be a conflict-free coloring of~$\mathcal{A}\xspace$. We prove that~$c$ uses at least~$h=\lfloor\log_2 \min(k,n)\rfloor$ colors by induction on the height~$h$ of~$\mathcal{T}\xspace$. If~$h=1$, then there is only one degenerate path and the claim trivially holds. Suppose now that the claim holds for a tree of height~$h$, and suppose the height of~$\mathcal{T}\xspace$ is~$h+1$. Since~$c$ is a conflict-free coloring, among the paths containing the root~$r_1:=r$ of~$\mathcal{T}\xspace$, there must be a path~$\pi_1$ of unique color. Since by construction all paths in~$\mathcal{A}\xspace$ contain the root, the color of~$\pi_1$ is unique among all paths. Let~$r_2$ be the child of~$r_1$ not contained in~$\pi_1$. We now use the induction hypothesis on the subtree rooted at~$r_2$ with the paths containing~$r_2$ cut above it. Among these paths, there are~$h$ that use distinct colors. Moreover, none of these paths can use~$c(\pi_1)$, as this color is unique among all paths. Hence, we have indeed~$h+1$ paths using distinct colors. This concludes the proof. \end{proof} The following theorem is a direct consequence of the previous two. \begin{theorem} For all~$n, k$, and~$\ell$, there exist a tree space~$\mathcal{T}\xspace$ with~$k$ leaves and a set~$\Objects$ of at most~$n$ trees in~$\TreeSpace$ with at most~$\ell$ leaves each such that any conflict-free coloring of~$\Objects$ uses at least~$\min \left(\ell+1, \left\lfloor \tfrac{1+ \sqrt{1+8k} }{2} \right\rfloor, \lfloor\log_2 \min(k,n)\rfloor\right)$ colors. In other words, $$\CFCNtree{trees} (k, \ell; n)\geqslant \max \left\{\begin{array}{l} \min \left(\ell+1, \left\lfloor \tfrac{1+ \sqrt{1+8k} }{2} \right\rfloor \right) \\[16pt] \lfloor\log_2 \min(k,n)\rfloor. \end{array} \right. $$ \end{theorem} \section{Balls in Tree Spaces and on Planar Network Spaces}\label{sec:balls} In this section we restrict the objects to balls.
Let~$\mathcal{N}\xspace$ be a network space,~$d\colon\mathcal{N}\xspace^2 \to \mathbb{R}\xspace$ a distance function on~$\mathcal{N}\xspace$, and let~$\Objects$ be a set of balls on~$\mathcal{N}\xspace$. We define the coverage~$cov_x(B)$ of a node~$x$ by a ball~$B=B(p,r)$ containing~$x$ as~$cov_x(B):=r-d(p,x)$. Given a node~$x$ contained in at least one ball from~$\Objects$, we define~$B_x$ as the ball maximizing the coverage of~$x$, where we break ties using an arbitrary but fixed ordering on the balls. We say that $B_x$ is \emph{assigned} to~$x$. Note that~$B_x$ does not exist if no ball contains~$x$, and that a ball can be assigned to multiple nodes. We will regularly use the following lemma regarding the assigned balls. \begin{lemma}\label{lem:connected_core} Let $x$ be an internal node of $\mathcal{N}\xspace$. \begin{enumerate} \item[(i)] Suppose $\mathcal{N}\xspace$ is a tree space, and let $\mathcal{T}_1,\ldots,\mathcal{T}_{\deg(x)}$ denote the subtrees resulting from removing $x$ from~$\mathcal{N}\xspace$ or, more precisely, the closures of the connected components of $\mathcal{N}\xspace\setminus\{x\}$. Let $p$ be a point in some subtree $\mathcal{T}_i$ and suppose $p$ is contained in a ball $B\in \Objects$ whose center lies in $\mathcal{T}_j$ with $j\neq i$. Then $p\in B_x$. \item[(ii)] Suppose $x$ is contained in at least one ball in $\Objects$. Let $\pi$ be a shortest path from~$x$ to the center of~$B_x$, and let $y$ be a node on the path~$\pi$. Then~$B_x$ is also assigned to~$y$, that is,~$B_x=B_y$. \end{enumerate} \end{lemma} \begin{proof} Part~(i) follows from the definition of~$B_x$: the geodesic from the center of~$B$ to~$p$ passes through~$x$, so~$cov_x(B_x)\geqslant cov_x(B)\geqslant d(x,p)$ and hence~$p\in B_x$. To prove part~(ii), suppose for a contradiction that~$B_y \neq B_x$ for some $y\in\pi$. Thus,~$cov_y(B_y) \geqslant cov_y(B_x)$. Because~$\pi$ is a shortest path from $x$ to the center of $B_x$, we have that~$cov_x(B_x) = cov_y(B_x) - d(x,y)$. Moreover,~$ cov_y(B_y) - d(x,y) \leqslant cov_x(B_y)$ because of the triangle inequality.
Hence,~$cov_x(B_x) \geqslant cov_x(B_y) \geqslant cov_y (B_y) -d(x,y) \geqslant cov_y(B_x) -d(x,y) =cov_x(B_x) $. Thus~$ cov_x(B_x) = cov_x(B_y)$ and~$cov_y(B_x) = cov_y(B_y)$. However, this is a contradiction: in case of a tie, the fixed ordering selects the same ball at~$x$ and at~$y$, so we would have~$B_x=B_y$. \end{proof} \subsection{Tree spaces: the upper bound}\label{sec:balls-on-trees} For balls on a tree space~$\TreeSpace$, the upper bounds from Theorem~\ref{thm:main} with $\ell=k$ apply. Below we improve upon these bounds using the special structure of balls. Let~$\TreeSpace$ be a tree with~$t$ internal nodes. We present algorithms to NM-color balls on trees using two colors, and to CF-color them with~$\lceil \log t \rceil +3$ colors. Let~$\Objects$ be a set of~$n$ balls on~$\TreeSpace$. Let also~$\Core:=\{B=B(c,r) \mid \exists x: B=B_x \}$ be the set of balls assigned to at least one internal node. Recall that an internal node~$x$ is assigned the ball maximizing the coverage of~$x$. \mypara{NM-coloring.} We first explain how to NM-color~$\Objects$. We use a divide-and-conquer approach. If~$t=0$, that is, if~$\TreeSpace$ consists of a single node or a single edge, we use the chain method for NM-coloring with colors blue and red. If $t>0$, then we proceed as follows. Let~$e=uv$ be an edge of~$\TreeSpace$. Let~$\TreeSpace_u$, respectively~$\TreeSpace_v$, be the connected component of~$\TreeSpace \setminus e$ containing~$u$, respectively~$v$. Recall that~$B_u$ and~$B_v$ are the balls assigned to~$u$ and~$v$, respectively. Note that we may assume that both $B_u$ and~$B_v$ exist, for otherwise the recursion is trivial. Also observe that~$B_u$ and~$B_v$ may coincide. We define \[ \Objects(u) := \{ \mbox{balls $B\in \Objects$ whose center lies in~$\TreeSpace_u$} \} \cup \{B_u\}. \] We define~$\Objects(v)$ similarly. We recursively color~$\Objects(u)$ in~$\TreeSpace_u$ and~$\Objects(v)$ in~$\TreeSpace_v$, obtaining colorings of~$\Objects(u)$ and $\Objects(v)$ with colors blue and red.
In the recursive calls on $\Objects(u)$, and similarly for $\Objects(v)$, we ``clip'' the balls to within $\TreeSpace_u$. Note that the clipped balls are still balls in the space $\TreeSpace_u$. This is clear for the balls whose center lies in $\TreeSpace_u$. The center of $B_u$ may not lie in $\TreeSpace_u$, but in that case it behaves within $\TreeSpace_u$ as a ball with center $u$ and radius $cov_u(B_u)$. Let~$\Objects(e):= \Objects \setminus (\Objects(u)\cup \Objects(v))$ be the set of the remaining balls. In other words, $\Objects(e)$ contains the balls whose center is contained in~$e$, except for~$B_u$ and~$B_v$. We color~$\Objects(e)$, possibly swapping colors in~$\Objects(u)$ or~$\Objects(v)$, as follows. \begin{itemize} \item If~$B_u=B_v$, we first ensure that it gets the same color in both~$\Objects(u)$ and~$\Objects(v)$ by swapping colors in one of the two subsets if necessary. We then color all balls in~$\Objects(e)$ blue if~$B_u$ is red, and red if~$B_u$ is blue. \item If~$B_u \neq B_v$, let~$\pi$ be a longest simple path containing~$u$ and~$v$. We color~$\Objects(e) \cup \{ B_u, B_v \}$ restricted to~$\pi$ using the non-monochromatic chain method. We then possibly swap colors in~$\Objects(u)$ and~$\Objects(v)$ so that~$B_u$ and~$B_v$ match the colors they were given by the chain method. \end{itemize} Both cases are illustrated in Fig.~\ref{fig:tree-recursive-NM}. 
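The recursion above repeatedly queries the assigned balls~$B_u$ and~$B_v$. As a minimal illustration of how such assignments can be computed (a sketch only, not the paper's implementation: it assumes the tree space is discretized as a weighted graph and that ball centers lie on nodes, and the names \texttt{shortest\_distances} and \texttt{assigned\_ball} are ours):

```python
import heapq

def shortest_distances(graph, source):
    """Dijkstra over a weighted graph given as {node: [(neighbour, length), ...]}."""
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(queue, (d + w, v))
    return dist

def assigned_ball(graph, balls, x):
    """Index of the ball B maximizing cov_x(B) = r - d(p, x) among the balls
    containing x, with ties broken by list order (a fixed ordering), or None
    if no ball contains x.  Each ball is a (center_node, radius) pair."""
    dist = shortest_distances(graph, x)
    best_cov, best_idx = None, None
    for idx, (center, radius) in enumerate(balls):
        cov = radius - dist[center]
        if cov >= 0 and (best_cov is None or cov > best_cov):
            best_cov, best_idx = cov, idx
    return best_idx
```

Using the strict comparison together with list order implements the arbitrary-but-fixed tie-breaking rule from the definition of~$B_x$.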
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.65] \node at (0,0) (1) {}; \node at (2,0.5) (2) {}; \node at (-0.8,1) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (-0.6,-1.9) (5) {}; \node at (3.2,0.9) (6) {}; \node at (2.9,-0.6) (7) {}; \node at (1.8,1.5) (8) {}; \node at ([xshift=4mm, yshift=-4mm]1.center) {$B_u$}; \node at ([xshift=0mm, yshift=-6mm]2.center) {$B_v$}; \draw[line width=1.3mm, black!30] (-0.3,-0.95) -- (1.center) -- (0.8,0.2); \draw[line width=1.3mm, black!30] (4.center) -- (1.center) -- (-0.4, 0.5); \draw[line width=1.3mm, black!30] (0.5, 0.305) -- (1.9, 0.655); \draw[line width=1.3mm, black!30] (2.6, 0.7) -- (2.center) -- (1.6,0.4); \draw[line width=1.3mm, black!30] (2.45,-0.05) -- (2.center); \draw[line width=1.3mm, black!30] ([xshift=-1.2mm, yshift=-1.2mm]3.center) -- (-0.32,0.13); \draw[line width=1.3mm, black!30] ([xshift=-1.5mm, yshift=0.5mm]5.center) -- (-0.35,-0.575); \draw[line width=1.3mm, black!30] (8.center) -- (1.9,1); \draw[line width=1.3mm, black!30] ([xshift=0.2mm, yshift=-1.7mm]6.center) -- ([xshift=3.2mm, yshift=-1mm]2.center) -- ([xshift=1.5mm, yshift=1mm]7.center); \draw[thick] (1.center) -- (2.center); \draw[thick] (1.center) -- (3.center); \draw[thick] (1.center) -- (4.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (6.center) -- (2.center); \draw[thick] (7.center) -- (2.center); \draw[thick] (8.center) -- (2.center); \foreach \i in {1,...,8}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[scale=0.65] \node at (0,0) (1) {}; \node at (2,0.5) (2) {}; \node at (-0.8,1) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (-0.6,-1.9) (5) {}; \node at (3.2,0.9) (6) {}; \node at (2.9,-0.6) (7) {}; \node at (1.8,1.5) (8) {}; \node at ([xshift=-10mm, yshift=0mm]1.center) {$\Objects(u)$}; \node at ([xshift=4mm, yshift=-4mm]1.center) {$B'_u$}; \node at ([xshift=6mm, yshift=7mm]2.center) {$\Objects(v)$}; \node at ([xshift=-1mm, yshift=-6mm]2.center) {$B'_v$}; 
\draw[line width=1.3mm, red!60] (4.center)-- (1.center); \draw[line width=1.3mm, blue!60] (2.center) -- (2.6, 0.7); \draw[line width=1.3mm, red!60] (-0.3,-0.95) -- (1.center) -- (-0.4, 0.5); \draw[line width=1.3mm, blue!60] (2.45,-0.05) -- (2.center); \draw[line width=1.3mm, blue!60] ([xshift=-1.2mm, yshift=-1.2mm]3.center) -- (-0.32,0.13); \draw[line width=1.3mm, blue!60] ([xshift=-1.5mm, yshift=0.5mm]5.center) -- (-0.35,-0.575); \draw[line width=1.3mm, blue!60] (8.center) -- (1.9,1); \draw[line width=1.3mm, red!60] ([xshift=0.2mm, yshift=-1.7mm]6.center) -- ([xshift=3.2mm, yshift=-1mm]2.center) -- ([xshift=1.5mm, yshift=1mm]7.center); \draw[thick] (1.center) -- (3.center); \draw[thick] (1.center) -- (4.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (6.center) -- (2.center); \draw[thick] (7.center) -- (2.center); \draw[thick] (8.center) -- (2.center); \foreach \i in {1,...,8}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[scale=0.65] \node at (0,0) (1) {}; \node at (2,0.5) (2) {}; \node at (-0.8,1) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (-0.6,-1.9) (5) {}; \node at (3.2,0.9) (6) {}; \node at (2.9,-0.6) (7) {}; \node at (1.8,1.5) (8) {}; \node at ([xshift=4mm, yshift=-4mm]1.center) {$B_u$}; \node at ([xshift=0mm, yshift=-6mm]2.center) {$B_v$}; \draw[line width=1.3mm, red!60] (4.center) -- (1.center) -- (0.8,0.2); \draw[line width=1.3mm, red!60] (-0.3,-0.95)-- (1.center) -- (-0.4, 0.5); \draw[line width=1.3mm, blue!60] (0.5, 0.305) -- (1.9, 0.655); \draw[line width=1.3mm, red!60] (2.6, 0.7) -- (2.center) -- (1.6,0.4); \draw[line width=1.3mm, red!60] (2.45,-0.05) -- (2.center); \draw[line width=1.3mm, blue!60] ([xshift=-1.2mm, yshift=-1.2mm]3.center) -- (-0.32,0.13); \draw[line width=1.3mm, blue!60] ([xshift=-1.5mm, yshift=0.5mm]5.center) -- (-0.35,-0.575); \draw[line width=1.3mm, red!60] (8.center) -- (1.9,1); \draw[line width=1.3mm, blue!60] ([xshift=0.2mm, yshift=-1.7mm]6.center) -- 
([xshift=3.2mm, yshift=-1mm]2.center) -- ([xshift=1.5mm, yshift=1mm]7.center); \draw[thick] (1.center) -- (2.center); \draw[thick] (1.center) -- (3.center); \draw[thick] (1.center) -- (4.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (6.center) -- (2.center); \draw[thick] (7.center) -- (2.center); \draw[thick] (8.center) -- (2.center); \foreach \i in {1,...,8}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \\ \begin{tikzpicture}[scale=0.65] \node at (0,0) (1) {}; \node at (2,0.5) (2) {}; \node at (-0.8,1) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (-0.6,-1.9) (5) {}; \node at (3.2,0.9) (6) {}; \node at (2.9,-0.6) (7) {}; \node at (1.8,1.5) (8) {}; \node at ([xshift=12mm, yshift=-3mm]1.center) {$B_u=B_v$}; \draw[line width=1.3mm, black!30] (-0.3,-0.95) -- (1.center) -- (2.center) -- (2.6, 0.7); \draw[line width=1.3mm, black!30] (4.center) -- (1.center) -- (-0.4, 0.5); \draw[line width=1.3mm, black!30] (2.45,-0.05) -- (2.center); \draw[line width=1.3mm, black!30] (0.5, 0.305) -- (1.9, 0.655); \draw[line width=1.3mm, black!30] ([xshift=-1.2mm, yshift=-1.2mm]3.center) -- (-0.32,0.13); \draw[line width=1.3mm, black!30] ([xshift=-1.5mm, yshift=0.5mm]5.center) -- (-0.35,-0.575); \draw[line width=1.3mm, black!30] (8.center) -- (1.9,1); \draw[line width=1.3mm, black!30] ([xshift=0.2mm, yshift=-1.7mm]6.center) -- ([xshift=3.2mm, yshift=-1mm]2.center) -- ([xshift=1.5mm, yshift=1mm]7.center); \draw[thick] (1.center) -- (2.center); \draw[thick] (1.center) -- (3.center); \draw[thick] (1.center) -- (4.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (6.center) -- (2.center); \draw[thick] (7.center) -- (2.center); \draw[thick] (8.center) -- (2.center); \foreach \i in {1,...,8}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[scale=0.65] \node at (0,0) (1) {}; \node at (2,0.5) (2) {}; \node at (-0.8,1) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (-0.6,-1.9) (5) {}; 
\node at (3.2,0.9) (6) {}; \node at (2.9,-0.6) (7) {}; \node at (1.8,1.5) (8) {}; \node at ([xshift=-10mm, yshift=0mm]1.center) {$\Objects(u)$}; \node at ([xshift=4mm, yshift=-4mm]1.center) {$B'_u$}; \node at ([xshift=6mm, yshift=7mm]2.center) {$\Objects(v)$}; \node at ([xshift=-1mm, yshift=-6mm]2.center) {$B'_v$}; \draw[line width=1.3mm, red!60] (4.center) -- (1.center); \draw[line width=1.3mm, blue!60] (2.center) -- (2.6, 0.7); \draw[line width=1.3mm, red!60] (-0.3,-0.95) -- (1.center) -- (-0.4, 0.5); \draw[line width=1.3mm, blue!60] (2.45,-0.05) -- (2.center); \draw[line width=1.3mm, blue!60] ([xshift=-1.2mm, yshift=-1.2mm]3.center) -- (-0.32,0.13); \draw[line width=1.3mm, blue!60] ([xshift=-1.5mm, yshift=0.5mm]5.center) -- (-0.35,-0.575); \draw[line width=1.3mm, blue!60] (8.center) -- (1.9,1); \draw[line width=1.3mm, red!60] ([xshift=0.2mm, yshift=-1.7mm]6.center) -- ([xshift=3.2mm, yshift=-1mm]2.center) -- ([xshift=1.5mm, yshift=1mm]7.center); \draw[thick] (1.center) -- (3.center); \draw[thick] (1.center) -- (4.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (6.center) -- (2.center); \draw[thick] (7.center) -- (2.center); \draw[thick] (8.center) -- (2.center); \foreach \i in {1,...,8}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \hspace{0.5cm} \begin{tikzpicture}[scale=0.65] \node at (0,0) (1) {}; \node at (2,0.5) (2) {}; \node at (-0.8,1) (3) {}; \node at (-1.2,-0.7) (4) {}; \node at (-0.6,-1.9) (5) {}; \node at (3.2,0.9) (6) {}; \node at (2.9,-0.6) (7) {}; \node at (1.8,1.5) (8) {}; \node at ([xshift=12mm, yshift=-3mm]1.center) {$B_u=B_v$}; \draw[line width=1.3mm, red!60] (-0.3,-0.95) -- (1.center) -- (2.center) -- (2.6, 0.7); \draw[line width=1.3mm, red!60] (4.center) -- (1.center) -- (-0.4, 0.5); \draw[line width=1.3mm, red!60] (2.45,-0.05) -- (2.center); \draw[line width=1.3mm, blue!60] (0.5, 0.305) -- (1.9, 0.655); \draw[line width=1.3mm, blue!60] ([xshift=-1.2mm, yshift=-1.2mm]3.center) -- (-0.32,0.13); 
\draw[line width=1.3mm, blue!60] ([xshift=-1.5mm, yshift=0.5mm]5.center) -- (-0.35,-0.575); \draw[line width=1.3mm, red!60] (8.center) -- (1.9,1); \draw[line width=1.3mm, blue!60] ([xshift=0.2mm, yshift=-1.7mm]6.center) -- ([xshift=3.2mm, yshift=-1mm]2.center) -- ([xshift=1.5mm, yshift=1mm]7.center); \draw[thick] (1.center) -- (2.center); \draw[thick] (1.center) -- (3.center); \draw[thick] (1.center) -- (4.center); \draw[thick] (1.center) -- (5.center); \draw[thick] (6.center) -- (2.center); \draw[thick] (7.center) -- (2.center); \draw[thick] (8.center) -- (2.center); \foreach \i in {1,...,8}{ \draw[thick, fill=black] (\i.center) circle (0.06); } \end{tikzpicture} \caption{On the left, we have the two different initial cases, i.e., on the top,~$B_u\neq B_v$, on the bottom,~$B_u = B_v$. In the middle, the recursive call is made. On the right, we use the two recursive colorings and swap colors if needed. } \label{fig:tree-recursive-NM} \end{center} \end{figure} \begin{theorem}\label{thm:NM-balls-on-trees} $ \NMCN{balls}{trees} (t; n)=2$. \end{theorem} \begin{proof} The coloring obviously uses two colors. It remains to show it is non-monochromatic. We use induction on~$t$. If~$t=0$, the coloring is non-monochromatic since it uses the chain method. Suppose now that~$t\geqslant 1$ and that the claim holds for any tree space with fewer than~$t$ internal nodes. Let~$p$ be a point contained in at least two balls. If~$p$ is contained in balls only of~$\Objects(v)$, only of~$\Objects(u)$, or only of~$\Objects(e)$, it is contained in at least two balls of different colors. Indeed, the colorings of~$\Objects(v)$ and~$\Objects(u)$ are non-monochromatic since they use the method on a tree with fewer than~$t$ internal nodes and we can use the induction hypothesis. Moreover~$\Objects(e)$ is non-monochromatic due to the chain method. It remains to consider the case where~$p$ is contained in balls from at least two of the sets~$\Objects(u)$, $\Objects(v)$, and~$\Objects(e)$. 
We distinguish two cases: either~$p$ is contained in a ball of~$\Objects(e)$, or it is not. If~$p$ is contained in a ball~$B$ of~$\Objects(e)$, we can assume without loss of generality that~$p$ is also contained in a ball of~$\Objects(v)$. By Lemma \ref{lem:connected_core}(i), we have that~$p \in B_v$. If~$B_u = B_v$, then all balls in~$\Objects(e)$ are given a different color than~$B_v$, hence~$p$ is contained in two balls of different color. If~$B_u \neq B_v$, then we use the chain method on~$\pi$. Hence if~$p\in \pi$, it is contained in two balls of different color. To show that~$p$ is still contained in two balls of different colors if~$p\notin \pi$, it suffices to notice that the point~$p' \in \pi$ at distance~$d(u,p)$ from~$u$ is contained in exactly the same set of balls from~$\Objects(e)$ as~$p$; this holds because~$\pi$ is a longest path containing~$e$. On the other hand, if~$p$ is not contained in a ball of~$\Objects(e)$, then it is contained in at least one ball from~$\Objects(u)$ and one from~$\Objects(v)$. By Lemma~\ref{lem:connected_core} we have that~$p \in B_u \cap B_v$. We then have two cases. If~$B_u = B_v$, then~$p$ is contained in another ball of~$\Objects(u)$ or~$\Objects(v)$, and then the coloring is non-monochromatic by the induction hypothesis. Otherwise~$B_u$ and~$B_v$ are part of the chain~$\Objects(e) \cup \{ B_u, B_v \}$, and hence~$p$ is contained in at least two balls of different color. \end{proof} \mypara{CF-coloring.} The second algorithm CF-colors~$\Objects$ using~$\lceil \log t \rceil + 3$ colors. As before, define~$\Core:=\{B=B(c,r) \mid \exists x: B=B_x \}$. We explain how to color~$\Core$ and then extend the coloring to~$\Objects$. Let~$r$ be a node whose removal results in subtrees each with at most~$t/2$ internal nodes. We color~$B_r$ (if it exists) with color~1.
Let~$\TreeSpace_1,\ldots,\TreeSpace_{\deg(r)}$ be the subtrees resulting from removing~$r$, that is, the closures of the connected components of $\TreeSpace\setminus\{r\}$. For each~$i=1,\ldots, \deg(r)$, we recurse on~$\TreeSpace_i$ with the balls from $\Core$ whose centers lie in~$\TreeSpace_i$. In such a recursive call, we consider a node to be an internal node when it was an internal node in the original space~$\TreeSpace$ and when it has not yet been selected as a splitting node in a previous call. Hence, when $t=0$ in a recursive call on a subtree $\TreeSpace'\subset \TreeSpace$, then $\TreeSpace'$ must be a single edge both of whose endpoints have already been treated. The recursion stops when there are no more balls left (which must be the case when we have a recursive call with $t=0$). Note that the internal nodes are fixed from the beginning; hence, at some point of the recursion, a leaf node might still be considered internal for the purposes of the recursion. \begin{lemma}\label{lem:balls-on-trees-CF-core} The above algorithm CF-colors~$\Core$ using~$\lceil \log t \rceil$ colors. \end{lemma} \begin{proof} The number of colors used follows immediately from the splitting of~$\TreeSpace$ into trees with at most~$\frac{t}{2}$ internal nodes each. We now show the coloring is indeed conflict-free by showing that it is a unimin coloring: for any point~$p$, the minimum color among the colors of the balls containing~$p$ is unique. Let~$p\in\TreeSpace$ be a point contained in two balls~$B_1=B(p_1,r_1)$ and~$B_2=B(p_2,r_2)$, both of color~$i$. We show that this implies the existence of a ball of smaller color containing~$p$. Let~$v_1$ be a node~$B_1$ is assigned to, and~$v_2$ a node~$B_2$ is assigned to. Since~$B_1$ and~$B_2$ have color~$i$, they were contained in different trees when they were colored in the recursive process. Let~$v_0$ be the node that disconnected~$v_1$ and~$v_2$, and let~$B_0$ be the ball assigned to~$v_0$. Note that~$c(B_0)<i$.
We prove that~$p\in B_0$. Let~$\pi$ be the unique simple path between~$p$ and~$v_0$. It cannot be the case that both~$p_1 \in \pi$ and~$p_2 \in \pi$. Suppose without loss of generality that~$p_2 \notin \pi$. Let~$d$ be the distance between~$p$ and~$v_0$. Since~$p\in B_2$, we have that~$cov_{v_0}(B_2) \geqslant d$. Moreover, since~$cov_{v_0}(B_0) \geqslant cov_{v_0}(B_2)$, we have that~$p\in B_0$, concluding the proof. \end{proof} We now wish to extend the coloring to balls in $\Objects\setminus \Core$. To this end, define~$\TreeSpace' := \TreeSpace \setminus (\bigcup \Core)$ to be the part of $\TreeSpace$ that remains after removing all points covered by the balls in~$\Core$. We finish the coloring with three more colors (using the chain method for CF-colorings) as explained next, resulting in~$\lceil \log t \rceil +3$ colors. We use the following lemma to show that the remaining balls can be reduced to intervals on disjoint lines. Note that the lemma does not rely on the space being a tree, and hence can also be applied to planar network spaces. \begin{lemma}\label{lem:not-core-then-in-single-edge} For any ball~$B\notin \Core$, we have $\{p \in B \mid p\notin \bigcup \Core \} \subseteq e, $ where~$e$ is the edge containing the center of~$B$. \end{lemma} \begin{proof} Suppose for a contradiction that there is a point~$p\notin e$ contained in~$B$ but not in~$\bigcup \Core$. Consider the endpoint~$v$ of~$e$ belonging to the geodesic from the center of~$B$ to~$p$. We claim that~$cov_v(B)>cov_v(B_v)$, contradicting the definition of~$B_v$. Indeed, $cov_v(B)>d(v,p)$ (since $v$ lies on the geodesic from $B$'s center to~$p$) and $cov_v(B_v)<d(v,p)$ (since $p\notin \bigcup\Core$ and, hence, $p\notin B_v$). \end{proof} \begin{theorem} $\CFCN{tree}{balls} (t; n) \leqslant \lceil \log t \rceil +3$. \end{theorem} \subsection{Tree spaces: the lower bound} \begin{lemma} $\CFCN{tree}{balls} (t; n) \geqslant \left\lceil\log (t+1) \right\rceil.$ \end{lemma} \begin{proof} Let~$\TreeSpace$ be as follows.
We take~$t+2$ points~$p_1,\ldots,p_{t+2}$ in the plane, with~$p_i=(i, 0)$ for each~$i=1,\ldots, t+2$, and we link consecutive points with a unit distance segment. We then take~$t+2$ additional points~$p'_1,\ldots, p'_{t+2}$, with~$p'_i=(i,t+2)$, and for each~$i=1,\ldots,t+2$ we link~$p_i$ and~$p'_i$ with a segment of length~$t+2$. Note that~$p_1$ and~$p_{t+2}$ do not count as internal nodes as their degree is two. Finally, we place~$t+1$ balls~$B_1=B(c_1,t+2),\ldots, B_{t+1}=B(c_{t+1}, t+2)$, for all~$i=1,\ldots,t+1$, with~$c_i=(i+\frac{2}{3}, 0)$, see Fig.~\ref{fig:comb}. \begin{figure} \begin{center} \begin{tikzpicture}[scale=0.65] \draw[line width=1.3mm, red!50, shorten >=-0.4mm] (1,2.34) -- (1,0) -- (5,0) -- (5,1.66); \draw[line width=1.3mm, red!50, shorten >=-0.4mm] (2,0) -- (2,3.34); \draw[line width=1.3mm, red!50, shorten >=-0.4mm] (3,0) -- (3,3.66); \draw[line width=1.3mm, red!50, shorten >=-0.4mm] (4,0) -- (4,2.66); \node at (2.5, 3) {$B_2$}; \foreach \i in {1,2,...,5}{ \node at (\i,0) (\i) {}; \draw[thick, fill=black] (\i,0) circle (0.06); \draw[thick, fill=black] (\i,4) circle (0.06); \draw[thick] (\i,0) -- (\i,4); \node at ([yshift=-3mm]\i.center) {$p_\i$}; \node at ([yshift=43mm]\i.center) {$p'_\i$}; } \draw[thick] (1.center) -- (2.center) -- (3.center) -- (4.center) -- (5.center); \foreach \i in {1,2,...,4}{ \draw[thick, fill=white] ([xshift=6.6mm]\i.center) circle (0.06); \node at ([xshift=6.6mm, yshift=3mm]\i.center) {$c_\i$}; } \end{tikzpicture} \end{center} \caption{Example of the lower bound construction with~$t=3$. For clarity purposes, only~$B_2$ is displayed, in red. } \label{fig:comb} \end{figure} Consider the hypergraph~$\mathcal{H}$ whose nodes are the balls~$B_i$, and whose hyperedges are the subsets of balls such that there is a point~$p\in \TreeSpace$ contained in exactly that subset (and no other balls). 
We claim (and will prove below) that the set of hyperedges is exactly the set~$\{ \{ B_i,B_{i+1},\ldots, B_j \} \mid i \leqslant j \}$. In other words, there is a hyperedge for a subset of balls if and only if there is an interval on the $x$-axis containing exactly the centers of these balls. Hence, we can apply the~$\ceil{\log (t+1)}$ lower bound for CF-coloring points with respect to intervals~\cite{even-cf-03}. To prove the claim, note that if~$c_i$ is the ball center nearest to~$p$, then~$d(p,c_1)> d(p,c_2)> \cdots > d(p,c_i)$ and~$d(p,c_i)< d(p,c_{i+1})< \cdots < d(p,c_{t+1})$; since all balls have the same radius, this implies that any hyperedge is of the form $\{B_i,B_{i+1},\ldots,B_j\}$. On the other hand, the point~$(\floor{(j+i)/2},t+2-(j-i)/2)$ is contained in exactly the balls~$B_i,B_{i+1},\ldots,B_j$. \end{proof} \subsection{Planar network spaces}\label{sec:balls-on-networks} \mypara{NM-coloring.} We first explain how to NM-color balls on a planar network space~$\mathcal{N}\xspace$. Let again~$\Core$ be the set~$\{B=B(c,r) \mid \exists x: B=B_x \}$. We create a graph~$\mathcal{G}_\Core$ whose node set is~$\Core$ and whose edge set is defined as follows: there is an edge between~$B$ and~$B'$ if and only if there is an edge $vv'$ in~$\mathcal{N}\xspace$ with~$B_v=B$ and~$B_{v'}=B'$. It follows from Lemma~\ref{lem:connected_core} that for any ball~$B$, the set of nodes of~$\mathcal{N}\xspace$ to which~$B$ is assigned, together with the edges between these nodes, is a connected set. Therefore,~$\mathcal{G}_\Core$ is planar as well since its nodes correspond to disjoint connected subspaces in the planar space~$\mathcal{N}\xspace$. We now use the Four Color Theorem to color~$\mathcal{G}_\Core$ and we give each ball in~$\Core$ the same color as the corresponding node in~$\mathcal{G}_\Core$.
Now let~$p$ be a point contained in two balls~$B_1$ and~$B_2$ of the same color. Let~$v_1$ and~$v_2$ be nodes of~$\mathcal{N}\xspace$ with~$B_1=B_{v_1}$ and~$B_2=B_{v_2}$. Let~$\pi_1$ and~$\pi_2$ be two shortest paths between~$p$ and~$v_1, v_2$, respectively. If all the nodes in~$\pi_1 \cup \pi_2$ are assigned either~$B_1$ or~$B_2$, then there is an edge between~$B_1$ and~$B_2$ in~$\mathcal{G}_\Core$ and hence~$B_1$ and~$B_2$ are given different colors. Therefore there must be a node in~$\pi_1 \cup \pi_2$ whose assigned ball is not in~$\{ B_1, B_2 \}$. Moreover, if every such ball had color~$c(B_1)$, then~$\mathcal{G}_\Core$ would contain an edge between two balls of the same color, which is a contradiction; hence there must be a vertex~$v$ (we assume without loss of generality that~$v\in \pi_1$) with~$c(B_v)\neq c(B_1)$. Since~$\pi_1$ is a shortest path between~$v_1$ and~$p$, and since~$v\in \pi_1$, we have that~$\pi_1$ contains a shortest path between~$v$ and~$p$. Moreover,~$cov_v(B_v) \geqslant cov_v(B_1) \geqslant d(v,p)$, which implies that~$p \in B_v$ and concludes the proof. \end{proof} We now wish to extend the coloring to balls in $\Objects\setminus \Core$. To this end, define $\mathcal{N}\xspace' := \mathcal{N}\xspace \setminus (\bigcup \Core)$ to be the part of $\mathcal{N}\xspace$ that remains after removing all points covered by the balls in~$\Core$. The proof of the following lemma is similar to the proof of Lemma~\ref{lem:not-core-then-in-single-edge}. \begin{lemma} Consider a ball $B\in \Objects\setminus\Core$, and let~$B' := B \cap \mathcal{N}\xspace'$. Then $B'$ is contained in a single edge of $\mathcal{N}\xspace'$. \end{lemma} For each edge $e$ of $\mathcal{N}\xspace'$, let $\Objects(e)$ denote the set of balls contained in~$e$. Let $u$ and $v$ denote the endpoints of the edge in $\mathcal{N}\xspace$ containing~$e$.
We color the uncolored balls in~$e$ using the chain method with two colors not equal to~$c(B_u)$ and~$c(B_v)$. We have now colored the balls in $\Core$ as well as the balls in $\Objects\setminus\Core$ that lie at least partially in $\mathcal{N}\xspace'$. Next we explain how to color the remaining balls, which are fully covered by the balls in $\Core$. \begin{lemma}\label{lem:3-balls} Any uncolored ball is contained in the union of at most three balls. \end{lemma} \begin{proof} Any uncolored ball~$B$ is contained in~$\cup \Core$. If~$B$ is fully contained in a single edge~$e$ of~$\mathcal{N}\xspace$, it must be covered by the two balls from~$\Core$ extending the farthest into~$e$, starting from each of the two endpoints. If not, let~$v$ be a node contained in~$B$. Now~$B \setminus B_v$ is contained in a single edge~$e$ of~$\mathcal{N}\xspace$ and so $B\setminus B_v$ can be covered by two balls (as just explained), which implies that $B$ can be covered by three balls. \end{proof} Using this lemma, we can easily finish the NM-coloring. \begin{theorem} $\NMCN{planar}{balls} (t; n) = 4$. \end{theorem} \begin{proof} The coloring obviously uses four colors at most. Moreover, it is easy to see the coloring is non-monochromatic. It remains to show that there is an instance requiring at least four colors. To that purpose, let~$\mathcal{N}\xspace$ be an embedding of~$K_4$ where all edges have length one. Then, for each node~$v$ of~$\mathcal{N}\xspace$, we create the ball~$B(v,2/3)$. Since no two balls can have the same color, we need at least four colors. \end{proof} \mypara{CF-coloring.} We now explain how to CF-color balls on a planar network. As before, define~$\Core := \{B=B(c,r) \mid \exists x: B=B_x \}$. We first CF-color $\Core$ using the following recursive algorithm introduced by Smorodinsky~\cite{smor-geomCF-06}: we select a maximum independent set in~$C_1:=\Core$, we give it color 1, place all uncolored balls in~$C_2$, and recurse. 
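As a toy illustration of this recursive framework (not the algorithm of this paper, which works with balls on a network and a maximum independent set), the following sketch CF-colors intervals on a line, taking a greedy maximal independent set of the current Delaunay graph at each level; all names are ours.

```python
def delaunay_edges(intervals):
    # Edge between two intervals iff some point lies in exactly these two;
    # endpoints and midpoints between consecutive endpoints suffice as tests.
    xs = sorted({x for iv in intervals for x in iv})
    points = xs + [(a + b) / 2 for a, b in zip(xs, xs[1:])]
    edges = set()
    for p in points:
        cover = [iv for iv in intervals if iv[0] <= p <= iv[1]]
        if len(cover) == 2:
            edges.add(frozenset(cover))
    return edges

def cf_color(intervals):
    """Color a greedy maximal independent set of the current Delaunay
    graph with the current level, remove it, and recurse on the rest."""
    color, level, remaining = {}, 1, list(intervals)
    while remaining:
        edges = delaunay_edges(remaining)
        indep = []
        for iv in remaining:
            if all(frozenset((iv, jv)) not in edges for jv in indep):
                indep.append(iv)
        for iv in indep:
            color[iv] = level
        remaining = [iv for iv in remaining if iv not in indep]
        level += 1
    return color
```

For the intervals $(0,2)$, $(1,3)$, $(2,4)$ this yields colors $\{(0,2)\mapsto 1,\ (2,4)\mapsto 1,\ (1,3)\mapsto 2\}$: at every covered point, the highest level present is carried by a unique interval.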
We claim that for all~$i$, the Delaunay graph~$D_i:=(C_i,E_i)$ on the balls in~$C_i$ is planar, where~$E_i:= \{ \{B_1, B_2\} \mid \exists p\in \mathcal{N}\xspace: p \in B_1 \cap B_2 \text{ and } \forall B\notin \{ B_1, B_2\} : p \notin B \} $. \begin{lemma}\label{lem:D_i-planar} $D_i$ is planar. \end{lemma} \begin{proof} We draw~$D_i$ using the drawing of~$\mathcal{N}\xspace$ as follows: each ball is represented by its center. Then, for every edge in~$D_i$, we find a witness, that is, a point contained in the intersection of the two balls and not in any other ball. We finally draw the edge as two geodesics on~$\mathcal{N}\xspace$: one from one endpoint to the witness point, and the other from the witness point to the other endpoint. We claim that this drawing is plane. Suppose for a contradiction that this is not the case and there is a crossing between the two edges~$B_1B_3$ and~$B_2B_4$. Suppose also that the endpoints of the two edges are distinct: the argument when an endpoint is shared is similar. Since we based our drawing on~$\mathcal{N}\xspace$, a planar graph, the point where the two edges cross must be a node~$x$ in~$\mathcal{N}\xspace$. Let~$w_{13}$ be the witness of the edge~$B_1B_3$ and~$w_{24}$ the witness of~$B_2B_4$. Fig.~\ref{fig:crossing} shows the two crossing edges, with the crossing node~$x$ in the middle, and the two witnesses~$w_{13}$ and~$w_{24}$ used to draw the geodesics.
\begin{figure} \begin{center} \begin{tikzpicture}[scale=0.65] \node at (-1,-1) (a) {}; \node at (1,-1) (b) {}; \node at (0,0) (c) {}; \node at (-1,1) (d) {}; \node at (1,1) (e) {}; \foreach \a/\l/\loc in {a/B_1/below left, b/B_2/below right, c/ /below, d/B_4/above left, e/B_3/above right}{ \draw[thick, fill] (\a) circle (0.06); \node[\loc] at (\a) {$\l$}; } \node at ([yshift=-4mm]c.center) {$x$}; \draw[thick] (a.center) -- (c.center) -- (d.center); \draw[thick] (b.center) -- (c.center) -- (e.center); \draw[thick] (-0.7,0.5) --++ (0.2,0.2); \node at (-0.9,0.3) () {$w_{24}$}; \draw[thick] (0.5,0.3) --++ (-0.2,0.2); \node at (0.7,0.1) () {$w_{13}$}; \end{tikzpicture} \end{center} \caption{We suppose for a contradiction that the edges~$B_1B_3$ and~$B_2B_4$ cross. The crossing point is a node~$x$ of~$\mathcal{N}\xspace$. Let~$w_{13}$ be the witness of the edge~$B_1B_3$ and~$w_{24}$ the witness of~$B_2B_4$. } \label{fig:crossing} \end{figure} Suppose, without loss of generality, that the distance from~$x$ to~$w_{24}$ is greater than or equal to the distance from~$x$ to~$w_{13}$. Since~$x$ lies on one of the two geodesics forming the edge~$B_2B_4$, say the one from the center of~$B_2$ to~$w_{24}$, the distance from the center of~$B_2$ to~$w_{13}$ is at most its distance to~$w_{24}$. Hence,~$w_{13}$ is also contained in the ball~$B_2$, which contradicts the definition of a witness. Thus, the drawing is plane. \end{proof} Using this lemma and the Four Color Theorem, we get a coloring on~$\Core$ using~$\lceil \log_{4/3} t \rceil$ colors: a proper four-coloring yields an independent set containing at least a quarter of the balls, so each round removes at least a quarter of them. Note that this method does not give an efficient algorithm because of the use of the Four Color Theorem. For a fast algorithm, we can use a linear-time algorithm~\cite{Chiba_5_coloring} to find an independent set of size at least~$n/5$, leading to~$\lceil \log_{5/4} t \rceil$ colors. We then color the balls in~$\Objects \setminus \Core$. Using Lemma~\ref{lem:not-core-then-in-single-edge}, we have that for any such ball~$B$, the set of points contained in~$B$ but not in any ball in~$\Core$ is contained in one edge of~$\mathcal{N}\xspace$.
Therefore, if we cut~$\cup\Core$ out of~$\mathcal{N}\xspace$, the remaining space is a union of disjoint segments, and any object that is not colored is contained in at most one segment. We can therefore use the chain coloring on each segment with the two additional colors and the dummy one. Finally, any point in~$\cup\Core$ is contained in a ball in~$\Core$ of unique color, and any point not in~$\cup\Core$ is contained in at most one ball of each of the two additional colors. Therefore, the coloring is conflict-free. This yields the following theorem. \begin{theorem} $\CFCN{planar}{balls} (t; n) \leqslant \lceil \log_{4/3} t \rceil +3$. \end{theorem} \section{Concluding Remarks} We studied NM- and CF-colorings on network spaces, where the objects to be colored are connected regions of the network space. We showed that the number of colors can be bounded as a function of the complexity (which depends on the type of space and of objects) of the network space and the objects, rather than on the number of objects. All our bounds are tight up to constant factors, except for $\CFCN{tree}{trees} (k,\ell; n)$, where the upper bound is a factor~$\ell$ away from the lower bound. Closing this gap remains an open problem. It would also be interesting to find bounds for general connected objects on any network space, or other settings where the number of colors depends on the complexity of the space and objects rather than on the number of objects.
\section{Introduction} \label{S-introduction} Solar flares, coronal mass ejections (CMEs), associated shock waves, and related phenomena are known as causes of space weather disturbances. Hard electromagnetic emissions and energetic particles pose a hazard to space-borne equipment, astronauts on spacecraft, and even crew members and passengers on aircraft on transoceanic flights entering high latitudes. CME-associated shock waves travel over large distances in the heliosphere and are responsible for the geomagnetic storm sudden commencement (SSC). Magnetic structures of CMEs hitting the Earth's magnetosphere can cause strong geomagnetic storms. Despite their space weather impact, the origin and interrelation of solar eruptive phenomena are still not quite clear. Comprehending solar eruptions is hampered by observational difficulties. The existing concepts are mostly based on hypotheses proposed several decades ago and on back-extrapolated results of in-situ measurements in near-Earth space. According to a widely accepted view, the main driver of a solar eruption is a magnetic flux rope. It is considered to be the active structure of a CME, governing its development and subsequent expansion. The flux rope is traditionally assumed to be associated with the CME cavity. Prominences (filaments) or associated structures appear to be among the most probable flux-rope progenitors \citep{Gibson2015}. However, the genesis of flux ropes, their size range, and other properties remain unclear. According to some concepts, the flux rope already exists before the eruption onset \citep{Chen1989, Chen1996, Cheng2013}. Other concepts relate the flux-rope formation to reconnection processes, which are also responsible for solar flares \citep{InhesterBirnHesse1992, LongcopeBeveridge2007, Qiu2007}. There is no consensus about coronal shock waves.
Some authors advocate flare-ignited blast waves at least in some events \citep{Magdalenic2010, Magdalenic2012, Nindos2011}. Other studies demonstrate that a CME-related origin of shock waves is more probable (e.g. \citealp{Cliver2004}). While basic excitation mechanisms of shock waves seem to be known (see, e.g., \citealp{VrsnakCliver2008}), observational difficulties result in large uncertainties in their identification. Solar eruptions and associated phenomena are manifested in different spectral domains, including microwaves. Radio emission is produced by various mechanisms, providing important information on these phenomena and the responsible processes. Being sensitive to gyrosynchrotron emission of nonthermal electrons, microwaves reveal the flare regions. The microwave spectrum contains information about accelerated electrons and magnetic fields in the corona. Being sensitive to thermal plasma emission, microwave images show eruptive prominences (filaments). Screening of the background solar emission by erupted prominence material sometimes produces depressions, termed ``negative bursts'', that are detectable even in the total microwave flux \citep{CovingtonDodson1953}. From studies of the negative bursts, events with reconnection between erupting structures and a large-scale coronal magnetic environment were identified \citep{Grechnev2013neg, Grechnev2014_I, Uralov2014}. These examples demonstrate the significant contribution of microwave imaging and non-imaging observations to studies of solar eruptions. Microwave images produced by radio heliographs generally have a poorer spatial resolution than extreme-ultraviolet (EUV) and X-ray telescopes. Nevertheless, it is sometimes possible to draw conclusions even about structures that are unresolved in microwave images \citep{GrechnevKochanov2016, Grechnev2017_II, Lesovoi2017}. In 2016, the first 48-antenna stage of the Siberian Radioheliograph (SRH; \citealp{Lesovoi2014, Lesovoi2017}) started observing the Sun.
An overview of the SRH data has revealed several indications of eruptions. Proceeding from these indications, we consider a few eruptive events observed by different instruments and endeavor to address the challenges listed in this section. We pay special attention to the 16 March 2016 eruptive event, one of the first flares observed by the SRH \citep{Lesovoi2017}. Multi-instrument analysis of large-scale aspects of this event promises to shed additional light on the development of a CME and associated shock wave. Section~\ref{S-srh} outlines the SRH. Section~\ref{S-neg_bursts} presents observations of microwave depressions caused by small jets. Section~\ref{S-eruption_may_1} presents direct observations of a spray on 1 May 2017. Section~\ref{S-march16} is devoted to a multi-instrument analysis of an eruptive event on 16 March 2016 that produced a CME and caused a near-Earth proton enhancement. Section~\ref{S-discussion} discusses the results and shows their relevance to a typical eruptive event. Section~\ref{S-summary} summarizes our conclusions and their implications and presents recent changes in the functionality of the SRH. \section{SRH: 48-Antenna First Stage} \label{S-srh} The SRH was constructed as an upgrade of the Siberian Solar Radio Telescope (SSRT: \citealp{Smolkov1986, Grechnev2003ssrt}). The SSRT was designed as a cross-shaped interferometer comprising two linear arrays in the EW and SN directions, each with 128 equidistant antennas of 2.5\,m diameter spaced by $d = 4.9$\,m. The SSRT scans the Sun by means of the Sun's diurnal passage through the fan beam, which is formed by simultaneous reception at a number of different but closely spaced frequencies in the 5.67--5.79\,GHz band. Thus, the SSRT can produce images, effectively at a single frequency, no more often than every 2--3 minutes. Unlike the directly-imaging SSRT, the SRH uses Fourier synthesis. The temporal resolution determined by the receiver system is much higher than that of the SSRT. The SRH has a T-shaped antenna array.
Its 1.8\,m antenna elements replace old SSRT antennas, being installed at the existing posts along the east, west, and south arms. The first 48-antenna stage constitutes a dense equidistant part of a future complete SRH antenna array (Figures \ref{F-SRH_config} and \ref{F-heliograph}). Being redundant, this array provides a high sensitivity, which is about 1000~K in the images and, for compact sources, reaches $10^{-4}$ of the total solar flux, i.e., about 100~Jy \citep{LesovoiKobets2017}. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {srh_config.eps} } \caption{The T-shaped configuration of the 48-antenna SRH first stage. The remote parts of the four SSRT arms (each 311~m long) with remaining old antennas are not shown. } \label{F-SRH_config} \end{figure} \begin{figure*} \centerline{\includegraphics[width=\textwidth] {heliograph.eps} } \caption{View of the 48-antenna SRH first stage (the east arm). White remote larger dishes on the right (east) belong to the old SSRT antenna system. Separate dishes on the ground behind the SRH antennas belong to the total-flux spectropolarimeters \citep{ZhdanovZandanov2015}. The receiver and control systems are located in the working building visible behind the SRH antennas on the left.} \label{F-heliograph} \end{figure*} Both circularly-polarized components are measured. The observing frequencies, each of 10\,MHz bandwidth in the 4--8\,GHz range, are set by software and can be optimized for an observing program. The accumulation time at each frequency is 0.28\,s for each circularly-polarized component, and the time to switch from one frequency to another was about 2\,s in 2016 and 2017. The maximum baseline used is 107.4\,m, enabling a spatial resolution down to $70^{\prime \prime}$ at 8\,GHz. The SRH systems outlined in Figure~\ref{F-SRH_struct} were mostly developed and constructed by the SRH team. The top image in that figure represents a single antenna element.
The antenna feed receives two orthogonal linearly-polarized signals, which come into the frontend unit. A 3-dB $90^{\circ}$ hybrid coupler performs the linear-to-circular polarization conversion of the input signals. Then they are pre-amplified and come to a switch, which alternately passes the left-handedly polarized signal (LCP) and the right-handedly polarized one (RCP). The signals from the output of the switch come through the second amplifier to a diode laser, which converts the ultrahigh-frequency (UHF) signals to optical signals for their transmission to the working building. The total gain of the frontend unit is 30--40\,dB. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {srh_structure.eps} } \caption{A scheme of the SRH hardware. The frontend units (middle left) are installed in all antenna elements (top left). The backend of the receiver and the correlator (bottom right) are located in the working building. Twelve backend units are mounted in the left cabinet, and the correlator is located in the right cabinet. The green arrows denote the paths of the signals. } \label{F-SRH_struct} \end{figure} The signal from each antenna element is transmitted to the backend of the receiver located in the working building (Figure~\ref{F-heliograph}) through the optical fiber link located in the tunnel. Each backend unit (Figure~\ref{F-SRH_struct}, bottom-left) processes the signals from four antennas. The input optical signals are converted back to the UHF, amplified, transformed to an intermediate frequency, and digitized at 100\,MHz. Their subsequent digital processing includes the formation of the operating frequency band, coarse and fine compensation for the geometric delays and difference in the cable lengths, and fringe stopping. Finally the digital signals come to a correlator mounted in the right cabinet shown in Figure~\ref{F-SRH_struct} (bottom right). 
The correlator currently produces 512 complex visibilities for imaging and several tens more for calibration purposes. Redundant baselines are not used in the imaging. Single-frequency test observations started at the SRH early in 2016. From July 2016 until December 2017, the SRH routinely observed the Sun at five frequencies. To monitor solar activity and the main SRH systems, the so-called correlation plots are used. Being a proxy of radio flux, they represent temporal variations in the sum of cross-correlations of all antenna pairs \citep{LesovoiKobets2017} and show the changes in both the brightness and structure of the sources. Real-time correlation plots and quick-look images produced by the SRH at a set of operating frequencies are accessible online at the SRH Web site \url{http://badary.iszf.irk.ru/}. Adjustment of the SRH systems is still in progress. Raw SRH data contain complex visibilities measured at a given set of frequencies in right and left circularly-polarized components, information on the array geometry, time stamps, etc. The data are stored in binary FITS tables. The Python-based library providing basic programming user interfaces for data handling, phase calibration, and interferometric imaging routines is under development. The phase calibration tasks use the baseline redundancy of the east, west, and south SRH arms and resolve phase ambiguities in the sense of an overdetermined optimization problem. To clean raw SRH images, we tentatively apply an MS-CLEAN algorithm \citep{Cornwell2008}. Parameters of the algorithm can be adjusted to meet diverse observational requirements. The technique to calibrate the images in brightness temperatures \citep{Kochanov2013, Lesovoi2017} is based on a well-known method of referring to the most frequent pixel values over the solar disk and those over the sky.
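A minimal sketch of such a two-point (disk-mode/sky-mode) calibration is given below; the function names and the synthetic example are ours, and the actual SRH pipeline is more involved.

```python
import numpy as np

def calibrate(image, on_disk, t_quiet):
    """Scale a raw map to brightness temperature: the modal (most frequent)
    pixel value over the disk is referred to the quiet-Sun level t_quiet,
    and the modal value over the sky to zero."""
    def mode(values, bins=200):
        hist, edges = np.histogram(values, bins=bins)
        k = int(np.argmax(hist))
        return 0.5 * (edges[k] + edges[k + 1])
    m_disk = mode(image[on_disk])
    m_sky = mode(image[~on_disk])
    return (image - m_sky) * t_quiet / (m_disk - m_sky)
```

Because the modal values are insensitive to compact bright sources, a localized burst on the disk does not bias the calibration.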
We refer the quiet-Sun brightness temperature to the measurements by \cite{Zirin1991} and \cite{Borovik1994}, fitting their frequency dependence with a fourth-order polynomial in the log--log scale. In particular, we adopt the values of 21.6, 18.1, 16.0, 14.6, and 13.6 thousand Kelvin at frequencies of 4.0, 5.0, 6.0, 7.0, and 8.0 GHz, respectively. The remaining outer SSRT antennas of the three arms and the whole north arm continue observing in the original operating mode, providing images of compact sources at 5.7\,GHz with a resolution down to $21^{\prime \prime}$. Daily quick-look SSRT images near the local noon are available at the SRH Web site. \section{Microwave Depressions} \label{S-neg_bursts} Temporary depressions of the total microwave flux below the quasi-stationary level, known as negative bursts, were discovered by \cite{CovingtonDodson1953} from observations at 10.7\,cm (2.8\,GHz). Typically, a negative burst follows an ordinary flare-related impulsive burst, when the eruption screens a radio source located in the same or a nearby active region. The cause of a negative burst is the screening of a compact microwave source \citep{Covington1973, Sawyer1977, Maksimov1991} and/or large areas of the quiet Sun by low-temperature absorbing erupted material. Hence, microwave depressions indicate probable eruptions. The dependence of the absorption depth on both the observing frequency and properties of the absorbing plasma makes it possible, in principle, to estimate some parameters of an erupting structure, if a depression is observed at different frequencies (see, e.g., \citealp{Grechnev2008, Grechnev2013neg, KuzmenkoGrechnev2017}). Because both the opacity of a filament or surge and its contrast against the solar disk vary inversely with frequency, negative bursts are observed mainly at 1--10\,GHz.
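The log--log polynomial interpolation of the quiet-Sun brightness temperature described in Section~\ref{S-srh} can be sketched as follows; the \texttt{numpy}-based implementation is our illustration, not the SRH pipeline code.

```python
import numpy as np

# Quiet-Sun brightness temperatures (thousands of Kelvin) adopted at
# 4--8 GHz, interpolated by a fourth-order polynomial in log-log space.
freq_ghz = np.array([4.0, 5.0, 6.0, 7.0, 8.0])
t_qs_kk = np.array([21.6, 18.1, 16.0, 14.6, 13.6])
coeffs = np.polyfit(np.log(freq_ghz), np.log(t_qs_kk), 4)

def t_quiet_sun(f_ghz):
    """Quiet-Sun brightness temperature (kK) at frequency f_ghz (GHz)."""
    return float(np.exp(np.polyval(coeffs, np.log(f_ghz))))
```

With five points and degree four the fit is an exact interpolation of the tabulated values; evaluated at 5.2\,GHz it gives approximately 17.6\,kK, consistent with the quiet-Sun value used in Section~\ref{S-eruption_may_1}.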
Although eruptions occur often, detection of microwave depressions requires a high sensitivity and calibration stability of total-flux radiometers, which makes negative bursts rare phenomena. From 1990 through 2009, their total number recorded by all ground-based stations was 72, with a maximum yearly number of only 14, reached in 1991 \citep{Grechnev2013neg}. Previously, negative bursts were observed almost exclusively in total intensity. With an operating frequency range within 4--8\,GHz and a high sensitivity, the SRH observations promise the detection of eruption-related absorption phenomena. The simplest way to detect a microwave depression is provided by the correlation plots. \cite{Lesovoi2017} presented an unprecedented series of three negative bursts observed in one day on 9 August 2016 by the SRH and Nobeyama Radio Polarimeters (NoRP: \citealp{Torii1979, Nakajima1985}) in both intensity and polarization. These negative bursts were caused by repeating surges, which screened a polarized sunspot-associated microwave source in active region (AR) 12574 located not far from the limb (N04\,E59). Here we present examples of microwave depressions revealed from the SRH data that indeed point to small eruptions. Some of the eruptions indicated by the SRH are too weak and small to be easily detected in observations at other wavelengths. The possibilities of plasma diagnostics for such eruptions based on the SRH data are discussed in Section~\ref{S-summary_eruptions}. \subsection{A Small Eruption on 9 September 2017} A conspicuous microwave depression recorded on 9 September 2017 is visible between the vertical dash-dotted lines in Figure~\ref{F-2017-09-09_timeprof}a, which presents the SRH intensity and polarization correlation plots at a frequency of 7.5\,GHz. The bursts at 03:06, 04:00, 04:26, and a spiky burst at 06:55 are associated with GOES C6.3, C4.2, M1.1, and C1.7 flares, respectively, all of which occurred in AR\,12673.
The excursions around 01:00 and 06:15 are caused by the Sun-to-sky calibration maneuvers of the antenna system. The depression in intensity has a counterpart in polarization, indicating the screening of a polarized source. The plots at the other frequencies are similar. The SRH images reveal that the brightness decreased in a microwave source located close to the west limb. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2017-09-09_timeprof.eps} } \caption{Temporal profiles of the small eruption on 9 September 2017. The temporal profile in panel d was computed over the framed region in Figure~\ref{F-2017-09-09_aia304} from the quarter-resolution beacon AIA 304\,\AA\ images with a 3-minute interval. The vertical dotted lines denote the times of the images in Figure~\ref{F-2017-09-09_aia304} whose panels are indicated by the bold-italic letters in panel d.} \label{F-2017-09-09_timeprof} \end{figure} The depression was caused by a small eruption associated with a short (7-minute) impulsive C1.7/1F flare (S10\,W70) in AR\,12673. From 4 through 10 September, this superactive region produced four X-class flares and numerous weaker events. The major eruptive events in this region caused strong fluxes of energetic particles, a severe geomagnetic storm on 7--9 September, a deep Forbush decrease, and a ground-level enhancement of cosmic-ray intensity (GLE72) on 10 September, as AR\,12673 arrived at the west limb. The event of interest was much weaker. The intensitygram in Figure~\ref{F-2017-09-09_aia304}a produced on 9 September by the \textit{Helioseismic and Magnetic Imager} (HMI: \citealp{Scherrer2012}) onboard the \textit{Solar Dynamics Observatory} (SDO) shows that AR\,12673 comprised several sunspots. It had a complex $\beta \gamma \delta$ magnetic configuration.
Figures \ref{F-2017-09-09_aia304}b\,--\,\ref{F-2017-09-09_aia304}d present three episodes of the small event observed by the \textit{Atmospheric Imaging Assembly} (AIA: \citealp{Lemen2012}) onboard SDO in the 304\,\AA\ channel, which is most sensitive to low-temperature plasma. Here we used quarter-resolution beacon AIA files available with an interval of 3 minutes. AIA did not observe the whole Sun between 06:27 and 06:54. Figure~\ref{F-2017-09-09_aia304}b shows a flare brightening with a circular ribbon. Figure~\ref{F-2017-09-09_aia304}c reveals a jet-like eruption. Figure~\ref{F-2017-09-09_aia304}d presents the active region after the event. \begin{figure} \centerline{\includegraphics[height=0.9\textheight] {2017-09-09_surge.eps} } \caption{Small eruption on 9 September 2017 in the SDO/AIA 304\,\AA\ images (b--d) in comparison with a sunspot group visible in an HMI intensitygram (a). The axes indicate the distance from solar disk center in arcseconds.} \label{F-2017-09-09_aia304} \end{figure} Figures \ref{F-2017-09-09_timeprof}b and \ref{F-2017-09-09_timeprof}c show expanded correlation plots in intensity and polarization. While the structure of the active region is unresolved by the SRH, the change in the polarization indicates the screening of one or more sunspot-associated sources in AR\,12673. Figure~\ref{F-2017-09-09_timeprof}d presents the average brightness in 304\,\AA\ over the framed region in Figure~\ref{F-2017-09-09_aia304} to compare the EUV and microwave observations. The microwave depression lasted somewhat longer than the jet was visible in the 304\,\AA\ images. The \url{2017-09-09_AIA304_WL_SRH.mpg} movie presents the course of the event as observed by AIA in 304\,\AA\ (left) in comparison with HMI intensitygrams (right). The bottom plot shows the same 304\,\AA\ light curve in Figure~\ref{F-2017-09-09_timeprof}d in white and the 7.5\,GHz correlation plots in yellow scaled to match the plotted range. 
The red vertical line on the plots marks the current observation time. A short-lived flare brightening visible in one image is followed by an eruption (surge) from the same region. The rising material of the surge is initially narrow and bright, indicating a temperature of order $5 \times 10^{4}$~K. Then the surge broadens, darkens, and screens the structures behind it. The absorption indicates a temperature of the erupted material of $ < 10^{4}$~K. The surge partly covers a sunspot group in AR\,12673 behind it. The screening of microwave sources above the sunspots causes the depression in total intensity and change in polarization. After 07:20 the opacity of the surge gradually decreases, which corresponds to the recovery of the 304\,\AA\ emission flux. The microwave emission recovers later. The depression was preceded by a small microwave burst around 06:55 corresponding to the flare brightening. Simultaneously, a group of metric Type~III bursts was observed from 06:53 to 06:56 extending down to the kilometric range, which indicates the escape of accelerated electrons into interplanetary space. No CME followed this event. \subsection{A Microeruption on 3 August 2017} A microwave depression caused by a still weaker eruptive event was observed on 3 August 2017. Figures \ref{F-2017-08-03_timeprof} and \ref{F-2017-08-03_aia304} present the event occurring in AR\,12670 (S06\,E55) in formats similar to those in the preceding section. Note that the SRH correlation coefficients here were one order of magnitude smaller than in the 9 September 2017 event. To reduce the noise, they were smoothed in Figures \ref{F-2017-08-03_timeprof}b and \ref{F-2017-08-03_timeprof}c with a 15-sample-wide boxcar. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2017-08-03_timeprof.eps} } \caption{Time profiles of the microeruption on 3 August 2017.
The time profile in panel d was computed over the framed region in Figure~\ref{F-2017-08-03_aia304} from full-resolution AIA 304\,\AA\ images taken with a 1-minute interval. The vertical dotted lines denote the times of the images in Figure~\ref{F-2017-08-03_aia304} whose panels are indicated by the bold-italic letters in panel b.} \label{F-2017-08-03_timeprof} \end{figure} \begin{figure} \centerline{\includegraphics[height=0.9\textheight] {2017-08-03_surge.eps} } \caption{Microeruption on 3 August 2017 in the SDO/AIA 304\,\AA\ images (b--d) in comparison with a sunspot visible in an HMI intensitygram (a). The arrow in panel c indicates a tiny surge. The axes indicate the distance from solar disk center in arcseconds.} \label{F-2017-08-03_aia304} \end{figure} The circumstances of the 3 August and 9 September events are broadly similar. A brightening visible in 304\,\AA\ near a single isolated sunspot located not far from the east limb was followed by a tiny surge (the arrow in Figure~\ref{F-2017-08-03_aia304}c) that overlapped with a sunspot-associated polarized microwave source and partly screened its emission. However, the spatial size and energy of this event were considerably smaller. The field of view in Figure~\ref{F-2017-08-03_aia304} roughly corresponds to the SRH beam size, while the region of brightening is poorly visible even in the full-resolution AIA 304\,\AA\ images. There were no Type~III bursts and no CME. No response to this event is present in X-rays, and its detection in AIA images is not a simple task. Nevertheless, this microeruption is clearly visible in the SRH correlation plots, while its location is easily identified from the SRH images. The correlation plots in Figure~\ref{F-2017-08-03_timeprof}a reveal more depressions on that day. At least one of them, around 02:20, was caused by a similar microeruption in the same active region. Depressions are also detectable in the SRH data on several other days.
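The boxcar smoothing applied above to the noisy correlation coefficients amounts to a simple running mean; a minimal sketch (the function name is ours):

```python
import numpy as np

def boxcar_smooth(signal, width=15):
    """Running mean with a flat (boxcar) window; mode='same' keeps the
    series length, with implicit zero padding at the ends."""
    kernel = np.ones(width) / width
    return np.convolve(np.asarray(signal, dtype=float), kernel, mode="same")
```

A 15-sample window reduces uncorrelated noise by roughly a factor of $\sqrt{15}$ while leaving slow variations, such as the depressions discussed here, intact.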
\section{A Spray Observed on 1 May 2017} \label{S-eruption_may_1} The eruptive event on 1 May 2017 associated with a B9.9 flare in active region 12652 (N18\,W78) was directly observed by the SRH. Figure~\ref{F-2017-05-01_srh_aia304} presents the images of the event produced by the SRH at 5.2\,GHz in the left column along with temporally close SDO/AIA 304\,\AA\ images in the right column. Note that the solar disk is subtracted in the SRH images (the quiet-Sun brightness temperature at 5.2\,GHz is 17570\,K) and reduced in the 304\,\AA\ images of this event to emphasize the off-limb spray. The flare region is denoted by the solid contour in Figures \ref{F-2017-05-01_srh_aia304}b and \ref{F-2017-05-01_srh_aia304}c. The eruption is outlined in Figures \ref{F-2017-05-01_srh_aia304}c--\ref{F-2017-05-01_srh_aia304}e by the thick gray-dashed circle. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2017-05-01_srh_aia304.eps} } \caption{Eruption on 1 May 2017 in dirty SRH 5.2\,GHz images (a--e) in comparison with SDO/AIA 304\,\AA\ images (f--j). The solar disk is subtracted in the SRH images and reduced in the AIA images. The dashed arc denotes the limb. The solid contours in panels {b, c} outline the flaring source. The gray-dashed circle in panels {c--e} outlines the off-limb eruption. The temporal profiles over the contoured region are presented in Figure~\ref{F-2017-05-01_timeprof}a. The black cross in panels {b, c} denotes the center of the X-ray source in RHESSI images. The frame in panel {j} denotes the field of view in Figure~\ref{F-2017-05-01_aia211_304}. The axes indicate the distance from solar disk center in arcseconds.} \label{F-2017-05-01_srh_aia304} \end{figure} Figures \ref{F-2017-05-01_srh_aia304}a and \ref{F-2017-05-01_srh_aia304}f show the situation before the event. A compact flare brightening appears in Figures \ref{F-2017-05-01_srh_aia304}b and \ref{F-2017-05-01_srh_aia304}g.
A spray appears in the next row (Figures~\ref{F-2017-05-01_srh_aia304}c,\,h); the SRH shows its thickest part at a considerably poorer resolution than SDO/AIA. Then, the flaring source disappears at 5.2\,GHz, while a portion of the off-limb spray is still present in the SRH images. The spray broadens in 304\,\AA; a part of its material returns to the solar surface. The black cross in Figures~\ref{F-2017-05-01_srh_aia304}b,\,c denotes the brightness center of an X-ray source observed by the \textit{Reuven Ramaty High-Energy Solar Spectroscopic Imager} (RHESSI: \citealp{Lin2002}). The centers of the source observed at 3--6\,keV, 6--12\,keV, and 12--25\,keV coincide to within $2.5^{\prime \prime}$. A response is detectable in the RHESSI count rate up to the 25--50\,keV band. Figure~\ref{F-2017-05-01_timeprof}a presents the temporal profiles computed from the SRH images (synthesized with a one-minute interval) over the contoured regions in comparison with the GOES 1--8\,\AA\ flux shown in Figure~\ref{F-2017-05-01_timeprof}b on a linear scale. The similarity of the microwave burst (black) to the soft X-ray (SXR) flux suggests that the microwaves were dominated by thermal emission, consistent with the flatness (within $\pm 8\%$) of the flux spectrum measured from the SRH images at 4.0--6.8\,GHz. The thermal bremsstrahlung flux estimated from GOES data is 0.8\,sfu, equal to the microwave flux actually observed. The same flux of the microwave source was computed from the 17\,GHz image produced by the Nobeyama Radioheliograph (NoRH: \citealp{Nakajima1994}) at 04:00. This weak microwave burst is not detectable in NoRP or Learmonth data. It was only detected by the RT-2 radio telescope of the Ussuriysk Astrophysical Observatory \citep{Kuzmenko2008} at 2.8\,GHz, where its flux was also about 0.8\,sfu. A flat microwave spectrum over a six-fold frequency range confirms that the burst was due to optically thin free-free emission. 
\begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2017-05-01_timeprof.eps} } \caption{Temporal profiles of the 1 May 2017 eruptive event. a)~Microwave flux profiles computed from the SRH images at 5.2\,GHz over the flare region (black; solid contour in Figures~\ref{F-2017-05-01_srh_aia304}b,\,c) and over the off-limb spray (gray; dotted contour in Figures~\ref{F-2017-05-01_srh_aia304}c--e). The labels at the bottom denote the observation times of the corresponding panels in Figure~\ref{F-2017-05-01_srh_aia304}. b)~GOES 1--8\,\AA\ plot. c)~Temporal profile computed from the 211\,\AA\ images over a framed region in Figure~\ref{F-2017-05-01_aia211_304}d.} \label{F-2017-05-01_timeprof} \end{figure} Unlike the SXR burst, which was followed by a shoulder, the microwave burst changed to a depression that lasted one hour (Figure~\ref{F-2017-05-01_timeprof}). The depression was most likely caused by absorption of the microwave emission in the low-temperature plasma of the spray. Dark absorbing material is indeed visible in the combined 211\,\AA\ and 304\,\AA\ AIA images in Figure~\ref{F-2017-05-01_aia211_304}d. The similarity between the temporal profile in Figure~\ref{F-2017-05-01_timeprof}c computed from the 211\,\AA\ images over the framed region and the microwave profile of the flare region confirms the absorption-related origin of the microwave depression. The total microwave flux emitted by the off-limb spray is represented by the thick-gray line in Figure~\ref{F-2017-05-01_timeprof}a. The temporal profile of the microwave depression resembles an inverted profile of the spray, which also confirms their common cause. The filament eruption is seen in combined SDO/AIA 304\,\AA\ and 211\,\AA\ images in Figure~\ref{F-2017-05-01_aia211_304}, whose field of view is denoted by the frame in Figure~\ref{F-2017-05-01_srh_aia304}j. A part of a dark pre-eruptive filament in Figure~\ref{F-2017-05-01_aia211_304}a screens the bright emission above a plage. 
In Figure~\ref{F-2017-05-01_aia211_304}b, a thick circular structure bound to the filament brightens up. The eruption process strengthens in Figure~\ref{F-2017-05-01_aia211_304}c corresponding to the peak of the microwave and X-ray bursts. Two Type~III bursts occurred at that time, extending to the kilometric range, which suggests the appearance of accelerated electrons and their escape into interplanetary space. The brightest compact source was located in the southwest part of the configuration. Figure~\ref{F-2017-05-01_aia211_304}d shows the outflow of low-temperature plasma along the main legs of the erupting filament. This plasma partly returned later. The low-temperature plasma flow screened the bright microwave-emitting source, causing the depression in the temporal profile in Figure~\ref{F-2017-05-01_timeprof}a. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2017-05-01_aia211_304.eps} } \caption{Eruption region on 1 May 2017 in combined SDO/AIA 304\,\AA\ and 211\,\AA\ images. The times specified in the panels are averages over the two images, which are separated by 4.5\,s. The axes indicate the distance from solar disk center in arcseconds.} \label{F-2017-05-01_aia211_304} \end{figure} A supplementary \url{2017-05-01_AIA304_SRH.mpg} movie presents the development of the large-scale spray in 304\,\AA\ images (right) and the SRH observations at 5.2\,GHz (left). The impulsive flare brightening reaches its maximum at 04:00. Bright erupted material appears at 04:02. Dark absorbing low-temperature material appears at 04:04, which corresponds to the decay of the spike at 5.2\,GHz in Figure~\ref{F-2017-05-01_timeprof}a and in 304\,\AA\ in Figure~\ref{F-2017-05-01_timeprof}c. The rising motion of the dark material is visible until 04:14, and then its returning motion starts. 
The erupted material visible in 304\,\AA\ gradually falls until the end of the movie (corresponding to the end of the depression in Figures~\ref{F-2017-05-01_timeprof}a,\,c), while its amount decreases. Figure~\ref{F-2017-05-01_c2} shows a mass ejection observed by the \textit{Large Angle Spectroscopic Coronagraph} (LASCO: \citealp{Brueckner1995}) onboard SOHO. The ejection also looks like a spray and does not exhibit a flux-rope-like magnetic structure. A trailing part of the ejected material (dark in the running differences) indicated by the arrows returned to the surface. The ejection dispersed in the solar wind and disappeared in the LASCO-C3 field of view. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2017-05-01_c2_aia.eps} } \caption{Mass ejection on 1 May 2017 in LASCO-C2 running-difference images. The insets show the AIA 193\,\AA\ image ratios. The arrows point at dark features returning to the solar surface. The circles denote the solar limb. The axes indicate the distance from solar disk center in solar radii.} \label{F-2017-05-01_c2} \end{figure} \section{The 16 March 2016 Event Associated with a CME and Shock Wave} \label{S-march16} Unlike the small eruptions not associated with CMEs presented in Section~\ref{S-neg_bursts}, here we consider an eruptive-flare event, which occurred on 16 March 2016 in AR\,12522 (N14\,W83) and had a GOES importance of C2.2. The event gave rise to a CME and a shock wave and produced a weak near-Earth proton enhancement. This was the first flare observed by the SRH, which then operated in a single-frequency mode at 6.0\,GHz. Here we start from SRH images and follow different stages of the event using imaging and non-imaging observations in hard X-rays, extreme ultraviolet, white light, and the metric radio range. \subsection{SRH Observations and Preliminary Conclusions} We synthesized about 3270 total-intensity (Stokes $I$) images in steps of 1\,s for the whole flare duration from 06:35:34 to 07:30:10. 
Each image was processed separately for the impulsive phase, and we produced 10\,s averages for a later stage. Each of the images obtained in this way was calibrated in brightness temperatures individually using the technique described by \cite{Kochanov2013} and referring to the quiet-Sun brightness temperature of 15960\,K at 6.0\,GHz. All of the images were coaligned. An image observed by the SRH before the flare is shown in Figure~\ref{F-2016-03-16_srh_aia}b, and an image observed close to the maximum of the microwave burst is shown in Figure~\ref{F-2016-03-16_srh_aia}d. Nearly simultaneous AIA 193\,\AA\ images are shown on the left. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_srh_aia.eps} } \caption{The 16 March 2016 eruptive flare in the AIA~193\,\AA\ and clean SRH 6\,GHz images: a,\,b)~before the flare, c,\,d)~near the maximum of the microwave burst, e)~the total-intensity temporal profile at 6\,GHz computed from the SRH images over a framed region in panels (b,d).} \label{F-2016-03-16_srh_aia} \end{figure} The microwave emission of this flare was too weak to be properly recorded by total-flux radiometers. Although the spatial resolution of the SRH is insufficient to supply detailed images of the flare site, its sensitivity is high enough to produce a detailed light curve. The total-flux temporal profile was computed from dirty SRH images over the flare region denoted by the dotted white frame in Figures~\ref{F-2016-03-16_srh_aia}b,\,d. The microwave burst was modest, up to 18\,sfu, while a hard X-ray (HXR) burst was considerable. The impulsive phase of the flare is shown by the \url{2016-03-16_SRH_impulsive_phase_inset.mpg} movie composed from dirty SRH images with an interval of 1\,s. Each full-disk image is displayed with an individual nonlinear brightness scale to reveal the brightness distribution over the solar disk. 
The top-left inset represents the framed region in a common linear brightness scale over the whole flare. The bottom plot shows the total-flux temporal profile over the framed region with a moving vertical line, which denotes the observation time of the corresponding image. The \url{2016-03-16_AIA193_304_SRH_Fermi.mpg} movie presents the prominence eruption observed by AIA in 193\,\AA\ (left) and in 304\,\AA\ (right) in comparison with the microwave and HXR bursts shown at the bottom. The eruption started first; the bursts became considerable when intermittent brightenings appeared in 193\,\AA\ near the solar surface beneath the rising prominence. The temporal structure of the microwave burst is similar to a temporal profile computed from the running-difference 193\,\AA\ images over combined regions of the intermittent brightenings, whereas no similarity was observed with any of the individual regions \citep{Lesovoi2017}. SRH images indicate an expanding feature above the west limb. At that time, the image of the Sun from an adjacent interference order of the SRH was located just to the west of the main image, where the erupting prominence expanded. The east--west sidelobes from the flare region and those from a source at the east limb overlapped (Figure~\ref{F-2016-03-16_srh_aia}d), covering the erupting prominence. Unfavorable observation conditions and the low contrast of the erupting prominence, determined by the large area of the SRH beam, make its analysis from SRH images difficult. We therefore consider EUV observations of the erupting prominence in the next section. A brief analysis of the flare observed by the SRH and the prominence eruption led \cite{Lesovoi2017} to the following conclusions: 1.~Acceleration of most electrons in the flare was initiated by the prominence eruption. 2.~Compact microwave sources were located in the legs of the flare arcade throughout its whole length. 3.~HXR sources were most likely also distributed over the flare ribbons. 
Here we continue with a study of this event, focusing on its large-scale aspects and using data of different instruments. We also pay attention to its space weather impact. \subsection{Prominence Eruption} \label{S-prom_eruption} AIA 304\,\AA\ images in Figure~\ref{F-2016-03-16_aia304} present some episodes of the prominence eruption. Figure~\ref{F-2016-03-16_aia304}a shows the initial static prominence. In Figure~\ref{F-2016-03-16_aia304}b, the southern part of the prominence top was slightly displaced upward, and a gap in its body appeared beneath. Flare ribbons are not yet detectable. In Figure~\ref{F-2016-03-16_aia304}c, the prominence considerably stretched up. Its broadest part north of the top brightened up, which indicates heating; note the faint cross-shaped diffraction patterns on the photodetector emanating from this bright feature. A flare ribbon appeared. In Figure~\ref{F-2016-03-16_aia304}d, the prominence changed still more strongly, having acquired a high speed. The top part took a complex shape and started stretching forward. In Figure~\ref{F-2016-03-16_aia304}e, the twisted prominence intersected itself. Two ribbons are visible. \begin{figure} \centerline{\includegraphics[height=0.9\textheight] {2016-03-16_aia304.eps} } \caption{Prominence eruption on 16 March 2016 in the SDO/AIA 304\,\AA\ images. The arcs outlining the top of the erupting prominence correspond to the kinematic curves presented in Figure~\ref{F-kinem}. The axes indicate the distance from solar disk center in arcseconds.} \label{F-2016-03-16_aia304} \end{figure} \cite{Lesovoi2017} measured the kinematics of the erupting prominence from AIA 304\,\AA\ images. To verify those measurements, we included the 174\,\AA\ observations in a wider field of view with the \textit{Sun Watcher using Active Pixel system detector and image processing} (SWAP: \citealp{Berghmans2006}) onboard the PROBA~2 micro-satellite. 
Although the rising prominence is barely detectable in the SWAP images, they allowed us to expand the measured height interval almost twofold. The results are shown in Figure~\ref{F-kinem}a, where the red triangles represent the measurements from AIA images, and the blue squares correspond to the measurements from SWAP images. The refinement of the measurements did not affect the results considerably. \begin{figure} \centerline{\includegraphics[width=0.44\textwidth] {2016-03-16_prom_kinematics.eps} } \caption{Kinematics of the erupting prominence on 16 March 2016. a)~Height--time plot measured from the SDO/AIA 304\,\AA\ images (triangles) and PROBA~2/SWAP 174\,\AA\ images (squares). The analytic curve was fitted to the measurements (see the text). b)~Velocity--time plot. The vertical lines of different styles denote the times of the images in Figure~\ref{F-2016-03-16_aia304}; its panels are indicated by the bold-italic labels. c)~Acceleration--time plot. The red curve shows the 25--50\,keV flux (Fermi/GBM). The blue curve shows the 6\,GHz flux (SRH). } \label{F-kinem} \end{figure} The height--time dependence in Figure~\ref{F-kinem}a is simple: The initial speed is close to zero; then the slope (i.e. speed) monotonically increases and finally becomes nearly constant. The acceleration determines the curvature of the bend in the height--time plot; it acts within a limited interval and does not change sign. The double integration in the transition from the acceleration to the height--time plot makes the role of a particular shape of the acceleration pulse negligible. Here we use a Gaussian acceleration pulse, adjusting its parameters to match the height--time points measured. The variations in the height, velocity, and acceleration of the prominence top are calculated in this way by integration of a smooth analytic function instead of a problematic differentiation of scattered measured points. 
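The double-integration scheme just described can be sketched as follows. This is a minimal illustration, not the fitting code used here; the pulse center and width are arbitrary placeholder values, and only the peak acceleration of 1.86\,km\,s$^{-2}$ is taken from the measurements.

```python
import numpy as np

def kinematics_from_gaussian_pulse(t, a_max, t_peak, sigma, v0=0.0, h0=0.0):
    """Integrate a Gaussian acceleration pulse a(t) twice to obtain the
    velocity and height of the prominence top (cumulative rectangle sums)."""
    a = a_max * np.exp(-0.5 * ((t - t_peak) / sigma) ** 2)
    dt = np.diff(t, prepend=t[0])      # first step contributes zero
    v = v0 + np.cumsum(a * dt)         # first integration: velocity
    h = h0 + np.cumsum(v * dt)         # second integration: height
    return a, v, h

# Illustrative run with the measured 1.86 km/s^2 peak acceleration;
# t_peak and sigma are placeholders, not fitted values.
t = np.arange(0.0, 601.0)              # seconds
a, v, h = kinematics_from_gaussian_pulse(t, a_max=1.86, t_peak=200.0, sigma=60.0)
```

In the actual fit, the pulse amplitude, center, and width would be adjusted until the doubly integrated height curve matches the measured height--time points; the velocity and acceleration then follow from the same smooth function without differentiating noisy data.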
The method of the analytic fit to the measured data has proved its reliability and accuracy in several studies \citep{Gallagher2003, Sheeley2007, WangZhangShen2009, Alissandrakis2013} and was also successfully used in cases where the kinematics was more complex (e.g. \citealp{Grechnev2011_I, Grechnev2013_6dec, Grechnev2016, KuzmenkoGrechnev2017}). The velocity and acceleration of the prominence top found using this method are presented in Figures \ref{F-kinem}b and \ref{F-kinem}c. For comparison, Figure~\ref{F-kinem}c also shows the temporal profiles of the burst recorded by the SRH at 6\,GHz and by the \textit{Fermi Gamma-ray Burst Monitor} (GBM: \citealp{Meegan2009}) in HXR. The maximum velocity acquired by the prominence top was 635\,km\,s$^{-1}$, much higher than the sound speed. Hence, plasma ahead of the erupting prominence could not efficiently flow away, resulting in the development of a compression region. The acceleration reached 1.86\,km\,s$^{-2}$, or 6.8 times the solar gravitational acceleration ($g_\odot = 274$\,m\,s$^{-2}$ at the solar surface). Although the peaks of the HXR burst and acceleration pulse occurred nearly simultaneously, the prominence started accelerating at least two minutes earlier than the main sharp rise of the microwave and HXR bursts. Thus, microwave SRH observations and HXR data indicate that efficient electron acceleration was initiated by the prominence eruption. We have observed such earlier development of the eruption process relative to non-thermal flare emissions in other events, where a clear lag of order 100\,s was present between the acceleration pulse and the flare bursts \citep{Grechnev2011_I, Grechnev2013_6dec, Grechnev2016}. This relation does not support the attractive idea of a feedback relationship between the CME motion and the flare energy release \citep{Vrsnak2008}. 
\subsection{EUV Wave} \label{S-EUV_wave} With a strong acceleration up to $6.8 g_\odot$, the erupting prominence must have produced a magnetohydrodynamic (MHD) wavelike disturbance. Its initial propagation velocity is determined by the local fast-mode speed ($v_\mathrm{fast}$), which is high above an active region (typically $v_\mathrm{fast} > 1000$\,km\,s$^{-1}$). Away from the wave origin, $v_\mathrm{fast}$ decreases both upward and laterally, reaching about $200$\,km\,s$^{-1}$ above the quiet Sun. When a high-speed disturbance enters an environment with a considerably lower $v_\mathrm{fast}$, its profile steepens, and the disturbance rapidly becomes a shock wave. In this impulsive-piston scenario, the shock formation is determined mainly by the maximum acceleration of the eruption and the $v_\mathrm{fast}$ falloff away from the eruption region, and does not depend on the relation between the eruption speed and the local $v_\mathrm{fast}$ in the environment \citep{AfanasyevUralovGrechnev2013}. The disturbance excited by the erupting prominence is visible in the \url{2016-03-16_AIA171_211.mpg} movie, which presents nearly simultaneous AIA 171\,\AA\ and 211\,\AA\ images. The diffuse coronal background was removed from the 171\,\AA\ images on the left. The 211\,\AA\ running-difference images on the right show the propagating disturbance. Unlike some other events, no manifestations of a rim are detectable around the erupting prominence in either the 211\,\AA\ running differences or the filtered 171\,\AA\ images, while the latter could reveal the rim most clearly (see, e.g., \citealp{Grechnev2016}), if it had been present. The 211\,\AA\ running-difference images in the movie reveal the following. At about 06:35, faint structures above the erupting prominence appeared, revealing their displacement caused by the early rise of the prominence (conspicuous due to its black appearance in the enhanced-contrast images). 
A bright compression region above the prominence top appeared at 06:37, when its velocity reached $300$\,km\,s$^{-1}$, and expanded at 06:38, when the velocity became $400$\,km\,s$^{-1}$. A fast disturbance propagated during 06:39--06:42 along transequatorial loops connecting the parent active region with remote southern regions, indicating a high Alfv{\'e}n speed in the loops. Then, a large-scale brightening (EUV wave) is visible that propagates along the surface and above the limb on the southwest. To analyze the EUV wave propagation quantitatively, we invoke its approximate analytic description, which was used in our previous studies of several events \citep{Grechnev2008, Grechnev2011_I, Grechnev2011_III, Grechnev2013_6dec, Grechnev2014_II, Grechnev2015, Grechnev2016, Grechnev2017_III} to follow various shock-wave signatures such as EUV waves, Type II bursts, and wave traces ahead of CMEs. This approach uses a power-law density model \begin{eqnarray} n(x) = n_0(x/h_0)^{-\delta} \label{E-pl_model} \end{eqnarray} where $x$ is the distance from the eruption center, $n_0$ is the density at a distance $h_0 = 100$~Mm, which is close to the scale height, and the density falloff exponent $\delta$ generally depends on the wave propagation direction. The development of a compression region during the eruption before the appearance of the shock wave strongly disturbs the corona, making standard coronal density models in the near zone inadequate, while the corona remains quiet in the far zone. The power-law density model (\ref{E-pl_model}) describes this situation acceptably: with $x \approx r-R_\odot$ being the height from the photosphere, $n_0 = 4.1 \times 10^8$~cm$^{-3}$, and $\delta = 2.6$, it is close to the equatorial Saito model \citep{Saito1970} within $\pm 30\%$ at distances exceeding 260\,Mm, while providing higher densities at lower heights. 
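Equation~(\ref{E-pl_model}) with the quoted parameters can be evaluated directly. A minimal sketch follows; the comparison with the Saito model itself is not reproduced here.

```python
def power_law_density(x_mm, n0=4.1e8, h0_mm=100.0, delta=2.6):
    """Power-law density model of Equation (1): n(x) = n0 * (x/h0)^(-delta),
    with the density n in cm^-3 and the distance x in Mm."""
    return n0 * (x_mm / h0_mm) ** (-delta)

# Density at the 100 Mm reference height (n0 by construction) and at 260 Mm,
# beyond which the model tracks the equatorial Saito model within +/-30%.
n_ref = power_law_density(100.0)
n_260 = power_law_density(260.0)
```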
A blast-wave-like shock, which spends its energy to sweep up and extrude the plasma from the volume it occupied previously, has a power-law kinematics, $x(t) \propto t^{2/(5-\delta)}$ versus time $t$ \citep{Grechnev2008}. We use this equation in the form \begin{eqnarray} x(t) = x_1[(t-t_0)/(t_1-t_0)]^{2/(5-\delta)}, \label{E-pl_fit} \end{eqnarray} where the starting estimate of the wave onset time, $t_0$, can be taken equal to the maximum acceleration time, and $x_1$ is the distance from the eruption center to one of the wave fronts observed at time $t_1$. Then, we adjust the $\delta$ and $t_0$ parameters in sequential attempts to reach the best fit to the wave propagation. The density falloff exponent $\delta$ determines the curvature of the distance--time plot: with the maximum value $\delta = 3$ the plot is linear, and a decrease of $\delta$ increases its curvature. The shape of the global shock-wave front is close to an ellipsoid \citep{Grechnev2011_III, Grechnev2014_II, Grechnev2017_III, Kwon2014, Kwon2015, Rouillard2016} with a ratio of the axes not much different from unity; for simplicity we consider a spheroid, i.e. an ellipsoid of revolution. Its axis corresponds to the acceleration vector of the eruption. If the large-scale $v_\mathrm{fast}$ distribution is strongly inhomogeneous (e.g. because of the presence of a large coronal hole), then the orientation of the axis gradually shifts toward the region of a higher $v_\mathrm{fast}$ \citep{Grechnev2011_III, Grechnev2013_6dec}. The shock front is ``hard'' like an ocean tube wave, being governed by the global wave expansion, and does not depend on local inhomogeneities in the $v_\mathrm{fast}$ distribution. 
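A minimal sketch of Equation~(\ref{E-pl_fit}) and the properties just stated, namely that $x(t_1) = x_1$ at the reference point and that the distance--time plot is linear for $\delta = 3$:

```python
def shock_distance(t, t0, t1, x1, delta):
    """Blast-wave power-law kinematics of Equation (2):
    x(t) = x1 * ((t - t0)/(t1 - t0))**(2/(5 - delta)).
    Units are arbitrary but consistent (e.g. t in s, x in Mm)."""
    return x1 * ((t - t0) / (t1 - t0)) ** (2.0 / (5.0 - delta))

# With delta = 3 the exponent equals 1 and the plot is linear;
# smaller delta increases its curvature.
```

In the fit, $\delta$ and $t_0$ would be adjusted iteratively (e.g. by least squares against the measured front positions) exactly as described in the text.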
For this reason, the description of the near-surface wave propagation with Equation~(\ref{E-pl_fit}) corresponds to an intermediate value of $\delta_\mathrm{S}$ between zero expected for a constant density and $\approx 2.6$ typical of the radial direction (we usually observed $\delta_\mathrm{S} \approx 2.0$ for EUV waves). The stronger near-surface retardation causes a tilt of the shock front that is sometimes observed \citep{Hudson2003, Warmuth2004_II}. Local inhomogeneities in the $v_\mathrm{fast}$ distribution over the solar surface determine the brightness of the EUV wave \citep{Grechnev2011_III}, while larger inhomogeneities affect its propagation velocity and cause its reflection and refraction (e.g. \citealp{Veronig2008, Gopalswamy2009, Grechnev2011_I}). With these circumstances in mind, we calculated the global shock-wave fronts and their surface skirt (EUV wave). They are shown in Figure~\ref{F-aia_wave}b--i and the \url{2016-03-16_AIA211_wave.mpg} movie on top of the AIA~211\,\AA\ running differences. Figure~\ref{F-aia_wave}a presents an averaged pre-event AIA~211\,\AA\ image, which shows active regions (green in the movie) and coronal holes (blue in the movie). The elliptic arcs on the surface are small circles parallel to the equator of the sphere, whose pole coincides with the eruption site. The distances are measured from the pole to the small circles along the great circle. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_aia_wave.eps} } \caption{a)~Average of four AIA 211\,\AA\ images on 16 March 2016 from 06:30 to 06:33. b--i)~EUV wave in running-difference AIA 211\,\AA\ images. The white circle denotes the solar limb. The arcs outline the wave front. The axes indicate the distance from solar disk center in arcseconds.} \label{F-aia_wave} \end{figure} Figure~\ref{F-aia_wave} and the movie reveal the complex character of the EUV wave. 
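The small circles that form the surface skirt can be constructed with the standard spherical destination-point formula: sweeping the azimuth at a fixed great-circle distance from the pole at the eruption site traces one arc. The sketch below is a minimal illustration; the coordinates and distance used in the example run are arbitrary placeholders, not the values used for the figures.

```python
import numpy as np

def wave_skirt(pole_lat_deg, pole_lon_deg, dist_deg, n_points=181):
    """Latitude/longitude (degrees) of a small circle at great-circle
    distance dist_deg from the pole (eruption site), over all azimuths."""
    lat0, lon0, d = map(np.radians, (pole_lat_deg, pole_lon_deg, dist_deg))
    az = np.linspace(0.0, 2.0 * np.pi, n_points)
    lat = np.arcsin(np.sin(lat0) * np.cos(d) +
                    np.cos(lat0) * np.sin(d) * np.cos(az))
    lon = lon0 + np.arctan2(np.sin(az) * np.sin(d) * np.cos(lat0),
                            np.cos(d) - np.sin(lat0) * np.sin(lat))
    return np.degrees(lat), np.degrees(lon)

def great_circle_deg(lat1, lon1, lat2, lon2):
    """Great-circle separation in degrees (haversine formula)."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    h = (np.sin((p2 - p1) / 2.0) ** 2 +
         np.cos(p1) * np.cos(p2) * np.sin((l2 - l1) / 2.0) ** 2)
    return np.degrees(2.0 * np.arcsin(np.sqrt(h)))
```

Every point of the returned circle lies at exactly the requested great-circle distance from the pole, which is how the "distance along the great circle" quoted for each arc is defined.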
From 06:37 to 06:53, the calculated ellipses bound its outermost signatures in both hemispheres, except for the fast southward disturbance above the west limb mentioned above. After 06:53, the EUV wave is conspicuous southwest of the extended southern coronal hole, while large-scale inhomogeneities complicate and hamper its propagation farther in the northern hemisphere. Overall, while the calculated ellipses represent, on average, the global expansion of the wave dome above the limb and its surface trail, the presence of active regions and coronal holes governs the propagation and appearance of the EUV wave according to the associated inhomogeneities in the $v_\mathrm{fast}$ distribution over the solar surface. Their influence corresponds to the expectations for a fast-mode wave. Figure~\ref{F-aia_wave_kinem} presents the kinematics used to outline the wave signatures in Figure~\ref{F-aia_wave} and the movie. The wave onset time was refined to fit the EUV wave propagation, $t_0 = $06:36:30 (the vertical thick-dotted line in Figure~\ref{F-kinem}). The density falloff exponents for the radial direction $\delta_\mathrm{C} = 2.5$ and for the near-surface propagation $\delta_\mathrm{S} = 2.4$ almost coincide in this case. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_euv_wave_prop.eps} } \caption{Distance--time (a) and velocity--time (b) plots of the EUV wave. The wave propagation in the radial direction (up) is represented by the solid curve, and the dashed curve represents its surface trail. The vertical solid line denotes the wave onset time. The vertical dotted lines denote the times of the images in Figure~\ref{F-aia_wave} whose panels are indicated by the bold-italic letters. 
The shading in panel b denotes the observation interval of the Type~II burst (continued afterward).} \label{F-aia_wave_kinem} \end{figure} The EUV wave velocity in Figure~\ref{F-aia_wave_kinem}b monotonically decreased by 80\% within the interval shown in Figure~\ref{F-aia_wave}. This behavior with a strong deceleration is consistent with the pioneering result of \cite{Warmuth2001} and several later studies, but is not exhibited by all EUV transients (e.g. \citealp{Warmuth2004_I, Warmuth2004_II, Warmuth2005, Muhr2011, Muhr2014, Nitta2013_waves, Long2017}; see \citealp{Warmuth2015} for a review). In our previous case studies, we observed exactly this behavior for shock-associated EUV waves. On the other hand, if the EUV wave properties had been studied solely from signatures in the images, especially by means of an automated detection algorithm, then understanding its kinematics would be difficult. \subsection{Type II Burst} \label{S-type_II} While the EUV wave reveals a fast-mode disturbance, which was most likely super-Alfv{\'e}nic, its shock-wave regime is not obvious. Commonly accepted evidence of a shock wave is a Type~II radio burst. An important property of Type~II bursts is their narrow-band emission. To ensure this, the source must be compact; otherwise, a large shock front crossing a wide range of plasma densities could only produce a drifting continuum \citep{KnockCairns2005}. An appropriate source of Type~II emission is a distinct narrow structure, i.e. a coronal streamer \citep{Uralova1994, Reiner2003}, which was confirmed by imaging meter-wave observations of Type~II sources \citep{Feng2013, Chen2014, Du2014, Lv2017}. A Type~II burst can be emitted from a remote streamer crossed by a flank of a quasi-perpendicular or oblique shock or from a streamer located above the eruption region crossed by the front of a quasi-parallel shock. 
The former case probably corresponds to a typical situation, and the infrequent latter case is characterized by a considerably faster drift \citep{Grechnev2014_II, Grechnev2016}. In either case, the shock crossing the streamer deforms its current sheet, producing a flare-like process that runs along the streamer together with the intersection point. This scenario has shed light on various structural properties of Type~II bursts \citep{Grechnev2011_I, Grechnev2014_II, Grechnev2015, Grechnev2016}. Figure~\ref{F-dyn_spec} shows a dynamic spectrum combined from the Learmonth and Culgoora spectrographs. The spectrum presents a strong Type~V burst co-temporal with the main burst in HXR and microwaves in Figure~\ref{F-kinem}c, followed by a faint Type~III burst at 06:40 corresponding to a minor burst. At 06:46, a Type~II burst with a complex structure started. Its fundamental-emission band was strongly suppressed, while the harmonic emission consisted of at least three indistinct lanes. A fine Type~III-like structure of the lanes is detectable, suggesting acceleration of electrons in the running flare-like process. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_dyn_spec.eps} } \caption{Dynamic spectrum of the metric radio burst composed from the Learmonth and Culgoora data. The vertical dashed line denotes the wave onset time $t_0 = $\,06:36:30. The curves of different line styles and colors outline different bands in the Type~II structure; the paired curves outline the fundamental and harmonic emissions. All of the curves correspond to the same $t_0$ and density falloff exponent $\delta = 2.67$, suggesting a single shock front crossing a few emitting structures.} \label{F-dyn_spec} \end{figure} To analyze the frequency--time drift of the Type~II burst, we use the approach described in the preceding section. 
We choose a reference point of a Type~II band on the dynamic spectrum at time $t_1$ with a frequency $f_1$, convert the frequency into the density $n_1$ assuming the fundamental emission at the plasma frequency $f_\mathrm{P} \approx 9 \times 10^{3} n^{1/2}$\,Hz (with $n$ in cm$^{-3}$) or its second harmonic $2 f_\mathrm{P}$, and then convert $n_1$ into the distance $x_1$ using the power-law density model (\ref{E-pl_model}). Taking starting estimates for $t_0$ and $\delta$, we calculate the trajectory using Equation~(\ref{E-pl_fit}), convert it to frequency, and plot it on top of the dynamic spectrum. The values of $t_0$ and $\delta$ are optimized in sequential attempts to reach the best fit of the trajectory to bright Type~II signatures (see \citealp{Grechnev2014_II, Grechnev2017_III} for details). If a Type~II band is clearly defined, then two reference points can be chosen. The \url{type_II_fit.mpg} movie presents the adjustment of the Type~II trajectory for this example. Here the only variable is $\delta$, which governs the curvature of the trajectory, and its optimal value $\delta = 2.67$ determines $t_0 =$\,06:36:30, the same as for the EUV wave. The difference between the $\delta = 2.67$ and $\delta_\mathrm{C} = 2.50$ for the coronal wave (Figure~\ref{F-aia_wave_kinem}) can be due to the different propagation directions. With $t_0$ and $\delta$ estimated for a single harmonic band, the trajectories for different bands at both harmonics were calculated by referring to different $f_1$ at the same $t_1$ and plotted in Figure~\ref{F-dyn_spec} with different line styles and colors (same for each harmonically related pair). An extra band with the same $t_0$ and $\delta$ appeared at 06:55:00. The coincidence of the wave onset times and even the density falloffs for all of the bands indicates their common origin related to the same shock front. 
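The chain of conversions just described, from a reference frequency through the plasma-frequency relation and Equations~(\ref{E-pl_model})--(\ref{E-pl_fit}) back to a drifting frequency lane, can be sketched as follows. The standard coefficient $f_\mathrm{P}[\mathrm{Hz}] \approx 9 \times 10^{3} \sqrt{n/\mathrm{cm}^{-3}}$ is used; the reference frequency in the test run is an arbitrary placeholder, not a value measured from the spectrum.

```python
def freq_to_density(f_hz, harmonic=1):
    """Invert f_P ~ 9e3 * sqrt(n) Hz (n in cm^-3); harmonic=2 selects the
    second-harmonic emission at 2*f_P."""
    return (f_hz / (9.0e3 * harmonic)) ** 2

def density_to_distance(n, n0=4.1e8, h0_mm=100.0, delta=2.67):
    """Invert the power-law model of Equation (1) for the distance x in Mm."""
    return h0_mm * (n / n0) ** (-1.0 / delta)

def type2_frequency(t, t0, t1, f1_hz, delta=2.67, harmonic=1, n0=4.1e8):
    """Frequency-time trajectory of a Type II band implied by Equations
    (1)-(2), anchored to a reference point (t1, f1_hz) on the spectrum."""
    x1 = density_to_distance(freq_to_density(f1_hz, harmonic), n0=n0,
                             delta=delta)
    x = x1 * ((t - t0) / (t1 - t0)) ** (2.0 / (5.0 - delta))  # Equation (2)
    n = n0 * (x / 100.0) ** (-delta)                           # Equation (1)
    return 9.0e3 * harmonic * n ** 0.5
```

With $t_0$ fixed, varying $\delta$ alone changes the curvature of the drifting lane, which is why it is the single adjusted parameter in the fit described above; trajectories for the other bands follow by changing only $f_1$ at the same $t_1$.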
The structure of the Type~II burst does not resemble band-splitting; moreover, this effect, conventionally interpreted as emission from upstream and downstream of the shock front, cannot account for more than two bands. It is also difficult to relate this structure to a single bow-shock-associated source ahead of the CME nose, which can only produce a single or split harmonic pair of bands. Instead, the presence of several pairs of bands points to a corresponding number of compact sources that are not much different from each other. Because of their similar drift rates with the same $\delta$, they were most likely located at the flanks of the coronal wave, with none ahead of the CME nose. This assumption is supported by the strong absorption of the fundamental emission along the line of sight, either in a long coronal column in front of the Type~II sources above the west limb, or in a dense structure such as the base of the streamer belt, or both. The appearance of the EUV wave in Figure~\ref{F-aia_wave} and the \url{2016-03-16_AIA211_wave.mpg} movie does not differ before and after the start of the Type~II burst (06:45:00). The wave velocity in Figure~\ref{F-aia_wave_kinem}b monotonically decreased; in the first panels of Figure~\ref{F-aia_wave} it was most likely higher than the ambient fast-mode speed both along the surface and in the radial direction. The Type~II burst started when the wave had considerably decelerated (shading in Figure~\ref{F-aia_wave_kinem}b). All of these facts indicate that the lag of the Type~II burst behind the wave onset time is determined by the distance the already existing shock front must travel before encountering a streamer capable of producing the Type~II emission, and does not depend on the relation between the velocity of the wave or ejecta and the ambient fast-mode speed. \cite{Long2017} found the delay of a Type~II burst relative to the EUV wave onset to be typical.
In summary, both the EUV wave and the Type~II burst point to the same wave onset time at 06:36:30. The velocity of the prominence top, which excited the wave, was 215\,km\,s$^{-1}$ at that time (the thick dotted line in Figure~\ref{F-kinem}b). It should be noted that Equation~(\ref{E-pl_fit}) used in our measurements was obtained for a spherical blast wave expanding from a point-like source \citep{Grechnev2008}. A real wave exciter can be spatially extended, which might shift the actual wave onset time. In the radial direction corresponding to the eruption, the wave represented by the solid curve in Figure~\ref{F-aia_wave_kinem} travels, e.g., 20\,Mm in 6\,s and 50\,Mm in 20\,s. Even with the largest time shift, the velocity of the prominence top in Figure~\ref{F-kinem}b did not exceed 300\,km\,s$^{-1}$, remaining certainly sub-Alfv{\'e}nic. On the other hand, the wave started close to the maximum acceleration time in Figure~\ref{F-kinem}c, as expected in the impulsive-piston shock-excitation scenario. \subsection{White-Light Transient} \label{S-CME} The eruption produced a decelerating CME. According to the online CME catalog (\url{https://cdaw.gsfc.nasa.gov/CME_list/}: \citealp{Yashiro2004}), it had a central position angle of $265^{\circ}$, an average speed of 592\,km\,s$^{-1}$, and an acceleration of $-22.4$\,m\,s$^{-2}$. Figure~\ref{F-2016-03-16_c2_wave} presents the wave traces in contrasted LASCO-C2 running-difference images. The radii of the white-on-black arcs were calculated from the decelerating wave kinematics in Figure~\ref{F-aia_wave_kinem}a with the same $t_0=$\,06:36:30 and $\delta = 2.5$. The arcs match most of the wave traces, which are manifested in the partial halo enveloping the CME body and in deflections of the coronal rays. The arcs are close to the measurements in the CME catalog denoted by the black slanted crosses.
\begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_c2_wave.eps} } \caption{Wave traces on 16 March 2016 in LASCO-C2 images (running differences). The thick white circle denotes the solar limb. The small white crosses denote the eruption center. The larger slanted black crosses in panels a--c denote the measurements in the CME catalog. The arcs outline the wave front. The axes indicate the distance from solar disk center in solar radii.} \label{F-2016-03-16_c2_wave} \end{figure} Figure~\ref{F-2016-03-16_c2} shows the CME structure in non-subtracted C2 images. The white arcs correspond to wave traces. Neither the frontal structure nor the cavity is pronounced. The black-dashed arcs outline the main part of the CME body (core) with a helical structure inherited from the erupted prominence. It seems to be more complex than one expects for a perfect flux-rope structure. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_c2_img.eps} } \caption{The CME on 16 March 2016 in LASCO-C2 images (fixed-base ratios). The thick white circle denotes the solar limb. The small crosses denote the eruption center. The slanted cross in panel a denotes the measurement in the CME catalog. The white solid arcs outline the wave front, and the black-dashed arcs outline the flux-rope-like structure. The axes indicate the distance from solar disk center in solar radii.} \label{F-2016-03-16_c2} \end{figure} Figure~\ref{F-cme_kinem} presents the kinematical plots for the wave (solid) and CME body (dashed) along with the measurements from the CME catalog (symbols). The way to obtain the wave kinematics has been discussed in detail above. It is more complex to infer the kinematics of the CME body, which is determined by different processes at different stages of its development.
\begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_cme_kinem.eps} } \caption{Overall kinematical plots of the wave signatures (solid) and CME body (dashed): a)~heliocentric distances versus time, b)~velocity--time plots. The symbols represent the measurements in the CME catalog. The gray curve in panel b is the GOES 0.5--4\,\AA\ flux scaled to match the plot of the CME body. The shading in panel {b} shows the interval when the Type~II burst was observed.} \label{F-cme_kinem} \end{figure} The kinematics of the erupting prominence governed by an MHD instability was measured in Section~\ref{S-prom_eruption} using the fit with a Gaussian acceleration pulse (Figure~\ref{F-kinem}). When the instability expires, the CME expands for some time freely and self-similarly \citep{CremadesBothmer2004}. Eventually, the CME kinematics should be determined by the aerodynamic drag from the solar wind \citep{Chen1989, Chen1996, VrsnakGopal2002}, whose dominance is expected beyond $15\,\mathrm{R}_\odot$ \citep{Vrsnak2006, Temmer2011}. As \cite{KuzmenkoGrechnev2017} showed, exceptions do occur; nevertheless, the CME expands nearly self-similarly at moderate distances from the Sun. The self-similar character of the CME expansion is determined by the fact that, after the termination of the initial instability, the magnetic propelling and retarding forces, plasma pressure, and gravity (but not the drag) all decrease by the same factor, inversely proportional to the square of the distance from the eruption center. The theory of self-similar expansion of solar CMEs was initially developed by \cite{Low1982}. A description of a self-similar expansion convenient in the analysis of observations was proposed by \cite{UralovGrechnevHudson2005}.
From their formulas, the instantaneous velocity $v$ can be related to the distance $R$ from the expansion center \citep{Grechnev2008}: \begin{eqnarray} v^2 = v_0^2+\left(v_\infty^2 - v_0^2\right)\left({1-R_0/R}\right), \label{E-self-sim_vel} \end{eqnarray} where $R_0$ is the initial position of the CME, $v = \mathrm{d}R/\mathrm{d}t$ is its instantaneous velocity, and $v_0$ and $v_\infty$ are its initial velocity and asymptotic final velocity in the self-similar expansion stage. Despite its simple form, Equation~(\ref{E-self-sim_vel}) cannot be integrated explicitly; the formulas for the time-versus-distance dependence are cumbersome. They can be found in \cite{Grechnev2014_II}. The properties of the self-similar plots correspond to those of hyperbolic functions. Acceleration in the self-similar regime cannot increase in absolute value, and therefore this approach does not apply to the CME's initial lift-off during the impulsive-acceleration stage. We concatenated the kinematics of the erupting prominence fitted with a Gaussian acceleration (Figure~\ref{F-kinem}) with the self-similar kinematics of the CME. The rising prominence forces the closed coronal structures above it to expand; they are expected to be ahead of the prominence but were not observed. To take account of their presence in LASCO images, the prominence velocity was increased by 40\%. The resulting velocity--time plot for the CME body is presented in Figure~\ref{F-cme_kinem}b by the dashed curve. Its integration provided the distance--time plot in Figure~\ref{F-cme_kinem}a used to calculate the radii of the black-dashed arcs outlining the CME body in Figure~\ref{F-2016-03-16_c2}. The \url{2016-03-16_C2_rope_wave.mpg} movie shows the CME body and wave in the images, whose field of view is scaled according to the measured kinematics to fix the visible size of the transient.
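Although Equation~(\ref{E-self-sim_vel}) has no convenient explicit integral, it is straightforward to integrate numerically when only a distance--time plot is needed. The sketch below uses a simple Euler step; the initial distance and velocities are illustrative placeholders rather than the values measured for this event.

```python
import numpy as np

def v_self_similar(R, R0, v0, v_inf):
    """Equation (E-self-sim_vel): v^2 = v0^2 + (v_inf^2 - v0^2)(1 - R0/R)."""
    return np.sqrt(v0**2 + (v_inf**2 - v0**2) * (1.0 - R0 / R))

def integrate_distance(R0, v0, v_inf, t_end, dt=10.0):
    """Integrate dR/dt = v(R) with a forward Euler step, which is
    sufficient for plotting distance-time curves."""
    R, t, out = R0, 0.0, [R0]
    while t < t_end:
        R += v_self_similar(R, R0, v0, v_inf) * dt
        t += dt
        out.append(R)
    return np.array(out)

# Illustrative decelerating CME: v0 = 700 km/s at R0 = 1 Rsun, v_inf = 500 km/s
RSUN_KM = 6.96e5
R = integrate_distance(R0=RSUN_KM, v0=700.0, v_inf=500.0, t_end=10000.0)
```

With $v_\infty < v_0$ the expansion decelerates toward $v_\infty$ as $R \to \infty$, reproducing the hyperbolic-function-like behavior of the self-similar plots noted above.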
\cite{Zhang2001} established similarity between the CME velocity variations and the rise phase of the GOES SXR flux and found indications of similarity between the CME acceleration and the HXR burst, which was confirmed by \cite{Temmer2008}. The similarity between the HXR flux and the derivative of the SXR flux is indeed expected due to the Neupert effect \citep{Neupert1968}. A case study by \cite{Grechnev2016} demonstrated a close correspondence between the kinematics of an erupting structure and X-ray emissions, which were delayed by about 2 minutes, resembling the situation in this event. There is indeed a similarity between the rising parts of the CME velocity plot and the GOES 0.5--4\,\AA\ flux (gray in Figure~\ref{F-cme_kinem}b), which lags behind the velocity by 140\,s. The self-similar plots resemble the CME kinematics expected for a drag-dominated situation, whereas the responsible forces are quite different (the similarity is also possible for gradually accelerating slow CMEs). For this reason, even if a drag-based model acceptably describes the CME kinematics, this result does not guarantee the importance of the drag. The measurements in the CME catalog are carried out for the fastest feature of a transient and are therefore most likely related to a wave ahead of the CME body, if one is present. Figure~\ref{F-cme_kinem}a confirms the agreement between these measurements and our curve. To find the velocity of a transient, linear and second-order fits are used in the CME catalog. The latter is presented in Figure~\ref{F-cme_kinem}b by the slanted crosses; its difference from our power-law fit is mostly small. The difference increases at shorter distances, resulting in a strong underestimation by the second-order fit of the wave velocity during its initial evolution, which is hidden by the occulting disk of LASCO-C2. The interval when the Type~II burst was observed is denoted in Figure~\ref{F-cme_kinem}b by the gray shading.
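The 140-s lag quoted above is the kind of quantity a simple cross-correlation scan provides. The snippet below demonstrates the method on synthetic profiles (a Gaussian and its delayed copy); the function and all numbers are illustrative and are not taken from the GOES or catalog data.

```python
import numpy as np

def best_lag(a, b, dt, max_lag_s):
    """Return the time shift (positive: b lags a) maximizing the
    Pearson correlation between two equally sampled series."""
    max_k = int(max_lag_s / dt)
    def corr(k):
        if k >= 0:
            x, y = a[:len(a) - k], b[k:]
        else:
            x, y = a[-k:], b[:len(b) + k]
        return np.corrcoef(x, y)[0, 1]
    k_best = max(range(-max_k, max_k + 1), key=corr)
    return k_best * dt

# Synthetic example: profile b is a copy of a delayed by 140 s
dt = 10.0
t = np.arange(0.0, 2000.0, dt)
a = np.exp(-((t - 600.0) / 200.0) ** 2)   # stand-in "CME velocity" profile
b = np.exp(-((t - 740.0) / 200.0) ** 2)   # the same profile lagging by 140 s
lag = best_lag(a, b, dt, max_lag_s=300.0)
```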
The Type~II burst ceased by 07:11, when the wave velocity decreased to about 800\,km\,s$^{-1}$, and did not extend into the frequency range below 14\,MHz. These circumstances indicate that the decelerating shock decayed at about this time into a weak disturbance. The maximum heliocentric distance at that time was $4.2\,\mathrm{R}_\odot$ for the wave front and $2.8\,\mathrm{R}_\odot$ for the CME body, whose velocity was 500\,km\,s$^{-1}$. The shock wave had not changed to the bow-shock regime, because the trailing CME body was sub-Alfv{\'e}nic. \subsection{Implications for the Near-Earth Proton Enhancement} The SXR emission of this eruptive flare, which reached the C2.2 level, had an impulsive time profile with a duration of 23 minutes (Figure~\ref{F-xray_protons}a). At about the time of the event, a weak near-Earth proton enhancement started (Figure~\ref{F-xray_protons}b). The proton flux reached about 1\,pfu in the $> 10$\,MeV integral channel, was detectable in the averaged $> 50$\,MeV channel, and possibly in the $> 100$\,MeV channel, where it exceeded the $3\sigma$ level above the background around 11:00. Figure~\ref{F-xray_protons}a also reveals a minor secondary SXR enhancement during 07:45--08:05 marked on the 1--8\,\AA\ plot by a thin vertical bar. A group of metric Type~IIIs around 08:00 extending to lower frequencies in the Wind/WAVES spectrum corresponds to this minor event, while neither SOHO/LASCO nor STEREO-A/COR1 shows any additional CME. The proton event had already started by that time and was therefore caused by the eruptive C2.2 event in AR\,12522 observed by the SRH; the minor event around 08:00 was unlikely to be important. \begin{figure} \centerline{\includegraphics[width=0.48\textwidth] {2016-03-16_xray_protons.eps} } \caption{GOES plots of SXR fluxes (a) and integral proton channels (b) recorded on 16 March 2016. The histogram-like thin line in panel b presents the original 5-minute data on $> 10$\,MeV protons.
The thick lines present the proton fluxes summed over 1~hour. The vertical dashed line marks the peak time of the SXR flux. The horizontal dashed line shows the background level in the $> 100$\,MeV proton channel averaged over the preceding and next days.} \label{F-xray_protons} \end{figure} This impulsive flare, accompanied by a modest microwave burst of 18~sfu, seems to be too weak to produce the proton event; the most probable candidate for its source is the shock wave. The wave appeared during the flare rise, being able to accelerate protons considerably earlier than usually assumed, and soon decayed without changing to the regime of a CME-driven bow shock. These circumstances show that the widely accepted view relating solar energetic particles to CME-driven shocks, which develop at considerable heights, needs refinement. \section{Discussion} \label{S-discussion} \subsection{Summary on the Eruptions Observed with the SRH} \label{S-summary_eruptions} Although unable to resolve the spatial structure of eruption regions, the SRH detects the occurrence of many eruptions, whose energy and spatial size can be very small, and locates their positions on the Sun. The eruptions presented here were revealed in one of three ways: i)~from the microwave depressions shown in Section~\ref{S-neg_bursts}, ii)~by direct SRH observations of the eruptions, as was the case on 1 May 2017 in Section~\ref{S-eruption_may_1}, and iii)~by the examination of the eruptive flare observed by the SRH on 16 March 2016 (Section~\ref{S-march16}). In all of these cases, the SRH provides pointing to the events, which are then analyzed using data acquired by a number of different instruments. This is the usual way to study complex solar events. The microwave depressions shown in Section~\ref{S-neg_bursts}, as well as the negative bursts on 9 August 2016 in AR\,12574 (N04\,E59) presented by \cite{Lesovoi2017}, occurred not far from the limb.
The intensity depressions were accompanied by changes in the polarization, which indicates the screening of considerably polarized, i.e. gyromagnetic, microwave sources. In all of these cases, the screening was caused by low-temperature jets, which indeed occurred near sunspots. Thus, the jet-like eruptions responsible for the depressions most likely screened polarized sunspot-associated sources. Because the orientations of the jets are not much different from the radial direction, the screening phenomena are favored by the location of the eruptions close to the limb. Deviations in the Stokes~V correlation plots indicate such events, as a cursory analysis of different depressions observed by the SRH confirms. The events considered in Sections \ref{S-neg_bursts} and \ref{S-eruption_may_1} were associated with jet-like eruptions of different sizes, in which the low-temperature erupted plasma rose and gradually crossed in front of microwave sources, absorbing their emission. The screening caused long-lasting depressions of the microwave emission and changes in its polarization. As noted in Section~\ref{S-neg_bursts}, multi-frequency observations of microwave depressions provide the basis for plasma diagnostics in erupting structures. Modeling the spectrum of the absorption depths observed at a few frequencies from 1 to 10\,GHz allowed the parameters of the erupted material responsible for several negative bursts to be estimated even without images \citep{Grechnev2008, Grechnev2013neg, KuzmenkoGrechnevUralov2009, KuzmenkoGrechnev2017}. These studies used a flat-layered model of a relatively large absorbing cloud of given height, dimensions, temperature, and density, with a possible stable compact microwave source covered. The estimated screened area reached 2--10\,\% of the solar disk. This approach can also be used to analyze large-scale absorption phenomena from the SRH data, should they be observed.
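As a toy version of such modeling, the snippet below computes the relative depression of the total flux produced by a flat absorbing layer at several frequencies, using the standard approximate free-free opacity $\kappa \approx 0.2\,n^2 T^{-3/2} f^{-2}$\,cm$^{-1}$; the cloud parameters are arbitrary illustrations, not values fitted to any of the events discussed.

```python
import numpy as np

def depression_depth(f_hz, n_e, T_K, L_cm, area_fraction):
    """Relative depression of the total microwave flux caused by a cool
    flat-layered cloud screening a fraction of the disk emission.
    kappa ~ 0.2 n^2 T^(-3/2) f^(-2) cm^-1 is the usual approximate
    free-free absorption coefficient for cool plasma."""
    tau = 0.2 * n_e**2 * T_K**-1.5 * f_hz**-2.0 * L_cm
    return area_fraction * (1.0 - np.exp(-tau))

# Arbitrary cloud: n_e = 3e8 cm^-3, T = 1e4 K, depth 1e9 cm, covering 5%
freqs = np.array([1e9, 2e9, 4e9, 8e9])
d = depression_depth(freqs, n_e=3e8, T_K=1e4, L_cm=1e9, area_fraction=0.05)
```

Fitting such a forward model to the depression depths measured at several frequencies constrains the density, temperature, and covered area of the erupted material, which is the essence of the diagnostics cited above.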
The narrower SRH frequency range of 4--8\,GHz relative to these studies might result in increased uncertainties. Plasma diagnostics for the small eruptions shown in Section~\ref{S-neg_bursts} is more complex. The fractions of the solar disk covered by the jets in 304\,\AA\ were about 0.12\,\% on 9 September 2017, 0.03\,\% on 3 August 2017, and 0.05\,\% on 1 May 2017. The small width of the screen becomes comparable with the size of the microwave source behind it. Here it is additionally necessary to consider the overlap between the narrow jet and a microwave source and to untangle the variations in the opacity of the jet from the changes in the brightness and spectrum of the flaring source. These issues should be addressed in future studies. Most of the events presented here were associated with jet-like eruptions. A realistic explanation of jets was proposed by \cite{Filippov2009} and \cite{Meshalkina2009} based on three-dimensional magnetic configurations containing coronal null points. Such configurations appear above photospheric magnetic islands surrounded by opposite-polarity regions and resemble an inverted funnel or helmet. If a small flux rope erupts inside the funnel, then its magnetic structure cannot survive passage through the null point \citep{Uralov2014}, and the released plasma flows out as a jet. Eruptions in such configurations are characterized by circular ribbons and impulsive temporal profiles \citep{Masson2009, Meshalkina2009}. Magnetic islands inside opposite-polarity regions occur very often, and inverted funnels (helmets) are also expected to be quite common configurations. For example, similar configurations are conjectured in Figures \ref{F-2017-05-01_aia211_304}b and \ref{F-2017-05-01_aia211_304}d. The roles of such configurations deserve further attention elsewhere.
\subsection{Initiation of an Eruption and Development of a Flux Rope} \label{S-flux-rope} All of the eruptions considered here started developing from below, at small heights in the corona. This circumstance is obvious for the small eruptions presented in Section~\ref{S-neg_bursts} and the larger event on 1 May 2017 shown in Section~\ref{S-eruption_may_1}. The situation was also similar in the CME-related 16 March 2016 event. We consider this event in more detail. The main active structure observed in this event was the eruptive prominence. Its motion started before the HXR and microwave bursts, and the flare ribbons developed later. The chain of events resembles the scenario of \cite{Hirayama1974}, in which an MHD instability of an electric current in the prominence drives its lift-off; the lift-off stretches the associated magnetic fields, forming the current sheet in which the flare reconnection occurs, and a shock wave is generated ahead of the erupting prominence. None of the SDO/AIA 304\,\AA, 171\,\AA, or 211\,\AA\ channels, capable of detecting non-flaring structures, reveals within the AIA field of view any larger feature embracing the prominence that could govern its eruption. The behavior of the erupting prominence in Figure~\ref{F-2016-03-16_aia304} and the \url{2016-03-16_AIA193_304_SRH_Fermi.mpg} movie indicates its own twist instability rather than a reflection of external processes in a larger structure, whose presence is often presumed. The structure of a pre-eruptive prominence is considerably different from that of a flux rope, which is rooted to the surface by its two ends only. The presence of numerous barbs indicates a multitude of flux-rope-like segments arranged along the magnetic neutral line, each of which is connected to the surface by its ends.
A presumable scenario, in which reconnection forms a single flux rope from a multitude of sheared field lines, accompanied by the appearance of flare loops, was theoretically described by \cite{InhesterBirnHesse1992} and \cite{LongcopeBeveridge2007} and received quantitative support in observational studies (e.g. \citealp{Qiu2007, Miklenic2009}). The MHD instability, which governs the initiation and development of the prominence eruption, is presumably driven by an electric current. In pre-eruptive force-free conditions, $\nabla \times \textbf{\textit{B}} = \alpha\textbf{\textit{B}}$; the density of the electric current is proportional to the magnetic field strength in a prominence. The field strength in its environment above an active region steeply falls off as the height increases (e.g. \citealp{Gary2001, Mann2003}). Therefore, the magnetic field and electric current in a prominence are typically stronger near the solar surface than at larger heights. To produce the acceleration with a half-height duration of 5 minutes observed for the erupting prominence in Figure~\ref{F-kinem}, the characteristic Alfv{\'e}n time in the responsible processes should be much shorter. This would not be possible if the eruption had been governed by a large-scale structure with a weaker magnetic field and a longer Alfv{\'e}n time. The reconnection process detaches the barbs under the prominence, transforming its structure into the helical structure of the developing flux rope. When its central part is nearly formed, it becomes convex, and the torus instability develops. Figure~\ref{F-2016-03-16_aia304}c presents an episode of this stage corresponding to the maximum acceleration measured. Then the twist instability, which is often observed but does not seem to be a necessary phase of the eruption process, develops in Figures \ref{F-2016-03-16_aia304}d and \ref{F-2016-03-16_aia304}e.
The flux-rope formation is unlikely to occur perfectly and terminate completely in the course of the prominence eruption. Some of the pre-eruptive segments could fail to reconnect. The flux-rope-like structures actually observed (e.g. \citealp{Cheng2013, Grechnev2016}) resemble twisted bundles of loops rather than a perfect croissant-shaped structure. \cite{KuzmenkoGrechnev2017} revealed indications of an ongoing flux-rope formation from twisted core structures during the CME expansion. The structure of the CME body in Figure~\ref{F-2016-03-16_c2} observed on 16 March 2016 also seems to be more complex than the expected croissant-like flux rope in the CME cavity. These circumstances indicate that a flux rope forms in the course of a time-extended process. The eruption observed in the extreme ultraviolet is the most impulsive, powerful stage of this process, when the future CME structure develops but its components have not yet constituted the whole. This fact is essential for determining the actual shock-wave excitation scenario. \subsection{Shock Excitation Scenarios} \label{S-scenarios} The impulsive-piston shock-wave excitation scenario revealed in Section~\ref{S-march16} is not exceptional. The main conditions necessary to realize this scenario are i)~a more or less impulsive acceleration of an eruptive structure, and ii)~a pronounced falloff of the fast-mode speed away from the eruption region. These conditions are typical of many events, irrespective of the flare size, and even in cases where non-thermal bursts are not observed in HXR or microwaves. Only an abrupt eruption is required; the presence of a CME is not necessary. On the other hand, the impulsive-piston scenario is not expected for gradually accelerating CMEs initiated by the eruptions of large quiescent prominences away from active regions. It is also not expected for confined flares, independent of their size, which are not associated with the expansion of any structures. Such rare flares sometimes occur (e.g.
\citealp{Thalmann2015}; a few major confined flares also occurred in September 2005). While the shock-wave excitation scenarios have been known for several decades, observations until recently did not allow identification of which one was responsible for the appearance of coronal shock waves (see \citealp{VrsnakCliver2008} for a review). The search for their origins has been focused on the alternative of ``impulsive-piston shock excitation by a flare pressure pulse versus bow-shock excitation by the outer surface of a super-Alfv{\'e}nic CME''. A rather obvious scenario outlined in Section~\ref{S-EUV_wave} has escaped attention, possibly because flux ropes are assumed to pre-exist when the eruptions develop. Having adopted the ``flare versus CME'' alternative, one is constrained by its framework and comes to a conclusion about a flare-related shock origin if the exciter exhibits impulsive properties (e.g. in the case of Moreton waves), or if a mismatch between the estimated speeds of the shock and CME is conspicuous, especially if a CME is absent. However, the role of the flare pressure in the shock-wave excitation is unlikely to be significant \citep{Grechnev2011_I, Grechnev2015} for the following reasons. \begin{enumerate} \item The plasma density and temperature in flare loops are manifested in their SXR emission. It is gradual in nature and resembles the time integral of the HXR burst (the Neupert effect: \citealp{Neupert1968}). On the other hand, the HXR burst roughly corresponds to a sharp acceleration of an eruption, which produces a strong MHD disturbance, while the plasma pressure in flare loops increases gradually. \item The plasma pressure in flare loops cannot considerably exceed the magnetic pressure, being compensated by the dynamic pressure of the reconnection outflow.
Even if the plasma pressure in a loop becomes comparable with the magnetic pressure ($\beta \approx 1$), the effect is as small as an increase in each of its three dimensions by a factor of $(\beta + 1)^{1/4}$ (see \citealp{Grechnev2006} for details). The increase in the volume of flare loops is basically insufficient to produce an appreciable outward MHD disturbance. \end{enumerate} These considerations were verified in case studies of a few events, in which the presence of shock waves was undoubted and their onset times were estimated with certainty \citep{Grechnev2011_I, Grechnev2015}. The plasma pressure in flare loops estimated from GOES SXR fluxes was still steadily rising when the waves were excited near the peak time of the impulsive acceleration of an eruption. The size of the SXR-emitting regions in RHESSI images did not change around the wave onset time. In some events, the wave onset time clearly corresponded to the early rise of an HXR or microwave burst, when the chromospheric evaporation responsible for the plasma pressure in flare loops had just started \citep{Grechnev2013_6dec, Grechnev2014_II, Grechnev2015, Grechnev2016}. The same situation is seen in Figure~\ref{F-kinem} in the 16 March 2016 event. The conclusions drawn from the case studies are supported by the statistical independence of the EUV wave occurrence from the flare size \citep{Long2017}. While the relation between the velocity of an eruption and the ambient fast-mode speed is not important for the initial impulsive-piston excitation of a shock wave, it is crucial for its later evolution. A decelerating shock wave is supplied with energy by the trailing ``piston'', whose role at larger distances is actually played by the outer surface of the CME body. If it is fast, then the shock wave changes into the bow-shock regime. If the CME is slow, as was the case in the 16 March 2016 event, then the shock decays into a weak disturbance.
This occurs most rapidly in confined eruptions without CMEs (but not in confined flares). Very rare events of this kind are indeed known, in which EUV waves or Type~II bursts, or both, were observed (e.g. \citealp{Shanmugaraju2006, Magdalenic2012, Nitta2014, Grechnev2014_II, Eselevich2017}). Thus, the fact that the vast majority of EUV waves are associated with CMEs (e.g. \citealp{Biesecker2002, Long2017}) does not guarantee that every shock wave has an associated CME. Studies of shock-wave histories face severe observational difficulties. Eruptive structures rapidly acquire high velocities and dramatically lose brightness. Wave signatures exhibit strong initial deceleration, which is most conspicuous in the first few minutes of their propagation, as Figure~\ref{F-aia_wave_kinem}b exemplifies. At that time, the measurements of the wave propagation, and even its detection, are hampered by strong flare emission, while the imaging rate and dynamic range of telescopes are limited. In addition, different objects can appear similar to shock-related EUV waves --- for example, rising CME structures and quasi-stationary compression regions at their bases \citep{ZhukovAuchere2004, ChenFangShibata2005, Grechnev2011_III, Warmuth2015}. Finally, a shock wave excited by a sharply erupting structure has kinematics similar to that expected for a hypothetical flare blast wave. These circumstances, along with the framework of the ``flare vs. CME'' alternative, probably account for the conclusions made in some case studies in favor of flare-ignited shock waves. On the other hand, this alternative and the observational difficulties might incline other studies toward the initial bow-shock excitation by the outer surface of a super-Alfv{\'e}nic CME. Being constrained by these difficulties, researchers are forced to invoke indirect arguments, which do not always ensure the unambiguous identification of a scenario.
One such argument is the presence of a fast CME, which cannot guarantee the bow-shock regime of an associated wave. The bow-shock regime is also not certified by the position of the Type~II source ahead of a CME, because the Type~II emission can originate from the streamer above the eruption region disturbed by the quasi-parallel blast-wave-like shock. Next, the delayed appearance of a Type~II burst does not necessarily mark the onset of the shock formation. On the other hand, the absence of a CME is not evidence of a flare-related shock origin, as mentioned. \subsection{Overview of Actual Shock-Wave Histories} To avoid deceptive indications, it is reasonable to follow the appearance and evolution of shock waves and to measure their propagation from a combined analysis of their various manifestations in different spectral ranges. This way is time-consuming but provides the highest confidence in the adequacy of the outcome. Using this approach, we made a detailed analysis of the shock-wave histories for several events in a manner similar to Section~\ref{S-march16}, mainly from extreme-ultraviolet and white-light coronagraph images and dynamic radio spectra, supplemented by other data (e.g. H$\alpha$ images), if available. The results of these case studies are summarized in Table~\ref{T-summary}, whose column 15 specifies the article where they were published. Table~\ref{T-summary} contains 13 events listed chronologically. The kinematics of eruptive filaments or similar structures was measured in 8 events, where possible. Two shock waves following each other and eventually merging into a single stronger shock were revealed in four events. Column 1 lists the number of an event with a label ``a'' or ``b'' specifying one of the two shocks, if present. Columns 2--5 list the date (in the format of the Solar Object Identifier), peak time, duration, and importance of a flare according to the GOES reports, and column 6 gives its reported position.
Columns 7--9 present the estimated wave onset time, the peak time of an HXR or microwave burst, and the onset time of a Type~II burst. Columns 10--12 present the CME parameters taken from the online CME catalog (\url{https://cdaw.gsfc.nasa.gov/CME_list/}: \citealp{Yashiro2004}): the onset time at the limb estimated from a linear fit and second-order fit, and an average speed. Column 13 shows the outcome of the shock-wave history: either a bow shock, or decay. Column 14 lists the peak flux of near-Earth protons $> 10$\,MeV produced by the event (GOES). \begin{table*} \footnotesize \centering \caption{Summary of shock waves studied} \label{T-summary} \begin{tabular}{lllrcclccccrcrc} \hline \noalign{\vskip 1mm} \multicolumn{1}{c}{No.} & \multicolumn{1}{c}{Date} & \multicolumn{3}{c}{GOES} & \multicolumn{1}{c}{Position} & \multicolumn{1}{c}{Wave} & \multicolumn{1}{c}{$T_\mathrm{peak}$} & \multicolumn{1}{c}{Type II} & \multicolumn{3}{c}{CME} & \multicolumn{1}{c}{Shock} & \multicolumn{1}{c}{$J_{10}$} & \multicolumn{1}{c}{Refs}\\ \cline{3-5} \cline{10-12} \multicolumn{2}{c}{} & \multicolumn{1}{c}{Peak} & \multicolumn{1}{c}{Dur.} & Size & \multicolumn{1}{c}{} & \multicolumn{1}{c}{onset} & \multicolumn{1}{c}{HXR or} & \multicolumn{1}{c}{onset} & \multicolumn{2}{c}{Onset at $1\,\mathrm{R}_\odot$} & \multicolumn{1}{c}{Speed} & \multicolumn{1}{c}{outcome} & \multicolumn{1}{c}{[pfu]} \\ \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{time} & \multicolumn{1}{c}{min} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{} & \multicolumn{1}{c}{time} & \multicolumn{1}{c}{m/w} & \multicolumn{1}{c}{time} & 1-order & 2-order & \multicolumn{1}{c}{km\,s$^{-1}$} & \multicolumn{1}{c}{} & \\ \hline \multicolumn{1}{c}{1} & \multicolumn{1}{c}{2} & \multicolumn{1}{c}{3} & \multicolumn{1}{c}{4} & \multicolumn{1}{c}{5} & \multicolumn{1}{c}{6} & \multicolumn{1}{c}{7} & \multicolumn{1}{c}{8} & \multicolumn{1}{c}{9} & \multicolumn{1}{c}{10} & \multicolumn{1}{c}{11} & \multicolumn{1}{c}{12} & 
\multicolumn{1}{c}{13} & \multicolumn{1}{c}{14} & \multicolumn{1}{c}{15}\\ \hline \noalign{\vskip 1mm} 1 & 1997-09-24 & 02:48:00 & 9 & M5.9 & S31E19 & 02:46:50 & 02:46:50 & 02:48:40 & 02:33 & 00:55$^{1}$ & 532 & Decay & -- & 1 \\ 2a & 2001-12-26 & 05:40:00 & 135 & M7.1 & N08W54 & 05:04:00 & 05:04:40 & 05:08:00 & 05:06 & 05:10 & 1446 & Bow & 700 & 2 \\ 2b & & & & & & 05:09:00 & 05:09:00 & 05:12:00 & \multicolumn{3}{c}{--------- Same ---------} & \multicolumn{3}{c}{------ Same ------} \\ 3 & 2002-06-01 & 03:57:00 & 11 & M1.5 & S19E29 & 03:53:40 & 03:53:40 & 03:55:30 & \multicolumn{3}{c}{No coronagraph data} & Decay? & -- & 1 \\ 4 & 2003-11-18 & 07:52:00 & 43 & M3.2 & N00E18 & 07:41:00 & 07:42:00 & 07:47:00 & \multicolumn{3}{c}{Confined eruption} & Decay & -- & 3 \\ 5 & 2003-11-18 & 08:31:00 & 47 & M3.9 & N00E18 & 08:14:12 & 08:16:00 & 08:15:00 & 08:13 & 08:13 & 1660 & Bow & 0.7 & 3 \\ 6 & 2004-07-13 & 00:17:00 & 14 & M6.7 & N13W46 & 00:14:50 & 00:15:00 & 00:16:00 & 00:02 & 00:04 & 607 & Decay & 1 & 1,4 \\ 7a & 2006-12-13 & 02:40:00 & 43 & X3.4 & S06W23 & 02:23:20 & 02:25:30 & 02:26:00 & 02:25 & 02:29 & 1774 & Bow & 695 & 5 \\ 7b & & & & & & 02:27:20 & 02:29:00 & 02:28:00$^2$& \multicolumn{3}{c}{--------- Same ---------} & \multicolumn{3}{c}{------ Same ------} \\ 8a & 2007-05-19 & 13:02:00 & 31 & B9.5 & N07W06 & 12:50:00 & 12:51:15 & 12:52:00 & 12:56 & 13:00 & 958 & Decay & -- & 1 \\ 8b & & & & & & 12:56:00 & 12:57:00 & 13:01:00 & \multicolumn{3}{c}{--------- Same ---------} & \multicolumn{3}{c}{------ Same ------} \\ 9 & 2010-01-17 & 03:56:00 & ? 
& X1$^{3}$ & S25E128 & 03:47:48 & No data & 03:51:00 & 03:13 & 03:45 & 350 & Decay & + & 6 \\ 10 & 2010-06-13 & 05:39:00 & 14 & M1.0 & S21W82 & 05:35:10 & 05:36:00 & 05:38:00 & 05:14 & 04:56$^{1}$ & 320 & Decay & -- & 7 \\ 11 & 2011-02-24 & 07:35:00 & 19 & M3.5 & N19E84 & 07:29:00 & 07:30:30 & 07:34:30$^{4}$ & 07:16 & 07:23 & 1186 & Bow & -- & 8 \\ 12 & 2011-05-11 & 02:43:00 & 60 & B8.1 & N25W54 & 02:22:10 & 02:28:30$^{5}$ & 02:27:00 & 02:26 & 02:24 & 745 & Decay & 0.5 & 8 \\ 13 & 2016-03-16 & 06:46:00 & 23 & C2.2 & N14W83 & 06:36:30 & 06:37:30 & 06:45:00 & 06:04 & 06:21 & 592 & Decay & 1 & 9 \\ \hline \end{tabular} \flushleft $^{1}$ Acceleration is uncertain due to either poor height measurement or a small number of height-time measurements (remark from the CME catalog). $^{2}$ Not clear. $^{3}$ Average of the estimates from STEREO-B/EUVI 195\,\AA\ images of M6.4 by \cite{Nitta2013_farside} and X1.6 by \cite{Chertok2015}. $^{4}$ Reported 07:37:00 when the Type II structures became clear after overlap with a strong Type III group. $^{5}$ For the derivative of the GOES flux at 1--8\,\AA. References: 1.~\cite{Grechnev2011_I}, 2.~\cite{Grechnev2017_III}, 3.~\cite{Grechnev2014_II}, 4.~\cite{Grechnev2008}, 5.~\cite{Grechnev2013_6dec}, 6.~\cite{Grechnev2011_III}, 7.~\cite{Grechnev2016}, 8.~\cite{Grechnev2015}, 9.~Present article. \end{table*} The events listed in Table~\ref{T-summary} had greatly differing properties. The flares ranged in size from B8.1 to X3.4 and in duration from 9 to 135 minutes. The average CME speed ranged from 320\,km\,s$^{-1}$ to 1774\,km\,s$^{-1}$. Noteworthy was event 4, in which a confined eruption without any CME produced a shock wave, which excited clear large-amplitude oscillations of a remote filament observed in the H$\alpha$ line center and both wings (``winking filament''). The flares in the 13 events had differing morphologies, including two-ribbon flares and flares with circular ribbons. 
Nevertheless, the shock-wave excitation scenario was the same in all of these events. The wave onset times were close to the peak times of the HXR or microwave bursts or led them by up to 2 minutes (when these bursts were observed), i.e. the waves appeared no later than the flare impulsive phase. Despite the differences between the events listed in Table~\ref{T-summary}, the shock waves in all of them were initially excited in the same impulsive-piston scenario by sharply erupting filaments or similar structures, as described in Section~\ref{S-EUV_wave}. This fact allows the results obtained in studies of different events to be combined to reveal common properties of these shock waves. The possibility of their flare-related origin was examined in each case study and excluded for the reasons listed in Section~\ref{S-scenarios}. Neither was a shock initially excited in any of the events by a super-Alfv{\'e}nic CME. This result is also expected, because the impulsive-piston shock excitation by a relatively small erupting structure is highly efficient in a medium with a steep falloff of the fast-mode speed away from the eruption region. Hence, the shock appears much earlier than is possible in the bow-shock scenario; the shock waves initially resemble blast waves. While they eventually changed to the bow-shock regime in 4 events in Table~\ref{T-summary}, this did not affect their early development. The successive appearance in events 2, 7, and 8 of two shock waves within 6 minutes supports this conclusion, because a single super-Alfv{\'e}nic CME cannot drive more than one shock. The initial wave excitation and the CME development turn out to be closely related. Most likely, when an eruption starts, neither a CME nor its flux rope exists in its final form. For example, wave traces in event 10 were revealed inside the developing CME; the wave then passed through its structures and propagated outward like a decelerating blast wave \citep{Grechnev2016}. 
There is thus no reason for concern about the role in shock-wave excitation of a presumed lateral overexpansion of the CME bubble, which does not yet exist at that time. There was nothing to expand laterally in event 13 (Section~\ref{S-march16}); nevertheless, the shock wave appeared. The CME speeds listed in column 12 of Table~\ref{T-summary} are related to the plane of the sky, while the CME orientations could be strongly off-plane. The speeds might therefore be underestimated considerably for slow CMEs and moderately for fast CMEs, whose measurements are probably related to nearly spherical wave fronts. Under these circumstances, the transition to a CME-driven shock occurs for those CMEs whose average speed exceeds 1000\,km\,s$^{-1}$. Indeed, to ensure the super-Alfv{\'e}nic regime, the CME speed should exceed the sum of the Alfv{\'e}n speed and the solar wind speed. Using the models of the Alfv{\'e}n speed \citep{Mann2003} and solar wind speed \citep{Sheeley1997}, \cite{Grechnev2017_III} estimated this sum to decrease from 900\,km\,s$^{-1}$ at $5\,\mathrm{R}_\odot$ to 650\,km\,s$^{-1}$ at $25\,\mathrm{R}_\odot$ (with an established solar wind speed of 400\,km\,s$^{-1}$). Even with a CME speed as high as 1446\,km\,s$^{-1}$ in event 2, the bow-shock regime became possible only at distances exceeding $15\,\mathrm{R}_\odot$, while the wave front was still nearly spherical \citep{Grechnev2017_III}. The transition of a blast-wave-like shock to a CME-driven bow shock corresponds to the change from the regime of plasma extrusion by the CME body to the regime of plasma flow around its outer surface, when the aerodynamic drag becomes significant. This change, occurring at considerable distances from the Sun, determines the shape of a CME-driven shock. It forms from a nearly spherical blast-wave-like shock, while its driver expands in three dimensions \citep{VrsnakCliver2008, Grechnev2011_I}. 
This makes a bow-shock shape with a Mach cone unlikely and raises a question about its actual shape. An additional consequence of Table~\ref{T-summary} is the early shock-wave appearance in events 2 and 7, which were responsible for major energetic particle events and for GLE63 and GLE70. This circumstance should be considered in studies of solar energetic particles. All of the listed events were associated with decelerating shock waves. The drag should also decelerate fast CMEs, when it becomes important. These circumstances imply that the onset time of a corresponding transient estimated from the second-order fit should generally be somewhat later than that estimated from the linear fit. This pattern mostly holds for the events listed in Table~\ref{T-summary}, except for those whose observations were of insufficient quality (events 1, 10 and 12; the two estimates were equal for event 5). A positive acceleration estimated in the CME catalog for fast CMEs is probably a result of observational difficulties. Besides the implications mentioned, there are several other significant consequences of the shock-wave histories discussed. All of them emphasize the importance of systematic studies of coronal shock waves. Statistical studies of EUV waves have recently been made by \cite{Nitta2013_waves}, \cite{Muhr2014}, and \cite{Long2017}. Some of their conclusions do not agree with each other, probably because of the observational difficulties shown in Sections \ref{S-EUV_wave} and \ref{S-scenarios}. Some others do not seem obvious. Our results can shed light on these challenges. For example, all of these studies stated a poor correspondence between EUV waves and Type~II bursts. This is difficult to explain if the Type~II emission originates ahead of a CME, while \cite{Muhr2014} consider Type~II-related shocks to be the driving agent of EUV waves. The situation is different if Type~IIs originate in streamers located away from the eruption region. Such a streamer may or may not exist. 
If the antiparallel magnetic fields in a streamer are separated by plasma outflow caused, e.g., by a preceding CME, then the streamer cannot generate Type~II emission. On the other hand, the visibility of an EUV wave is determined by the ambient fast-mode speed and can be poor, e.g., in coronal holes \citep{Grechnev2011_III, Long2017}. The plasma density depletion caused by a preceding CME also disfavors the detection of an EUV wave. These circumstances might be implicated in the extreme cases of mismatch between EUV waves and Type~II bursts shown by \cite{Nitta2014}. The pattern found by \cite{Muhr2014} and \cite{Long2017}, with faster EUV waves exhibiting a stronger deceleration, suggests that the highest-speed initial stage of the EUV wave propagation is often not fully measured, as the velocity--time plot in Figure~\ref{F-aia_wave_kinem}b explains. Some causes of poor EUV wave visibility are mentioned in the preceding paragraph. The absence of any relationship between the EUV wave properties and the size of the associated flare, stated by \cite{Nitta2013_waves} and \cite{Long2017}, is consistent with our results. Instead, the shock-wave excitation mechanism discussed here is expected to depend on the acceleration of the eruptive structure, which is not easy to measure. \subsection{The Role of the Flare Duration in Soft X-rays} There is a traditional view relating impulsive flares to narrow or no CMEs and long-decay events (LDEs) to large CMEs \citep{Kahler1989}. While the authors of this statement talked primarily about major flares ($\geq$\,M1 GOES importance), this pattern obviously holds for the minor events presented in Section~\ref{S-neg_bursts}. However, the wide CME of 16 March 2016 discussed in Section~\ref{S-march16} also developed in association with an impulsive flare. Some CMEs in Table~\ref{T-summary} were also related to impulsive flares. 
\cite{NittaHudson2001} presented a series of large CMEs, which occurred in association with major impulsive flares in the same active region within 60 hours. Conversely, infrequent major LDEs without any eruptions are known (e.g. \citealp{Thalmann2015}). Thus, the pattern found by \cite{Kahler1989} seems to represent a tendency, but does not ensure a one-to-one correspondence. The long decay time in LDEs might be determined by long-lasting reconnection processes occurring typically in the post-eruption phase \citep{Grechnev2006} or at a late stage of rare confined flares. The conditions favoring such processes still need to be understood. On the other hand, the SXR GOES fluxes might be invoked to find indications of a probable EUV wave occurrence. According to the Neupert effect, the rise time of the SXR flux should correspond to the acceleration duration of an eruption. Combined, possibly, with another parameter of an event, this rise time might characterize its impulsiveness, indicate the magnitude of the acceleration, and thus point to a probable EUV wave. \section{Summary and Conclusion} \label{S-summary} The T-shaped SRH antenna array with redundant baselines has allowed the implementation of algorithms to construct correlation plots of the solar radio emission and to synthesize images of the Sun without recourse to calibration radio sources. The high sensitivity of the interferometer, about 0.01\,sfu, in combination with a high dynamic range makes it possible to observe in microwaves, without attenuators, a wide range of solar activity, from sources of powerful flare bursts down to faint manifestations associated with microeruptions. The latter occur more frequently but are less studied. The first observations with the SRH have demonstrated its promise for detecting solar eruptions of differing energies and spatial sizes. 
We have demonstrated three ways to detect the eruptions: i)~direct observations of erupted material, ii)~observations of microwave bursts as probable pointers to eruptive events, and iii)~detection of faint eruptive events that manifest as depressions in the total-intensity correlation plots accompanied by distinct changes in the circular-polarization plots. Such events can be too weak and small to be detected by any other observations. We have learned from the SRH observations that microwave depressions of this kind at 4--8\,GHz are typically polarized. They can be caused by eruptions from the same region repeating within a few hours, and this can recur more than once. Such phenomena raise the question of what favors energy release in small portions, preventing its accumulation. An answer might shed additional light on the preparation conditions, and their manifestations, for big eruptions, which pose the largest space-weather hazard. Understanding the mechanisms responsible for eruptions of different sizes and their implications for space-weather disturbances, as well as developing criteria for their detection, are among the important future tasks for the multi-frequency SRH. To carry out detailed studies of solar eruptions, it is reasonable to combine the SRH observations with multi-instrument data from different spectral ranges. This is a typical approach in such studies. Besides the listed opportunities to detect various eruptions, a significant advantage of the SRH observations is promised by their dense frequency sampling: in December 2017, the SRH started observing, first at 15 frequencies, and then at 32 frequencies within the 4--8\,GHz range. In February 2018, the time to process each frequency bin was reduced to its planned value of 0.28\,s. The time to collect the visibilities at all 32 frequencies thus became about 9\,s. 
From the multi-instrument analysis of an eruptive event observed by the SRH on 16 March 2016, we have followed the development of a CME and the associated shock wave and compared them with the expectations from well-known models. This event has demonstrated direct shock-wave excitation by an erupting prominence without any indication of a cavity or a rim bounding it, which contradicts the crucial role presumed for them in some studies. Another highlight of this event is that the shock wave, which was probably responsible for a near-Earth proton enhancement, was not CME-driven and appeared during the flare impulsive phase, when the CME was still in the development stage. Thus, the widely accepted view of the origin of solar energetic particles should be refined. The scenario discussed appears to be typical of various solar eruptions of different importance. We hope our results will be helpful in further studies of solar eruptions, CMEs, and coronal shock waves. \section*{Acknowledgements} We thank N.V.~Nitta and H.~Nakajima for fruitful discussions and I.V.~Kuzmenko for the RT-2 data. We appreciate our colleagues from the Radio Astrophysical Department and the Radio Astrophysical Observatory in Badary. We are indebted to the anonymous reviewers for their valuable remarks. We thank the NASA/SDO and the AIA and HMI science teams; the instrument teams of GOES, RHESSI, the SWAP telescope on ESA's PROBA2 spacecraft, NASA's Fermi Gamma-Ray Space Telescope, the Culgoora and Learmonth spectrographs of the Australian Space Weather Services, and LASCO on SOHO. SOHO is a project of international cooperation between ESA and NASA. We thank the team maintaining the CME Catalog at the CDAW Data Center by NASA and the Catholic University of America in cooperation with the Naval Research Laboratory. The work was performed with budgetary funding of Basic Research program II.16. 
The results were obtained using the Unique Research Facility Siberian Solar Radio Telescope \url{http://ckp-rf.ru/usu/73606/}. \bibliographystyle{elsarticle-harv}
\section{Introduction}\label{sec:1} \setcounter{section}{1} \setcounter{equation}{0}\setcounter{theorem}{0} It is a well-known result that the decay of the Fourier coefficients or the Fourier transform is related to the smoothness of the function. Intuitively, this is because we decompose the function as a sum or integral of infinitely differentiable cosine and sine functions. Thus, the more irregular the function is, the more its Fourier representation requires high-frequency components to represent the abrupt changes in it. A bijective result, predicting the same continuity from the Fourier decay alone, exists in the case of $L^2$ H\"older continuity and the decay of the function's $L^2$ approximation. In this paper, the connection between uniform H\"older continuity and the decay rate of Fourier coefficients is studied in detail. It is shown that a bijective result is not possible, since absolute continuity and infinite oscillations affect the decay rate but are not necessary conditions. Specifically, I will prove the following in Section \ref{sec:3}: \begin{theorem}\label{Th1} For some $m = 0, 1, 2, \ldots$, suppose that $f^{(m)}$ is absolutely continuous, $f \in C_{m,\,\mu}[0,T]$ with some $\mu \in (0,1]$, $f$ is $T$-periodic and the number of oscillations of the finite difference function $\Delta_h f^{(m)}$ is uniformly bounded for every $0 < h \leq h_0$. Then the Fourier coefficients of $f$ decay like $c_k(f) \in O(1/|k|^{1+m+\mu})$ as $|k| \rightarrow \infty$. \end{theorem} Conversely, the decay rate $O(1/|k|^{1+m+\mu})$ of the Fourier coefficients also implies that $f \in C_{m,\,\mu}[0,T]$. This result is probably known in the literature, though it is not explicitly written in the textbooks. I will add a proof of this result in Section \ref{sec:3} using difference calculus and the fact that the falling factorial is asymptotically equal to a polynomial with the same exponent (which is also proved in this paper, in Section \ref{sec:2}). 
This decay rate also implies the absolute continuity of $f^{(m)}$ in the case $\mu \in (0.5,1)$ but not in the case $\mu \in (0,0.5)$. This part of the problem was studied by Littlewood, Wiener and Wintner as well as Schaeffer in the 1930s, and these results are briefly reviewed in Section \ref{sec:3}. To prove that the finite-oscillations condition is necessary for Theorem \ref{Th1}, we calculate the uniform H\"older continuity of chirps and estimate their Fourier decay in Section \ref{sec:4}. The exponents which define these infinitely oscillating functions can be chosen so that the chirps are absolutely continuous and in $C_{m,\,\mu}[0,T]$. We then find that the Fourier coefficients of some chirps decay more slowly than $O(1/|k|^{1+m+\mu})$. Finally, the main result is generalised to Fourier transforms in Section \ref{sec:5} and some examples are discussed. Mellin-Barnes integral representations of the Fourier transforms of absolutely continuous chirps are derived. \section{Definitions}\label{sec:2} \setcounter{section}{2} \setcounter{equation}{0}\setcounter{theorem}{0} We are mostly interested in signals of length $T$, defined on the interval $[0,T]$ or on $[-T/2,T/2]$. The usual Lebesgue spaces of complex-valued functions on the interval are denoted by $L^p(0,T)$ and on the real line by $L^p(\R)$, with $1 \leq p \leq \infty$. The corresponding discrete spaces of infinitely long complex-valued vectors defined over the integers are denoted by $l^p(\Z)$. The important regularity properties of functions on the interval are as follows. \begin{definition}\label{Def:boundedvariation} A function $f: [0,T] \rightarrow \C$ is of \textit{bounded variation}, i.e. $f \in BV[0,T]$, if its \textit{total variation} is finite, i.e. 
\begin{equation} V_0^T(f) = \sup_\mathcal{P} \sum_{k=1}^N |f(t_k) - f(t_{k-1})| < \infty, \end{equation} where $t_k$, $k = 0,1,\ldots,N$ is a partition of the interval $[0,T]$ and the supremum is taken over all possible partitions $\mathcal{P}$ (of any number $N$ of points) of $[0,T]$. \end{definition} \begin{definition}\label{Def:absolutecontinuity} A function $f: [0,T] \rightarrow \C$ is \textit{absolutely continuous}, i.e. $f \in AC[0,T]$, if its derivative $f'$ exists a.e. on $[0,T]$, is Lebesgue integrable, and satisfies \begin{equation} \int_0^t f'(\tau) \, \mathrm{d}\tau = f(t) - f(0), \hspace{1cm} \text{for every } t \in [0,T]. \end{equation} If $f^{(m)} \in AC[0,T]$ for some $m = 0, 1, 2, \ldots$, then $f$ belongs to the \textit{Sobolev space} $W_{m+1}^1[0,T]$. \end{definition} Let us review a couple of basic results related to $BV$ and $AC$ functions. \begin{lemma} \label{lemma:BVsum} If $f,g \in BV[0,T]$ and $\alpha,\beta \in \C$, then $\alpha f + \beta g \in BV[0,T]$ as well and \begin{equation} V_0^T(\alpha f + \beta g) \leq |\alpha|V_0^T(f) + |\beta|V_0^T(g). \end{equation} If $0 \leq a < b < c \leq T$, then \begin{equation} V_a^c(f) = V_a^b(f) + V_b^c(f). \end{equation} \end{lemma} \begin{proof} For real-valued functions: \cite[pp. 328-330]{Kolmogorov}. \end{proof} \begin{lemma} \label{lemma:ACsum} If $f,g \in AC[0,T]$ and $\alpha,\beta \in \C$, then $\alpha f + \beta g \in AC[0,T]$ as well. \end{lemma} \begin{proof} For real-valued functions: \cite[p. 337]{Kolmogorov}. \end{proof} \begin{lemma} \label{lemma:BV_AC_difference} Let $f: [0,T] \rightarrow \R$. If $f \in BV[0,T]$, then it can be represented as the difference of two increasing functions. If $f \in AC[0,T]$, then it can be represented as the difference of two increasing and absolutely continuous functions. In both cases this representation is \begin{equation} f(x) = V_0^x(f) - \big( V_0^x(f) - f(x) \big). \end{equation} \end{lemma} \begin{proof} For real-valued functions: \cite[pp. 
331 and 337-338]{Kolmogorov}. \end{proof} \begin{lemma}[Lebesgue decomposition] \label{lemma:Lebesgue_decomposition} Any real-valued function $f \in BV[0,T]$ can be represented as the sum \begin{equation} f = \phi + \varphi + \chi, \end{equation} where $\phi \in AC[0,T]$, $\varphi \in BV[0,T]$ and $\varphi$ is also continuous and its derivative is zero a.e. Finally, $\chi$ is a jump function, i.e. it is piecewise constant with an at most countable number of steps. It follows that $f' = \phi'$ a.e. \end{lemma} \begin{proof} \cite[p. 341]{Kolmogorov}. \end{proof} \begin{definition}\label{Def:holdercontinuity} A function $f: [0, T] \rightarrow \C$ is \textit{uniformly H\"older continuous} of order $\mu \in (0,1]$, if \begin{equation}\label{eq:holdercontinuity} |f(t + h) - f(t)| \leq C h^\mu, \end{equation} holds for all $t$, $t+h \in [0,T]$ and $0<h<h_0$, where $h_0$ is some sufficiently small number. Then we write $f \in C_\mu[0,T]$. If for some $m = 1, 2, 3, \ldots$ it holds that $f^{(m)} \in C_\mu[0,T]$, we write $f \in C_{m,\,\mu}[0,T]$. The supremum of all $\mu$ such that (\ref{eq:holdercontinuity}) holds for $f^{(m)}$ is called the \textit{H\"older coefficient} of $f$. \end{definition} The case $\mu = 1$ is often called a \textit{Lipschitz condition} and the case $\mu = 0$ would simply mean that $f$ is bounded, but we are mainly interested in the fractional orders of regularity in this paper. The condition (\ref{eq:holdercontinuity}) with $\mu > 1$ is satisfied only by constant functions. Pointwise H\"older conditions can be defined if one considers, for example, the condition (\ref{eq:holdercontinuity}) only in a neighbourhood of $t$. Different definitions for pointwise H\"older exponents are discussed, for example, at the beginning of \cite{HolderExp}. \begin{definition}\label{Def:Lpholdercontinuity} Let $f \in L^p(0,T)$ be $T$-periodic, where $p \geq 1$. 
Function $f$ is $L^p$ H\"older continuous of order $\mu \in (0,1]$, if for all $0<h<h_0$ \begin{equation} \Vert f_h - f \Vert_p = \left( \frac{1}{T} \int_0^T |f(t+h) - f(t)|^p \mathrm{d}t \right)^{1/p} \leq C h^\mu, \end{equation} and we write $f \in C_\mu^p[0,T]$. If $f^{(m)} \in C_\mu^p[0,T]$, we write $f \in C_{m,\,\mu}^p[0,T]$. The case $p = \infty$ gives us just the uniform H\"older continuity. \end{definition} We will also deal with asymptotic notations near the infinities. These are the usual big $O$ and small $o$ function spaces $O(f)$ and $o(f)$ generated by a function $f: \R \rightarrow \C$ or $O(f_n)$ and $o(f_n)$ generated by a sequence $\{f_n\}_{n=-\infty}^\infty$. We also say that functions $f$ and $g$ (or similarly sequences $f_n$ and $g_n$) are \textit{asymptotically equal} iff \begin{equation} \lim_{|t|\rightarrow \infty}\frac{f(t)}{g(t)} = 1, \end{equation} which we denote $f \sim g$; it implies that $f \in O(g)$ and $g \in O(f)$. The asymptotic equivalence may also be considered for the positive or negative infinities separately. The next lemma is a classic example of an asymptotic equivalence which we will also utilise shortly. \begin{lemma} The Gamma function \begin{equation}\label{eq:Gamma} \Gamma(s) = \int_0^\infty x^{s-1}e^{-x} \, \mathrm{d}x, \hspace{0.5cm} \re(s) >0, \end{equation} satisfies Stirling's formula for $t \in \R_+$ \begin{equation}\label{eq:Stirling} \Gamma(t+1) \sim \sqrt{2 \pi t} \left( \frac{t}{e} \right)^t, \hspace{0.5cm} \text{as } t \rightarrow \infty. \end{equation} \end{lemma} \begin{proof} Multiple proofs exist in the literature, see for example \cite{Stirlings} or \cite[pp. 12 -- 13]{SpecialFunctions}. The rate of this approximation was first discovered by Abraham de Moivre and the constant was evaluated by Stirling. \end{proof} The Gamma function satisfies the following formulas which also define its analytic continuation \cite[pp. 
3 -- 4]{SpecialFunctions} \begin{equation} \label{eq:Gamma_iterate} \Gamma(s+1) = s\Gamma(s), \hspace{0.5 cm} s \neq 0, -1, -2,\ldots, \end{equation} \begin{equation} \label{eq:Gamma_sine} \Gamma(s)\Gamma(1-s) = \frac{\pi}{\sin(\pi s)}, \hspace{0.5 cm} s \neq \Z. \end{equation} \begin{example} \label{ex:asymptotic} We will also need the following simple asymptotic equivalences, where $t \in \R_+$, $x \in \R$ and $t>x$ \begin{equation} (t-x)^{t-x} \sim e^{-x}t^{t-x}, \hspace{0.5cm} \text{as } t \rightarrow \infty, \end{equation} and \begin{equation} \sqrt{t} \sim \sqrt{t-x}, \hspace{0.5cm} \text{as } t \rightarrow \infty. \end{equation} The proofs are straightforward calculations \begin{align*} \lim_{t\rightarrow\infty} \frac{(t-x)^{t-x}}{t^{t-x}} &= \lim_{t\rightarrow\infty} \left( \frac{t-x}{t}\right)^{t-x} = \lim_{t\rightarrow\infty} \left( 1 - \frac{x}{t}\right)^{t-x} \\ &= \lim_{t\rightarrow\infty} \exp \left( (t-x) \ln \left( 1 - \frac{x}{t}\right) \right) \\ &= \exp \left( \lim_{t\rightarrow\infty} t \ln \left( 1 - \frac{x}{t}\right) \lim_{t\rightarrow\infty} \frac{t-x}{t} \right) \\ &= \exp \left( \lim_{t\rightarrow\infty} \frac{\ln \left( 1 - \frac{x}{t}\right)}{1/t} \lim_{t\rightarrow\infty} \frac{1-\frac{x}{t}}{1} \right) \\ &=\exp \left( \lim_{t\rightarrow\infty} \frac{\frac{\mathrm{d}}{\mathrm{d}t} \ln \left( 1 - \frac{x}{t}\right)}{\frac{\mathrm{d}}{\mathrm{d}t} 1/t} \right) = \exp \left( \lim_{t\rightarrow\infty} \frac{ \frac{x}{t^2(1-x/t)} }{-1/t^2} \right) \\ &= \exp \left( \lim_{t\rightarrow\infty} \frac{x}{\frac{x}{t} -1} \right) = e^{-x}, \end{align*} where we used L'H\^opital's / Johann Bernoulli's rule. This proves the first statement. The second is simpler \begin{align*} \lim_{t\rightarrow\infty} \frac{\sqrt{t-x}}{\sqrt{t}} &= \lim_{t\rightarrow\infty} \sqrt{\frac{t-x}{t}} = \lim_{t\rightarrow\infty} \sqrt{1 - \frac{x}{t}} \\ &= \sqrt{\lim_{t\rightarrow\infty}\left(1 - \frac{x}{t}\right)} = 1. 
\end{align*} \end{example} \begin{definition}\label{Def:fourierseries} Let $f \in L^1(0,T)$ and $T$-periodic. Its \textit{Fourier series} is \begin{equation}\label{eq:FourierSeries} \sum_{k = -\infty}^\infty c_k(f) e^{i 2 \pi k t /T }, \end{equation} where the \textit{Fourier coefficients} $c_k(f)$ are calculated as \begin{equation}\label{eq:ck} c_k(f) = \frac{1}{T} \int_{0}^{T} f(t) e^{-i 2 \pi k t/T}\, \mathrm{d}t, \hspace{0.5cm} k = 0, \pm 1,\pm 2,\ldots. \end{equation} \end{definition} \begin{definition}\label{maar:DFT} The \textit{Discrete Fourier transform} (DFT) of a sequence \\$\{f_n\}_{n=0}^{N-1} \in \C^N$ at the point $k$ is \begin{equation}\label{eq:DFT} \mathcal{F} \{f_n\}_k = F_k = \frac{1}{N} \sum_{n = 0}^{N-1} f_n e^{-i 2 \pi k n / N} \end{equation} and its \textit{inverse transform} (IDFT) at the point $n$ is \begin{equation}\label{eq:IDFT} \mathcal{F}^{-1} \{F_k\}_n = \sum_{k = 0}^{N-1} F_k e^{i 2 \pi k n / N}. \end{equation} \end{definition} The IDFT always returns the original sequence, which is why we could safely write above that the IDFT at the point $n$ is indeed $f_n $ \cite[p. 30]{TheDFT}. If we sample a finite length interval with a finer resolution, then the DFT values (\ref{eq:DFT}) approach the Fourier coefficients (\ref{eq:ck}) in the limit $N \rightarrow \infty$ \cite[p. 53]{TheDFT}. One just needs to realise that in this definition of the DFT the negative frequencies are located at the points $N/2<k \leq N-1 $. If $N$ is even, the DFT at $k = N/2$ is a combination of the highest resolvable positive and negative frequency. 
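The convergence of the DFT values (\ref{eq:DFT}) to the Fourier coefficients (\ref{eq:ck}) noted above can be checked with a short self-contained sketch. The sawtooth $f(t) = t$ on $[0,1)$ and its exact coefficients $c_k = i/(2\pi k)$ for $k \neq 0$ are standard; the code itself is illustrative only and is not part of the text.

```python
# Sketch: as the sampling of one period is refined, the DFT values approach
# the Fourier coefficients. For the sawtooth f(t) = t on [0, 1) the exact
# coefficients are c_k = i/(2*pi*k) for k != 0.
import cmath

def dft(samples, k):
    """DFT with the 1/N normalization used in the definition above."""
    N = len(samples)
    return sum(samples[n] * cmath.exp(-2j * cmath.pi * k * n / N)
               for n in range(N)) / N

for N in (64, 512, 4096):
    samples = [n / N for n in range(N)]     # one period of the sawtooth
    F1 = dft(samples, 1)
    c1 = 1j / (2 * cmath.pi)                # exact Fourier coefficient c_1
    print(N, abs(F1 - c1))                  # error shrinks roughly like 1/N
```

The $O(1/N)$ error here reflects the jump of the sawtooth at the period boundary; for smooth periodic functions the convergence is much faster.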
\begin{definition} The \textit{Fourier transform} of a function $f \in L^1(\R)$ is \begin{equation}\label{eq:FourierL1} \mathcal{F}\big\{f(t)\big\}(\nu) = \widehat{f}(\nu) = \int_{-\infty}^{\infty} f(t) e^{-i2\pi \nu t} \, \mathrm{d}t \end{equation} and its \textit{inverse transform} is \begin{equation} \mathcal{F}^{-1}\big\{\widehat{f}(\nu)\big\}(t) = \int_{-\infty}^{\infty} \widehat{f}(\nu) e^{i2\pi \nu t} \, \mathrm{d}\nu. \end{equation} \end{definition} \begin{definition} The \textit{Mellin transform} of a locally integrable function $f$ is \begin{equation}\label{eq:Mellin} \mathcal{M}\big\{f(t)\big\}(s) = F(s) = \int_0^\infty t^{s-1}f(t) \, \mathrm{d}t, \end{equation} for those $s \in \C$ for which the integral converges. \end{definition} \begin{example} \label{ex:Mellin} Some known Mellin transforms and general properties that are needed later are listed here. Let $a \in \R$, $a \neq 0$, in (\ref{eq:Mellin1}) and $a > 0$ in (\ref{eq:Mellin2}): \begin{equation}\label{eq:Mellin1} \mathcal{M}\big\{f(t^a)\big\}(s) = |a|^{-1} F(s/a), \end{equation} \begin{equation}\label{eq:Mellin2} \mathcal{M}\big\{f(at)\big\}(s) = a^{-s} F(s), \end{equation} \begin{equation}\label{eq:Mellin3} \mathcal{M}\big\{\sin(t)\big\}(s) = \Gamma(s)\sin\left(\frac{\pi}{2} s\right), \hspace{0.5cm} -1 < \re(s) < 1, \end{equation} \begin{equation}\label{eq:Mellin4} \mathcal{M}\big\{\cos(t)\big\}(s) = \Gamma(s)\cos\left(\frac{\pi}{2} s\right), \hspace{0.5cm} 0 < \re(s) < 1, \end{equation} \begin{equation}\label{eq:Mellin5} \mathcal{M}\big\{f(t)g(t)\big\}(s) = \frac{1}{2\pi i} \int_{\sigma -i \infty}^{\sigma + i \infty}F(z)G(s-z) \, \mathrm{d}z. \end{equation} Proofs for (\ref{eq:Mellin1}), (\ref{eq:Mellin2}), (\ref{eq:Mellin3}) and (\ref{eq:Mellin4}) can be found for example in \cite[pp. 262 -- 263]{Sneddon} and for (\ref{eq:Mellin5}) in \cite[pp. 273 -- 275]{Sneddon}. Results (\ref{eq:Mellin3}) and (\ref{eq:Mellin4}) are also listed in \cite[p. 406]{Mellin-Barnes} and a proof of (\ref{eq:Mellin5}) is sketched in \cite[pp. 82 -- 83]{Mellin-Barnes}.
The formula (\ref{eq:Mellin5}) is also given in \cite[p. 52]{Titchmarsh}, where conditions for its validity are examined as well \cite[p. 60]{Titchmarsh}. It is valid if, for example, $F(\sigma + it) \in L^1(\R)$ and $t^{-\sigma} g(t) \in L^1(0,\infty)$, but it can also be extended to non-absolutely convergent integrals. \end{example} \begin{definition} \textit{Fox's H-function} is a wide class of special functions defined by the Mellin--Barnes integral \begin{equation} H^{m,n}_{p,q}(z) = \frac{1}{2\pi i} \int_\mathcal{L} \mathcal{H}^{m,n}_{p,q}(s)z^{-s} \, \mathrm{d}s, \end{equation} where $0 \leq m \leq q$, $0 \leq n \leq p$ are natural numbers, and $a_l, b_j \in \C$ and $A_l, B_j \in \R_+$ are such that \begin{equation} \mathcal{H}^{m,n}_{p,q}(s) = \frac{\prod_{j=1}^m \Gamma(b_j + B_j s) \prod_{l=1}^n \Gamma(1-a_l - A_l s)}{\prod_{l=n+1}^p \Gamma(a_l + A_l s) \prod_{j=m+1}^q \Gamma(1-b_j - B_j s)}. \end{equation} The path $\mathcal{L}$ separates the poles of the Gamma functions $\Gamma(b_j+B_j s)$ on its left side from the poles of the Gamma functions $\Gamma(1-a_l-A_l s)$ on its right side. \end{definition} \begin{definition} The \textit{forward difference operator} $\Delta_h$ acts on a function $f$ with a \textit{difference interval} $h >0$ as \begin{equation} \Delta_h f(t) = f(t+h) - f(t) = f_h(t) - f(t). \end{equation} \end{definition} \begin{lemma}[The fundamental theorem of sum calculus]\label{lemma:fundamental_theorem_of_sum_calculus} \begin{equation} \sum_{n = 0}^{N-1} f(a+nh) = \left. S(t) \right|_a^{a+Nh}, \end{equation} where $S$ is such that $\Delta_h S(t) = f(t)$. \end{lemma} \begin{proof} \cite[p. 96]{FiniteDifferences}. \end{proof} \begin{definition} The \textit{factorial polynomial} or \textit{falling factorial} is defined as \begin{equation} t^{(m)_h} = t(t-h)(t-2h)\dots(t-(m-1)h), \end{equation} for $m \in \Z_+$ and some $h >0$, and as \begin{equation} t^{(-m)_h} = \frac{1}{(t+h)(t+2h)\dots(t+mh)}, \end{equation} for $m \in \Z_+$.
A general definition for all $\gamma \in \R$ is \begin{equation} t^{(\gamma)_h} = \frac{h^\gamma\Gamma\left(\frac{t}{h} + 1\right)}{\Gamma\left(\frac{t}{h} - \gamma +1\right)}. \end{equation} \end{definition} \begin{lemma}\label{lemma:factorial_pol_difference} For all $\gamma \in \R$ the forward difference of a factorial polynomial is another factorial polynomial with the exponent $(\gamma -1)_h$, i.e. for any $h >0$ \begin{equation} \Delta_h t^{(\gamma)_h} = \gamma h t^{(\gamma-1)_h}. \end{equation} \end{lemma} \begin{proof} \cite[p. 104]{FiniteDifferences}. \end{proof} Thus the factorial polynomials behave under forward differences just as ordinary polynomials do under differentiation. In \cite[p. 104]{FiniteDifferences} it is also proved that the \textit{digamma function} plays the same role in difference calculus as the logarithm does in differential calculus, i.e. \begin{equation*} \Delta_h \left( \frac{\mathrm{d}}{\mathrm{d}t} \ln \Gamma \left( \frac{t}{h}+1 \right)\right) = \Delta_h \left( \frac{\Gamma' \left( \frac{t}{h}+1 \right)}{h \Gamma \left( \frac{t}{h}+1 \right)} \right) = \frac{1}{t+h}. \end{equation*} The asymptotic equivalences of Example \ref{ex:asymptotic}, together with Stirling's formula, are all utilised to prove the following Lemma. \begin{lemma}\label{lemma:factorial_pol_asymptotic} Let $\gamma \in \R$. Then the factorial polynomial is asymptotically equivalent to the power function with the same exponent, i.e. \begin{equation} t^{(\gamma)_h} \sim t^\gamma, \hspace{0.5cm} \text{as } t \rightarrow \infty. \end{equation} \end{lemma} \begin{proof} To simplify notation, we first write $t/h = x$.
Then we replace the gamma functions with their asymptotically equivalent Stirling's formula (\ref{eq:Stirling}) \begin{equation*} t^{(\gamma)_h} = \frac{h^\gamma\Gamma\left(x + 1\right)}{\Gamma\left(x - \gamma +1\right)} \sim \frac{h^\gamma\sqrt{2 \pi x} \left( \frac{x}{e} \right)^x}{\sqrt{2 \pi (x-\gamma)} \left( \frac{x-\gamma}{e} \right)^{x-\gamma}}, \hspace{0.5cm} \text{as } t \rightarrow \infty. \end{equation*} Next we use the asymptotic formulas from Example \ref{ex:asymptotic} and tidy up the result \begin{equation*} t^{(\gamma)_h} \sim \frac{h^\gamma\sqrt{2 \pi x} \left( \frac{x}{e} \right)^x}{\sqrt{2 \pi x} \left( \frac{x}{e} \right)^{x-\gamma}e^{-\gamma}} = h^\gamma \left( \frac{x}{e} \right)^{\gamma}e^{\gamma} = h^\gamma \left( \frac{t}{h} \right)^\gamma = t^\gamma, \hspace{0.5cm} \text{as } t \rightarrow \infty. \end{equation*} \end{proof} \section{Characterisation of smoothness with Fourier decay}\label{sec:3} \setcounter{section}{3} \setcounter{equation}{0}\setcounter{theorem}{0} First we state from the literature that $L^2$ H\"older continuity and the tail sums of the squared Fourier coefficients characterise each other. \begin{theorem} Suppose that $f\in L^2(0,T)$ is $T$-periodic. Then \\ $\left(\sum_{|k| > N}|c_k(f)|^2 \right)^{1/2} \in O(1/N^{m+\mu})$, for some $m = 0, 1, 2, \ldots$ and $\mu \in (0,1)$ if and only if $f \in C^2_{m,\,\mu}[0,T]$ and $f \in W_{m}^1[0,T]$ (when $m > 0$). \end{theorem} \begin{proof} The case $m = 0$ is proved in \cite[p. 40]{Pinsky} and the cases $m > 0$ are stated as an exercise on page 43. The case $m=0$ is also treated in \cite[pp. 44 -- 46]{Serov}. \end{proof} The goal now is to find a similar connection between uniform H\"older continuity and the decay rate of the Fourier coefficients. There are well-known simple bounds for the decay of Fourier coefficients of H\"older continuous functions.
\begin{theorem}\label{th:Zygmund_Hölder} Suppose that $f$ is $T$-periodic, $f \in W_{m}^1[0,T]$ (when $m > 0$) and $f \in C_{m,\,\mu}[0,T]$ or $f \in C_{m,\,\mu}^p[0,T]$ for some $\mu \in (0,1]$. Then \begin{equation} c_k(f) \in O(1/|k|^{m+\mu}). \end{equation} \end{theorem} \begin{proof} The case $m= 0$ is proved in \cite[p. 46]{Zygmund1} and \cite[p. 38]{Serov}. For $m > 0$ we use the equality $c_k\left(f^{(m)}\right) = \left(\frac{i2\pi k}{T}\right)^m c_k(f)$, which holds since $f \in W_{m}^1[0,T]$. \end{proof} It is also noteworthy that $\mu$-H\"older continuous functions with $0.5 < \mu < 1$ are ``tamer'' in many ways than those with $0 < \mu \leq 0.5$. This is seen for example in the absolute summability of the Fourier coefficients. \begin{lemma} Let $0.5 < \mu <1$ and suppose that $f \in C^2_{\mu}[0,T]$ is $T$-periodic. Then the Fourier coefficients satisfy $c_k(f) \in l^1(\Z)$. For $\mu \leq 0.5$ this is not necessarily true. \end{lemma} \begin{proof} \cite[pp. 43 -- 44]{Pinsky}. \end{proof} In the other direction it is actually easier to study the tail sums of the Fourier coefficients, provided they are summable, i.e. $c_k(f) \in l^1(\Z)$. \begin{theorem}\label{theorem:tailsum_hölder} Suppose that $\sum_{|k|> N} |k|^m |c_k(f)| \in O(1/N^{\mu})$, for some $m = 0, 1, 2, \ldots$, and $\mu \in (0,1)$. Then $f \in C_{m,\,\mu}[0,T]$. \end{theorem} \begin{proof} \cite[pp. 27 -- 28, 49]{Serov}. \end{proof} We see that Theorems \ref{th:Zygmund_Hölder} and \ref{theorem:tailsum_hölder} are not symmetric and that it is generally harder to estimate the uniform H\"older continuity from the Fourier coefficients than to bound the coefficients when the regularity is known. The next two examples show that Theorem \ref{th:Zygmund_Hölder} is sharp, but also that we can easily find simple H\"older continuous functions whose Fourier coefficients decay faster than it predicts.
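This faster-than-predicted decay is easy to observe numerically. The following Python sketch (my own illustration; the grid size and fit range are arbitrary choices) estimates the decay exponent of the DFT of the $0.7$-H\"older function $|x|^{0.7}$ by a least-squares slope fit on a log-log scale.

```python
import numpy as np

# Estimate the decay exponent of the Fourier coefficients of |x|^mu on [-1, 1]
# by fitting a line to log|F_k| against log k over a mid-range of frequencies.
mu = 0.7
N = 2 * 10**5
x = -1.0 + 2.0 * np.arange(N) / N            # uniform grid on [-1, 1)
F = np.abs(np.fft.fft(np.abs(x)**mu)) / N    # |DFT| with the 1/N normalisation
k = np.arange(10, 10**4)                     # stay away from k = 0 and aliasing
slope = np.polyfit(np.log(k), np.log(F[k]), 1)[0]
print(slope)                                 # close to -(1 + mu)
assert abs(slope + (1 + mu)) < 0.12
```

The fitted slope comes out close to $-(1+0.7)$, i.e. the coefficients decay like $O(1/|k|^{1+\mu})$ rather than the $O(1/|k|^{\mu})$ guaranteed by the theorem; this is exactly the extra $1/|k|$ gain studied below.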
\begin{example} Many of the first examples of nowhere differentiable but everywhere continuous functions were defined with the help of Fourier series. The first of these to be published (but not the first discovered) was the \textit{Weierstrass function} \begin{equation} \sum_{k=0}^\infty a^{k} \cos(b^k \pi t), \end{equation} which for $0 < a < 1$, $b > 1$ and $ab \geq 1$ is continuous and nowhere differentiable \cite{Hardy}. In Weierstrass's original proof he assumed that $ab > 1+ \frac{3}{2}\pi$ and that $b$ is a positive odd integer. Writing $\mu = -\ln(a)/\ln(b)$, we get \begin{equation} w_\mu (t) = \sum_{k=1}^\infty b^{-k\mu} \cos(b^k \pi t), \end{equation} and from this form it is proved in \cite[p. 47]{Zygmund1} that for $0 < \mu < 1$, $w_\mu \in C_\mu[0,2]$. Writing $m = b^k$, we see that the Fourier coefficients satisfy $c_m(w_\mu) \in O(|m|^{-\mu})$ \cite[p. 48]{Zygmund1}. The study of these kinds of fractal functions defined via Fourier series is still an active field, as the recent paper \cite{HolderExp} demonstrates. \end{example} \begin{example} \label{ex:simple_holder_functions} Let us define \begin{equation} g_\mu(x) = |x|^\mu, \hspace{0.5cm} \text{for } -T/2 \leq x \leq T/2, \end{equation} where $0 < \mu < 1$. It is proved in \cite[p. 42]{Pinsky} that the Fourier coefficients $c_k(g_\mu) \in O(1/|k|^{1+\mu})$, although one can criticise that partial integration was used twice in the proof, since the second application gives a divergent integral. A more elegant way is to divide the integral into two parts, one of which is a Mellin transform and the other a tail integral which decays to zero as $|k| \rightarrow \infty$. This method was presented to me by Professor Valery Serov when solving these same Fourier coefficients during his course on Fourier series \cite[p. 38]{Serov}. I will apply this method also to bound the Fourier coefficients of chirps in Example \ref{ex:chirp}.
Figure \ref{fig:alpha07} shows the function $|x|^{0.7}$ calculated on the interval $[-1,1]$ with $2\cdot 10^5$ samples and Figure \ref{fig:alpha07_fourier_coeff} shows the decay of its DFT on a log-log scale from $k = 0$ to $10^5-1$. One can calculate the slope of the curve \begin{equation} \frac{\log(0.003635) - \log(2.073\cdot 10^{-8})}{\log(9) - \log(9999)} = -1.721\ldots \approx -1.7, \end{equation} so the DFT decays like $O(1/|k|^{1.7})$ to one decimal accuracy. Since the Fourier coefficients are approximated with the DFT, there is some aliasing error present, and it affects the higher frequencies the most. \begin{figure}[p!] \centerline{ \includegraphics[scale=0.48]{alpha07.eps} } \caption{Function $|x|^{0.7}$ on the interval $[-1,1]$ calculated with $2\cdot 10^5$ points} \label{fig:alpha07} \end{figure} \begin{figure}[p!] \centerline{ \includegraphics[scale=0.48]{alpha07_fourier_coeff.eps} } \caption{Absolute values of the DFT of the samples of $|x|^{0.7}$ from Figure \ref{fig:alpha07} from $k = 0$ to $10^5-1$ on a log-log scale} \label{fig:alpha07_fourier_coeff} \end{figure} \end{example} Such examples motivated the author to find exact smoothness conditions which explain the additional $1/|k|$ decay for functions which behave better than, for example, the fractal Weierstrass function. There are two simple function spaces which cause this kind of behaviour. \begin{lemma} Suppose that $f \in BV[0,T]$. Then $c_k(f) \in O(1/|k|)$. \end{lemma} \begin{proof} \cite[p. 48]{Zygmund1}, \cite{Taibleson}. \end{proof} \begin{lemma} Suppose that $f \in AC[0,T]$ is $T$-periodic. Then $c_k(f) \in o(1/|k|)$. \end{lemma} \begin{proof} The Lemma follows directly from the Riemann--Lebesgue lemma, which states that $c_k(f) \in o(1)$ if $f \in L^1(0,T)$. The proof of the Riemann--Lebesgue lemma can be found for example in \cite[p. 45]{Zygmund1}, \cite[pp. 37 -- 38]{Serov} and \cite[p. 18]{Pinsky}. \end{proof} Absolutely continuous functions are of bounded variation \cite[p.
337]{Kolmogorov}, which explains why in the previous Lemmas their Fourier coefficients decay slightly faster. It is also easy to show that Lipschitz continuous functions are absolutely continuous, whereas H\"older continuous functions with $0<\mu<1$ are not necessarily so. Thus it is meaningful to study the bounded variation property and absolute continuity together with H\"older continuity. Functions which are both H\"older continuous and of bounded variation do indeed have quicker decay of Fourier coefficients in the sense of summability. \begin{lemma}\label{lemma:BVandHolder} Suppose that $f \in BV[0,T] \cap C_\mu[0,T]$ with $0 < \mu < 1$ and $f$ is $T$-periodic. Then $c_k(f) \in l^1(\Z)$. \end{lemma} \begin{proof} \cite[pp. 59 -- 60]{Serov}, \cite[p. 44]{Pinsky}. \end{proof} Nevertheless, we can rule out these functions from our considerations via a counterexample which is H\"older continuous and of bounded variation, but whose Fourier coefficients decay only like $c_k(f) \in O(1/|k|)$. This is the Cantor--Lebesgue function, another famous function with fractal properties. \begin{example} The \textit{Cantor--Lebesgue function} is a continuous and increasing function defined on the interval $[0,1]$. The construction uses the fractal Cantor set and is described for example in \cite[pp. 334 -- 335]{Kolmogorov} and \cite[pp. 194 -- 196]{Zygmund1}. Zygmund presents the theory for a more general class of functions, Kolmogorov for the classic case where the Cantor set is constructed by always removing the middle thirds of the intervals at each step. The Cantor--Lebesgue function can then be thought of as the cumulative distribution function of the Cantor set. The derivative of this function is 0 almost everywhere although its values increase continuously from 0 to 1. Hence it is not absolutely continuous.
It is of bounded variation, and the classic case (now denoted by $f$) is also in $C_\mu[0,1]$ with $\mu = \ln(2)/\ln(3)$, which by Lemma \ref{lemma:BVandHolder} means that the Fourier coefficients of the periodic function $f^*(t) = f(t) - t$ are in $l^1(\Z)$. Nevertheless, the decay rate of $c_k(f^*)$ is only $O(1/|k|)$ \cite[pp. 196 -- 197]{Zygmund1}. \end{example} Thus, we are left to check the case of functions which are both absolutely and H\"older continuous. Before proving Theorem \ref{Th1}, let us also state here a result in the other direction from the literature. This is a direct consequence of the Riesz--Fischer theorem, i.e. of the isomorphism between square-summable Fourier coefficient sequences and functions in $L^2(0,T)$. \begin{lemma}\label{lemma:absolute_from_f_coeffs} Suppose that $f \in L^2(0,T)$ has Fourier coefficients that satisfy $$ \sum_{k = -\infty}^\infty k^{2m} |c_k(f)|^2 < \infty, $$ for some $m = 1, 2, 3, \ldots$. Then $f^{(m-1)}$ is a.e. equal to an absolutely continuous function with derivative $f^{(m)}\in L^2(0,T)$ and $c_k\left(f^{(m)}\right) = \left(\frac{i2\pi k}{T} \right)^m \, c_k(f)$. \end{lemma} \begin{proof} The case $m = 1$ is proved in \cite[pp. 37 -- 38]{Pinsky} and the cases $m > 1$ are stated as an exercise; they follow by induction. The Riesz--Fischer theorem is proved on page 37 as well. The result is proved also in \cite[pp. 42 -- 44]{Serov}. \end{proof} \begin{theorem} \label{th:1part1} Suppose that $f \in W_{m+1}^1[0,T] \cap C_{m,\,\mu}[0,T]$ for some $\mu \in (0,1]$, $f$ is $T$-periodic and the number of oscillations of the function $\Delta_h f^{(m)}$ is uniformly bounded for every $0 < h \leq h_0$. Then the Fourier coefficients of $f$ decay like $c_k(f) \in O(1/|k|^{1+m+\mu})$. \end{theorem} \begin{proof} Suppose first that $m = 0$, i.e. $f \in C_{\mu}[0,T]$; the cases $m = 1, 2, \ldots$ then follow by induction. Since $f$ is also absolutely continuous, we know that its derivative $f'$ exists a.e. and is integrable, and thus we can study the $L^1$ H\"older continuity of $f'$.
Let us denote $g = \Delta_h f = f_h - f$. Then by Lemma \ref{lemma:ACsum} $g$ is absolutely continuous and by Lemma \ref{lemma:BV_AC_difference} we can also decompose it as a difference of two increasing absolutely continuous functions $g = g_1 - g_2$, where $g_1(t) = V_0^t(g)$ and $g_2(t) = V_0^t(g) - g(t)$. Then $g_1'(t), g_2'(t) \geq 0$ for almost every $t \in [0,T]$ and \begin{align*} T \Vert f'_h - f' \Vert_1 &= \int_0^T |f'(t+h) - f'(t)| \,\mathrm{d}t = \int_0^T |g_1'(t) - g_2'(t)| \,\mathrm{d}t \\ &\leq \int_0^T g_1'(t) \,\mathrm{d}t + \int_0^T g_2'(t) \,\mathrm{d}t\\ &= g_1(T) - g_1(0) + g_2(T) - g_2(0)\\ &= V_0^T(g) - V_0^0(g) + V_0^T(g) - g(T) - V_0^0(g) + g(0) \\ &= 2 V_0^T(g) - g(T) + g(0) \\ &= 2 V_0^T(g), \end{align*} because $V_0^0(g) = 0$ and $g(T) = g(0)$, as $g$ is clearly also $T$-periodic. Now let us partition the interval $[0,T]$ so that the partition points are the local minima and maxima of $g$. Then on each of the $M$ intervals between these points the function $g$ is either increasing or decreasing and by Lemma \ref{lemma:BVsum} \begin{align*} V_0^T(g) &= \sum_{k=1}^M V_{t_{k-1}}^{t_k}(g) = \sum_{k=1}^M \big| g(t_k) - g(t_{k-1}) \big| \\ &= \sum_{k=1}^M \big| f(t_k + h) - f(t_k) - f(t_{k-1} + h) + f(t_{k-1}) \big| \\ &\leq \sum_{k=1}^M \Big(\big| f(t_k + h) - f(t_k) \big| + \big| f(t_{k-1} + h) - f(t_{k-1}) \big| \Big) \\ &\leq 2MC |h|^\mu \leq 2LC |h|^\mu, \end{align*} where $L$ is the supremum, over all small enough $h > 0$, of the number of intervals on which $\Delta_h f$ is either increasing or decreasing. Since we assumed that the number of oscillations of the function $\Delta_h f$ is uniformly bounded for every $0<h\leq h_0$, this supremum exists and is finite. Thus $f' \in C^1_\mu[0,T]$ and it follows from Theorem \ref{th:Zygmund_Hölder} that $c_k(f') \in O(1/|k|^\mu)$. Since $f$ is absolutely continuous, we know that $c_k(f') = \frac{i2\pi k}{T} \, c_k(f)$ and thus $c_k(f) \in O(1/|k|^{1+\mu})$.
\end{proof} I will also provide a shorter proof, which does not use total variations, but rather just the properties of derivatives. \begin{proof} Again let us partition the interval $[0,T]$ so that the partition points are the local minima and maxima of $g = \Delta_h f$. Then the derivative $g'$ has a constant sign in any of these intervals. Thus, we can evaluate the integral over all these intervals and get \begin{align*} T\Vert f'_h - f' \Vert_1 &= \int_0^T |g'(t)| \, \mathrm{d}t \\ &= \sum_{k = 1}^M \int_{t_{k-1}}^{t_{k}} | g'(t)| \, \mathrm{d}t \\ &= \sum_{k=1}^M \big| g(t_k) - g(t_{k-1}) \big|, \end{align*} and the rest of the proof is the same as in the previous proof. \end{proof} \begin{theorem}\label{th:1part2} Suppose that the Fourier coefficients of a $T$-periodic function $f$ decay like $c_k(f) \in O(1/|k|^{1+m+\mu})$, with some $\mu \in (0,1)$. Then $f \in C_{m,\,\mu}[0,T]$. Also if $\mu \in \left(0.5,1\right)$, then $f \in W_{m+1}^1[0,T]$ as well. \end{theorem} \begin{proof} Suppose that $c_k(f) \in O(1/|k|^{1 + m +\mu})$. Let us first estimate the tail sum \begin{equation*} \sum_{|k|> N} |k|^m |c_k(f)| \leq\sum_{|k|> N} \frac{ C_1 }{|k|^{1+\mu}} \leq 2 \sum_{k = N+1}^\infty \frac{ C_1 }{k^{1+\mu}}, \end{equation*} and the sums are still clearly convergent. Then Lemma \ref{lemma:factorial_pol_asymptotic} allows us to change to factorial polynomials and Lemmas \ref{lemma:factorial_pol_difference} and \ref{lemma:fundamental_theorem_of_sum_calculus} to estimate the sum \begin{equation*} \sum_{|k|> N} |k|^m |c_k(f)| \leq \sum_{k = N+1}^\infty \frac{ C_2 }{k^{(1+\mu)_h}} \leq \frac{C_3}{(N+1)^{(\mu)_h}} \leq \frac{C_4}{(N+1)^\mu} \leq \frac{C_4}{N^\mu}, \end{equation*} and Lemma \ref{lemma:factorial_pol_asymptotic} was used again. Now it follows from Theorem \ref{theorem:tailsum_hölder} that $f \in C_{m,\,\mu}[0,T]$. 
The absolute continuity in the case $\mu \in (0.5, 1)$ follows from the estimate \begin{equation*} \sum_{k = -\infty}^\infty k^{2(m+1)} |c_k(f)|^2 \leq \sum_{\substack{k = -\infty\\ k \neq 0}}^\infty k^{2m+2} \frac{C}{|k|^{2+2m+2\mu}} = \sum_{\substack{k = -\infty\\ k \neq 0}}^\infty \frac{C}{|k|^{2\mu}} < \infty, \end{equation*} and Lemma \ref{lemma:absolute_from_f_coeffs}. \end{proof} Perhaps a slightly simpler proof would utilise the fact that we could replace the infinite sums with integrals, since they are asymptotically equal in these cases. A similar result for multidimensional Fourier series can be found in \cite[p. 178]{Grafakos}, although in one dimension it only states that the decay rate $c_k(f) \in O(1/|k|^{1+m+\mu})$ implies $C_\alpha[0,T]$ for all $\alpha < \mu$. It is not possible to deduce the absolute continuity from the Fourier coefficients in the case $\mu \in (0, 0.5)$. The first hint in this direction was given in 1936 by J.E. Littlewood \cite{Littlewood} with the following counterexample. \begin{theorem} There exists an increasing function $f$ with $f'(t) = 0$ for a.e. $t \in [0, T]$ (hence $f$ is not absolutely continuous) and a positive real number $\mu$ such that the Fourier coefficients of the periodic function $f^*(t) = f(t) - \frac{t}{T} \big(f(T)-f(0)\big)$ decay like \begin{equation} c_k(f^*) \in O\left(1/|k|^{1+\mu}\right). \end{equation} \end{theorem} From Theorem \ref{th:1part2} we know that $\mu$ in Littlewood's Theorem must lie in the interval $(0, 0.5]$. It was actually proved by Wiener and Wintner in 1938 \cite{WienerWintner} that for every $\mu \in (0, 0.5)$ such non-absolutely continuous functions exist.
In 1939, Schaeffer \cite{Schaeffer} sharpened this result slightly by proving that for any increasing sequence $r(k)$ that approaches $\infty$ as $k \rightarrow \infty$ (no matter how slowly), there exists a non-absolutely continuous function $f$ whose Fourier coefficients satisfy \begin{equation} c_k(f) \in O\left(\frac{r(|k|)}{|k|^{1.5}}\right), \end{equation} but whether the case $\mu = 0.5$ in Theorem \ref{th:1part2} implies absolute continuity is probably still an open question. \section{Chirps}\label{sec:4} Next, we consider probably the simplest class of infinitely oscillating functions. These are called chirps and they have been studied extensively with wavelet theory \cite{Jaffard}. The example is a lengthy one, but it provides us with the information that the finite-oscillation condition in Theorem \ref{th:1part1} is necessary. We will need the following Lemmas concerning the convergence of certain improper integrals and the asymptotic behaviour of H-functions. \begin{lemma}[Leibniz's test for improper integrals]\label{lemma:leibnizs_test_improper_integral} Suppose that $f: [a,\infty) \rightarrow \R$ has infinitely many zeros $a_1, a_2, a_3, \ldots$ in the interval $[a,\infty)$, where $a_1 < a_2 < \ldots$ and $a_n \rightarrow \infty$ as $n \rightarrow \infty$. Suppose that $f(t) > 0$ if $a_{2n-1} < t < a_{2n}$ and $f(t) < 0$ if $a_{2n} < t < a_{2n+1}$ and let \begin{equation*} b_n = \int_{a_n}^{a_{n+1}} f(t) \,\mathrm{d}t. \end{equation*} If $|b_n| \geq |b_{n+1}|$ and $|b_n| \rightarrow 0$ as $n \rightarrow \infty$, then \begin{equation} \int_a^\infty f(t) \,\mathrm{d}t < \infty. \end{equation} \end{lemma} \begin{proof} \cite[pp.
438 -- 439]{Shilov}. \end{proof} \begin{lemma}\label{lemma:H-function} Suppose that for a given H-function $H_{p,q}^{m,n}$ we have \begin{equation} \sum_{j = 1}^q B_j - \sum_{l = 1}^p A_l \leq 0, \end{equation} and that the poles of the Gamma functions $\Gamma(1-a_l-A_l s)$, $l = 1, \ldots, n$, do not coincide. Then \begin{equation} H_{p,q}^{m,n}(z) \in O(z^\rho) \hspace{0.5cm}\text{as} \hspace{0.5cm} |z|\rightarrow \infty, \end{equation} with \begin{equation} \rho = \max\limits_{1\leq l \leq n}\left\{\frac{\re(a_l)-1}{A_l} \right\}. \end{equation} \end{lemma} \begin{proof} This is a part of Corollary 5 in \cite{H-function}. \end{proof} \begin{example}\label{ex:chirp} Let $\alpha, \beta > 0$. Then for $x \in [0,L]$ we define \begin{equation}\label{eq:chirp} f_{\alpha,\beta}(x) = x^\alpha \sin(1/x^\beta). \end{equation} It is immediate that (\ref{eq:chirp}) is pointwise $\alpha$-H\"older in the neighbourhood of 0 according to equation (\ref{eq:holdercontinuity}) if $0 < \alpha < 1$, and infinitely differentiable elsewhere. The decay of the wavelet coefficients of such functions reveals both of the exponents $\alpha$ and $\beta$, and thus wavelet analysis is clearly superior to Fourier analysis in the case of pointwise regularity and oscillation. The interested reader may study, for example, the excellent books \cite{Jaffard, Mallat}. Nevertheless, we are interested in knowing how the frequencies of these signals decay and what their uniform H\"older continuity is. In \cite[p. 331]{Kolmogorov} it is given as an assignment to show that $f_{\alpha,\beta}$ is of bounded variation iff $\alpha > \beta$, and we will utilise this result now. Thus, let us suppose that $\alpha > \beta$. Due to the Lebesgue decomposition of a function of bounded variation in Lemma \ref{lemma:Lebesgue_decomposition}, we can then deduce that $\varphi$ and $\chi$ in this decomposition are both zero (since $f_{\alpha,\beta}$ is infinitely differentiable a.e.
on $[0,L]$ and it is continuous on $[0,L]$), and thus $f_{\alpha,\beta} \in AC[0,L]$ iff $\alpha > \beta$. Since Theorem \ref{th:1part1} concerns uniform H\"older continuity, we will next show that the functions in question belong to $C_{\alpha/(1+\beta)}[0,L]$ for any $L >0$ and for all $\alpha, \beta > 0$ such that $\alpha/(1+\beta) \leq 1$. The last limitation comes from the fact that, for the sake of simplicity, we do not consider the H\"older continuity of the derivatives of $f_{\alpha,\beta}$ here. If we consider for example only $0 < \beta < \alpha \leq 1$, then we will always have $\alpha/(1+\beta) < 1$. This method to bound the H\"older exponents of chirps was inspired by a discussion at the Mathematics Stack Exchange \cite{MathStackEx}, where the user Gaultier sketched a proof that $x \sin(1/x)$ is $0.5$-H\"older on $[0, 1/(2\pi)]$. What follows is thus a generalisation of that sketch to more general chirps $f_{\alpha,\beta}$ with $0 < \beta < \alpha \leq 1$. Let us first suppose that $x,y \in \left[1/\big(2\pi(n+1)\big)^{1/\beta}, 1/(2\pi n)^{1/\beta}\right]$ for some $n = 1,2,3,\ldots$, and thus $x = 1/(2\pi n + \epsilon)^{1/\beta}$ and $y = 1/(2\pi n + \delta)^{1/\beta}$, where $0 \leq \epsilon, \delta \leq 2\pi$.
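Before carrying out the estimate, the claimed exponent can be probed numerically. The following rough Monte-Carlo sketch (my own; the parameter values $\alpha = 0.8$, $\beta = 0.5$ and the sampling window are arbitrary choices) checks that the H\"older quotient with exponent $\alpha/(1+\beta)$ stays bounded on a large set of sampled pairs.

```python
import numpy as np

# Monte-Carlo probe of the claim that f(x) = x^a sin(1/x^b) lies in C_{a/(1+b)}:
# the quotient |f(x)-f(y)| / |x-y|**u with u = a/(1+b) should stay bounded
# if f really is Hoelder continuous with exponent u.
a, b = 0.8, 0.5          # illustrative choice with 0 < b < a <= 1
u = a / (1 + b)

def f(x):
    return x**a * np.sin(x**(-b))

rng = np.random.default_rng(1)
x = rng.uniform(1e-12, 0.2, 200000)   # window reaching deep into the oscillations
y = rng.uniform(1e-12, 0.2, 200000)
q = np.abs(f(x) - f(y)) / np.abs(x - y)**u
print(q.max())           # stays of moderate size: the Hoelder constant bounds it
assert q.max() < 10
```

Replacing $u$ by a noticeably larger exponent makes the sampled quotient grow without bound as the window shrinks towards $0$, which is consistent with $\alpha/(1+\beta)$ being the correct uniform exponent.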
Then with the help of the Taylor series for sine and Newton's generalised binomial expansion we estimate (using the typical abuse of notation with the $O$ sign) \begin{align*} &|f_{\alpha,\beta}(x) - f_{\alpha,\beta}(y)| = \left| \frac{1}{(2\pi n + \epsilon)^{\frac{\alpha}{\beta}}}\sin(2\pi n + \epsilon) - \frac{1}{(2\pi n + \delta)^{\frac{\alpha}{\beta}}}\sin(2\pi n + \delta) \right| \\ &= \left| \frac{1}{(2\pi n + \epsilon)^{\frac{\alpha}{\beta}}}\sin(\epsilon) - \frac{1}{(2\pi n + \delta)^{\frac{\alpha}{\beta}}}\sin(\delta) \right| \\ &= \left| \frac{1}{(2\pi n + \epsilon)^{\frac{\alpha}{\beta}}} \big( \epsilon + O(\epsilon^3)\big) - \frac{1}{(2\pi n + \delta)^{\frac{\alpha}{\beta}}}\big(\delta + O(\delta^3)\big)\right| \\ &= \left| \frac{(2\pi n + \delta)^{\frac{\alpha}{\beta}}\big( \epsilon + O(\epsilon^3)\big) - (2\pi n + \epsilon)^{\frac{\alpha}{\beta}}\big(\delta + O(\delta^3)\big)}{(4\pi^2 n^2 + 2\pi n\epsilon + 2\pi n \delta + \epsilon\delta)^{\frac{\alpha}{\beta}}} \right| \\ &= \left| \frac{ \sum_{k=0}^\infty \binom{\alpha/\beta}{k}(2\pi n)^{\frac{\alpha}{\beta}-k} \delta^k\big( \epsilon + O(\epsilon^3)\big) - \sum_{k=0}^\infty\binom{\alpha/\beta}{k}(2\pi n)^{\frac{\alpha}{\beta}-k} \epsilon^k \big(\delta + O(\delta^3)\big)}{(4\pi^2 n^2 + 2\pi n\epsilon + 2\pi n \delta + \epsilon\delta)^{\frac{\alpha}{\beta}}} \right| \\ &\leq \left| \frac{ \sum_{k=0}^\infty \binom{\alpha/\beta}{k}(2\pi n)^{\frac{\alpha}{\beta}-k} \left( \delta^k \epsilon - \epsilon^k\delta + O(\delta^k\epsilon^3 - \epsilon^k \delta^3) \right)}{(4\pi^2 n^2)^{\frac{\alpha}{\beta}}} \right| \\ &\leq d_0 \left| \frac{ n^{\frac{\alpha}{\beta}} ( \epsilon - \delta)}{n^{2\frac{\alpha}{\beta}}} \right| \leq \frac{ d_0 | \epsilon - \delta|}{n^{\frac{\alpha}{\beta}}}, \end{align*} where we estimated the series by its largest term $k = 0$ multiplied by some constant, as $\epsilon$ and $\delta$ are small.
Next, we estimate $|x-y|$ from below \begin{align*} |x-y| &= \left| \frac{(2\pi n + \delta)^{\frac{1}{\beta}} - (2\pi n + \epsilon)^{\frac{1}{\beta}}}{(2\pi n + \delta)^{\frac{1}{\beta}} (2\pi n + \epsilon)^{\frac{1}{\beta}}} \right| \\ &\geq \left| \frac{\sum_{k=0}^\infty \binom{1/\beta}{k}(2\pi n)^{\frac{1}{\beta}-k}\delta^k - \sum_{k=0}^\infty \binom{1/\beta}{k}(2\pi n)^{\frac{1}{\beta}-k}\epsilon^k}{(4\pi n)^{\frac{2}{\beta}}} \right| \\ &= \left| \frac{\sum_{k=0}^\infty \binom{1/\beta}{k}(2\pi n)^{\frac{1}{\beta}-k}\big( \delta^k - \epsilon^k\big)}{(4\pi n)^{\frac{2}{\beta}}} \right| \\ &\geq d_1 \left| \frac{n^{\frac{1}{\beta}-1}( \delta - \epsilon)}{n^{\frac{2}{\beta}}} \right| = \frac{d_1 | \delta - \epsilon|}{n^{\frac{1}{\beta}+1}}, \end{align*} where we estimated the series from below by leaving only the term $k = 1$. To combine the two estimates, we need to raise the last inequality to a power $u$ such that \begin{equation} \left(\frac{1}{\beta} + 1 \right)u = \frac{\alpha}{\beta}, \end{equation} from which we solve that $u = \alpha/(1+\beta)$. Thus we have \begin{equation} |x-y|^{\frac{\alpha}{1+\beta}} \geq \frac{d_2 | \delta - \epsilon|^{\frac{\alpha}{1+\beta}}}{n^{\frac{\alpha}{\beta}}} \end{equation} and finally \begin{equation} |f_{\alpha,\beta}(x) - f_{\alpha,\beta}(y)| \leq \frac{d_3 | \delta - \epsilon|^{\frac{\alpha}{1+\beta}}}{n^{\frac{\alpha}{\beta}}} \leq d_4 |x-y|^{\frac{\alpha}{1+\beta}}. \end{equation} To extend the result to the bigger interval $[0, (1/2\pi)^{1/\beta}]$ let $y$ be as before, but $x < 1/\big(2\pi(n+1)\big)^{1/\beta}$. 
Then because of the periodicity of sine and the decay of the function $f_{\alpha,\beta}$ towards $0$, we can find $z \in \left[1/\big(2\pi(n+1)\big)^{1/\beta}, 1/(2\pi n)^{1/\beta}\right]$ such that \begin{align} \label{eq:chirp_regularity} |f_{\alpha,\beta}(x) - f_{\alpha,\beta}(y)| &\leq |f_{\alpha,\beta}(z) - f_{\alpha,\beta}(y)| \nonumber\\ &\leq d_4 |z-y|^{\frac{\alpha}{1+\beta}} \nonumber\\ &\leq d_4 |x-y|^{\frac{\alpha}{1+\beta}}. \end{align} Since the function $f_{\alpha,\beta}$ does not oscillate at values $x > (2/\pi)^{1/\beta}$ and it is infinitely smooth and bounded there, we can conclude that $f_{\alpha,\beta} \in C_{\alpha/(1+\beta)}[0, L]$ for any $L > 0$. Next we want to bound the decay rate of the Fourier coefficients $c_k(f_{\alpha,\beta})$. We extend the function periodically as an even function for $x \in [-\pi, \pi]$ \begin{equation} f_{\alpha,\beta}(x) = |x|^\alpha \sin(1/|x|^\beta). \end{equation} Then for $k > 0$ (we can calculate only these, since for a real-valued signal the negative frequencies are just complex conjugates of the corresponding positive ones) \begin{align*} c_k(f_{\alpha,\beta}) &= \frac{1}{2\pi} \int_{-\pi}^\pi |x|^\alpha \sin(1/|x|^\beta) e^{-ikx}\,\mathrm{d}x \\ &= \frac{1}{\pi} \int_0^\pi x^\alpha \sin(1/x^\beta) \cos(kx)\,\mathrm{d}x \\ &=0 - \frac{1}{\pi} \int_0^\pi x^{\alpha-\beta-1}\left[\alpha x^\beta \sin(x^{-\beta}) - \beta \cos(x^{-\beta}) \right]\frac{\sin(kx)}{k}\,\mathrm{d}x. \end{align*} Next we make the substitution $kx = y$, $\mathrm{d}x = \frac{\mathrm{d}y}{k}$ \begin{align*} &c_k(f_{\alpha,\beta}) \\ &= \frac{-1}{\pi k} \int_0^{k\pi} \left(\frac{y}{k}\right)^{\alpha-\beta-1} \left[\alpha \left(\frac{y}{k}\right)^\beta \sin\left(\left(\frac{k}{y}\right)^{\beta}\right) - \beta \cos\left(\left(\frac{k}{y}\right)^{\beta}\right) \right]\frac{\sin(y)}{k}\,\mathrm{d}y
\\ &= \frac{-1}{\pi k^{1+\alpha - \beta}} \int_0^{k\pi} y^{\alpha-\beta-1} \left[\alpha \left(\frac{y}{k}\right)^\beta \sin\left(\left(\frac{k}{y}\right)^{\beta}\right) - \beta \cos\left(\left(\frac{k}{y}\right)^{\beta}\right) \right]\sin(y)\,\mathrm{d}y \\ &= \frac{-\alpha}{\pi k^{1+\alpha}} \int_0^{k\pi} y^{\alpha-1} \sin\left(\left(\frac{k}{y}\right)^{\beta}\right) \sin(y) \,\mathrm{d}y \\ &\enspace\enspace + \frac{\beta}{\pi k^{1+\alpha-\beta}} \int_0^{k\pi} y^{\alpha-\beta-1} \cos\left(\left(\frac{k}{y}\right)^{\beta}\right) \sin(y) \,\mathrm{d}y \\ &= \frac{-\alpha}{\pi k^{1+\alpha}} \left( \int_0^\infty g_1(y, k) \,\mathrm{d}y - \int_{k\pi}^\infty g_1(y, k) \,\mathrm{d}y \right) \\ &\enspace\enspace + \frac{\beta}{\pi k^{1+\alpha-\beta}}\left( \int_0^\infty g_2(y, k) \,\mathrm{d}y - \int_{k\pi}^\infty g_2(y, k) \,\mathrm{d}y \right), \end{align*} where $g_1(y, k) := y^{\alpha-1} \sin\big((k/y)^{\beta}\big) \sin(y)$ and $g_2(y, k) := y^{\alpha-\beta-1} \cos\big((k/y)^{\beta}\big) \sin(y)$. The improper integrals converge because for each $k$, the tails of the integrands $g_1$ and $g_2$ are products of decreasing functions and sines or cosines, and thus the requirements of Lemma \ref{lemma:leibnizs_test_improper_integral} are fulfilled. Let us first look at the two simpler integrals which do not contain the infinite oscillations near the origin.
Going backwards with our substitutions we get \begin{align} \label{eq:simpler_integrals} &\frac{\alpha}{\pi k^{1+\alpha}} \int_{k\pi}^\infty g_1(y, k)\,\mathrm{d}y - \frac{\beta}{\pi k^{1+\alpha-\beta}}\int_{k\pi}^\infty g_2(y, k) \,\mathrm{d}y \\ &= \frac{1}{\pi k} \int_\pi^\infty x^{\alpha-\beta-1}\left[\alpha x^\beta \sin(x^{-\beta}) - \beta \cos(x^{-\beta}) \right]\sin(kx)\,\mathrm{d}x \nonumber \\ &= \frac{1}{i2\pi k} \mathcal{F} \big\{ \varphi(x) \big\}\left( \frac{k}{2\pi} \right), \nonumber \end{align} where the integral is a Fourier transform of the odd function \begin{equation} \varphi(x) = \begin{cases} -|x|^{\alpha-\beta-1}\left(\alpha |x|^\beta \sin(|x|^{-\beta}) - \beta \cos(|x|^{-\beta}) \right), \hspace{0.5cm}& \text{if } x < -\pi\\ 0, & \text{if } -\pi \leq x \leq \pi\\ x^{\alpha-\beta-1}\left(\alpha x^\beta \sin(x^{-\beta}) - \beta \cos(x^{-\beta}) \right),& \text{if } x > \pi. \end{cases} \end{equation} The function $\varphi$ decays to 0 as $|x| \rightarrow \infty$ and its derivative is in $L_1(\R)$ (if we ignore the jumps at $x = -\pi$ and $x = \pi$), since \begin{align*} &\frac{\mathrm{d}}{\mathrm{d}x}\left( x^{\alpha-\beta-1}\left[\alpha x^\beta \sin(x^{-\beta}) - \beta \cos(x^{-\beta}) \right] \right) \\ &= \alpha(\alpha-1)x^{\alpha-2}\sin(x^{-\beta}) - \alpha\beta x^{\alpha-\beta-2}\cos(x^{-\beta}) \\ &\enspace\enspace -\beta^2 x^{\alpha-2\beta-2}\sin(x^{-\beta}) - \beta(\alpha-\beta-1)x^{\alpha-\beta-2}\cos(x^{-\beta}).
\end{align*} Then according to Definition \ref{Def:boundedvariation_R} \begin{align*} V_{-\infty}^\infty(\varphi)& = V_{-\infty}^{-\pi-\epsilon}(\varphi) + V_{-\pi-\epsilon}^{\pi+\epsilon}(\varphi) + V_{\pi+\epsilon}^{\infty}(\varphi) \\ & = \int_{-\infty}^{-\pi-\epsilon} |\varphi'(x)| \,\mathrm{d}x + V_{-\pi-\epsilon}^{\pi+\epsilon}(\varphi) + \int_{\pi+\epsilon}^{\infty} |\varphi'(x)|\,\mathrm{d}x < \infty, \end{align*} for some $\epsilon > 0$ and thus $\varphi \in BV(\R)$ and we can bound (\ref{eq:simpler_integrals}) according to Lemma \ref{lemma:BV_decay_R} \begin{equation} \frac{1}{i2\pi k} \mathcal{F} \left\{ \varphi \right\}\left( \frac{k}{2\pi} \right) \in \frac{1}{i2\pi k} O(1/|k|) = O(1/|k|^2). \end{equation} The reason for this decay rate is the jumps at the boundaries $-\pi$ and $\pi$ of our periodically continued chirp. The integrals which range from 0 to $\infty$ can be interpreted as Mellin transforms \begin{equation}\label{eq:I_1} I_1(k) = \int_0^\infty g_1(y,k)\,\mathrm{d}y = \mathcal{M}\left\{ \sin\left(\left(\frac{k}{y}\right)^{\beta}\right)\sin(y) \right\}(\alpha), \end{equation} \begin{equation}\label{eq:I_2} I_2(k) = \int_0^\infty g_2(y,k)\,\mathrm{d}y = \mathcal{M}\left\{ \cos\left(\left(\frac{k}{y}\right)^{\beta}\right)\sin(y) \right\}(\alpha-\beta). \end{equation} The goal is now to evaluate these transforms in terms of H-functions for which asymptotic expansions are known.
The results listed in the Example \ref{ex:Mellin} and the formula (\ref{eq:Gamma_sine}) give \begin{align*} &I_1(k) = \frac{1}{2\pi i} \int_{\sigma -i \infty}^{\sigma + i \infty} \frac{1}{\beta k^{\beta z}} \Gamma\left(\frac{-z}{\beta}\right) \sin\left(\frac{-\pi z}{2\beta} \right) \Gamma(\alpha-z) \sin\left( \frac{\pi}{2}(\alpha-z)\right) \, \mathrm{d}z \\ &= \frac{1}{2\pi i} \int_{\sigma -i \infty}^{\sigma + i \infty} \frac{ k^{-\beta z}}{\beta}\frac{\Gamma\left(\frac{-z}{\beta}\right) (-\pi) \Gamma(\alpha-z) \pi }{\Gamma\left(\frac{z}{2\beta}\right) \Gamma\left(1-\frac{z}{2\beta}\right) \Gamma\left(\frac{\alpha-z}{2}\right) \Gamma\left(1-\frac{\alpha-z}{2}\right)} \, \mathrm{d}z. \end{align*} Next we use the property (\ref{eq:Gamma_iterate}) of the Gamma function and then substitute $u = \beta z$, $\mathrm{d}z = \mathrm{d}u / \beta$ \begin{align*} &I_1(k) = \frac{\pi^2}{2\pi \beta i} \int_{\sigma -i \infty}^{\sigma + i \infty} \frac{\frac{\beta}{z}\Gamma\left(1-\frac{z}{\beta}\right) \Gamma\big(1 - (1\!-\!\alpha)-z\big) k^{-\beta z} \, \mathrm{d}z}{\Gamma\left(\frac{z}{2\beta}\right) \Gamma\left(1-\frac{z}{2\beta}\right) \Gamma\left(1-(1-\frac{\alpha}{2}) - \frac{z}{2}\right) \Gamma\left(1-\frac{\alpha}{2}+ \frac{z}{2}\right)} \\ & = \int\displaylimits_{\sigma -i \infty}^{\sigma + i \infty} \frac{ \frac{1}{2\pi i}\left(\frac{\pi^2}{\beta} \right) \Gamma\left( \frac{u}{\beta} \right) \Gamma\left(1-\frac{u}{\beta^2}\right) \Gamma\left(1 - (1\!-\!\alpha)-\frac{u}{\beta}\right) k^{-u} \, \mathrm{d}u }{ \Gamma\left(1 + \frac{u}{\beta} \right) \Gamma\left(\frac{u}{2\beta^2}\right) \Gamma\left(\frac{2-\alpha}{2}+ \frac{u}{2\beta}\right) \Gamma\left(1-\frac{u}{2\beta^2}\right) \Gamma\left(1-(1-\frac{\alpha}{2}) - \frac{u}{2\beta}\right) }. 
\end{align*} According to Lemma \ref{lemma:H-function} we calculate the quantity \begin{align*} \sum_{j = 1}^q B_j - \sum_{l = 1}^p A_l &= \frac{1}{\beta} + \frac{1}{2\beta^2} + \frac{1}{2\beta} - \frac{1}{\beta^2} - \frac{1}{\beta} - \frac{1}{\beta} - \frac{1}{2\beta^2} - \frac{1}{2\beta} \\ &= -\frac{1}{\beta} -\frac{1}{\beta^2} \leq 0, \end{align*} and thus if the poles of the Gamma functions $\Gamma(1-\frac{u}{\beta^2})$ and $\Gamma\big(1 - (1-\alpha)-\frac{u}{\beta}\big)$ do not coincide, i.e. \begin{align}\label{eq:Gamma_coincide_I1} &\frac{1-(1-\alpha)+c}{1/\beta} \neq \frac{1-0+d}{1/\beta^2}, \hspace{0.5cm} \text{for all } c,d \in \N_0, \nonumber\\ &\Leftrightarrow\hspace{0.5cm} \alpha + c \neq \beta + \beta d, \hspace{0.5cm} \text{for all } c,d \in \N_0, \end{align} then the decay rate of this H-function is given by \begin{align*} \max\limits_{1\leq l \leq n}\left\{\frac{\re(a_l)-1}{A_l} \right\} &= \max\left\{\frac{\re(0)-1}{1/\beta^2}, \frac{\re(1-\alpha)-1}{1/\beta} \right\} \\ &= \max\left\{-\beta^2, -\alpha\beta \right\} \\ &= -\beta^2, \end{align*} if $\alpha > \beta$, and thus $I_1(k)\in O\left(1/|k|^{\beta^2}\right)$ if $\alpha + c \neq \beta + \beta d$ for all $c,d \in \N_0$. 
The same steps applied to $I_2$ give \begin{align*} &I_2(k) = \frac{1}{2\pi i} \int_{\sigma -i \infty}^{\sigma + i \infty} \Gamma\left(\frac{-z}{\beta}\right) \cos\left(\frac{\pi z}{2\beta} \right) \Gamma(\alpha\!-\!\beta\!-\!z) \sin\left( \frac{\pi}{2}(\alpha\!-\!\beta\!-\!z)\right) \frac{\mathrm{d}z}{\beta k^{\beta z}} \\ &= \frac{1}{2\pi i} \int_{\sigma -i \infty}^{\sigma + i \infty} \frac{ k^{-\beta z}}{\beta}\frac{\Gamma\left(\frac{-z}{\beta}\right) \pi^2 \Gamma(\alpha-\beta-z)}{\Gamma\left(\frac{z}{2\beta} + \frac{1}{2}\right) \Gamma\left(1-\frac{1}{2}-\frac{z}{2\beta}\right) \Gamma\left(\frac{\alpha-\beta-z}{2}\right) \Gamma\left(1-\frac{\alpha-\beta-z}{2}\right)} \, \mathrm{d}z\\ &= \int_{\sigma -i \infty}^{\sigma + i \infty} \frac{ \frac{1}{2\pi i} \frac{\pi^2}{\beta} \left(\frac{-\beta}{z}\right)\Gamma\left(1-\frac{z}{\beta}\right) \Gamma(1-(1\!-\!\alpha\!+\!\beta)-z)k^{-\beta z}\,\mathrm{d}z}{\Gamma\left(\frac{1}{2} + \frac{z}{2\beta}\right) \Gamma\left(1-\frac{1}{2}-\frac{z}{2\beta}\right) \Gamma\left(1-\left(1-\frac{\alpha-\beta}{2}\right)-\frac{z}{2}\right) \Gamma\left(1-\frac{\alpha-\beta}{2}+\frac{z}{2}\right)} \\ &= \int\displaylimits_{\sigma -i \infty}^{\sigma + i \infty} \frac{ \frac{1}{2\pi i} \left(\frac{-\pi^2}{\beta}\right) \Gamma\left( \frac{u}{\beta} \right) \Gamma\left(1-\frac{u}{\beta^2}\right) \Gamma\left(1-(1\!-\!\alpha\!+\!\beta)-\frac{u}{\beta}\right)k^{-u}\,\mathrm{d}u}{ \Gamma\left(1\!+\!\frac{u}{\beta} \right) \Gamma\left(\frac{1}{2}\!+\!\frac{u}{2\beta^2}\right) \Gamma\left(\frac{2-\alpha+\beta}{2}\!+\!\frac{u}{2\beta}\right) \Gamma\left(1\!-\!\frac{1}{2}\!-\!\frac{u}{2\beta^2}\right) \Gamma\left(1\!-\!\frac{2-\alpha+\beta}{2}\!-\!\frac{u}{2\beta}\right) }. \end{align*} Again according to Lemma \ref{lemma:H-function} we calculate the quantity \begin{align*} \sum_{j = 1}^q B_j - \sum_{l = 1}^p A_l &= \frac{1}{\beta} + \frac{1}{\beta} + \frac{1}{2\beta^2} - \frac{1}{\beta} - \frac{1}{\beta^2} - \frac{1}{\beta} - \frac{1}{2\beta^2} -
\frac{1}{2\beta} \\ &= -\frac{1}{\beta^2} -\frac{1}{2\beta} \leq 0, \end{align*} and thus if the poles of $\Gamma(1-\frac{u}{\beta^2})$ and $\Gamma\big(1 - (1-\alpha+\beta)-\frac{u}{\beta}\big)$ do not coincide, i.e. \begin{align}\label{eq:Gamma_coincide_I2} &\frac{1-(1-\alpha+\beta)+c}{1/\beta} \neq \frac{1-0+d}{1/\beta^2}, \hspace{0.5cm} \text{for all } c,d \in \N_0, \nonumber\\ &\Leftrightarrow\hspace{0.5cm} \alpha + c \neq 2\beta + \beta d, \hspace{0.5cm} \text{for all } c,d \in \N_0, \end{align} then the decay rate of this H-function is given by \begin{align*} \max\limits_{1\leq l \leq n}\left\{\frac{\re(a_l)-1}{A_l} \right\} &= \max\left\{\frac{\re(0)-1}{1/\beta^2}, \frac{\re(1-\alpha+\beta)-1}{1/\beta} \right\} \\ &= \max\left\{-\beta^2, \beta(\beta-\alpha) \right\} \\ &= \begin{cases} \beta^2-\alpha\beta, & \text{if}\hspace{0.5cm} 0 < \alpha < 2\beta, \\ -\beta^2, & \text{if}\hspace{0.5cm} \alpha \geq 2\beta. \end{cases} \end{align*} Note also that the integration path can separate the poles of the Gamma functions in the numerator according to the definition of the H-function if $\alpha > 0$ for $I_1$ and $\alpha > \beta$ for $I_2$. Thus, our restrictions on $\alpha$ and $\beta$, originally chosen because of absolute continuity, are of use here as well. Now we have all the information on the asymptotic behaviour of the Fourier coefficients.
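For concrete parameter values, the two non-coincidence conditions can also be checked mechanically. The following sketch is our own helper in exact rational arithmetic; it only scans a finite range of $c, d$, whereas the parity arguments used in the examples cover all of $\N_0$:

```python
from fractions import Fraction

def poles_separate(alpha, beta, n_terms=200):
    """Return True if alpha + c != beta*(1 + d)  (condition for I_1)
    and alpha + c != beta*(2 + d)  (condition for I_2)
    hold for all c, d in {0, ..., n_terms - 1}."""
    a, b = Fraction(alpha), Fraction(beta)
    for c in range(n_terms):
        for d in range(n_terms):
            if a + c == b * (1 + d) or a + c == b * (2 + d):
                return False
    return True

print(poles_separate("7/10", "1/2"))  # True: the example alpha=0.7, beta=0.5
print(poles_separate("3/2", "1/2"))   # False: 3/2 + 0 = (1/2)*(1 + 2)
```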
If (\ref{eq:Gamma_coincide_I1}) and (\ref{eq:Gamma_coincide_I2}) are fulfilled, then \begin{align*} &c_k(f_{\alpha,\beta}) \in O(1/|k|^2) + \frac{-\alpha}{\pi |k|^{1+\alpha}} I_1(|k|) + \frac{\beta}{\pi |k|^{1+\alpha-\beta}} I_2(|k|) \\ &= \begin{cases} O(1/|k|^2) + O\left(1/|k|^{1+\alpha +\beta^2}\right) + O\left(1/|k|^{1+\alpha -\beta -\beta(\beta-\alpha)}\right), & \text{if}\hspace{0.5cm} 0 < \alpha < 2\beta \\ O(1/|k|^2) + O\left(1/|k|^{1+\alpha +\beta^2}\right) + O\left(1/|k|^{1+\alpha -\beta + \beta^2}\right), & \text{if}\hspace{0.5cm} \alpha \geq 2\beta \end{cases}\\ &= \begin{cases} O(1/|k|^2) + O\left(1/|k|^{1+\alpha -\beta -\beta(\beta-\alpha)}\right), & \text{if}\hspace{0.5cm} 0 < \alpha < 2\beta \\ O(1/|k|^2) + O\left(1/|k|^{1+\alpha -\beta + \beta^2}\right), & \text{if}\hspace{0.5cm} \alpha \geq 2\beta, \end{cases} \end{align*} since the decay rates of $I_2$ are in both cases slower than those of $I_1$. As a particular example in the case $\alpha = 0.7$, $\beta = 0.5$ we first check condition (\ref{eq:Gamma_coincide_I1}) \begin{align*} &0.7 + c \neq 0.5 + 0.5 d, \hspace{0.5cm} \text{for all } c,d \in \N_0,\\ &\Leftrightarrow\hspace{0.5cm} 7 + 10c \neq 5 + 5 d, \hspace{0.5cm} \text{for all } c,d \in \N_0, \end{align*} which is clearly true, since the number on the left side always ends in 7 and the number on the right side ends in 5 or 0. For the same reason condition (\ref{eq:Gamma_coincide_I2}) holds as well. Thus \begin{align*} &c_k(f_{0.7,\,0.5}) \in O(1/|k|^2) + O\left(1/|k|^{1+0.7 +0.5^2}\right) + O\left(1/|k|^{1+0.7 -0.5 -0.5(0.5-0.7)}\right) \\ &= O(1/|k|^2) + O\left(1/|k|^{1.95}\right) + O\left(1/|k|^{1.3}\right) \\ &= O\left(1/|k|^{1.3}\right). \end{align*} We can say that the Fourier coefficients of the chirp $f_{0.7,\,0.5}$ decay like $O\left(1/|k|^{1.3}\right)$. This is verified by numerical calculations.
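The case analysis can be collected into a small helper. This is our own packaging of the rates above, valid only when conditions (\ref{eq:Gamma_coincide_I1}) and (\ref{eq:Gamma_coincide_I2}) hold; the $\min$ with $2$ accounts for the $O(1/|k|^2)$ boundary term:

```python
def chirp_coeff_decay(alpha, beta):
    """Predicted p such that c_k(f_{alpha,beta}) = O(1/|k|^p), assuming the
    non-coincidence conditions on alpha and beta are fulfilled."""
    if 0 < alpha < 2 * beta:
        p = 1 + alpha - beta - beta * (beta - alpha)  # I_2-dominated rate
    else:
        p = 1 + alpha - beta + beta ** 2
    return min(2.0, p)                                # O(1/k^2) boundary term

print(round(chirp_coeff_decay(0.7, 0.5), 6))  # 1.3
print(round(chirp_coeff_decay(0.9, 0.4), 6))  # 1.66
```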
Figure \ref{fig:infinitely_oscillating_alpha07_beta05} shows the function in question calculated on the interval $[-1,1]$ with $2\cdot 10^5$ samples. The absolute values of its DFT from $k = 0$ to $10^5-1$ are shown in Figure \ref{fig:infinitely_oscillating_alpha07_beta05_fourier_coeff} on a log-log scale. The curve can be bounded by a line with a slope \begin{equation} \frac{\log(3.708\cdot 10^{-5}) - \log(1.014\cdot 10^{-6})}{\log(541) - \log(8780)} = -1.291\ldots \approx -1.3, \end{equation} so indeed the DFT decays like $O\left(1/|k|^{1.3}\right)$ to one decimal accuracy. We also know from equation (\ref{eq:chirp_regularity}) that $f_{0.7,\,0.5} \in C_{7/15}[0, L]$, where $7/15 = 0.4666\ldots$. Thus the infinite oscillations affect the decay rate of the Fourier coefficients and we do not get $O\left(1/|k|^{1.4666\ldots}\right)$ as would be the case without the infinite oscillations according to Theorem \ref{th:1part1}. Let us consider another example with $\alpha = 0.9$ and $\beta = 0.4$. Now the condition (\ref{eq:Gamma_coincide_I1}) reads \begin{align*} &0.9 + c \neq 0.4 + 0.4 d, \hspace{0.5cm} \text{for all } c,d \in \N_0,\\ &\Leftrightarrow\hspace{0.5cm} 9 + 10c \neq 4 + 4 d, \hspace{0.5cm} \text{for all } c,d \in \N_0, \end{align*} which is again true, since the number on the left is always odd and the number on the right is always even, and similarly condition (\ref{eq:Gamma_coincide_I2}) holds as well. Thus \begin{align*} &c_k(f_{0.9,\,0.4}) \in O(1/|k|^2) + O\left(1/|k|^{1+0.9 +0.4^2}\right) + O\left(1/|k|^{1+0.9 -0.4 + 0.4^2}\right) \\ &= O(1/|k|^2) + O\left(1/|k|^{2.06}\right) + O\left(1/|k|^{1.66}\right) \\ &= O\left(1/|k|^{1.66}\right).
\end{align*} Now based on equation (\ref{eq:chirp_regularity}) the signal is uniformly H\"older continuous with the exponent $9/14 \approx 0.6429$, but because $c_k(f_{0.9,\,0.4}) \in O\left(1/|k|^{1.66}\right)$, Theorem \ref{th:1part2} reveals that actually $f_{0.9,\,0.4} \in C_{0.66}[0, L]$ in this case regardless of the infinite oscillations. This shows that at least some chirps are actually smoother than the estimate (\ref{eq:chirp_regularity}) suggests. \begin{figure}[p!] \centerline{ \includegraphics[scale=0.48]{infinitely_oscillating_alpha07_beta05.eps} } \caption{Function $|x|^{0.7}\sin\left(1/|x|^{0.5}\right)$ on the interval $[-1,1]$ calculated with $2\cdot 10^5$ points} \label{fig:infinitely_oscillating_alpha07_beta05} \end{figure} \begin{figure}[p!] \centerline{ \includegraphics[scale=0.48]{infinitely_oscillating_alpha07_beta05_fourier_coeff.eps} } \caption{Absolute values of the DFT of the samples of $|x|^{0.7}\sin\left(1/|x|^{0.5}\right)$ in Figure \ref{fig:infinitely_oscillating_alpha07_beta05} from $k = 0$ to $10^5-1$ on a log-log scale} \label{fig:infinitely_oscillating_alpha07_beta05_fourier_coeff} \end{figure} \end{example} \section{Extension to Fourier transforms}\label{sec:5} Our goal now is to extend Theorem \ref{th:1part1} to non-periodic functions defined on $\R$ and their Fourier transforms. We need to consider integrability and absolute continuity on $\R$. We will also discuss the Fourier transforms of chirps. \begin{definition}\label{Def:boundedvariation_R} Function $f: \R \rightarrow \C$ is of \textit{bounded variation}, i.e. $f \in BV(\R)$, if its \textit{total variation} is finite, i.e. $$ V_{-\infty}^\infty(f) = \sup_\mathcal{P} \sum_{k=1}^N |f(t_k) - f(t_{k-1})| < \infty, $$ where $t_0 < t_1 < \cdots < t_N$ is a partition of the real axis $\R$ and the supremum is taken over all possible partitions (of any number of points) $\mathcal{P}$ of $\R$.
\end{definition} \begin{definition}\label{Def:absolutecontinuity_R} Function $f:$ $\R \rightarrow \C$ is \textit{absolutely continuous}, i.e. $f \in AC(\R)$, if its derivative $f'$ exists a.e. on $\R$ and is Lebesgue integrable \begin{equation} \int_{-\infty}^t f'(\tau) \, \mathrm{d}\tau = f(t) - C, \hspace{1cm} \text{for every } t \in \R. \end{equation} \end{definition} Note that necessarily $C = \lim_{t\rightarrow -\infty} f(t)$. This definition is close to that of the Sobolev space $W_1^1(\R)$, membership in which simply means that the function and its first generalised derivative are in $L_1(\R)$. Note that the two requirements $f, f' \in L_1(\R)$ of the space $W_1^1(\R)$ guarantee that $\lim_{|t|\rightarrow \infty} f(t) = 0$. We will not need the integrability of $f$ in the Theorem that follows and thus we first state it without Sobolev spaces. The Fourier transforms of functions $f \in BV(\R)$ and $f \in AC(\R)$ decay similarly to the periodic case. \begin{lemma}\label{lemma:BV_decay_R} If $f \in BV(\R)$, then $\hat{f} \in O\left(\frac{1}{\nu} \right)$. \end{lemma} \begin{proof} \cite[pp. 33 - 34]{Mallat}. \end{proof} \begin{lemma}\label{lemma:AC_decay_R} If $f \in AC(\R)$, then $\hat{f} \in o\left(\frac{1}{\nu} \right)$. \end{lemma} \begin{proof} A consequence of the Riemann-Lebesgue lemma for the Fourier transform; a proof can be found for example in \cite[p. 94]{Pinsky}. \end{proof} \begin{definition}\label{Def:holdercontinuity_R} Function $f:$ $\R \rightarrow \C$ is \textit{uniformly H\"older continuous} of order $\mu \in (0,1]$, if \begin{equation} |f(t + h) - f(t)| \leq C h^\mu, \end{equation} holds for all $t$, $t+h \in \R$ and $0 < h \leq h_0$. Then we write $f \in C_\mu(\R)$. If for some $m = 1, 2, 3, \ldots$ it holds that $f^{(m)} \in C_\mu(\R)$, we write $f \in C_{m,\,\mu}(\R)$.
\end{definition} \begin{definition}\label{Def:Lp_R_holdercontinuity} Function $f \in L^p(\R)$ is $L^p$ H\"older continuous of order $\mu \in (0,1]$, if \begin{equation} \Vert f_h - f \Vert_p = \left( \int_{-\infty}^{\infty} |f(t+h) - f(t)|^p \mathrm{d}t \right)^{1/p} \leq C h^\mu, \end{equation} and we write $f \in C_\mu^p(\R)$. If $f^{(m)} \in C_\mu^p(\R)$, we write $f \in C_{m,\,\mu}^p(\R)$. \end{definition} \begin{lemma}\label{lemma:L1_Holder_R} If $f \in C_{m,\,\mu}^1(\R)$ and $f^{(m-1)} \in AC(\R)$ (when $m > 0$), then $\hat{f} \in O\left(\frac{1}{|\nu|^{m + \mu}} \right)$. \end{lemma} \begin{proof} Suppose that $m = 0$. Let us look at \begin{equation*} \int_{-\infty}^{\infty} f\big(t+1/(2\nu)\big) e^{-i2\pi \nu t} \mathrm{d}t = e^{i\pi} \int_{-\infty}^{\infty} f(\tau) e^{-i2\pi \nu \tau} \mathrm{d}\tau = - \hat{f}(\nu), \end{equation*} and thus \begin{equation*} -2 \hat{f}(\nu) = \int_{-\infty}^{\infty} \Big( f\big(t+1/(2\nu)\big) - f(t) \Big) e^{-i2\pi \nu t} \mathrm{d}t. \end{equation*} Now we can bound the Fourier transform \begin{equation*} \big\vert\hat{f}(\nu)\big\vert \leq \frac{1}{2} \int_{-\infty}^{\infty} \Big| f\big(t+1/(2\nu)\big) - f(t) \Big| \mathrm{d}t, \end{equation*} and if $f \in C_{\mu}^1(\R)$, then \begin{equation*} \big\vert\hat{f}(\nu)\big\vert \leq \frac{C}{2} \left( \frac{1}{2\nu} \right)^\mu. \end{equation*} If $m > 0$, then we have $\mathcal{F}\left\{f^{(m)}\right\}(\nu) \in O(1/|\nu|^\mu)$. Since $f^{(m)} \in L^1(\R)$, we know that $f^{(m)}$ is a tempered distribution. A tempered distribution is always a finite order derivative of some continuous function of power growth at the infinities \cite[p. 115]{GelfandShilov}. This result means that all the primitives of a tempered distribution are also tempered \cite[p. 108]{Zemanian}, and thus we know that $f$ also has a Fourier transform as a tempered distribution.
Then we know also that $\mathcal{F}\left\{f^{(m)}\right\}(\nu) = (i2\pi \nu)^m \, \hat{f}(\nu)$ and thus $\hat{f} \in O(1/|\nu|^{m+\mu})$. \end{proof} The corresponding Theorem \ref{th:Zygmund_Hölder} for Fourier series is valid for all $p \geq 1$ and H\"older's inequality is used to prove this in \cite[p. 38]{Serov}. Using the same inequality here would result in a divergent integral and thus on $\R$ we can only state this result for $p = 1$. \begin{theorem} \label{th:2part1} Suppose that $f^{(m)} \in AC(\R)$, $f^{(m)}(t) \rightarrow 0$ as $|t| \rightarrow \infty$ and the number of maxima and minima of the function $\Delta_h f^{(m)}$ is uniformly bounded for every $0 < h \leq h_0$. Suppose also that $f \in C_{m,\,\mu}[a-h_0,b+h_0]$ with some $\mu \in (0,1]$ and $[a,b]$ contains all the mentioned local maxima and minima of $\Delta_h f^{(m)}$ for all $0 < h \leq h_0$. Then $\hat{f}(\nu) \in O(1/|\nu|^{1+m+\mu})$ as $|\nu| \rightarrow \infty$. \end{theorem} \begin{proof} Suppose that $m = 0$. Let us partition $\R$ so that the partition points are the local minima and maxima of $g = \Delta_h f$ and the first point is $-\infty$ and the last point $\infty$. Then the derivative $g'$ has a constant sign in any of these intervals and for any $0 < h \leq h_0$ we have $g(t) \rightarrow 0$ as $|t| \rightarrow \infty$. 
Thus, we can evaluate \begin{align*} \Vert f'_h - f' \Vert_1 &= \int_{-\infty}^\infty |g'(t)| \, \mathrm{d}t \\ &= \sum_{k = 1}^M \int_{t_{k-1}}^{t_{k}} | g'(t)| \, \mathrm{d}t \\ &= \sum_{k=1}^M \big| g(t_k) - g(t_{k-1}) \big| \\ &= |g(t_1) - g(-\infty)| + |g(\infty) - g(t_{M-1})| + \sum_{k=2}^{M-1} \big| g(t_k) - g(t_{k-1}) \big|, \end{align*} and estimate \begin{align*} \Vert f'_h - f' \Vert_1 &\leq \big| f(t_1 + h) - f(t_1) \big| + \big| f(t_{M-1} + h) - f(t_{M-1}) \big| \\ &\enspace\enspace + \sum_{k=2}^{M-1} \Big(\big| f(t_k + h) - f(t_k) \big| + \big| f(t_{k-1} + h) - f(t_{k-1}) \big| \Big) \\ &\leq 2(M-1)C|h|^\mu \leq 2(L-1)C |h|^\mu, \end{align*} where $L$ is the supremum of the number of increasing and decreasing intervals of $\Delta_h f$ over all $0 < h \leq h_0$. Thus $f' \in C^1_\mu(\R)$ and because $f \in AC(\R)$ as well, it follows from Lemma \ref{lemma:L1_Holder_R} that $\mathcal{F}\{f\}(\nu) \in O(1/|\nu|^{1+\mu})$. The cases $m >0$ are proved identically. \end{proof} If we relax the conditions of the previous Theorem slightly, a more elegant result follows. \begin{corollary} \label{th:2corollary} Suppose that $f \in W_{m+1}^1(\R) \cap C_{m,\,\mu}(\R)$ and the number of maxima and minima of the function $\Delta_h f^{(m)}$ is uniformly bounded for all $0 < h \leq h_0$. Then $\hat{f}(\nu) \in O(1/|\nu|^{1+m+\mu})$ as $|\nu| \rightarrow \infty$. \end{corollary} \begin{example} The function $e^{-a|t|}$, $\re(a) >0$ has the Fourier transform \begin{equation} \mathcal{F}\left\{ e^{-a|t|} \right\}(\nu) = \frac{2a}{a^2 + (2\pi \nu)^2}. \end{equation} Interestingly, this decay rate is explained by two different results. Since the derivative of this function is in $BV(\R)$, Lemma \ref{lemma:BV_decay_R} can be applied. But since the function is Lipschitz at 0, smooth elsewhere and its difference function does not oscillate infinitely often, Corollary \ref{th:2corollary} applies as well. 
\end{example} \begin{example} The function $e^{-\pi t^2}|t|^\mu$ satisfies the requirements of Corollary \ref{th:2corollary}, and thus we expect that $\mathcal{F}\left\{ e^{-\pi t^2}|t|^\mu \right\}(\nu) \in O(1/|\nu|^{1+\mu})$. The convolution Theorem (for functions of rapid descent and tempered distributions) can be used to decompose this \begin{align*} \mathcal{F}\left\{ e^{-\pi t^2}|t|^\mu \right\}(\nu) &= \left( \mathcal{F}\left\{ e^{-\pi t^2}\right\}\ast\mathcal{F}\left\{|t|^\mu \right\} \right) (\nu) \\ &= \left( e^{-\pi \nu^2} \ast \frac{-2\sin\left( \frac{\pi\mu}{2}\right)\Gamma(\mu +1)}{|2\pi\nu|^{1+\mu}} \right) (\nu), \end{align*} where we substituted the Fourier transforms of the two functions. The formula for $\mathcal{F}\left\{|t|^\mu \right\}$ and other similar ones are proved for example in \cite[pp. 170-173]{GelfandShilov}. Since the distributional Fourier transform $\mathcal{F}\left\{|t|^\mu \right\}$ also decays like $O(1/|\nu|^{1+\mu})$, it is likely that a generalization of Theorem \ref{th:2part1} to some tempered distributions with enough smoothness is possible. \end{example} \begin{example} The Fourier transforms of the chirps introduced in equation (\ref{eq:chirp}) can also be evaluated. Let us look at the even chirp \begin{equation} f_{\alpha,\beta}(x) = |x|^\alpha \sin(1/|x|^\beta), \hspace{0.5cm} x \in \R, \end{equation} and the steps done in Example \ref{ex:chirp} repeated give us for $\nu > 0$ \begin{align*} \mathcal{F}\left\{f_{\alpha,\beta}\right\}(\nu) &= \int_{-\infty}^\infty |x|^\alpha \sin(1/|x|^\beta) e^{-i2\pi\nu x}\,\mathrm{d}x \\ &= 2\int_0^\infty x^\alpha \sin(1/x^\beta) \cos(2\pi\nu x)\,\mathrm{d}x \\ &=0 -2 \int_0^\infty x^{\alpha-\beta-1}\left[\alpha x^\beta \sin(x^{-\beta}) - \beta \cos(x^{-\beta}) \right]\frac{\sin(2\pi\nu x)}{2\pi \nu}\,\mathrm{d}x.
\end{align*} Now we see that this can be stated with the integrals $I_1$ (\ref{eq:I_1}) and $I_2$ (\ref{eq:I_2}) for $\nu \neq 0$ \begin{equation}\label{eq:FT_chirps} \mathcal{F}\left\{f_{\alpha,\beta}\right\}(\nu) = \frac{-2\alpha}{(2\pi|\nu|)^{1+\alpha}} I_1(2\pi|\nu|) + \frac{2\beta}{(2\pi|\nu|)^{1+\alpha-\beta}} I_2(2\pi|\nu|), \end{equation} since the Fourier transform of an even function is real-valued and the negative frequencies are then equal to the positive ones. We see that the Fourier transforms of chirps are simpler than the corresponding Fourier series, which also included the boundary effects caused by the periodic continuation. The integrals $I_1$ and $I_2$ were evaluated as H-functions in Example \ref{ex:chirp}, so the result (\ref{eq:FT_chirps}) is an exact representation as Mellin-Barnes integrals. The asymptotic decay results from Example \ref{ex:chirp} also give us for $|\nu| \rightarrow \infty$ \begin{equation} \mathcal{F}\left\{f_{\alpha,\beta}\right\}(\nu) = \begin{cases} O\left(1/|\nu|^{1+\alpha -\beta -\beta(\beta-\alpha)}\right), & \text{if}\hspace{0.5cm} 0 < \alpha < 2\beta, \\ O\left(1/|\nu|^{1+\alpha -\beta + \beta^2}\right), & \text{if}\hspace{0.5cm} \alpha \geq 2\beta, \end{cases} \end{equation} provided that the conditions (\ref{eq:Gamma_coincide_I1}) and (\ref{eq:Gamma_coincide_I2}) for $\alpha$ and $\beta$ are fulfilled. \end{example} \newpage \section*{Acknowledgement} The author is currently doing research under a grant from the Finnish Cultural Foundation, North Ostrobothnia Regional Fund. This work has also been supported by the Walter Ahlstr\"om foundation and the Auramo foundation. I wish to thank Jukka Kemppainen for his insightful remarks on the properties of the Mellin transform and for introducing Fox's H-function to me. I also thank professor Valery Serov for his excellent courses on the theory of Fourier series and the Fourier transform organised at the University of Oulu. \begin{figure}[h!]
\centerline{ \includegraphics[scale=1]{SKR_englanti_harmaa_pms.eps} } \end{figure} All web addresses retrieved on 07.05.2018.
\section{Introduction} Structure from motion on video is a variant of the Simultaneous Localisation And Mapping (SLAM) problem, which by now is one of the classical problems in robotics \citep{bailey06}. Structure from motion on video has a wide range of applications, such as 3D mapping \citep{engel16}, video stabilization \citep{kopf14}, and autonomous navigation \citep{bailey06}. Traditionally such systems used discrete-time camera poses, while this paper considers the more recent continuous-time formulation \citep{furgale15}. Many SLAM systems exploit a combination of sensors for robustness; LIDAR, cameras, and inertial sensors (typically gyroscopes and accelerometers) are popular choices. It is well known that cameras and inertial sensors are complementary, and thus useful to combine. Primarily this is because inertial measurements have biases that can be estimated during fusion with camera measurements. In addition, cameras often provide very accurate relative pose, but not absolute scale, and camera-only structure from motion fails in the absence of scene structure. \begin{figure}[t] \begin{center} \includegraphics[width=\columnwidth]{rccar_barn1_montage} \caption{Rendered model estimated on the {\bf RC-Car} dataset, using split interpolation. Top: model rendered using Meshlab. Bottom: Sample frames from dataset.} \label{fig:example_rccar} \end{center} \end{figure} Platforms that house both cameras and inertial sensors are now very common. Examples include most current smartphones and tablets, but also some action cameras, e.g.\@\xspace newer models from GoPro. Nearly all such platforms use cameras with an electronic {\it rolling shutter} mechanism that acquires each frame in a row-by-row fashion. This lends itself naturally to continuous-time motion models, as the camera has a slightly different pose in each image row.
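A common way to model this row-by-row acquisition in code (a generic sketch with assumed parameter names, not notation from this paper) is to give each image row its own timestamp:

```python
def row_capture_time(frame_start, row, readout_time, num_rows):
    """Capture time of `row` in a rolling shutter frame whose first row is
    exposed at `frame_start` and whose full readout takes `readout_time`."""
    return frame_start + readout_time * row / num_rows

# last row of a 720-row frame with a 20 ms readout starts roughly
# 19.97 ms after the first row:
t_last = row_capture_time(0.0, 719, 0.020, 720)
```

A continuous-time trajectory can then be queried at `row_capture_time(...)` for every observation, which is exactly what makes the spline formulation attractive for rolling shutter cameras.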
Classical structure from motion treats camera trajectories as a set of discrete poses \citep{triggs00}, but by replacing the poses with spline knots, we obtain the continuous-time formulation, which is used on rolling shutter cameras for video structure from motion \citep{hedborg12}. A useful property of the continuous pose representation, introduced by \cite{furgale12}, is that its derivatives can predict measurements from an inertial measurement unit (IMU), which simplifies fusion of data from cameras and IMUs, and multi-sensor platforms in general \citep{furgale15}. Continuous-time structure from motion is also crucial in camera-IMU calibration when the camera has a rolling shutter \citep{ovren15,furgale15,lovegrove13}. Compared to classical structure from motion, the continuous-time version has a moderate increase in complexity, due to reduced sparsity of the system Jacobian as shown by \cite{hedborg12}. \subsection{Contributions} In this paper we revisit the continuous-time structure from motion problem with inertial measurements, and rethink several design choices: \begin{itemize} \item We replace the $\SE3$-based interpolation used in the \emph{Spline Fusion} method \citep{lovegrove13,patron-perez15} with a split interpolation in $\RplusSO3$. This leads to a trajectory representation that does not couple rotation and translation in a screw motion, see \figurename~\ref{fig:interaction}, and is better suited to e.g.\@\xspace, hand-held camera motions. \item We compare the split and $\SE3$ trajectory representations theoretically, and in a series of both synthetic and real data experiments. \item We compare the performance and efficiency of three previously proposed ways to incorporate reprojection time into the optimization \citep{hedborg12,furgale12, lovegrove13, kim16}. 
\item For completeness, we also describe our recently published {\it spline error weighting} approach to better balance the residuals in the optimization problem, and to automatically set the spline knot spacing based on desired trajectory accuracy \citep{ovren18a}. \end{itemize} The main goal of the paper is to help other researchers make informed choices when designing their continuous-time structure from motion systems. \subsection{Related work} \label{sec:related_work} The classical pose interpolation approach in computer animation is to independently interpolate the camera orientation in the orientation group $\SO3$ and the camera positions in the vector space $\R3$ \citep{kim95}. In robotics it is instead common to do direct interpolation on the special Euclidean group $\SE3$ \citep{crouch99}. Recently, such a direct interpolation on $\SE3$ was applied to the continuous-time structure from motion problem, by integrating the $\SE3$ spline into an optimization framework \citep{lovegrove13,patron-perez15}. This formulation generalizes the orientation interpolation of \cite{kim95} to $\SE3$. Several recent continuous-time structure from motion papers use the $\SE3$ approach \citep{patron-perez15,kerl15,kim16,anderson15}, while others use separate interpolation of position and orientation \citep{furgale15,oth13}. In the following sections, we analyse the two approaches theoretically, and also compare them experimentally. When re-projecting a landmark in a frame there is an additional complication in the rolling shutter case. As one image coordinate (typically the image row) corresponds to observation time, the reprojection of a landmark at time $t$ will not necessarily end up at the row corresponding to that time. Early methods handled this by setting the reprojection time to the landmark observation time \citep{hedborg12,furgale12}.
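The inconsistency involved here can be stated compactly: the row where a landmark projects determines a readout time, and that time must agree with the pose time used for the projection. The toy sketch below (trajectory, intrinsics, and parameter names are our own, and plain fixed-point iteration stands in for the Newton updates used in the literature) resolves it for a camera translating during readout:

```python
import numpy as np

NUM_ROWS, READOUT = 480.0, 0.02  # image rows and frame readout time (assumed)

def pose(t):
    """Toy continuous-time trajectory: translation along y, no rotation."""
    return np.eye(3), np.array([0.0, t, 0.0])

def project_row(t, x, f=500.0, cy=240.0):
    """Image row where landmark x projects at time t (assumed pinhole model,
    body-to-global pose convention: camera coordinates are R^T (x - p))."""
    R, p = pose(t)
    X = R.T @ (x - p)
    return f * X[1] / X[2] + cy

def consistent_time(t_frame, x, iters=20):
    """Find t such that t equals the readout time of the row where the
    landmark projects at t."""
    t = t_frame  # 'early methods' stop here: reproject at observation time
    for _ in range(iters):
        t = t_frame + READOUT * project_row(t, x) / NUM_ROWS
    return t

x = np.array([0.2, 0.3, 2.0])
t_star = consistent_time(0.0, x)
# at t_star, the projected row and the row being read out agree
```

Because the readout time of a frame is tiny compared to typical motion, the update is a strong contraction and only a few iterations are needed, which is why a Newton scheme with one or two steps works well in practice.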
This was improved upon by \cite{oth13}, who linearize the reprojection time error and convert it to a spatial error covariance. \cite{lovegrove13} instead use the Newton method to iteratively find a reprojection time with a consistent row coordinate, and this approach is also followed by \cite{kerl15}. Yet another approach is to add a projection time parameter for each landmark observation, as well as a cost term for the projection time deviation \citep{kim16}. In the experiments, we refer to this approach as {\it lifting}, which is the common term for elimination of alternating optimization by adding variables and constraints \citep{zach14}. No previous publication has compared these choices; instead, each paper makes a hard commitment to one of the methods. In \cite{furgale15} some of the choices are discussed, but a comparison is left for future work. \subsection{Paper overview} The remainder of the paper is organized as follows. In section \ref{sec:visual_inertial_fusion} we introduce the visual-inertial fusion problem that is the context of this paper. In section \ref{sec:projection} we describe three methods for rolling shutter landmark projections, and in section \ref{sec:trajectories} we present two different choices of continuous trajectory representation. Finally, in section \ref{sec:experiments} we evaluate our methods experimentally, and section \ref{sec:conclusions} summarizes the paper and gives an outlook. Illustrations and plots are best viewed in colour. \begin{figure*}[t] \centering \begin{subfigure}{0.49\textwidth} \begin{center} \includegraphics[width=\columnwidth]{sfm_global_shutter} \caption{Global shutter projection} \label{fig:sfm_gs} \end{center} \end{subfigure} \begin{subfigure}{0.49\textwidth} \begin{center} \includegraphics[width=\columnwidth]{sfm_rolling_shutter} \caption{Rolling shutter projection} \label{fig:sfm_rs} \end{center} \end{subfigure} \caption{Structure from motion under global and rolling shutter geometry.
Here, ${\bf x}_k$ is a 3D landmark which is projected to an image observation, ${\bf y}_{k, n}$, in image $n$. Cameras are represented by their image plane, where we also show a limited number of the image rows. On the camera trajectory (dashed, blue line) we indicate the time instance (global shutter), or time span (rolling shutter), when the image was captured. } \label{fig:sfm} \end{figure*} \section{Visual-inertial fusion} \label{sec:visual_inertial_fusion} This work is an extension of the \emph{Spline fusion} visual-inertial fusion framework introduced by \cite{lovegrove13}. In this section we outline how the Spline fusion method works, and also summarize the improvements to robustness of the framework, introduced by \cite{ovren18a}. \subsection{Video structure from motion} In structure from motion, the goal is to estimate camera poses, and 3D structure, from a set of images. If the images are from video, or are taken in sequence, the camera poses can be thought of as a trajectory over time. A camera pose consists of a rotational component ${\bf R} \in \SO3$, and a translational component ${\bf p} \in \R3$. In standard structure from motion, the camera path is simply the set of all camera poses, with one pose per image, $n$: \begin{align} {\bf T}_n = ({\bf R}_n, {\bf p}_n) \,. \label{eq:pose_discrete} \end{align} We follow the convention in \cite{patron-perez15}, and define the pose such that ${\bf T}$ is a transformation from the body (i.e.\@\xspace camera) to the global coordinate frame. The objective is then to find the camera poses, and 3D points that minimize the cost function \begin{align} J(\mathcal{T}, \mathcal{X}) = \sum_{{\bf T}_n \in \mathcal{T}} \sum_{{\bf x}_k \in \mathcal{X}} \| {\bf y}_{k, n} - \pi({\bf T}^{-1}_n {\bf x}_k) \|^2 \,. \label{eq:sfm_basic} \end{align} Here, $\mathcal{X}$ is the set of all 3D points, $\mathcal{T}$ is the set of all camera poses, and ${\bf y}_{k, n}$ is the observation of 3D point $k$ in image $n$. 
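To make the objective concrete, the cost in \eqref{eq:sfm_basic} can be sketched in a few lines of Python. This is our own minimal illustration: it assumes a normalized pinhole camera (identity intrinsics), and all function and variable names are chosen for this sketch only.

```python
import numpy as np

def pi(X):
    """Project a 3D point in the camera frame onto the normalized image plane."""
    return X[:2] / X[2]

def bundle_cost(poses, points, observations):
    """Sum of squared reprojection errors, cf. the bundle adjustment objective.

    poses: list of (R, p) body-to-global transforms, one per image n.
    points: dict k -> 3D landmark x_k in the global frame.
    observations: dict (k, n) -> 2D observation y_{k,n}.
    """
    J = 0.0
    for (k, n), y in observations.items():
        R, p = poses[n]
        X_cam = R.T @ (points[k] - p)  # apply T_n^{-1} to the landmark
        r = y - pi(X_cam)
        J += float(r @ r)
    return J
```

A full bundle adjuster would additionally optimize over the poses and points, e.g.\@\xspace with a nonlinear least-squares solver.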
The function $\pi(\cdot)$ projects a 3D point in the camera coordinate frame to the image plane, using some camera model. This formulation of the structure from motion objective is called \emph{bundle adjustment} \citep{triggs00}. We illustrate the structure from motion geometry in Figure \ref{fig:sfm_gs}. \subsection{Rolling shutter} \label{sec:rolling_shutter} In the previous section, we assumed that there is one camera pose per image, such that all pixels are captured at the same time. Such cameras are said to have a \emph{global shutter}. Most cameras available today are however equipped with a \emph{rolling shutter} \citep{elgamal05}. Here, the image is read out from the sensor one row at a time, i.e.\@\xspace different rows are captured at different times. If the camera is moving while the image is captured, then we no longer have a single camera pose per image, but instead one camera pose per row. We illustrate the rolling shutter geometry in Figure \ref{fig:sfm_rs}, where the camera pose at the row that corresponds to image observation ${\bf y}_{k,n}$ is denoted $({\bf R}(t_{k,n}), {\bf p}(t_{k,n}))$. It has been shown \citep{hedborg12} that ignoring rolling shutter when minimizing \eqref{eq:sfm_basic} reduces accuracy, and can even lead to reconstruction failures. \subsection{Continuous-time structure from motion} To handle the rolling shutter problem, the standard, or discrete-time, formulation of structure from motion in \eqref{eq:pose_discrete} can be modified to instead model the camera trajectory as a continuous-time function \begin{align} {\bf T}(t) = ({\bf R}(t), {\bf p}(t)) \,. \end{align} Instead of being restricted to a set of discrete camera poses, we can now determine the camera pose at any time instance $t$. There are many ways to construct ${\bf T}(t)$, but arguably the most common approach is to model it as some kind of spline.
Given this new representation, we modify the cost function \eqref{eq:sfm_basic} to \begin{align} J(\mathcal{T}, \mathcal{X}) = \sum_{{\bf y}_{k, n} \in \mathcal{Y}} \| {\bf y}_{k, n} - \pi({\bf T}^{-1}(t_{k, n}){\bf x}_k) \|^2 \,, \label{eq:ct_sfm_basic} \end{align} where $\mathcal{Y}$ is the set of all image observations, and $\mathcal{X}$ is still the set of 3D points. However, $\mathcal{T}$ is no longer a set of discrete camera poses, but is instead the set of \emph{trajectory parameters}. The exact nature of the trajectory parameters depends on how we choose to model the trajectory. With a continuous-time formulation, structure from motion can be solved for both rolling shutter and global shutter cameras by minimizing the same cost function \eqref{eq:ct_sfm_basic}. There are however some practical aspects regarding how the landmarks are projected into the camera, which we will further investigate in section \ref{sec:projection}. Next, we will show another new possibility: incorporating inertial measurements in the bundle adjustment formulation. \subsection{Inertial measurements} An IMU consists of a gyroscope, which measures angular velocities, $\boldsymbol{\omega}$, and an accelerometer, which measures linear accelerations, ${\bf a}$. These measurements are direct observations of motion, and are a useful addition to the trajectory estimation problem. \cite{lovegrove13} therefore extend \eqref{eq:ct_sfm_basic} to also include residuals for the gyroscope and accelerometer measurements: \begin{align} J(\mathcal{T}, \mathcal{X}) = \sum_{{\bf y}_{k, n} \in \mathcal{Y}}& \| {\bf y}_{k, n} - \pi({\bf T}^{-1}(t_{k, n}){\bf x}_k) \|^2 \notag \\ +\sum_m&||\boldsymbol{\omega}_m-\nabla_{\omega}{\bf T}(t_m)||^2_{{\bf W}_g} \label{eq:ct_sfm_inertial} \\ +\sum_l&||{\bf a}_l-\nabla^2_{a}{\bf T}(t_l)||^2_{{\bf W}_a}\,.
\notag \end{align} The operators $\nabla_\omega$ and $\nabla^2_a$ represent inertial sensor models which predict gyroscope and accelerometer values given the trajectory model ${\bf T}(t)$, using analytic differentiation. The norm weight matrices ${\bf W}_g$ and ${\bf W}_a$ are used to balance the three modalities fairly. We show how to set the norm weight matrices in section \ref{sec:sew_weights}. For best results, the inertial sensor models, $\nabla_\omega$ and $\nabla^2_a$, should model the sensors used as accurately as possible. At the very least they should account for a constant measurement bias; however, more advanced models that include e.g.\@\xspace axis misalignment, or time-varying biases, are also possible. In section \ref{sec:trajectories} we derive basic inertial sensor models for the trajectories which we are interested in. Looking at the IMU residuals in \eqref{eq:ct_sfm_inertial}, we can see that there are two things that make it problematic to use a discrete camera trajectory here. Firstly, the IMU measurement timestamps do not in general coincide with the frame times. This is partly because the IMU is usually sampling at a much higher rate than the camera. With a trajectory consisting only of discrete poses, it is not obvious how to extract a pose for these intermediate timestamps. The continuous-time formulation does not have this problem, since it allows us to determine the camera pose at any given time instance. Secondly, the IMU residuals require us to compute derivatives of the trajectory, to get angular velocity and linear acceleration, respectively. With a discrete-time trajectory, these derivatives are not available. A continuous-time trajectory can, however, be built such that the required derivatives exist. To avoid derivatives, discrete-time systems commonly use sensor integration instead. However, whenever the sensor bias is updated, the sensor integration has to be recomputed.
Much effort has thus been spent to improve performance of sensor integration in the discrete pose case \citep{forster15}. Since we need second order derivatives to compute the acceleration, it is crucial that the trajectory representation ${\bf T}(t)$ is $\mathcal{C}^2$-continuous. This is the reason why cubic B-splines are a popular choice. \subsection{Splined trajectories} \label{sec:splines} Splines are an excellent choice for representing a continuous trajectory because their derivatives can be easily computed analytically. To introduce the general concept of splines, we will first describe it in only one dimension. In section \ref{sec:trajectories} we then describe how splines can be used to model a continuous camera pose. A spline consists of a set of \emph{control points}, $\boldsymbol{\Theta} = (\theta_1,\ldots,\theta_K)$, which are positioned at \emph{knots}, $(t_1,\ldots,t_K)$, in time. The value of the spline at a specific time $t$ is computed from the control points, which are weighted by a \emph{basis function}, $B(t)$. If the knots are evenly spaced, $\Delta t$ apart, we have $t_k = k\Delta t$, and say that the spline is \emph{uniform}: \begin{align} f(t|\boldsymbol{\Theta}) = \sum_{k=1}^K \theta_k B(t-k\Delta t)\,. \label{eq:spline_1d} \end{align} Fitting data to a (uniform) spline means optimizing the spline control points, $\boldsymbol{\Theta}$, such that the shape of the spline matches the measurements. The knot spacing, $\Delta t$, is a hyper-parameter, and in section \ref{sec:sew_knot_spacing} we show one way to set it to an appropriate value. \subsection{Spline Error Weighting} \label{sec:spline_error_weighting} Before we attempt to minimize \eqref{eq:ct_sfm_inertial}, using a splined trajectory, there are three hyper-parameters that must be set to appropriate values: the knot spacing, $\Delta t$, and the IMU residual norm matrices, ${\bf W}_a$ and ${\bf W}_g$.
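As a minimal sketch of the uniform spline in \eqref{eq:spline_1d}, the following snippet evaluates a spline using the standard centred cubic B-spline basis, dilated by the knot spacing $\Delta t$ (our own illustration, not the implementation used in the paper):

```python
def cubic_bspline(u):
    """Centred uniform cubic B-spline basis, supported on [-2, 2]."""
    u = abs(u)
    if u < 1.0:
        return 2.0 / 3.0 - u**2 + u**3 / 2.0
    if u < 2.0:
        return (2.0 - u)**3 / 6.0
    return 0.0

def spline_value(theta, dt, t):
    """f(t) = sum_k theta_k B(t - k*dt), with the basis scaled by dt."""
    return sum(th * cubic_bspline(t / dt - k) for k, th in enumerate(theta))
```

Since the centred cubic B-spline basis forms a partition of unity, a spline whose control points are all equal to one evaluates to one wherever the basis support is fully covered by control points.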
\cite{lovegrove13}, who introduced \eqref{eq:ct_sfm_inertial}, used a fixed knot spacing value of $\Delta t = 0.1$, and set the norm weight matrices to the inverse covariance of the respective measurement noises. In \cite{ovren18a} we showed why these choices are suboptimal, and derived a robust method to set these values. We will now give a summary of this method, which is called \emph{Spline Error Weighting}. \subsubsection{Selecting the IMU weights.} \label{sec:sew_weights} If we use inverse covariances to weight the inertial measurements in \eqref{eq:ct_sfm_inertial}, then we make the implicit assumption that the chosen trajectory parameterization can perfectly represent the real motion. However, a high $\Delta t$ (sparse spline) results in a smooth trajectory which might not be able to fully represent the real motion. In this case the residuals in \eqref{eq:ct_sfm_inertial} will consist of two error terms: the measurement noise, and a \emph{spline approximation error}. The Spline fusion method only accounts for the former, and in \cite{ovren18a} we showed that ignoring the approximation error leads to reconstruction failures. Spline fitting can be characterized in terms of a frequency response function, $H(f)$, see \cite{unser1993}. In this formulation, a signal $x(t)$ with the Discrete Fourier Transform (DFT) $X(f)$ will have the frequency content $(H\cdot X)(f)$ after spline fitting. In Figure \ref{fig:spline_dt_freq} we show examples of how the spline interpolation function $H(f)$, and the spline fit error, depends on the choice of knot spacing. \begin{figure}[bt] \begin{center} \includegraphics[width=\columnwidth]{spline_dt_freq_plot} \caption{Top: Interpolation error for a test signal $x(t)$ as a function of spline knot spacing $\Delta t$. 
Bottom: The frequency spectrum of $x(t)$ (black) together with the frequency function $H(f; \Delta t)$ for different choices of $\Delta t$.} \label{fig:spline_dt_freq} \end{center} \end{figure} By denoting the DFT of the frequency response function by the vector ${\bf H}$, and the DFT of the signal by ${\bf X}$, we can express the error introduced by the spline fit as: \begin{equation} {\bf E}=(1-{\bf H})\cdot {\bf X}\,. \label{eq:approximation_error} \end{equation} This results in an approximation error variance \begin{equation} \hat{\sigma}_e^2=\energy{{\bf E}}/N=\energy{(1-{\bf H})\cdot{\bf X}}/N\,, \label{eq:approximation_variance} \end{equation} where $N$ is the number of samples. The residual weight matrices in \eqref{eq:ct_sfm_inertial} are then computed as \begin{align} {\bf W}_r = \frac{1}{\hat\sigma_r^2}{\bf I}~~\text{where}~~ \hat{\sigma}_r^2 = \hat{\sigma}_e^2+\hat{\sigma}_f^2\,. \label{eq:residual_error} \end{align} Here $\hat{\sigma}_f^2$ is a filtered version of the sensor noise variance $\sigma_n^2$, to account for the fact that ${\bf X}$ used in \eqref{eq:approximation_error} already contains this noise. \begin{figure}[t] \includegraphics{variance} \caption{Top: test signal and noise (right subplot is a detail). Bottom: standard deviations as functions of knot spacing. $\sigma_r$ is the empirical residual standard deviation, $\sigma_n$ is the noise standard deviation, which is used in \cite{patron-perez15} to predict $\sigma_r$, {\it Predicted} is the \emph{Spline Error Weighting} residual noise prediction. $\sigma_{r_0}$ is the residual with respect to the noise-free signal $x_0(t)$.} \label{fig:splinefit} \end{figure} In Figure \ref{fig:splinefit} we illustrate a simple experiment that demonstrates the behaviour of the Spline Error Weighting residual error prediction \eqref{eq:residual_error}. 
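The weighting recipe in \eqref{eq:approximation_error}--\eqref{eq:residual_error} can be sketched as follows, assuming the spline-fit frequency response ${\bf H}$ is given on the DFT grid of the signal (its closed form is derived by \cite{unser1993}); the names and the use of Parseval's relation for the time-domain variance are our own conventions:

```python
import numpy as np

def residual_weight(x, H, sigma_f2, dim=3):
    """Return (sigma_r^2, W_r) for a sampled signal x.

    x: sampled sensor signal, H: spline-fit frequency response on the
    same DFT grid, sigma_f2: filtered sensor-noise variance.
    """
    N = len(x)
    X = np.fft.fft(x)
    E = (1.0 - H) * X                        # spectrum of the spline fit error
    sigma_e2 = np.sum(np.abs(E)**2) / N**2   # time-domain variance (Parseval)
    sigma_r2 = sigma_e2 + sigma_f2
    return sigma_r2, np.eye(dim) / sigma_r2
```

With a perfect fit (${\bf H}\equiv 1$) the approximation error vanishes and the weight reduces to the inverse noise variance, as in the original Spline fusion weighting.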
In Figure \ref{fig:splinefit} top left, we show a test signal, $x(t)$, which is the sum of a true signal, $x_0(t)$, and white Gaussian noise $n(t)$ with variance $\sigma_n^2$. The true signal has been generated by filtering white noise to produce a range of different frequencies and amplitudes. In Figure \ref{fig:splinefit} top right, we show a detail of the signal, where the added noise is visible. We now apply a least-squares spline fit to the signal $x(t)$, to obtain the spline $\hat{x}(t)$, defined as in \eqref{eq:spline_1d}. This is repeated for a range of knot spacings, $\Delta t$, each resulting in a different residual $r(t)=x(t)-\hat{x}(t)$. The residual standard deviation $\sigma_r$ is plotted in Figure \ref{fig:splinefit}, bottom. We make the same plot for the residual $r_0(t)=x_0(t) - \hat{x}(t)$, which measures the error compared to the true signal. The resulting $\sigma_{r_0}$ curve has a minimum at approximately $\Delta t=0.15$, which is thus the optimal knot spacing. The fact that the actual residual $\sigma_r$ decreases for knot spacings below this value thus indicates overfitting. From this short experiment, we can also see that the implicit assumption made in \cite{patron-perez15} that the noise standard deviation $\sigma_n$ can predict $\sigma_r$ is reasonable for knot spacings at or below the optimal value. However, for larger knot spacings (at the right side of the plot) this assumption becomes increasingly inaccurate. \subsubsection{Selecting the knot spacing.} \label{sec:sew_knot_spacing} In \cite{patron-perez15} the spline knot spacing is fixed to $\Delta t=0.1$. However, instead of deciding on a knot spacing explicitly, a more convenient design criterion is the amount of approximation error introduced by the spline fit. To select a suitable knot spacing, $\Delta t$, we thus first decide on a {\it quality value}, $\hat q \in (0, 1]$, that corresponds to the fraction of signal energy we want the approximation to retain.
For a given signal, $x(t)$, with the DFT, ${\bf X}$, we define the quality value as the ratio between the signal energy before and after spline fitting: \begin{align} q(\Delta t) = \frac{\energy{{\bf H}(\Delta t) \cdot {\bf X}}}{\energy{{\bf X}}}\,. \label{eq:quality} \end{align} To find a suitable knot spacing for the signal, we search for the largest knot spacing $\Delta t$ for which $q(\Delta t) \geq \hat q$. The signals ${\bf X}$ are based on the accelerometer, and gyroscope measurements, since these contain information about both orientation and translation. See \cite{ovren18a} for further details. \subsubsection{Adding a robust error norm.} In \cite{patron-perez15}, the cost function is defined as in \eqref{eq:ct_sfm_inertial}, which assumes that the measurements are drawn from a zero-mean (Gaussian) distribution. This is a useful model for the IMU measurements, if we account for the sensor biases, but not for the image measurements. The image measurements are produced by tracking or feature matching over a sequence of images. The associations made are not perfect, and the risk of producing a feature track where the measurements do not correspond to one single 3D point is significant. Depending on the environment, we might also have moving objects in the scene, which can be successfully tracked, but are obviously not good landmarks. Since such \emph{outliers} do not correspond to the geometry we are trying to estimate, their errors can easily be orders of magnitude larger than those of the inlier set. If the outliers are not removed, the least-squares solver will try to bring these large errors down, even if it means that all the other measurement residuals (those in the inlier set) are increased. In standard structure from motion with global shutter cameras, most outliers can be removed by enforcing geometric consistency between observed image points.
For rolling shutter cameras, enforcing geometric consistency is much harder, because the images no longer have a single corresponding camera pose. We instead accept that we will have at least some outliers, and try to mitigate their effect. We do this by introducing a \emph{robust error norm} \citep{zhang97}, which scales the residuals such that large residuals have less impact. The cost function is thus modified to its final formulation \begin{align} J(\mathcal{T}, \mathcal{X}) = \sum_{{\bf y}_{k, n} \in \mathcal{Y}}& \phi( {\bf y}_{k, n} - \pi({\bf T}^{-1}(t_{k, n}){\bf x}_k) ) \notag \\ +\sum_n&||\boldsymbol{\omega}_n-\nabla_{\omega}{\bf T}(t_n)||^2_{{\bf W}_g} \label{eq:cvpr_cost_function} \\ +\sum_l&||{\bf a}_l-\nabla^2_{a}{\bf T}(t_l)||^2_{{\bf W}_a}\,, \notag \end{align} where $\phi(x)$ is a robust error norm. In \cite{ovren18a}, as well as in this work, $\phi(x)$ is the Huber norm. \section{Rolling shutter projection} \label{sec:projection} In \eqref{eq:sfm_basic} and \eqref{eq:ct_sfm_basic} the landmark projection function $\pi(\cdot)$ was defined to simply project a 3D point to its image plane location. This formulation works fine in the case of a global shutter camera, where there is a single camera pose for each captured image. In a rolling shutter camera, the image rows are captured and read out sequentially, which results in each row having its own camera pose. This means that an image observation \begin{align} {\bf y}_{k, n} = [u, v]^T = \pi({\bf T}^{-1}(t_{k, n}) {\bf x}_k) \label{eq:rs_projection_standard} \end{align} was captured at time \begin{align} t_{k, n} = t^0_n + r \frac{v}{N_v} \,. \label{eq:rs_projection_time} \end{align} Here $t^0_n$ is the time of the first row of frame $n$, $N_v$ is the number of image rows, and $r$ is the rolling shutter \emph{image readout time}, i.e.\@\xspace the time it takes to read out a frame from the camera sensor.
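As a small worked example, \eqref{eq:rs_projection_time} maps a row index directly to a capture time; the 30 ms readout time below is only an illustrative value:

```python
def row_time(t0, r, v, Nv):
    """Capture time of row v in a frame starting at t0,
    given readout time r and Nv image rows."""
    return t0 + r * v / Nv

# The middle row of a 480-row frame with a 30 ms readout is captured
# half a readout time after the first row.
t = row_time(10.0, 0.03, 240, 480)
```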
The astute reader may have noticed a problem with equations \eqref{eq:rs_projection_standard} and \eqref{eq:rs_projection_time}: the projection time $t_{k,n}$ requires knowledge of the projection row, $v$, but at the same time, the projection row also depends on the projection time! One of the contributions of this work is to analyse different methods for solving this chicken-and-egg problem. Before doing that, we will however have to replace the landmark projection function $\pi(\cdot)$. \subsection{The rolling shutter transfer function, $\psi$} So far we have represented a landmark $k$ as a 3D point ${\bf x}_k \in \mathbb{R}^3$. This is, however, not the only possible parameterization. In \cite{patron-perez15}, whose approach we follow, a landmark is instead represented by its first observation ${\bf y}_{k, \ast}$ and a corresponding \emph{inverse depth}, $\rho_k$. The inverse depth formulation has the nice property that it is easy to represent points at infinity by setting $\rho_k = 0$. It also means that the number of landmark parameters shrinks from $3N$ to $N$, because only $\rho_k$ has to be optimized, instead of the full 3D point ${\bf x}_k$. With the inverse depth landmark representation we redefine the image measurement process to instead use a \emph{rolling shutter transfer function}, $\psi(\cdot)$: \begin{align} {\bf y}_{k, n} &= \psi({\bf y}_{k, \ast}, {\bf T}^{-1}(t_{k,n}) {\bf T}(t_{k,\ast}), \rho_k) \label{eq:transfer_function} \\ &= \pi \left( {\bf T}^{-1}(t_{k, n}) {\bf T}(t_{k, \ast}) \begin{bmatrix} \pi^{-1}({\bf y}_{k,\ast}) \\ \rho_k \end{bmatrix} \right) \,. \notag \end{align} $\psi(\cdot)$ is called a transfer function because it transfers the reference observation ${\bf y}_{k, \ast}$, at time $t_{k, \ast}$, to a new measurement at image $n$, using the inverse depth, $\rho_k$, and the trajectory ${\bf T}(t)$.
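A minimal sketch of the transfer function \eqref{eq:transfer_function}, again under the simplifying assumption of a normalized pinhole camera, with $4\times 4$ homogeneous pose matrices (our own illustration):

```python
import numpy as np

def pi(X):
    """Normalized pinhole projection."""
    return X[:2] / X[2]

def pi_inv(y):
    """Back-project an image point to a point at unit depth."""
    return np.array([y[0], y[1], 1.0])

def transfer(y_ref, T_obs, T_ref, rho):
    """Transfer a reference observation to another view, cf. psi(...).

    T_ref, T_obs: 4x4 body-to-global poses at the reference and
    observation times; rho: inverse depth of the landmark.
    """
    X_h = np.append(pi_inv(y_ref), rho)       # homogeneous landmark
    X = np.linalg.inv(T_obs) @ T_ref @ X_h    # move into the new camera
    return pi(X[:3])
```

Setting $\rho_k = 0$ reproduces the point-at-infinity behaviour: the transferred observation then depends only on the relative rotation, not on the translation between the two poses.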
For brevity, we will mostly use the shorter form $\psi(t)$, which should be understood as the projection (reference observation transfer) at time $t$ for some landmark and trajectory. In the following sections we describe three different strategies to implement $\psi(\cdot)$. One important property of each method is how well it handles the \emph{rolling shutter time deviation} \begin{align} \epsilon(t_{k,n})=(t_{k,n}-t^0_n)\frac{N_v}{r}-\psi_v(t_{k,n})\,. \label{eq:rs_time_deviation} \end{align} This residual measures the time deviation between the requested projection time $t_{k,n}$, and the time corresponding to the resulting image row, $\psi_v$. We choose to express this deviation in rows (pixels), instead of time (seconds), because this makes it easier to compare it to the reprojection error. An ideal rolling shutter projection model should always fulfill the \emph{rolling shutter constraint} \begin{align} \epsilon(t_{k, n}) = 0 \,, \label{eq:rs_time_constraint} \end{align} but we will see that relaxing this constraint can result in other benefits, while still producing reasonable results. In Figure \ref{fig:projection_time} we graphically compare the three different methods, by plotting their possible image projections, $\psi(t_{k, n})$, together with the time deviation $\epsilon(t_{k,n})$. \begin{figure}[tb] \begin{center} \includegraphics[width=\columnwidth]{projection_time} \end{center} \caption{Geometric illustration of the different approaches to the projection time problem. This is an image plane plot, where $y_1$ is the rolling shutter axis, ${\bf y}_{k,n}$ is the landmark observation in the current frame, $\psi(t)$ is the reprojection (transfer) curve for the first observation, as a function of spline evaluation time $t$, and $\epsilon(t)$ is the absolute value of the projection time deviation, plotted along the $y_1$ axis, as a function of $t$.
In the illustration we can see that ${\bf y}_\text{Static}$ is obtained by setting the spline time to the observation time of the landmark ${\bf y}_{k,n}$, ${\bf y}_\text{Newton}$ is the point on the reprojection curve $\psi(t)$ that perfectly satisfies the projection time constraint $\epsilon(t)=0$, and ${\bf y}_\text{Lifting}$ is a point on $\psi(t)$ somewhere between ${\bf y}_\text{closest}$, the point closest to the observation, and ${\bf y}_\text{Newton}$ (depending on residual weighting).} \label{fig:projection_time} \end{figure} \subsection{Static projection} One simple approach to deal with the chicken-and-egg problem described in section \ref{sec:projection} is to ignore it completely. If we denote the observed image row by $v_{k, n}$, we set the projection time to \begin{align} t_{k, n} = t^0_n + r \frac{v_{k, n}}{N_v} \, \end{align} and directly compute \eqref{eq:transfer_function}. The advantage of this method is that it is fast to compute, and simple to implement. The downside is that the projected point in general will not fulfill the rolling shutter constraint in \eqref{eq:rs_time_constraint}. This is shown in Figure \ref{fig:projection_time}, where the ${\bf y}_\text{Static}$ point can end up anywhere on the $\psi(t)$ line, regardless of the value of $\epsilon(t)$. \subsection{Newton projection} To make sure that the rolling shutter projection time constraint in \eqref{eq:rs_time_constraint} holds, \cite{patron-perez15} uses Newton's method to iteratively find the projection time. To use Newton's method to solve $\epsilon(t) = 0$ we must compute $\frac{d\epsilon(t)}{dt}$, which in turn requires computation of $\frac{d \psi(t)}{dt}$. The transfer function $\psi(t)$ involves applying the camera projection model, $\pi(\cdot)$, and its inverse, $\pi^{-1}(\cdot)$, which means that the implementation can be quite tricky, as derivatives of these functions are also required.
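The iteration itself can be sketched as follows; for brevity this illustration differentiates $\epsilon(t)$ numerically, whereas the actual method uses the analytic derivatives discussed above:

```python
def newton_projection_time(eps, t0, tol=1e-9, max_iter=10, h=1e-6):
    """Find t such that eps(t) ~= 0 by Newton's method.

    eps(t) is the rolling shutter time deviation in rows; its derivative
    is approximated here with central differences for simplicity.
    """
    t = t0
    for _ in range(max_iter):
        e = eps(t)
        if abs(e) < tol:
            break
        de = (eps(t + h) - eps(t - h)) / (2.0 * h)  # numeric d(eps)/dt
        t -= e / de
    return t
```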
Each iteration is thus more expensive than the {\bf Static} method, but we must also compute multiple iterations, making this quite a slow strategy. The advantage is of course that the rolling shutter time constraint \eqref{eq:rs_time_constraint} is now fulfilled, as we can see in Figure \ref{fig:projection_time}. \subsection{Lifting} The two previous methods are extremes when it comes to trading accuracy for computational efficiency. \cite{kim16} therefore introduced a third method that aims to be more accurate than {\bf Static}, while being faster than {\bf Newton}. This works by adding the time deviation $\epsilon(t_{k,n})$ (see \eqref{eq:rs_time_deviation}) as a new residual to the optimization problem. The unknown projection time $t_{k, n}$ is now an additional parameter to optimize over. The added residual makes \eqref{eq:rs_time_constraint} into a soft constraint, which means that at best it will match the {\bf Newton} method, and at worst give the point closest to the measured observation. See Figure \ref{fig:projection_time} for a graphical illustration. The described method, which we denote {\bf Lifting}, has the same computational complexity as the {\bf Static} method. However, since we are adding an extra residual and parameter per image observation, the optimization problem grows larger. \section{Spline interpolation spaces} \label{sec:trajectories} A time-continuous pose ${\bf T}(t)$ consists of a rotational component ${\bf R}(t)$, and a translational component ${\bf p}(t)$, \begin{equation} {\bf T}(t)=\begin{bmatrix}{\bf R}(t) & {\bf p}(t)\\ {\bf 0}^T & 1\end{bmatrix}\,. \end{equation} Nearly all continuous camera pose representations are based on B-splines, which define the continuous pose by blending discrete poses $\left\{{\bf T}_k\right\}_1^K$.
In this section we introduce and compare the two trajectory representations that are used in this work: one interpolating over control points ${\bf T}_k\in\SE3$, and one that uses separate splines for translation, and rotation, with control points ${\bf p}_k\in\R3$, and ${\bf R}_k\in\SO3$, respectively. We also analyze the theoretical difference between the two when interpolating a camera pose. \subsection{A split spline in $\RplusSO3$} \label{sec:split} A regular B-spline curve in vector space $\mathbb{R}^n$ can be written: \begin{equation} {\bf p}(t)= \sum_{k=1}^K {\bf p}_k B(t-k\Delta t) = \sum_{k=1}^K{\bf p}_kB_k(t)\,, \label{eq:spline} \end{equation} where ${\bf p}_k\in\mathbb{R}^n$ are the spline control points, and $B_k(\cdot)$ are the shifted B-spline basis functions (cf. \eqref{eq:spline_1d}), that distribute the influence of each control point in a specific time window. Any spline of form \eqref{eq:spline} may instead be written in cumulative form: \begin{equation} {\bf p}(t)={\bf p}_1\tilde{B}_1(t)+\sum_{k=2}^K({\bf p}_k-{\bf p}_{k-1})\tilde{B}_k(t)\,, \label{eq:r3_spline} \end{equation} where $\tilde{B}(t)$ are the corresponding {\it cumulative} basis functions. \cite{kim95} show that this construction is also feasible on $\SO3$, and propose to use unit quaternions ${\bf q}_k$ as orientation control points to interpolate \begin{equation} {\bf q}(t)={\bf q}_1^{\tilde{B}_1(t)}\prod_{k=2}^K\exp(\log({\bf q}_{k-1}^\ast{\bf q}_k)\tilde{B}_k(t))\,. \label{eq:so3_spline} \end{equation} Here ${\bf q}^*$ denotes the conjugation of the quaternion ${\bf q}$, and $\exp()$ and $\log()$ are mappings to $\text{Spin}(3)$, and its tangent space, respectively. The rationale behind \eqref{eq:so3_spline} is the classical SLeRP interpolation \citep{shoemake85}: \begin{equation} {\bf q}(\lambda)={\bf q}_1\exp(\lambda\log({\bf q}_1^\ast{\bf q}_2)) \quad \lambda\in[0,1]\,. 
\label{eq:slerp} \end{equation} The expression \eqref{eq:slerp} moves smoothly between ${\bf q}_1$ and ${\bf q}_2$ as $\lambda$ is moved from $0$ to $1$. By comparing \eqref{eq:so3_spline} with \eqref{eq:slerp} we see that the Kim et al.~construction is essentially a blending of SLeRP interpolations, within each B-spline support window. In summary, \cite{kim95} advocate pose interpolation with \eqref{eq:r3_spline} for position and \eqref{eq:so3_spline} for orientation. We will denote this as \emph{split interpolation}, or \emph{split representation}. \subsubsection{IMU predictions for the split interpolation.} The IMU predictions for the split representation are most suitably derived using quaternion algebra, with vectors ${\bf v} \in \R3$ embedded in pure quaternions ${\bf q_v} = \begin{pmatrix}0 & {\bf v}\end{pmatrix}^T$. ${\bf g}$ is the gravity vector, in the global coordinate frame. We only show how to get the ideal gyroscope and accelerometer measurements from the trajectory, and disregard other aspects of the IMU model, such as bias, or axis misalignment. \begin{description} \item[Gyroscope prediction] \begin{align} \begin{pmatrix} 0\\ \nabla_\omega {\bf T}(t) \end{pmatrix} = {\bf q}_\omega^\text{body}(t) &= {\bf q}^*(t) {\bf q}_\omega^\text{global}(t) {\bf q}(t)\,\text{ where} \\ {\bf q}_\omega^\text{global}(t) &= 2 \dot{\bf q}(t) {\bf q}^*(t) \label{eq:split_gyroscope} \end{align} \item[Accelerometer prediction] \begin{align} \begin{pmatrix} 0\\ \nabla_a^2 {\bf T}(t) \end{pmatrix} = {\bf q}^*(t) \begin{pmatrix} 0\\ \ddot{{\bf p}}(t) - {\bf g} \end{pmatrix} {\bf q}(t) \label{eq:split_accelerometer} \end{align} \end{description} \subsection{A spline in $\SE3$} In \cite{patron-perez15} the quaternion spline \eqref{eq:so3_spline} is generalized to a spline construction with control points ${\bf T}_k\in\SE3$: \begin{equation} {\bf T}(t)=\exp(\log({\bf T}_1) \tilde{B}_1(t))\prod_{k=2}^K\exp(\log({\bf T}_{k-1}^{-1}{\bf T}_k)\tilde{B}_k(t))\,.
\label{eq:se3_spline} \end{equation} Just like in the quaternion case, this is a blend of linear interpolations on the group, within each B-spline window. In \cite{patron-perez15} the poses to interpolate are defined as transformations from the body frame to the global frame, i.e.\@\xspace, \begin{equation} {\bf T}({\bf R},{\bf p})=\begin{bmatrix} {\bf R} & {\bf p}\\ {\bf 0}^T & 1\end{bmatrix}\,, \end{equation} where ${\bf p}$ is the spline position in the global frame, and ${\bf R}$ is the rotation from the body frame to the global frame. Note that interpolating ${\bf p}$ and ${\bf R}$ separately, using \eqref{eq:r3_spline} and \eqref{eq:so3_spline}, is not equivalent to \eqref{eq:se3_spline}. The difference between the two is revealed by expanding the $\SE3$ tangent, or {\it twist} \citep{murray94}, that is used to move between two poses in \eqref{eq:se3_spline}: \begin{equation} \log({\bf T}_1^{-1}{\bf T}_2)=\log\begin{bmatrix} {\bf R}_1^T{\bf R}_2 & {\bf R}_1^T({\bf p}_2-{\bf p}_1)\\ {\bf 0}^T & 1 \end{bmatrix}\,. \label{eq:se3_tangent} \end{equation} A twist $\boldsymbol{\xi}=({\bf v},\boldsymbol{\omega})\in\mathfrak{se}(3)$ consists of a translation ${\bf v}$ (with direction and scale) and an axis-angle vector $\boldsymbol{\omega}$. By exponentiating a twist times a scalar amount $\theta$ we obtain an element of $\SE3$, with the following analytic expression: \begin{gather} \text{exp}(\boldsymbol{\xi}\theta) =\text{exp}\left(\begin{bmatrix}[\boldsymbol{\omega}]_\times& {\bf v}\\{\bf 0}^T & 0\end{bmatrix}\theta\right)=\\ \begin{bmatrix} \text{exp}([\boldsymbol{\omega}]_\times\theta)& ({\bf I}-\text{exp}([\boldsymbol{\omega}]_\times\theta))[\boldsymbol{\omega}]_\times{\bf v}+\boldsymbol{\omega}\boldsymbol{\omega}^T{\bf v}\theta \\ {\bf 0}^T & 1 \end{bmatrix}\,, \label{eq:twist_exponent} \end{gather} where $[\cdot]_\times$ is the cross product operator, i.e.\@\xspace, $[{\bf a}]_\times{\bf b}={\bf a}\times {\bf b}$, see \cite[eq.~2.36]{murray94}.
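To make the twist exponential concrete, the analytic expression in \eqref{eq:twist_exponent} can be implemented in a few lines. The sketch below is our own illustration (not code from any published implementation); it assumes a unit-norm rotation axis $\boldsymbol{\omega}$ and uses the Rodrigues formula for the rotation block, and the helper names are hypothetical.

```python
import numpy as np

def skew(w):
    """Cross-product matrix [w]_x, so that skew(w) @ b == np.cross(w, b)."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def exp_twist(v, w, theta):
    """Closed-form exp(xi * theta) for a twist xi = (v, w) with ||w|| = 1.

    Rotation block via the Rodrigues formula; translation block follows
    (I - exp([w]x theta)) [w]x v + w w^T v theta, cf. eq. (twist_exponent).
    """
    v = np.asarray(v, dtype=float)
    w = np.asarray(w, dtype=float)
    W = skew(w)
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    t = (np.eye(3) - R) @ (W @ v) + np.outer(w, w) @ v * theta
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T
```

Since $\boldsymbol{\xi}\theta_1$ and $\boldsymbol{\xi}\theta_2$ commute, $\exp(\boldsymbol{\xi}(\theta_1+\theta_2))=\exp(\boldsymbol{\xi}\theta_1)\exp(\boldsymbol{\xi}\theta_2)$, which gives a convenient sanity check of such an implementation.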
In analogy with this, the twist in \eqref{eq:se3_tangent} is weighted by a basis function value $\tilde{B}_k(t)$ and exponentiated in \eqref{eq:se3_spline}. We can thus identify $\theta$ with $\tilde{B}_k(t)$. \subsubsection{IMU predictions for $\SE3$.} To compute the IMU predictions for $\SE3$, we use the same formulation as in \cite{patron-perez15}. Here $\dot{\bf R}(t)$, $\dot{\bf p}(t)$, and $\ddot{\bf p}(t)$ are the corresponding submatrices of $\dot{\bf T}(t)$ and $\ddot{\bf T}(t)$. ${\bf g}$ is the gravity vector in the global coordinate frame. Again, we only show how to obtain the ideal gyroscope and accelerometer measurements from the trajectory, and disregard other aspects of the IMU model. \begin{description} \item[Gyroscope prediction] \begin{align} \nabla_\omega {\bf T}(t) &= {\boldsymbol \omega}\, \text{ where} \\ [{\boldsymbol \omega}]_\times &= \begin{bmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{bmatrix} = {\bf R}^T(t) \dot{\bf R}(t) \label{eq:se3_gyroscope} \end{align} \item[Accelerometer prediction] \begin{align} \nabla_a^2 {\bf T}(t) = {\bf R}^T(t) (\ddot{\bf p}(t) - {\bf g}) \label{eq:se3_accelerometer} \end{align} \end{description} \subsection{Why $\SE3$ splines are problematic} \label{sec:se3_problems} Here we describe a number of problems with choosing $\SE3$ as the interpolation space. \subsubsection{Translation is linked with orientation.} \label{sec:se3_linked} \begin{figure}[tb] \includegraphics*[width=\columnwidth]{trajectories} \caption{Trajectories from interpolation of two poses on $\SE3$ (left) and separate interpolation in $\RplusSO3$ (right).
Here, start and end poses differ by $150^\circ$ in orientation, which exposes the screw motion caused by the $\SE3$-based interpolation.} \label{fig:interaction} \end{figure} By identifying the exponentiation of \eqref{eq:se3_tangent} with \eqref{eq:twist_exponent}, when $\theta=1$, we can further identify the rotation component as $\text{exp}([\boldsymbol{\omega}]_\times)={\bf R}_1^T{\bf R}_2$ (and thus $\boldsymbol{\omega}$ is parallel to the axis of rotation, which implies $\boldsymbol{\omega}={\bf R}_1^T{\bf R}_2\boldsymbol{\omega}$). For intermediate values of $\theta$, the translation in \eqref{eq:twist_exponent} consists of a component parallel to the rotation axis (i.e.\@\xspace, $\boldsymbol{\omega\omega}^T{\bf v}$) and one orthogonal to it (i.e.\@\xspace, $[\boldsymbol{\omega}]_\times{\bf v}$) that depends on the amount of rotation. Unless the translation is parallel to the rotation axis, there will thus be an interaction between the rotation and the translation. The effect of this coupling of translation and orientation is that the camera position moves along a trajectory that spirals about the rotation axis $\boldsymbol{\omega}$, as exemplified in \figurename~\ref{fig:interaction}. Such a motion is called a {\it screw motion} \citep{murray94}. The implicit mechanical model in $\SE3$-based interpolation is that the pose is manipulated by an {\it internal force and torque}, i.e.\@\xspace, a force applied to the same fixed reference point, and a torque about a fixed axis in the intrinsic pose frame (such an action is called a {\it wrench} \citep{murray94}). For separate interpolation of position and orientation (see section \ref{sec:split}), the pose is instead manipulated by a {\it generic force and torque} acting on the pose frame in different ways at different times.
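This coupling is easy to verify numerically. The following sketch (our own illustration, assuming a unit rotation axis; the function name is hypothetical) compares the position produced by $\SE3$ interpolation, i.e.\@\xspace the translation block of $\exp(\boldsymbol{\xi}\lambda\phi)$, with the straight line produced by split interpolation between the same two poses.

```python
import numpy as np

def se3_position(v, w, theta):
    """Translation block of exp(xi * theta) for a twist (v, w) with ||w|| = 1."""
    W = np.array([[0.0, -w[2], w[1]],
                  [w[2], 0.0, -w[0]],
                  [-w[1], w[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)
    return (np.eye(3) - R) @ (W @ v) + np.outer(w, w) @ v * theta

w = np.array([0.0, 0.0, 1.0])      # rotation axis
v = np.array([1.0, 0.0, 0.0])      # translation direction, orthogonal to the axis
phi = np.deg2rad(150.0)            # relative rotation between start and end pose

p_end = se3_position(v, w, phi)    # end position, shared by both interpolations
lam = 0.5                          # interpolation parameter (midpoint)
p_se3 = se3_position(v, w, lam * phi)   # SE(3) interpolation: screw motion
p_split = lam * p_end                   # split interpolation: straight line
gap = np.linalg.norm(p_se3 - p_split)   # clearly nonzero: the paths differ

# If the translation is parallel to the rotation axis, the two paths coincide:
p_se3_par = se3_position(w, w, lam * phi)
p_split_par = lam * se3_position(w, w, phi)
gap_par = np.linalg.norm(p_se3_par - p_split_par)   # numerically zero
```

The midpoint gap is large for the $150^\circ$ off-axis case, and vanishes when ${\bf v}\parallel\boldsymbol{\omega}$, matching the discussion above.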
The above interpretation predicts that the $\SE3$ model would be a good fit for e.g.\@\xspace, cameras mounted at the end of a robot arm, and in the idealized case also car-mounted cameras, e.g.\@\xspace, dashcams. The split interpolation model makes fewer assumptions about how the motion changes, and is thus likely to be of more general use. \subsubsection{Derivative vs. body acceleration.} \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{acceleration_problem} \caption{The problem with acceleration in $\SE3$. Solid and colored lines are the position, velocity, and acceleration as computed from the spline interpolation. Black dashed lines are the same quantities, but instead computed by numerical differentiation.} \label{fig:acceleration_problem} \end{center} \end{figure} To compute the accelerometer predictions, \eqref{eq:se3_accelerometer} and \eqref{eq:split_accelerometer}, we must first compute the linear acceleration of the body, denoted $\ddot{\bf p}(t)$. For split interpolation this is simply the second order derivative of the $\R3$-spline, which does not pose any problem. Using the $\SE3$ representation, $\ddot{\bf p}(t)$ is defined as a submatrix of $\ddot{\bf T}(t)$. In section \ref{sec:se3_linked} we saw that during interpolation, $\SE3$ introduces a dependency between the translation part and the orientation part. It turns out that a similar orientation-dependency problem also exists when computing the acceleration, see e.g.\@\xspace \cite{zefran96a}. We illustrate this problem in Figure \ref{fig:acceleration_problem}. Here we have constructed two pose trajectories: one in $\SE3$ and one split in $\RplusSO3$. These trajectories have equal knot placements, and are designed to be as similar as possible.
For a trajectory to be well behaved, we expect that its velocity is the first order derivative of the position (${\bf v}(t) = \frac{d {\bf p}(t)}{dt}$), and that acceleration is the first order derivative of velocity (${\bf a}(t) = \frac{d {\bf v}(t)}{dt}$). To test whether this holds true for the two trajectories, we first analytically compute the position, velocity, and acceleration, using their respective spline formulations \citep[eqns. 4-6]{patron-perez15}. We then compute velocity and acceleration again, but this time using numerical differentiation of position and velocity, respectively. The idea is to check whether the numerical and analytical quantities are equal. Figure \ref{fig:acceleration_problem} clearly shows that both trajectory representations behave as expected with respect to velocity, since the analytical and numerical results are identical. For acceleration, we can see that this holds true only for the split interpolation, while $\SE3$ shows severe differences. Only if we were to set the orientation constant (${\bf R}(t) = {\bf R}$) would the analytical and numerical results agree, which verifies that the problem is indeed caused by interaction with the orientation. This means that the acceleration produced by the $\SE3$ trajectory derivative is not the true, kinematic, body acceleration. Accelerometer predictions computed from it will therefore also be inaccurate. \subsubsection{Efficiency.} \label{sec:se3_efficiency} In general, evaluation of an $\SE3$ spline is slightly more expensive, as the translation part of the spline is evaluated using the upper right element in \eqref{eq:twist_exponent} instead of the simpler vector difference in \eqref{eq:r3_spline}. The $\SO3$ part is, however, the same for both methods. Another efficiency issue has to do with the evaluation of derivatives.
Here, the split $\RplusSO3$ representation allows for a potential speedup by choosing to compute only the derivatives that are required for each term in the visual-inertial cost function \eqref{eq:cvpr_cost_function}: \begin{itemize} \item To compute the gyroscope residuals (see \eqref{eq:split_gyroscope} and \eqref{eq:se3_gyroscope}), only the first order orientation derivative is needed. However, when using $\SE3$ we must compute the full $\dot{\bf T}(t)$ matrix, which implicitly also calculates the superfluous linear part. \item Computing the acceleration residuals (see \eqref{eq:split_accelerometer} and \eqref{eq:se3_accelerometer}) requires the linear acceleration and the orientation. In the case of split interpolation on $\RplusSO3$, the linear acceleration in $\R3$ is very efficient to compute, and we only need to evaluate the orientation in $\SO3$. In $\SE3$, we must of course compute the full $\ddot {\bf T}(t)$ matrix, which requires more computations. \end{itemize} \section{Experiments} \label{sec:experiments} In section \ref{sec:trajectories} we described two different choices of trajectory representation, and their properties and theoretical problems. We will now investigate what impact the identified problems have on practical applications. In section \ref{sec:projection}, we described three different rolling shutter projection methods. We now want to see how these methods differ with respect to accuracy and runtime efficiency. To investigate this, we perform a number of experiments on both synthetically generated and recorded real data. \subsection{Software} \label{sec:kontiki} To estimate the trajectory and 3D structure we used the open source \emph{Kontiki} framework \citep{kontiki}, which is developed by us\endnote{Kontiki will be released to the public in June 2018.}. Kontiki is a general purpose continuous-time trajectory estimation framework, built to be easy to extend.
Users choose a trajectory, add measurements (IMU, camera, etc.), and then ask the framework to find the most probable trajectory matching the measurements. The least-squares solver uses the well-known \emph{Ceres Solver} \citep{Ceres-Solver}, and for $\SE3$ calculations we use the \emph{Sophus} library \citep{libsophus}. Kontiki is written in C++, but is mainly intended to be used with its Python frontend. \subsection{Reconstruction method} \label{sec:reconstruction} All experiments follow the same reconstruction pipeline, which we describe here. First we compute a suitable knot spacing for the splines, using the method by \cite{ovren18a}, summarized in section \ref{sec:sew_knot_spacing}. Since that method assumes a split spline defined on $\RplusSO3$, we get one knot spacing for each interpolation space: $\Delta t_{\mathbb{R}^3}$ and $\Delta t_{\mathbb{SO}(3)}$. To make the comparison with $\SE3$ fair, we set $\Delta t = \min(\Delta t_{\mathbb{R}^3}, \Delta t_{\mathbb{SO}(3)})$, and use this value for \emph{all} splines. From the selected knot spacing, $\Delta t$, we then computed the corresponding IMU norm weights, ${\bf W}_a$ and ${\bf W}_g$, as summarized in section \ref{sec:sew_weights}. Like \cite{ovren18a}, we use keyframing to reduce the number of measurements and thus the processing time. In this case, we extract keyframes uniformly, spaced 10 frames apart. We then use the adaptive non-maxima suppression method by \cite{gauglitz2011} to select the set of landmarks and observations such that each keyframe has at most 100 observations. Trajectories are initialized such that ${\bf p}(t) = {\bf 0}$, and ${\bf R}(t) = {\bf I}$, for all $t$. Landmarks are set to points at infinity, using $\rho_k = 0$. The robust error norm $\phi(\cdot)$ is the Huber norm, with parameter $c=1$.
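For reference, the Huber norm with parameter $c$ is quadratic for small errors and grows only linearly for large ones, which limits the influence of outlier measurements. A minimal sketch of this norm (Ceres exposes a closely related \texttt{HuberLoss}, defined on squared residuals):

```python
def huber(e, c=1.0):
    """Huber norm: quadratic for |e| <= c, linear beyond, C^1-continuous at |e| = c."""
    a = abs(e)
    if a <= c:
        return 0.5 * a * a
    return c * (a - 0.5 * c)
```

With $c=1$, an error of $0.5$ contributes $0.125$ while an error of $2$ contributes $1.5$ instead of the quadratic $2$.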
\subsection{Datasets} \begin{figure} \centering \begin{subfigure}{0.49\columnwidth} \begin{center} \includegraphics[width=\columnwidth]{handheld} \caption{The GoPro camera with attached IMU logger} \label{fig:camera} \end{center} \end{subfigure} \begin{subfigure}{0.49\columnwidth} \begin{center} \includegraphics[width=\columnwidth]{rccar} \caption{The radio controlled car used for the {\bf RC-Car} dataset} \label{fig:rccar} \end{center} \end{subfigure} \caption{Hardware used for experiments} \label{fig:hardware} \end{figure} To show that the optimization behaves differently depending on the choice of interpolation space we define the following types of motion that we want to investigate: \begin{enumerate} \item {\bf Free}. Camera with free orientation. The camera orientation changes independently of the motion path. This simulates the case of a handheld camera, or a camera mounted on a gimbal on e.g.\@\xspace, a UAV. \item {\bf Forward}. Camera locked in the forward direction of the path. This is similar to e.g.\@\xspace, a dash-cam mounted in a car. \item {\bf Sideways}. As above but the camera is mounted looking $90^\circ$ left or right. \end{enumerate} Checking both the {\bf Forward} and {\bf Sideways} cases is of interest since they are known to differ in difficulty, the former being harder \citep{vedaldi07}. \subsubsection{Synthetic data.} Our synthetic data was created using the \emph{IMUSim} software package \citep{young2011}. Since IMUSim only models IMU measurements, we implemented an extension package\endnote{The rolling shutter extension to IMUSim can be found at \url{https://github.com/hovren/rsimusim}.} that models rolling shutter cameras. For each of the motion types we generated 200 random trajectories, with matching 3D-structure, which were then processed by the simulator. For the {\bf Forward} and {\bf Sideways} cases the ground truth trajectories were generated using a simple motion model that tried to emulate a driving car.
The landmarks were projected into the simulated camera by finding a solution for $\epsilon(t_{k,n}) = 0$, using the bounded root search method by \cite{brent73}. \subsubsection{Real data.} For the real data experiments we used two datasets\endnote{The full dataset is available from \url{http://www.cvl.isy.liu.se/research/datasets/gopro-imu-dataset/}} called {\bf Handheld} and {\bf RC-Car}. For both datasets, we used a \emph{GoPro Hero 3+ Black} camera, to which we attached a custom designed IMU logger, see Figure \ref{fig:camera}. The camera was recording using 1080p wide mode at 29.97 Hz, while the IMU measurements were collected at 1000 Hz. In the experiments, the raw IMU samples were resampled to 300 Hz to reduce processing time. The {\bf Handheld} dataset was recorded while holding the camera and walking in a loop outdoors. Since the camera was free to rotate, it represents the {\bf Free} motion type. Example frames from the {\bf Handheld} dataset can be found in Figure \ref{fig:example_handheld}. \begin{figure} \begin{center} \includegraphics[width=\columnwidth]{render_apple3_montage} \caption{Rendered model estimated on the {\bf Handheld} dataset, using split interpolation. Top: model rendered using Meshlab. Bottom: Sample frames from dataset.} \label{fig:example_handheld} \end{center} \end{figure} In the {\bf RC-Car} dataset, the camera was attached to a radio controlled car, see Figure \ref{fig:rccar}. The camera was mounted pointing forwards, and thus represents the {\bf Forward} motion type. The RC-car was then driven in a loop over (relatively) rough terrain, resulting in both high-frequency motion and motion blur. Example frames from the {\bf RC-Car} dataset can be found in Figure \ref{fig:example_rccar}. Image measurements were collected by tracking FAST features \citep{rosten10} over subsequent frames, using the OpenCV KLT-tracker \citep{bouguet00}. 
For added robustness, we performed backtracking and discarded tracks which did not return to within $0.5$ pixels of their starting points. Using tracking instead of feature matching means that landmarks that are detected more than once will be tracked multiple times by the system. The camera-IMU extrinsics, gyroscope bias, and time offset were given an initial estimate using the \emph{Crisp} \citep{ovren15} toolbox. Since Crisp does not support accelerometer measurements, we then refined the initial estimate using Kontiki, described in section \ref{sec:kontiki}, by optimizing over a short part of the full sequence with the accelerometer bias as a parameter to optimize. \subsection{Trajectory representation convergence rates} \label{sec:exp_convergence} We want to investigate whether the choice of trajectory representation has any impact on the reconstruction process. By performing many reconstructions using both trajectory representations, we can gather statistics on how the optimization cost changes over time. Ideally we would like to compare reconstruction quality, but since the real dataset does not have any ground truth, this is not possible. The use of convergence rate as a metric is thus justified by the fact that it allows us to compare the results from the synthetic and the real datasets. Since a failed reconstruction should also cause a higher cost, the reconstruction quality is implicitly measured by the convergence metric. In order to gather statistics also for the real datasets (of which we have only two, longer, sequences), we split them into a set of five-second-long, overlapping slices, and perform the reconstructions on these instead. In the synthetic datasets, the camera observations and IMU measurements were perturbed by additive Gaussian noise with $\sigma_{\text{image}} = 0.5$ and $\sigma_{\text{IMU}} = 0.01$, respectively. We always used exactly the same measurements for the $\SE3$ and split reconstructions.
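Such a collection of per-run cost traces can be summarized into median and 40/60-percentile curves of the relative cost, i.e.\@\xspace the cost at iteration $i$ divided by each run's initial cost. A sketch of this aggregation (the function name is our own, not part of any toolbox):

```python
import numpy as np

def relative_cost_stats(cost_traces):
    """Median and 40/60-percentile curves of the relative cost.

    cost_traces: (num_runs, num_iterations) array; column 0 holds each
    run's initial cost. Each run is normalized by its own initial cost.
    """
    costs = np.asarray(cost_traces, dtype=float)
    rel = costs / costs[:, :1]            # per-run normalization
    median = np.median(rel, axis=0)
    p40, p60 = np.percentile(rel, [40, 60], axis=0)
    return median, p40, p60
```

Normalizing per run makes reconstructions with very different measurement counts comparable in a single plot.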
Figures \ref{fig:convergence_synth} and \ref{fig:convergence_real} show the median \emph{relative cost} per iteration for the synthetic and real datasets, respectively. The relative cost is simply the ratio between the current iteration cost and the initial cost at iteration 0. To give an idea of the distribution, the shaded area shows the 40/60 percentiles of the data. We can see that the split trajectory performs much better than $\SE3$, giving a larger reduction in cost, which indicates a better solution. This is true both for the synthetic and real data case, and for all motion types. In section \ref{sec:se3_linked} we hypothesized that $\SE3$ could be a better choice for the fixed orientation cases. It is clear from Figure \ref{fig:convergence_synth} that the difference between split interpolation and $\SE3$ is largest on the {\bf Free} dataset, which corroborates this. However, $\SE3$ is clearly inferior on \emph{all} datasets, both real and synthetic, which means that the negative aspects of $\SE3$, as described in section \ref{sec:se3_problems}, outweigh the possible benefit it might have had. \begin{figure} \includegraphics[width=\columnwidth]{convergence_synth} \caption{Convergence rate results on the synthetic dataset. The Y-axis shows the ratio between the current iteration cost and the initial cost at iteration 0. Solid line is the median, and the shaded area shows the distribution using the 40/60-percentiles.} \label{fig:convergence_synth} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{convergence_real} \caption{Convergence rate results on the real dataset. The Y-axis shows the ratio between the current iteration cost and the initial cost at iteration 0.
Solid line is the median, and the shaded area shows the distribution using the 40/60-percentiles.} \label{fig:convergence_real} \end{figure} To get further clues on what might affect performance, we plot the relative cost ratio for each reconstruction as a function of the chosen knot spacing. As we can see in figures \ref{fig:convergence_knotspacing_synth} and \ref{fig:convergence_knotspacing_real} it is clear that $\SE3$ tends to have worse performance for small knot spacings (denser splines). \begin{figure} \includegraphics[width=\columnwidth]{convergence_knotspacing_synth} \caption{Distribution of relative performance between split interpolation on $\RplusSO3$ and $\SE3$ on synthetic data. The Y-axis shows the ratio between their respective relative costs at the final iteration. Samples above the line are where split representation performed better.} \label{fig:convergence_knotspacing_synth} \end{figure} \begin{figure} \includegraphics[width=\columnwidth]{convergence_knotspacing_real} \caption{Distribution of relative performance between split interpolation on $\RplusSO3$ and $\SE3$ on real data. The Y-axis shows the ratio between their respective relative costs at the final iteration. Samples above the line are where split representation performed better.} \label{fig:convergence_knotspacing_real} \end{figure} \subsection{Projection method} \begin{figure*}[tb] \begin{center} \includegraphics[width=\textwidth]{projection} \caption{Distribution of errors for different combinations of trajectory representation and landmark projection method. The violin plots show the distribution for all errors in the inlier set, for which the error $<0.25$. The percentage above each violin is the inlier ratio.} \label{fig:projection} \end{center} \end{figure*} In section \ref{sec:projection} we described three different methods to do rolling shutter landmark projection. 
Since they differ both in implementation complexity and runtime efficiency, we want to make sure that the slower and more complex methods actually result in better accuracy. In this experiment we performed reconstructions on the 200 sequences in the {\bf Free} dataset, for all combinations of trajectory representations and projection methods. To investigate whether any of the methods are sensitive to the amount of available data, we also performed the reconstructions with only half the available landmarks. To evaluate the result we compared the estimated trajectory to the ground truth trajectory using the soap-bubble area between the two position trajectories, as previously suggested in \cite{hedborg12}. The optimized trajectory was aligned to the ground truth by using Horn's method for the translation \citep{horn87} and the orthogonal Procrustes method for the orientation \citep{golub83}. Since the optimization gives a trajectory with a metric scale, we do not need to estimate any scaling factor, as was done in \cite{hedborg12}. For two position trajectories ${\bf f}(t)$ and ${\bf g}(t)$, we compute the area error numerically by trapezoid summation: \begin{gather} a({\bf f},{\bf g})=\sum_{k=1}^{K-1} a_\text{trap}({\bf f}(t_k), {\bf f}(t_{k+1}),{\bf g}(t_k),{\bf g}(t_{k+1})),\text{ where}\\ a_\text{trap}({\bf a},{\bf b},{\bf c},{\bf d})=\frac{\displaystyle ||{\bf a}-{\bf c}||}{\displaystyle 2}(||{\bf a}-{\bf b}||+||{\bf c}-{\bf d}||)\,. \end{gather} This approximation of the area is only valid when the trajectories are sampled densely, which is the case here. In Figure \ref{fig:projection} we plot the error distributions for all tested combinations. Since some reconstructions fail, we choose to plot only an inlier set, which we define as the samples with an error below $0.25\text{m}^2$.
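The trapezoid summation for the soap-bubble area is straightforward to vectorize. A sketch (assuming both trajectories are sampled at the same dense set of time instants; the function name is our own):

```python
import numpy as np

def trajectory_area(f, g):
    """Approximate soap-bubble area between two position trajectories.

    f, g: (K, 3) arrays of positions sampled at the same K time instants.
    Implements a(f, g) = sum_k a_trap(f_k, f_{k+1}, g_k, g_{k+1}) with
    a_trap(a, b, c, d) = ||a - c|| / 2 * (||a - b|| + ||c - d||).
    """
    f = np.asarray(f, dtype=float)
    g = np.asarray(g, dtype=float)
    sep = np.linalg.norm(f[:-1] - g[:-1], axis=1)   # ||a - c||
    df = np.linalg.norm(f[1:] - f[:-1], axis=1)     # ||a - b||
    dg = np.linalg.norm(g[1:] - g[:-1], axis=1)     # ||c - d||
    return float(np.sum(0.5 * sep * (df + dg)))
```

As a sanity check, two parallel straight lines of length one at unit separation yield an area of one, independently of the sampling density.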
The results in Figure \ref{fig:projection} support the conclusion from the convergence experiment in section \ref{sec:exp_convergence}: $\SE3$ fails more often than split interpolation, as shown by the inlier percentages. However, even for the inlier sets, it is clear that split interpolation provides better reconstructions, since most of the distribution mass is concentrated at lower errors. Looking only at the results for split interpolation, we can see that all three projection methods perform more or less identically. Also, they all benefit from more available data, which is expected. \subsection{Efficiency} \begin{table} \small\sf\centering \caption{Mean iteration time for different choices of interpolation space and projection method. The times are given relative to $\SE3$ with {\bf Newton}. Lower values are faster.} \label{tab:time_complexity} \begin{tabular}{lccc} \toprule & {\bf Newton} & {\bf Static} & {\bf Lifting}\\ \midrule $\mathbb{SE}(3)$ & 1.00 & 0.52 & 0.54\\ $\mathbb{R}^3$ and $\mathbb{SO}(3)$ & 0.54 & 0.36 & 0.37\\ \bottomrule \end{tabular} \end{table} The choice of interpolation space and reprojection method will affect the runtime of the optimization. In Table~\ref{tab:time_complexity} we show the mean iteration time of our implementations on the {\bf Free} dataset, normalized with respect to $\SE3$ with {\bf Newton}. Note that these timings also include the time to compute the IMU residuals. In section \ref{sec:se3_efficiency} we hypothesized that $\SE3$ would be the slowest, both because it is more computationally involved, and because it must compute superfluous derivatives for the IMU measurements. In our implementations, split interpolation on $\RplusSO3$ is roughly twice as fast as $\SE3$ per iteration, which supports this. The {\bf Static} and {\bf Lifting} reprojection methods share the same cost function, but the latter adds parameters to the optimization, which should yield a higher per-iteration cost.
The cost of the {\bf Newton} method is linear in the number of iterations taken, which is usually around 2. Although performance is always contingent upon the specific implementation, these practical results are consistent with the principled discussion above. Also, the $\SE3$ (i.e.\@\xspace, Sophus by \cite{libsophus}) and split implementations both use the same {\it Eigen} (\cite{libeigen}) linear algebra library for spline interpolation and projection computations, ensuring a fair comparison. \subsection{Example reconstructions} To showcase the real dataset, and to also verify that reconstruction is possible, we performed 3D reconstruction on the original, full sequences. We used the pipeline from \cite{ovren18a}, which uses a split trajectory and the {\bf Static} projection method. Since the resulting sparse reconstructions are hard to visualize, we densified them by triangulating new landmarks using the final trajectory. During this densification step, the trajectory was locked, and not updated. The trajectories and densified 3D scenes are shown in figures \ref{fig:example_rccar} and \ref{fig:example_handheld}. \section{Conclusions and future work} \label{sec:conclusions} We have looked at two different spline-based trajectory representations, and compared them both theoretically and experimentally. From the presented theory we hypothesized that $\SE3$ interpolation would perform worse than split interpolation because it makes translation dependent on the orientation. The experiments support this, since the $\SE3$ spline converges more slowly, and to a worse result, than interpolation on $\RplusSO3$, while also having a much higher failure rate. It is also clear that $\SE3$ is less efficient, being roughly half as fast as split interpolation. A split $\RplusSO3$ spline also has the added flexibility of allowing different knot densities for the translation and orientation splines.
Because of these findings, we recommend that researchers use a split $\RplusSO3$ spline over an $\SE3$ spline for this type of application. The three landmark projection methods all performed well, and produced nearly identical results. There was, however, a large difference in efficiency, with {\bf Newton} up to twice as slow as {\bf Lifting} and {\bf Static}. In the context of continuous-time structure from motion, we therefore recommend that researchers use the {\bf Static} projection method, since it is both the fastest and the easiest to implement. In other applications, e.g.\@\xspace, when the rolling shutter readout time is also calibrated for \citep{oth13}, the difference between the methods may be larger. Here, hybrid optimization schemes could be of interest, where a fast method is used initially, and a more accurate one is used in the final few iterations. In the experiments, all reconstructions were started with a trajectory with constant position ${\bf p}(t) = {\bf 0}$ and orientation ${\bf R}(t) = {\bf I}$, and landmarks at infinity with $\rho_k = 0$. In contrast, discrete-time structure from motion requires a suitable initialization to get any meaningful result. We believe that this works because the addition of inertial measurements provides constraints on the shape of the trajectory which can force even a bad starting state into something useful. From the experiments it is clear that while this initialization-free start works quite well in general (at least for a spline defined on $\RplusSO3$), there are failure cases. In the future we would like to investigate more robust ways to perform initialization for visual-inertial fusion. On the synthetic {\bf Forward} and {\bf Sideways} datasets, we have observed a correlation between the velocity of the simulated vehicle and the final relative cost value.
We hypothesize that the lack of a zero-velocity point makes the estimation harder, since the integration from accelerometer measurements to velocity assumes an initial speed of $0$. If available, adding velocity measurements to the optimization could be a way to remedy this. \begin{acks} The authors would like to thank Andreas Robinson for designing the IMU logger, and for helping out with the radio controlled car. \end{acks} \begin{funding} This work was funded by the Swedish Research Council through projects LCMM (2014-5928) and EMC2 (2014-6227). \end{funding} \theendnotes \bibliographystyle{SageH}
\section{Introduction} \label{sec:intro} Deep neural networks (DNNs) are being deployed at an increasing scale---across the cloud and IoT platforms---to solve complex regression and classification problems in image recognition~\cite{simonyan2014very}, speech recognition~\cite{amodei2015deep}, language translation~\cite{wu2016google}, and many more fields, with accuracy close to and even surpassing that of humans~\cite{karpathy2015deep,toshev2014deeppose,farabet2013learning}. Tight latency, throughput, and energy constraints when running DNNs have led to a meteoric increase in hardware accelerators. DNN accelerators achieve high performance by exploiting parallelism over hundreds of processing elements (PEs) and high energy efficiency by maximizing data reuse within PEs and on-chip scratchpads~\cite{eyeriss_isca,chen2014diannao, nvdla,parashar2017scnn,sharma2016high,jouppi2017datacenter}. For a specific DNN workload and a hardware accelerator, the achieved utilization and data-reuse directly depends on (1) how we schedule the DNN computations (e.g., choice of loop transformations) and (2) how we map computations across PEs. These two components are collectively referred to as \textit{dataflow} in the accelerator literature~\cite{eyeriss_isca, parashar2017scnn, lu2017flexflow, kwon2018maeri}. It has been shown that the energy cost of moving data exceeds the cost of computation~\cite{eyeriss_isca, gao2017tetris}, and so understanding and optimizing dataflow is a critical component of DNN accelerator design, as it directly determines how data is transferred between multipliers (L0), staged in local buffers (L1), and in the global buffer hierarchy (L2 and beyond). The performance and energy efficiency of DNN accelerators depend on (1) target DNN model and its layers types/dimensions, (2) dataflow, and (3) available hardware resources and their connectivity. 
These three dimensions are tightly coupled, and optimizing DNN accelerators across them is a challenging task. For example, a dataflow that exploits input channel parallelism~\cite{nvdla} in convolutional neural networks (CNNs) may not achieve high utilization on layers with a small number of channels. Alternatively, dataflows that require more transfer bandwidth than the network-on-chip (NoC) provides may result in under-utilization of the hardware. In such cases, increasing the L1 scratchpad size may allow the same dataflow to require less data bandwidth, but a larger L1 may increase area and energy consumption. Thus, co-optimizing the hardware microarchitecture and the dataflows it supports is one of the primary optimization targets for any accelerator design. This remains an open challenge, as evidenced by the number of novel dataflows and microarchitectures that continue to be proposed~\cite{lu2017flexflow, gao2017tetris, mahmoud2018diffy, chen2018eyeriss}. Regrettably, these proposals do not cover the complete space of dataflows at an exhaustive-enough level to serve as a reference for architects designing custom accelerators under a variety of constraints. In contrast, recent proposals on compilation~\cite{rstream:polyhedral,chen:tvm} and analysis tools~\cite{parashar2019timeloop} for DNNs analyze a broad space of software mappings of a DNN workload onto a given architecture, but the relationship between software mappings and hardware dataflows is not elucidated, and these black-box tools do not give architects intuition about the consequences of dataflow selection and its impact on reuse. In fact, the very term ``dataflow'' is used inconsistently across both architecture and analysis proposals. Architects are thus left with an incomplete and unstructured set of intuitions on dataflows and the complex interplay between dataflow and microarchitecture choices. 
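As a concrete illustration of the utilization argument above, consider a first-order sketch of what happens when a channel-parallel dataflow meets a layer with few channels. The helper below is our own simplification for illustration (the name `spatial_utilization` is hypothetical, and real utilization also depends on tiling, NoC bandwidth, and edge effects):

```python
def spatial_utilization(parallel_dim_size: int, num_pes: int) -> float:
    """Fraction of PEs doing useful work when a dataflow parallelizes a
    single layer dimension (e.g., input channels C) across the PE array.
    First-order sketch: ignores remainder and bandwidth effects."""
    if parallel_dim_size >= num_pes:
        # Enough parallel work to keep every PE busy.
        return 1.0
    return parallel_dim_size / num_pes

# A channel-parallel (NVDLA-like) dataflow on an early CNN layer with
# C = 3 input channels leaves most of a 64-PE array idle ...
print(spatial_utilization(3, 64))
# ... while a deeper layer with C = 256 channels keeps all PEs busy.
print(spatial_utilization(256, 64))
```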
In this paper, we seek to remedy this situation by providing a thorough set of insights on the choices and consequences of dataflow selection and their interplay with microarchitectural alternatives, together with a structured mechanism to reason about them quantitatively. To that end, we make the following specific contributions. First, we introduce a {\em data-centric} notation to represent various accelerator dataflows, with data mappings and reuses as first-class entities, unlike the compute-centric notation used by prior proposals, which infer data reuse from a loop-nest representation~\cite{ma2017optimizing, chen2018eyeriss, lu2017flexflow, parashar2019timeloop}. These data-centric directives can express a wide range of data reuse (across space, time, and space-time) over arbitrary hierarchies of PEs for both dense and sparse DNN layers such as convolutions, LSTMs, and fully-connected layers. We believe that our data-centric notation can complement the commonly used loop-nest notation, i.e., our notation can be viewed as an intermediate representation (IR) that can be extracted from a high-level loop-nest notation or specified directly. 
Second, we show how these data-centric directives can be used to reason about reuse in a structured manner. We demonstrate the relationship between each directive, the specific form of algorithmic reuse exposed by the directive, and the potential ways to exploit that reuse using a hardware capability to improve efficiency. This analysis covers the complete space of ways in which any dataflow can exploit reuse. Third, we introduce an analytical cost model named \textsc{MAESTRO}\xspace (Modeling Accelerator Efficiency via Spatio-Temporal Reuse and Occupancy) that programmatically implements the above analysis. \textsc{MAESTRO}\xspace takes as input (1) a DNN model with a set of layers, (2) a dataflow description for each layer specified using our proposed directives, and (3) the hardware configuration. Based on these inputs, \textsc{MAESTRO}\xspace outputs estimates of end-to-end execution time, energy (including all compute, buffer, and interconnect activities), NoC costs, and so on. A key challenge in our proposed approach is to provide a cost estimation that is both efficient and sufficiently precise to effectively support design space exploration. We demonstrate that \textsc{MAESTRO}\xspace's abstract hardware model and analytic model are within 90--95\% accuracy of actual open-source RTL~\cite{kwon2018maeri} while being 1029--4116$\times$ faster (10ms to run \textsc{MAESTRO}\xspace versus 7.2--28.8 hours for an equivalent RTL simulation on a workstation with a Xeon E5-2699 processor and 64GB memory). Finally, we demonstrate how the \textsc{MAESTRO}\xspace cost model can be used by accelerator designers to determine Pareto-optimal parameters for an accelerator with a given area, energy, or throughput budget. For an NVDLA~\cite{nvdla}-like dataflow (KC-Partitioned in~\autoref{table:EvalDataflows}) in VGG16~\cite{VGGnet} CONV layer 11, we see up to a 2.16$\times$ difference in power consumption between energy- versus throughput-optimized design points. 
The energy-optimized design employs 10.6$\times$ more SRAM and 80\% of the PEs of the throughput-optimized design. This leads to an energy-delay product improvement of 65\%, at 62\% of the throughput. The range of these numbers is a concrete example of the significance of this problem for accelerator architects. \section{Describing Dataflows} \label{sec:dataflow-IR} To explicitly describe the key aspects of dataflows, we propose a {\em data-centric} IR that clearly describes (1) the data iteration schedule, (2) data tiling, and (3) data mapping on PEs, which compute-centric representations in loop-nest form leave implicit. The IR is based on three data-centric dataflow directives -- spatial map, temporal map, and cluster -- that describe how each dimension of data iterates over time (iterations) and space (PEs), how large a mapping over each iteration is for each dimension, and how we organize the granularity of the mapping over space (PE clustering). We discuss the syntax and semantics of these three directives in the following subsection. \subsection{Data-centric Dataflow Directives} \label{subsec:dataflow-directives} Because data in DNN accelerators are high-dimensional tensors, we take a divide-and-conquer strategy: we describe the tiling and mapping of each dimension and stack them to represent the tiling and mapping of the entire data in a data class (input/weight/output), as presented in~\autoref{fig:DataMappingExample} (b). We represent the data iteration order using the order of dataflow directives for each dimension, similar to the loop order in the compute-centric representation. 
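The ordered, per-dimension directive stack described above can be sketched as a list of plain records. The Python names below are our own illustration of the concept, not MAESTRO's actual implementation:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Directive:
    kind: str                # "SpatialMap", "TemporalMap", or "Cluster"
    size: int                # elements mapped per step (or cluster size)
    offset: Union[int, str]  # shift per step for maps; "L"/"P" type for clusters
    dim: str = ""            # data dimension (e.g., "X", "K"); empty for Cluster

# A dataflow is an ordered stack of directives; the order encodes the
# data iteration schedule, analogous to loop order in a loop nest.
dataflow = [
    Directive("SpatialMap", 1, 1, "Y"),
    Directive("TemporalMap", 3, 1, "X"),
    Directive("Cluster", 3, "P"),
]
print([(d.kind, d.dim) for d in dataflow])
```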
Although the schedule implied by the order of data/compute dimensions is similar, the temporal and spatial map directives innately encapsulate the tiling and mapping over time and space, while the cluster directive enables manipulating the spatial granularity of the mapping. We discuss the syntax and semantics of each directive next. \subsubsection{Cluster: Dimensionality in PE array} \label{subsubsec:cluster} \colonparagraph{Syntax} \texttt{Cluster(size, type)}, where \texttt{size} is an integer and \texttt{type} is either L (logical) or P (physical). \colonparagraph{Semantics} The base clusters at the inner-most directive are logical PEs. Cluster directives are processed from the innermost one outward; each bundles \texttt{size} sub-clusters at the level where the \texttt{cluster} is specified to construct super-clusters, which become the unit spatial dimension for directives outside the \texttt{cluster}. If the \texttt{type} is physical, the super-cluster constructed by the \texttt{cluster} becomes the unit physical compute unit, or physical PE. If no physical cluster is defined, the logical PEs are treated as physical PEs. \colonparagraph{Implication} \texttt{cluster} enables describing mappings over a multi-dimensional processing element array with static/dynamic grouping of PEs. For example, Eyeriss~\cite{eyeriss_isca} has a 2D PE array and constructs a dynamic cluster over rows to cover the row dimension of a sliding window. Two \texttt{cluster} directives can be used to describe such clustering. \colonparagraph{Example} In the example in~\autoref{fig:DirectiveExample} (c), two \texttt{cluster} directives are defined, in the order \texttt{cluster(3,P)} and \texttt{cluster(2,L)}. Because clusters are constructed from the inner-most level, \texttt{cluster(2,L)} first operates on the 12 logical PEs and generates level 1 clusters that contain two logical PEs each. 
Other directives between \texttt{cluster(3,P)} and \texttt{cluster(2,L)} operate on the level 1 clusters. \texttt{cluster(3,P)} groups three level 1 clusters to construct two level 2 clusters, and all directives above \texttt{cluster(3,P)} operate on the level 2 clusters. Because of the type \texttt{P} specified in \texttt{cluster(3,P)}, each level 2 cluster is mapped onto a physical PE. Logical PEs within a level 2 cluster, i.e., a physical PE, potentially operate as vector lanes of ALUs, depending on the vector lanes of the target hardware. \subsubsection{Temporal Map: Tiling and mapping over time} \label{subsubsec:temporal-map} \colonparagraph{Syntax} \texttt{TemporalMap (size, offset) x}, where \texttt{size} and \texttt{offset} are integers and \texttt{x} is a data dimension. \colonparagraph{Semantics} A \texttt{TemporalMap} directive specifies (1) the number of elements mapped in dimension x (\texttt{size}) and (2) how the mapping moves in the next iteration of x (\texttt{offset}), applied identically to all of the sub-clusters. \colonparagraph{Implication} \texttt{TemporalMap} maps the same set of elements in a dimension to all the sub-clusters. It expresses data tiling using \texttt{size} and the tile iteration rule over time using \texttt{offset}. \colonparagraph{Example} \texttt{TemporalMap(3,1) X} in~\autoref{fig:DirectiveExample} (a) indicates that three elements in the X dimension are mapped in each iteration, and the mapping shifts by one in the next iteration. That is, the mapped x over time is (0,1,2), (1,2,3), (2,3,4), and so on. \subsubsection{Spatial Map: Tiling and mapping over space} \label{subsubsec:spatial-map} \colonparagraph{Syntax} \texttt{SpatialMap (size, offset) x}, where \texttt{size} and \texttt{offset} are integers, and \texttt{x} is a data dimension. \colonparagraph{Semantics} A \texttt{SpatialMap} directive specifies (1) the number of elements mapped in dimension x (\texttt{size}) and (2) how the mapping moves across consecutive sub-clusters (\texttt{offset}). 
When the size of the spatial dimension (the number of sub-clusters) is not sufficient at the level where \texttt{SpatialMap} is specified, the spatial mapping is folded over time. \colonparagraph{Implication} \texttt{SpatialMap} maps different sets of elements in a dimension to sub-clusters. It expresses data tiling using \texttt{size} and the tile iteration rule over space using \texttt{offset}. \colonparagraph{Example} \texttt{SpatialMap(3,1) Y} in~\autoref{fig:DirectiveExample} (b) indicates that three elements in the Y dimension are mapped on each cluster (in this example, logical PEs) and the mapped elements shift by one over space (PEs). That is, the mapped y on each PE is as follows: PE0: (0,1,2), PE1: (1,2,3), PE2: (2,3,4), ..., PE5: (5,6,7). Because the number of PEs is not sufficient to cover the entire Y dimension, the mapping is folded over time. \subsection{Data Movement Order} \label{subsec:data_movement_order} The schedule of tile iterations, or data movement order, is one of the key aspects of a dataflow, as it determines temporal reuse. The dimension in the outer-most loop in a loop-nest representation, or the outer-most directive in a data-centric representation, changes at the slowest rate among all data dimensions, and vice versa for the dimension in the inner-most loop or directive. That is, by comparing the relative positions of loops or directives, we can identify which dimension changes relatively faster or slower in a given dataflow description. To infer stationary data from the directive order, we need to consider another aspect: the coupling of dimensions with data classes. We present how each of the seven dimensions correlates with each data class (input feature map, filter weight, and output feature map) in~\autoref{fig:7DConv}. If the dimension whose mapping is changing is not coupled with a data class, that data class is stationary between the two mappings (original and changed), i.e., over time. 
For example, if only the output channel mapping is updated between two sets of mappings, the input feature map does not change between the two mappings, since the input feature map does not have the output channel dimension among its data dimensions. \subsection{Dataflow Playground} \label{subsec:dataflow_playground} In this subsection, we illustrate our data-centric representation of dataflows with examples. {\bf Spatial and temporal map}: These mappings are used to specify data distributions of dimensions of data classes across PEs and time. A spatial map corresponding to a dimension of a data class specifies a distribution of the dimension across PEs. For example, {\tt SpatialMap(1,1)X'} of mapping A in~\autoref{fig:Dataflow_Examples} (where {\tt X'} refers to the first dimension of the output data class) spatially distributes indices of the {\tt X'} dimension with a chunk size of one (derived from the {\tt size} argument) across PEs. The distribution can be viewed from the data space of the output feature map in the same column of mapping A in~\autoref{fig:Dataflow_Examples}, where PEs get consecutive indices of {\tt X'} in a given time step. Similarly, a temporal map specifies a distribution of the dimension across time steps in a given PE, and the mapped chunk of dimension indices is the same across PEs in a time step. For example, {\tt TemporalMap(1,1)S} of mapping A in~\autoref{fig:Dataflow_Examples} (where {\tt S} refers to the first dimension of the filter weight data class) distributes chunks of one index of {\tt S} across time steps in a given PE. The distribution can be viewed from the data space of the filter weight, where all PEs get the same chunk of indices of {\tt S} in a given time step, but consecutive indices in later time steps. Since all PEs get the same data values corresponding to a temporally mapped dimension, this creates opportunities for spatial reuse, i.e., spatially reusing the same data values across PEs in a time step. 
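The chunk-generation semantics shared by the two map directives can be sketched with a small helper (`map_chunks` and its signature are our own illustration, not part of MAESTRO): step $i$ is a time step for a temporal map and a PE index for a spatial map.

```python
def map_chunks(size, offset, dim_bound, num_steps):
    """Indices covered by a (size, offset) map across steps.
    Step i (a time step for TemporalMap, a PE for SpatialMap) receives
    indices [i*offset, i*offset + size), clipped to the dimension bound."""
    return [tuple(x for x in range(i * offset, i * offset + size) if x < dim_bound)
            for i in range(num_steps)]

# TemporalMap(3,1) X: every PE sees (0,1,2), then (1,2,3), then (2,3,4), ...
print(map_chunks(3, 1, dim_bound=8, num_steps=3))

# SpatialMap(3,1) Y across six PEs: PE0 gets (0,1,2), ..., PE5 gets (5,6,7);
# if the Y bound exceeded what six PEs cover, the mapping would fold over time.
print(map_chunks(3, 1, dim_bound=8, num_steps=6))
```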
{\bf Data movement order}: The sequence of spatial and temporal maps in the dataflow specification dictates the order of data movement, i.e., the change in the data mappings to PEs across time. A change in the order can result in an entirely different stationary behavior. For example, the sequence of directives in mapping A in~\autoref{fig:Dataflow_Examples}, i.e., a spatial map on {\tt X'} followed by a temporal map on {\tt S}, indicates that all data indices of {\tt S} should be explored before working on the next chunk of {\tt X'} indices. This order results in temporally reusing the data values corresponding to {\tt X'} indices, i.e., partial sums, for the entire chunk of {\tt S}, leading to an output-stationary dataflow. This behavior can be observed from the iteration space for mapping A in~\autoref{fig:Dataflow_Examples}. If the order of directives is interchanged, i.e., a temporal map on {\tt S} followed by a spatial map on {\tt X'} as shown in mapping B in~\autoref{fig:Dataflow_Examples}, the result is a weight-stationary dataflow, because PEs can temporally reuse the data values corresponding to {\tt S} indices, i.e., weight values, for the entire chunk of {\tt X'} indices before going to the next chunk of {\tt S} indices. Similarly, mappings C and D in~\autoref{fig:Dataflow_Examples} show the spatial distribution on {\tt S} instead of {\tt X'}, and also the impact of data movement order on temporal reuse, leading to different stationary dataflows. {\bf Impact of non-unit map sizes in spatial and temporal maps}: In all of the mappings A-D in~\autoref{fig:Dataflow_Examples}, the mapping sizes (first argument) of all mappings are one -- resulting in either no temporal reuse (e.g., partial output sums in mapping B) or full temporal reuse (e.g., the input feature map in mapping C). 
Increasing the map size of the spatial or temporal maps can present opportunities for partial temporal reuse, which can capture convolutional reuse in CNN layers. For example, the spatial map corresponding to the {\tt S} dimension in mapping E in~\autoref{fig:Dataflow_Examples} helps in exploiting partial temporal reuse of input data across time steps. {\bf Exploiting multi-dimensional spatial distributions}: As can be seen from the data movement orders in mappings A-E in~\autoref{fig:Dataflow_Examples}, data mappings related to an outer map get updated after full exploration of the inner map. For example, the spatial map on {\tt S} followed by the temporal map on {\tt X'} in mapping E indicates that all data indices of {\tt X'} should be explored completely before working on the next chunk of {\tt S} indices. This inherent assumption can limit certain dataflow behaviors where one might be interested in simultaneously exploiting spatial distributions of more than one data dimension. So, we introduce the {\em Cluster} directive as a means to support simultaneous spatial distribution of multiple data dimensions. The cluster directive works by logically grouping multiple PEs or multiple sub-clusters; e.g., cluster(3) in mapping F in~\autoref{fig:Dataflow_Examples} groups the available PEs in groups of three, resulting in two clusters. The advantage of the cluster directive is that mappings specified before the directive are applied across the logical clusters resulting from the grouping, while mappings specified after the directive are applied inside each logical cluster. With this intuition, one can specify spatial maps both before and after the cluster to explore multi-dimensional spatial distributions. An example of this can be seen in mapping F in~\autoref{fig:Dataflow_Examples}, where the {\tt X'} dimension is spatially distributed across clusters, and the {\tt S} dimension is spatially distributed within each cluster. 
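The grouping behavior of the cluster directive can be sketched as follows. `make_clusters` is an illustrative helper of our own, assuming the unit count divides evenly by the cluster size:

```python
def make_clusters(units, cluster_size):
    """Group PEs (or sub-clusters) bottom-up into clusters of cluster_size,
    mirroring how the Cluster directive builds super-clusters."""
    assert len(units) % cluster_size == 0, "sketch assumes even division"
    return [units[i:i + cluster_size] for i in range(0, len(units), cluster_size)]

# cluster(3) over six PEs, as in mapping F: two clusters of three PEs each.
# Maps before the directive distribute across clusters; maps after it, within each.
print(make_clusters(list(range(6)), 3))      # [[0, 1, 2], [3, 4, 5]]

# Nested clustering, as in the cluster(2,L)/cluster(3,P) example: twelve
# logical PEs -> six level 1 clusters of two -> two level 2 clusters of three.
level1 = make_clusters(list(range(12)), 2)
level2 = make_clusters(level1, 3)
print(len(level1), len(level2))              # 6 2
```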
We use the cluster directive to represent the popular row-stationary dataflow in Eyeriss~\cite{eyeriss_isca}, since it involves the spatial distribution of the R and Y dimensions simultaneously, as well as the NVDLA dataflow~\cite{nvdla}, which involves the spatial distribution of the K and C dimensions. {\bf Support for coarse-grained PEs}: Another advantage of our cluster directive is that its notion of grouping multiple PEs can represent coarse-grained PEs in our abstract hardware model, such as SIMD units~\cite{song2016cbrain} and matrix tensor accelerators like GPU tensor cores. By representing such coarse-grained PEs using our cluster directive, our cost model for energy and throughput estimation does not need to be modified. \insertWideFigure{Dataflow_Examples}{The impact of directive order, spatial/temporal maps, tile sizes, and clustering.} \input{tables/HW_Implication_table.tex} \insertTableFigure{HW_Impl_choices}{Hardware implementation choices for supporting spatial and temporal reuse. Note: by {\it temporal multicast}, we refer to {\it stationary} buffers from which the same data is read over time.} \subsection{Legal Dataflows} \label{subsec:dataflow_legality} The dataflow description directives are flexible enough that users can specify any data mapping that follows a uniform pattern for each dimension. However, just as the flexibility of programming languages allows users to write buggy code, dataflow directives allow users to specify illegal dataflows. Legal dataflows must meet the following conditions: \colonparagraph{Bound condition} The bound condition is the first requirement, prohibiting mappings of non-existing indices. For example, in the example layer presented in~\autoref{fig:7DConv}, the number of output channels (K) is four. 
Therefore, TemporalMap(5,5) K is an illegal mapping directive because the mapping size (five) in the directive exceeds the bound of the K dimension (four). \colonparagraph{Coverage condition} The coverage condition requires the data mapping to cover all the pairs of operands (inputs and weights in CNNs) needed to generate the desired outputs. For example, in the dataflow in~\autoref{fig:DataMappingExample}, if we replace the second directive, TemporalMap(2,2) K, with TemporalMap(2,4) K, the dataflow becomes illegal because it does not cover all output channels (K). \textsc{MAESTRO}\xspace prints out warning messages when the dataflow does not cover all the pairs of operands, assuming CNNs as the target program. However, some cases, such as CNNs with stride larger than one or dilated convolutions, intentionally drop some data points to implement their functionality. Therefore, users need to carefully review the dataflow when they use \textsc{MAESTRO}\xspace to model such cases. \colonparagraph{No redundancy condition} The no-redundancy condition prevents redundant data mappings that produce the same output as previously mapped data points. That is, once a set of operands is mapped on a PE, all the computation using those operands needs to be done at that PE. For example, in the dataflow in~\autoref{fig:DataMappingExample}, if we replace the third directive, TemporalMap(3,3) C, with TemporalMap(2,1) C, the dataflow becomes illegal because a redundant input channel is mapped along two temporal iterations of C. \subsection{The MAESTRO Tool} The input to \textsc{MAESTRO}\xspace consists of a DNN model, a hardware description, and dataflows described in the data-centric directives discussed in~\autoref{sec:dataflow-IR}. \autoref{fig:DFSLExample} shows a sample specification of the VGG16 model as an input to the framework. 
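The three legality conditions above can be sketched for a single TemporalMap directive. The check below is a simplified illustration of our own (not \textsc{MAESTRO}\xspace's actual validator); the `sliding_dim` flag marks dimensions such as X/Y where an offset smaller than the size is intentional sliding-window (convolutional) reuse rather than redundancy.

```python
def check_temporal_map(size, offset, dim_bound, sliding_dim=False):
    """Simplified legality check for TemporalMap(size, offset) on one
    dimension with the given bound."""
    if size > dim_bound:
        return "bound violation"        # maps non-existing indices
    if offset > size:
        return "coverage violation"     # skips indices between iterations
    if offset < size and not sliding_dim:
        return "redundancy violation"   # remaps already-processed indices
    return "legal"

print(check_temporal_map(5, 5, dim_bound=4))   # TemporalMap(5,5) K with K = 4
print(check_temporal_map(2, 4, dim_bound=4))   # TemporalMap(2,4) K skips channels
print(check_temporal_map(2, 1, dim_bound=3))   # TemporalMap(2,1) C remaps a channel
print(check_temporal_map(3, 1, dim_bound=6, sliding_dim=True))  # sliding window on X
```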
Each layer of the model also includes a description of its dataflow using a sequence of the data-centric directives discussed in~\autoref{subsec:dataflow-directives}. In addition, a hardware description of the target accelerator, such as the number of PEs, the sizes of the L1/L2 buffers, and the bandwidth of the NoC, is specified. \textsc{MAESTRO}\xspace supports a wide range of layers, including convolutional, pooling, fully-connected, and so on. \textsc{MAESTRO}\xspace also models sparse DNN layers by specifying a percentage of sparsity for each data class, assuming a uniform distribution of zeros. \subsection{Outputs} \label{subsec:maestro_outputs} \autoref{fig:OutputCapture} shows an example output of \textsc{MAESTRO}\xspace analysis. \textsc{MAESTRO}\xspace prints out (TBA). \textsc{MAESTRO}\xspace also generates csv files that contain the cost information of valid design points explored during design space exploration when a user activates DSE. \subsection{Supported Dataflows} \label{subsec:dfcasestudy} \textsc{MAESTRO}\xspace can model a variety of layers (LSTM hidden layers, pooling, fully-connected, separable convolution, and so on) thanks to the generality of our data-centric approach, which specifies a mapping of input tensors. For an LSTM hidden layer, we use the input width (dimension X) to specify the input size to the hidden layer and the input channel (dimension C) to specify the different gates. For convolution with stride, pooling layers, and transposed convolution, users need to specify the stride, pooling size, and expansion factor, respectively. \textsc{MAESTRO}\xspace also models uniformly distributed sparsity for any supported dataflow. \textsc{MAESTRO}\xspace does not support programs with non-affine indices, because most DNNs have only affine data indices. Also, \textsc{MAESTRO}\xspace does not support non-uniform tiling that maps a different number of data points to each PE. However, such mappings are very rare, because PEs are regular and load balancing is required across PEs. 
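To first order, uniform-sparsity modeling of the kind described above can be sketched as follows. The independence assumption and the helper name are ours, for illustration only, and not necessarily how \textsc{MAESTRO}\xspace computes its sparse-layer estimates:

```python
def expected_effective_macs(total_macs, input_sparsity, weight_sparsity):
    """Expected number of MACs with two nonzero operands, assuming zeros
    are uniformly distributed and independent across the two data classes."""
    return total_macs * (1.0 - input_sparsity) * (1.0 - weight_sparsity)

# With 50% zeros in both inputs and weights, only a quarter of the MACs
# touch two nonzero operands.
print(expected_effective_macs(1_000_000, 0.5, 0.5))  # 250000.0
```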
\section{Describing Dataflows} \label{sec:dataflow-IR} \insertWideFigure{Dataflow_Examples}{The impact of directive order, spatial/temporal maps, tile sizes, and clustering on the 1D convolution presented in~\autoref{fig:Unrolled1DConvExample}. The first row shows the mapping described using our data-centric directives. The second row shows iteration spaces whose points correspond to partial sums. Rows three to five show the data mapping of each data structure. Finally, we describe the temporal and spatial reuse opportunities arising from each mapping.} Our data-centric representation consists of four key directives: 1) spatial map, 2) temporal map, 3) data movement order, and 4) clusters. We briefly explain the key directives using 1D convolution (shown in~\autoref{fig:Unrolled1DConvExample}(a)) as a pedagogical example, and then discuss various hardware implementation choices for supporting a wide range of data reuse across space, time, and space-time. \subsection{Data-Centric Representation} \label{subsec:dataflow-directives} We define the dataflow of an accelerator design to consist of two major aspects: (1) the schedule of DNN computations (e.g., choice of loop transformations) across time for exploiting a wide range of reuse, and (2) the mapping of the DNN computations across PEs for parallelism. The representation is based on four key components; we briefly discuss the first three below. The fourth component, {\tt Cluster}, is introduced in \autoref{subsec:dataflow_playground}. \begin{enumerate} \item {\bf Spatial Map(size, offset) $\pmb{\alpha}$} specifies a distribution of dimension $\alpha$ (e.g., {\tt R}, {\tt X}) of a data structure across PEs, where {\tt size} refers to the number of indices of dimension $\alpha$ mapped to each PE, and {\tt offset} describes the shift in the starting indices of $\alpha$ across consecutive PEs.
\item {\bf Temporal Map(size, offset) $\pmb{\alpha}$} specifies a distribution of dimension $\alpha$ of a data structure across time steps in a PE; the mapped chunk of dimension indices is the same across PEs in a given time step. The {\tt size} refers to the number of indices of dimension $\alpha$ mapped to each PE, and {\tt offset} describes the shift in the starting indices of $\alpha$ across consecutive time steps in a PE. \item {\bf Data Movement Order:} The sequence of spatial and temporal maps in the dataflow specification dictates the order of data movement, i.e., the change of the data mappings to PEs across time. \end{enumerate} We demonstrate the reuse opportunities presented by various dataflows using the 1D convolution example in~\autoref{fig:Unrolled1DConvExample}(a). We start by creating a unique dataflow for this program via the loop nest representation in~\autoref{fig:Unrolled1DConvExample}(b), assuming the accelerator has a 2-level hierarchy (L0 register at each PE + L1 local scratchpad buffer). The two loops enclosed in the red box indicate the mapping over the PEs, and their corresponding data-centric representation is shown in~\autoref{fig:Unrolled1DConvExample}(c) and (d). As can be seen from~\autoref{fig:Unrolled1DConvExample}(e), the data elements corresponding to outputs (dimension {\tt X'}) are spatially distributed across three PEs, i.e., each PE receives a different chunk of two output elements. This particular data distribution can be captured with our spatial map directive with both size and offset parameters set to 2, resulting in {\tt SpatialMap(2,2) X'}, where {\tt X'} is the first dimension of the output data structure. Also, the data elements corresponding to weights (dimension {\tt S}) are replicated across multiple PEs, i.e., each PE receives the same chunk of three weight elements in the first iteration and receives a different chunk of weight elements in subsequent iterations.
This particular replicated, temporal distribution can be captured with our temporal map directive with both size and offset parameters set to 3, resulting in {\tt TemporalMap(3,3) S}, where {\tt S} is the first dimension of the weight data structure. Putting it together, the spatial map on {\tt X'} followed by the temporal map on {\tt S} captures the data mapping and movement behavior across PEs and time corresponding to the two loops in the loop-nest version; these two directives are enclosed in the red box in~\autoref{fig:Unrolled1DConvExample}(c). Each data-centric representation is a complete description of a unique dataflow. \subsection{Dataflow Playground} \label{subsec:dataflow_playground} We build six example dataflows upon the simple 1D convolution discussed in~\autoref{fig:Unrolled1DConvExample}(d) to demonstrate how small changes to a dataflow expose various forms of reuse---both spatial and temporal. \autoref{fig:Dataflow_Examples} illustrates these six example dataflows, which consist of a base dataflow, \autoref{fig:Dataflow_Examples}(A), and its variants. We modify the directive order, the spatially/temporally mapped dimensions, the mapping size, and PE clustering, and discuss their impact on data reuse. \betterparagraph{Directive Order} A change in directive order can result in entirely different temporal reuse (or, stationary behavior). For example, the sequence of directives in the mapping in~\autoref{fig:Dataflow_Examples}(A) indicates that all data indices of {\tt S} should be explored before working on the next chunk of {\tt X'} indices. This order results in temporally reusing the values of data corresponding to {\tt X'} indices (i.e., partial sums) for all indices of {\tt S}. Therefore, this dataflow is informally referred to as output-stationary, with the computation partitioned across multiple outputs in parallel. \autoref{fig:Dataflow_Examples}(B) shows the impact of interchanging the order of directives.
This results in a weight-stationary dataflow, because PEs can temporally reuse the weight values corresponding to {\tt S} indices for all indices of {\tt X'} before moving to the next chunk of {\tt S} indices. Similarly, \autoref{fig:Dataflow_Examples}(C) and (D) show the spatial distribution on {\tt S} instead of {\tt X'}, as well as the impact of data movement order on temporal reuse, leading to different dataflow variations. This indicates why an informal dataflow name should not be taken as a complete and precise specification of its behavior. \betterparagraph{Spatially and Temporally Mapped Dimensions} In~\autoref{fig:Dataflow_Examples}(A), the directive {\tt SpatialMap(1,1) X'} (where {\tt X'} refers to the first dimension of the output data structure) spatially distributes indices of the {\tt X'} dimension with a chunk size of one (the {\tt size} parameter) across PEs with an offset of one (the {\tt offset} parameter). This means that each PE works on a different column of the output data space. If the number of PEs is not sufficient to cover all indices of the mapped dimension, then the mapping is folded over time across the same set of PEs. Also, if the {\tt offset} value is smaller than the {\tt size} value, then there is an overlap of indices across consecutive PEs; this is useful for describing mappings on the input activation dimensions X and Y because their iteration space is skewed. Similarly, {\tt TemporalMap(1,1) S} (where {\tt S} refers to the first dimension of the filter weight data structure) distributes indices of the {\tt S} dimension with a chunk size of one across time steps with an offset of one. This means that each PE works on the same column of the weight data space. Since all PEs get the same data indices corresponding to a temporally mapped dimension, this creates an opportunity for {\em spatial reuse}, i.e., multicasting the same data values across PEs in a time step.
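The index-chunking behavior of the two map directives, including folding over PEs and overlap when {\tt offset} is smaller than {\tt size}, can be sketched in a few lines of Python. This is our own minimal rendering of the semantics as described above, not code from the tool; function names are ours.

```python
def map_chunks(dim_size, size, offset):
    """Index chunks produced by a Map(size, offset) directive over one
    dimension of extent dim_size. Consecutive chunks shift their start by
    `offset`; when offset < size, consecutive chunks overlap, which is how
    skewed input dimensions (X, Y) are described."""
    chunks, start = [], 0
    while start < dim_size:
        chunks.append(list(range(start, min(start + size, dim_size))))
        start += offset
    return chunks

def spatial_map(dim_size, size, offset, num_pes):
    """SpatialMap: consecutive chunks go to consecutive PEs; if there are
    more chunks than PEs, the mapping folds over time across the same PEs."""
    chunks = map_chunks(dim_size, size, offset)
    return [chunks[i:i + num_pes] for i in range(0, len(chunks), num_pes)]

def temporal_map(dim_size, size, offset):
    """TemporalMap: every PE receives the same chunk in a time step; the
    chunk advances across time steps."""
    return map_chunks(dim_size, size, offset)

# SpatialMap(2,2) X' with X' = 6 over three PEs: one fold step, each PE
# receiving a distinct chunk of two output indices.
print(spatial_map(6, 2, 2, 3))  # [[[0, 1], [2, 3], [4, 5]]]
```

For instance, `spatial_map(6, 1, 1, 3)` yields two fold steps over three PEs, and `map_chunks(5, 3, 1)` produces overlapping chunks such as `[0, 1, 2]` and `[1, 2, 3]`, mirroring the skewed input mappings discussed above.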
\betterparagraph{Mapping Size} In all of the mappings in \autoref{fig:Dataflow_Examples}(A-D), the mapping sizes (first argument) of weights and outputs are one, resulting in full temporal reuse of weights but no temporal reuse of outputs (e.g., mappings B and D) or vice versa (e.g., mappings A and C). There is no temporal reuse of inputs in any mapping. Increasing the map size of the spatial or temporal maps can present opportunities for partial temporal reuse, which can capture the convolutional reuse of inputs in CNN layers. For example, the spatial map corresponding to the {\tt S} dimension in \autoref{fig:Dataflow_Examples}(E) helps in exploiting partial temporal reuse of input data across time steps. \betterparagraph{PE Clustering for Multi-dimensional Spatial Distributions} As can be seen in \autoref{fig:Dataflow_Examples}(A-E), data mappings related to a map in the outer position get updated only after a full exploration of a map in the inner position. This inherent assumption can limit certain dataflow behaviors where one might be interested in simultaneously exploiting the spatial distribution of more than one data dimension. To address this, we introduce another directive called {\em Cluster} as a means to support the simultaneous spatial distribution of multiple data dimensions. The cluster directive logically groups multiple PEs, or nested sub-clusters when a dataflow has multiple cluster directives, into groups whose size is given by the {\tt size} parameter. For example, \textsc{Cluster}\xspace(3) in~\autoref{fig:Dataflow_Examples}(F) arranges the available PEs into groups of three, resulting in two clusters of three PEs each. All the mapping directives specified above a \textsc{Cluster}\xspace directive perform the mapping across the logical clusters created by that \textsc{Cluster}\xspace directive.
All the mapping directives specified below a \textsc{Cluster}\xspace directive perform the mapping across the PEs or lower-level logical clusters inside each logical cluster created by the \textsc{Cluster}\xspace directive. That is, all the mapping directives above a \textsc{Cluster}\xspace directive see logical clusters, while those below it see the \textit{inside} of each logical cluster. With this mechanism, one can specify complex dataflows with multiple parallelization dimensions represented by multiple \textsc{SpatialMap}\xspace directives (one at each cluster level). An example of this can be seen in~\autoref{fig:Dataflow_Examples}(F), where the {\tt X'} dimension is spatially distributed across clusters and the {\tt S} dimension is spatially distributed within each cluster. The cluster directive enables us to represent existing real-world accelerator dataflows, such as Eyeriss~\cite{eyeriss_isca}, which involves the spatial distribution of the R and Y dimensions simultaneously, and NVDLA~\cite{nvdla}, which involves the spatial distribution of the K and C dimensions. Another advantage of the cluster directive is that its notion of grouping multiple PEs can represent coarse-grained PEs in accelerators, such as SIMD units~\cite{song2016cbrain} and matrix-tensor accelerators like GPU Tensor Cores. In summary, we discussed five transformations that capture all possible aspects of dataflows: scheduling, tiling, and mapping. As shown in~\autoref{fig:Dataflow_Examples}, the data-centric directives can concisely represent all of these aspects. We envision that the data-centric representation could either be auto-generated from a loop nest version of the dataflow (with affine constraints) or be written manually. \insertWideTableFigure{DataReuseOpportunities}{Reuse opportunities based on spatially-mapped dimensions in combination with innermost temporally-mapped dimensions. Filters (F), Inputs (I), and Outputs (O) are considered separately.
For brevity, X/Y should be interpreted as X'/Y' as appropriate.} \insertTableFigure{HW_Impl_choices}{Hardware implementation choices for supporting spatial and temporal reuse. Note: by {\it temporal multicast}, we refer to {\it stationary} buffers from which the same data is read over time.} \subsection{Hardware Implications of Reuse} \label{subsec:hardware_implementation} As we discussed above, various data reuse opportunities appear depending on the dataflow. \autoref{table:DataReuseOpportunities} summarizes how such opportunities arise from the relationship between the spatially mapped dimension within a cluster (Map column) and the innermost temporally mapped dimension (InnerMap column). For example, if output channels (K) are spatially mapped, a decoupled data structure, the input feature map, does not change over space. That is, all the PEs receive the same input feature map, which implies a full spatial reuse opportunity (broadcast). In the same example, when the innermost temporally mapped dimension is the input channels (C), the input channel changes every iteration, which provides temporal reduction opportunities for outputs. Although a dataflow provides temporal or spatial data reuse opportunities, appropriate hardware support is required to actually exploit them. \autoref{table:HW_Impl_choices} summarizes four reuse categories and the corresponding hardware implementations that support them. As the table shows, reuse can be either spatial or temporal. Based on the data structure, the communication type can be either multicast (input tensors) or reduction (output tensors). Multicast is a communication type that delivers the same data to multiple targets over space (different PEs at the same time) or time (the same PE at different times). Multicast is therefore a one-to-many communication type, which requires either a fan-out network-on-chip structure, such as a bus or tree, or a ``stationary'' buffer to hold the data and deliver it at future time steps.
In contrast, reduction is a many-to-one communication type, which applies to partial sums to generate final outputs. Reduction can also be either spatial or temporal. Example hardware supporting spatial reduction includes a reduction tree or a reduce-and-forward chain, as in systolic arrays. Temporal reduction can be supported by a read-modify-write buffer. In summary, different dataflows (expressed via our directives) expose different forms of reuse: spatial and temporal, both for multicasts and reductions, which in turn can have multiple hardware implementations. Reasoning about dataflows in this structured manner exposes new insights and potential microarchitectural solutions. The discussion so far focused on a simple 1D convolution, which itself exposed many possible dataflows and reuse opportunities. We now extend this to a full convolution loop and analyze the reuse opportunities within a specific dataflow. \subsection{Extended Example: Row-stationary Dataflow} \insertWideFigure{EyerissDeepDive}{An extended example of a row-stationary style dataflow mapped on a six-PE accelerator. We select our own tile sizes for any not specified in the original work~\cite{eyeriss_isca}. We do not apply additional mapping optimizations to minimize PE under-utilization. Colors represent data replication either across time or space (PEs). Directives with asterisks indicate fully unrolled directives that cover the entire data dimension with one mapping.} \autoref{fig:EyerissDeepDive} presents the detailed mapping and reuse patterns across two unit time steps of an example row-stationary dataflow~\cite{eyeriss_isca} over a six-PE accelerator. The accelerator has two PE clusters with three PEs in each cluster. We use the same example layer previously used in~\autoref{fig:7DConv_New}. \autoref{fig:EyerissDeepDive}(a) and (b) are the compute- and data-centric representations of the row-stationary dataflow.
~\autoref{fig:EyerissDeepDive}(c) shows how the mapping moves across space (PE clusters) and time, and \autoref{fig:EyerissDeepDive}(d) shows the actual coordinates of each tensor across two time steps and two clusters (i.e., time and space). Each colored box in~\autoref{fig:EyerissDeepDive}(d) represents replicated data points, which imply reuse opportunities. Based on the replicated data points, we can infer data reuse over the PE array, as shown in the data reuse row of~\autoref{fig:EyerissDeepDive}(d). The mapping in~\autoref{fig:EyerissDeepDive}(d) shows that the same set of input activation values is replicated across the two clusters in a skewed manner within the same time step, which implies spatial reuse opportunities in the diagonal direction of the example PE array. Similarly,~\autoref{fig:EyerissDeepDive}(d) shows that the same set of weight values is replicated over two time steps within the same PE, which implies temporal reuse opportunities and a weight-stationary style dataflow at unit-time-step granularity. Note that the dataflow is still row-stationary at a coarse-grained time step, although it is weight-stationary at the unit time steps we define in~\autoref{fig:EyerissDeepDive}(a) and (b). Finally,~\autoref{fig:EyerissDeepDive}(d) shows the same set of output activations over the PEs in each PE cluster, which means that all the PEs in each cluster cooperate to generate a set of output activation data. That is, each PE in a PE cluster generates different partial sums for the same output activation, and these need to be accumulated across the PEs in each cluster to generate the final output activation values. Based on the example analysis in~\autoref{fig:EyerissDeepDive}, we observe that the data reuse pattern exactly matches the original work~\cite{eyeriss_isca}: reuse in the horizontal direction for filter weights, in the vertical direction for outputs (partial sum accumulation), and in the diagonal direction for input activations.
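The replication-based reasoning used above (replication across PEs within a time step implies spatial reuse; replication across time steps within a PE implies temporal reuse) can be mechanized. The following is a minimal Python sketch under our own naming, not code from the tool:

```python
from collections import defaultdict

def reuse_opportunities(mapping):
    """Infer reuse from replicated data points in a mapping.

    `mapping` maps (pe, time_step) -> set of data coordinates held by that
    PE in that time step. A coordinate replicated across PEs within one time
    step implies a spatial reuse opportunity (multicast); a coordinate
    replicated across time steps within one PE implies a temporal reuse
    opportunity (stationary buffering).
    """
    pes_holding = defaultdict(set)    # (time_step, coord) -> PEs holding it
    steps_holding = defaultdict(set)  # (pe, coord) -> time steps holding it
    for (pe, t), coords in mapping.items():
        for c in coords:
            pes_holding[(t, c)].add(pe)
            steps_holding[(pe, c)].add(t)
    spatial = {c for (_, c), pes in pes_holding.items() if len(pes) > 1}
    temporal = {c for (_, c), ts in steps_holding.items() if len(ts) > 1}
    return spatial, temporal
```

For example, a weight held by both PEs in one time step and also kept by one PE across two time steps would be reported under both spatial and temporal reuse, mirroring the weight-stationary observation above.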
In summary, reuse opportunities are based on data replicated across time or space (PEs), which implies temporal and spatial reuse opportunities, respectively. The examples in this section demonstrate the need for a fast, accurate quantitative methodology to compute reuse for complex dataflows.
\section*{Acknowledgement} We thank Joel Emer for insightful advice and constructive comments to improve this work, and Vivienne Sze and Yu-Hsin Chen for their insights and taxonomy that motivated this work. This work was supported by NSF Awards 1755876 and 1909900.
\newpage \section{Discussion and Future Work} \label{sec:conclusion} This work is motivated by the observation that co-optimizing a DNN accelerator's microarchitecture and its internal dataflow(s) is crucial for accelerator designers to achieve both higher performance and energy efficiency. In this work, we introduced data-centric directives to specify DNN dataflows in a compact form and to understand data reuse opportunities. We also presented an analytical model called \textsc{MAESTRO}\xspace to estimate the execution time, energy efficiency, and hardware cost of dataflows. We evaluated our analytical model against the MAERI and Eyeriss accelerators and found it to be highly consistent with cycle-accurate simulations and reported runtimes, which shows the soundness of the analytical model. We provided case studies on the costs and benefits of dataflow choices in five state-of-the-art DNN models, with a focus on the common DNN operators in them, showing diverse dataflow and hardware preferences, which motivates adaptive dataflow accelerators and heterogeneous accelerators. Finally, we also demonstrated the use of MAESTRO for design-space exploration of two dataflows in early and late layers, showing dramatically different hardware preferences for each layer. Our DSE tool based on \textsc{MAESTRO}\xspace enables fast DSE by skipping invalid designs, which led to a high average DSE rate of 0.17M designs per second. In the future, we plan to leverage \textsc{MAESTRO}\xspace to implement a dataflow auto-tuner to find an optimal dataflow for a specified DNN model and hardware configuration. With the optimal dataflow, we plan to extend our infrastructure to automatically generate RTL, facilitating an end-to-end DNN acceleration flow.
\section{Background} \label{sec:background} \insertFigure{7DConv_New}{Convolutional layer example} To understand the cost-benefit tradeoffs of various approaches to computing convolutions, we discuss core concepts related to data reuse and dataflows in the context of DNN accelerators. \subsection{Tensors in DNNs} \label{subsec:tensors} We present an example of a multi-channel 2D convolution in~\autoref{fig:7DConv_New} that involves seven data dimensions across three data structures: the input activation, output activation, and weight tensors. Although our approach can be applied to various DNN layers---CONV2D, fully-connected (FC), LSTM, separable convolution, and so on---we focus on CONV2D and its variants in this paper because convolutional neural networks (CNNs) are popular, and CONV2D accounts for more than 90\% of the overall computation in CNNs~\cite{cong2014minimizing, eyeriss_isca}. Tensors in DNNs are indexed using these seven dimensions in a complex manner. For example, the row/column indices of the output can be deduced from the input row/column and filter row/column indices (i.e., an input-centric view of the convolution loop nest). Also, the input channel index \texttt{c} appears in both the filter and the input activation, and the output channel index \texttt{k} appears in both the filter and the output activation. We call these data structures {\it coupled} to those indices, as the position in the data space changes when the index is modified. Because of these specific data access patterns, we can transform the loop nest to keep one of the data structures \emph{stationary} over a range of space or time (i.e., unchanged in a local buffer), which can significantly reduce global/local buffer access counts in DNN accelerators, as well as energy consumption, by keeping local wires unchanged. \insertFigure{HWModel}{Abstract DNN accelerator architecture model, which is pervasive in many state-of-the-art accelerators~\cite{eyeriss_isca, sharma2016high, parashar2017scnn, jouppi2017datacenter, aklaghi2018snapea}.
The illustrated base architecture can be hierarchically organized. } \subsection{DNN Accelerators} \label{subsec:DNNaccelerators} DNN accelerators are specialized architectures that run DNN applications with high throughput and energy efficiency. As described in~\autoref{fig:HWModel}, most DNN accelerators employ hundreds of processing elements (PEs) to exploit the inherent parallelism in DNN applications. PEs typically include scratchpad memories (L1) and ALUs that perform multiply-accumulate operations (MACs). To reduce energy- and time-consuming DRAM accesses, most DNN accelerators also include a shared scratchpad buffer (L2) large enough to stage data to feed all the PEs. The shared L2 buffer and the PEs are interconnected with a network-on-chip (NoC). Our approach supports a wide range of interconnect designs in the NoC module. For example, a systolic array could be represented as a 2D array that provides unidirectional links toward the east and south. Depending on the hardware parameters selected, our approach can support architecture designs that efficiently execute a wide range of DNN operations, including convolutions, because it enables exploiting not only parallelism but also data reuse via buffers and forwarding/multicasting NoCs. \subsection{Data Reuse Taxonomy} \label{subsec:datareuse} We observe that data reuse originates from two behaviors of DNN accelerators over time and space: multicasting (input tensors) and reduction (output tensors). \betterparagraph{Multicasting} Spatial multicasting reads a data point from a buffer only once, spatially replicates the data point via wires, and delivers it to multiple spatial destinations (i.e., PEs), which reduces expensive remote buffer accesses and saves energy.
Likewise, temporal multicasting also reads a data point from a large remote buffer only once, temporally replicates the data point via a smaller local buffer, and delivers the data point to multiple temporal destinations (i.e., different time instances) at the same PE, which also reduces expensive remote buffer accesses and saves energy. \betterparagraph{Reduction} Spatial reduction collects partial outputs from multiple spatial sources and accumulates them via multiple compute units (e.g., an adder tree or reduce-and-forward). Similarly, temporal reduction collects partial outputs from multiple temporal sources (i.e., partial sums computed at different times) and accumulates them via an accumulation register or buffer (e.g., the accumulation buffer in the TPU~\cite{jouppi2017datacenter}). \subsection{Dataflow Definition and Example} \label{subsec:dataflowdefinition} \insertFigure{DataReuse}{An operational example of a weight-stationary style accelerator with four PEs. For simplicity, input/output channels and batch are omitted. A 2x2 kernel (R=2, S=2) is used in this example.} In order to leverage these opportunities, the accelerator must schedule operations such that the PEs proceed through the data tensors in a coordinated fashion, which can be viewed as transformations (e.g., ordering and tiling) applied to the convolution in \autoref{fig:7DConv_New}, along with a partitioning of data to PEs. Such schedules are termed {\em dataflows} in prior work~\cite{eyeriss_isca}, which categorizes dataflows into classes based on the tensor that is scheduled to change least frequently, e.g., weight-stationary, output-stationary, and input-stationary. \autoref{fig:DataReuse} shows an example weight-stationary dataflow run on four PEs. We can observe that $W_{1}$ is multicast across time (temporal multicasting), $I_{1}$ is multicast across PEs (spatial multicasting), and $P_{3\_1}$ is reduced across space and time.
That is, the example accelerator temporally reuses $W_{1}$ and spatially reuses $I_{1}$ and $P_{3\_1}$. Note that the name ``weight-stationary'' conveys intuition and a high-level characterization of a scheduling strategy, but detailed insight and analysis require a more precise description. Chen et al.~\cite{chen2018eyeriss} refine the definition of dataflow by additionally specifying that two schedules which differ only in their concrete bounds should be considered \emph{instances} or \emph{mappings} of the same dataflow. This is an important distinction, as it allows families of accelerators to be categorized together even if they have different buffer sizes---i.e., a mobile chip and a datacenter chip may use the same traversal orderings despite large differences in tile size. For brevity, in the remainder of this work we make no distinction between schedules with fully specified and partially unspecified concrete bounds, referring to them all as dataflows. \subsection{Existing Expressions of Dataflow} \label{subsec:dataflowdescription} \insertWideFigure{Unrolled1DConvExample}{An example 1D convolution and an example output-stationary dataflow on the convolution. We represent the dataflow in (b) loop nest notation and (c) data-centric directives. In (c), gray boxes represent omittable descriptions, which either can be inferred (upper gray box) or do not affect the data reuse over PEs (lower gray box). (d) shows an abbreviated form of the dataflow description in data-centric directives. (e) and (f) show the resulting mapping on PEs and the iteration space, whose dots correspond to computations (or, partial sums).} To convey the scheduling decisions of a particular architecture, dataflows have been expressed as \emph{loop nests}, a syntax that resembles a simple imperative programming language with explicit parallelism, as presented in Eyeriss v2~\cite{chen2018eyeriss}.
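As a minimal illustration of this loop-nest style (a Python sketch of our own; it mirrors neither the Eyeriss v2 notation nor its dataflow, and all sizes are illustrative), consider an output-stationary 1D convolution, where the loop over output positions is the one an accelerator would annotate with {\tt parallel-for}:

```python
# Compute-centric (loop-nest) sketch of an output-stationary 1D
# convolution. The outer loop over x models spatial distribution of
# output positions across PEs (a parallel-for in loop-nest notation);
# the inner loop over s keeps each output's partial sums stationary.

def conv1d_output_stationary(inputs, weights):
    S = len(weights)                 # filter size
    X = len(inputs) - S + 1          # number of output positions
    outputs = [0] * X
    for x in range(X):               # parallel-for: one output per PE
        for s in range(S):           # temporal loop at each PE
            outputs[x] += inputs[x + s] * weights[s]
    return outputs

print(conv1d_output_stationary([1, 2, 3, 4], [1, 1]))  # -> [3, 5, 7]
```

The loop order, the choice of which loop is parallel, and tiling (absent in this tiny sketch) are exactly the three aspects a compute-centric representation must capture.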
We term the loop nest notation a \emph{compute-centric} representation since the data movement is implicit in the loop order and the explicit parallelism specified by the programmer. The loop order dictates the schedule (or, ordering) of computations, the explicit annotation of loops with {\tt parallel-for} captures parallelism, and the combination of loop ordering, tiling, and parallelism enables data reuse. Therefore, architects started to explore optimized loop nests encompassing all three of these aspects: loop order, parallelism, and tiling. For example, Eyeriss v2~\cite{chen2018eyeriss} describes its dataflow in a 22-dimensional loop nest. Compute-centric representations, including the polyhedral model, have greatly helped compilers estimate reuse and guide optimal loop transformations for both parallelism and locality~\cite{DBLP:journals/ibmrd/Sarkar97,DBLP:conf/ispass/SarkarM00,DBLP:conf/cc/ShirakoSFPRSS12,Wolf:1991:DLO:113445.113449,Bondhugula:2008:PAP:1375581.1375595,DBLP:conf/sc/PouchetBBCRS10,Shirako:2014:OWM:2683593.2683626}. Those works provide sufficiently accurate cost estimations to drive a series of loop transformations in a compiler. However, they do not precisely model data reuse, so computing throughput and energy efficiency with high accuracy remains challenging for those works. Bao et al.~\cite{Bao:2017:AMC:3177123.3158120} developed an analytical model to accurately estimate cache behavior (thereby computing reuse) for a class of affine programs that can be precisely analyzed by a polyhedral model at compile time. However, they use heavyweight linear-algebra frameworks within the polyhedral model to compute reuse, making it impractical to use these techniques on large real-world applications. Also, it is very challenging for polyhedral-based frameworks to compute reuse arising from array subscripts involving non-affine expressions or complex subscripts, such as the modulus operations that are common in strided convolutions.
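The difficulty with strides can be seen concretely in the access pattern below (a self-contained Python sketch of our own; the function names are hypothetical): the input subscript \texttt{stride*x + r} makes the reuse between adjacent outputs depend on the stride, and recovering \texttt{(x, r)} from a flattened iteration index requires division and modulus, which are awkward for affine-only analyses.

```python
# Our own illustrative sketch of strided 1D-convolution subscripts.

def input_indices(x, S, stride):
    """Input positions read while computing output x with filter size S."""
    return {stride * x + r for r in range(S)}

def overlap(S, stride):
    """Inputs reused between adjacent outputs x=0 and x=1."""
    return len(input_indices(0, S, stride) & input_indices(1, S, stride))

# Reuse shrinks as the stride grows and vanishes once stride >= S.
print(overlap(S=3, stride=1))  # -> 2
print(overlap(S=3, stride=2))  # -> 1
print(overlap(S=3, stride=4))  # -> 0

# Recovering loop indices from a flattened iteration index t uses
# division and modulus -- the non-affine subscripts mentioned above.
t, S = 7, 3
x, r = t // S, t % S  # x = 2, r = 1
```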
In addition, although there exists a body of past compiler work that performs reuse analysis on sequential programs~\cite{DBLP:journals/ibmrd/Sarkar97,DBLP:conf/ispass/SarkarM00,DBLP:conf/cc/ShirakoSFPRSS12,Wolf:1991:DLO:113445.113449,Bondhugula:2008:PAP:1375581.1375595,DBLP:conf/sc/PouchetBBCRS10,Shirako:2014:OWM:2683593.2683626,Bao:2017:AMC:3177123.3158120}, those works lack the ability to analyze loop nests with explicit parallelism, while DNN dataflows often contain multiple levels of parallelism. They also do not consider spatial reuse (which refers not to spatial locality in cache-based architectures but to data reuse via wires or across PEs) that leverages the multicasting and reduction support of accelerators and plays a key role in estimating the overall throughput and energy efficiency of spatial DNN accelerators. Such limitations and challenges motivate us to explore an alternative intermediate representation (IR) of dataflows: a \emph{data-centric} representation in which data movement and organization are first-class entities. Since data movement is explicit in the data-centric representation, our analytical model becomes simpler and relatively faster, as there is no need to leverage heavyweight linear-algebra frameworks to precisely estimate data movement and reuse behavior. \section{Related Works} \label{sec:related} \colonparagraph{Hardware DSE and dataflow optimization} Dataflow optimization is one of the key optimization targets in many recent DNN accelerators such as Eyeriss~\cite{eyeriss_isca}, Flexflow~\cite{lu2017flexflow}, SCNN~\cite{parashar2017scnn}, and NVDLA~\cite{nvdla}. C-brain~\cite{song2016cbrain} and Flexflow~\cite{lu2017flexflow} analyzed the cost-benefit tradeoff of three dataflows and explored the opportunity of adaptive dataflows based on that tradeoff. Ma et al.~\cite{ma2017optimizing} also constructed an analytic model for convolutions on FPGAs focusing on three loop transformations: interchange, unroll, and tiling.
Although their analytic model provides intuition about the trade-offs of dataflows, it focuses on the one dataflow style they propose, does not consider regional spatial reuse or spatio-temporal reuse opportunities in DNN accelerators, and does not consider communication delay in the NoC, which can dominate for dataflows with large tile sizes. Also, the target dataflow is optimized for an HLS flow and must be expressed using a complex annotated loop nest with HLS synthesis directives. Caffeine~\cite{zhang2018caffeine} proposed a fully automatic FPGA flow that includes pragma-based annotation of programs, a dataflow optimization framework, and DSE for FPGAs based on an analytic model defined over loop tiling and unrolling. However, the dataflow search space is limited due to fixed loop orders: three presets termed straightforward, input-major, and weight-mapping. \colonparagraph{Past works related to data-centric approaches} There have been some works exploring data-centric approaches~\cite{Kodukula2001, Kodukula:1997:DMB:258915.258946,Kodukula:1999:EET:305138.305243}, which reason about the flow of data through the memory hierarchy, instead of performing control-structure-centric analysis, to drive locality-enhancing transformations such as multi-level data blocking~\cite{Kodukula2001} and data shackling~\cite{Kodukula:1997:DMB:258915.258946}. However, those data-centric approaches were explored in the context of driving optimizations for multi-level caches, not of precisely estimating the energy or throughput of input kernels. We discuss related work on loop-nest notation and reuse analysis in compilers in~\autoref{subsec:dataflowdescription}. \section{Quantitative Dataflow Analysis} \label{sec:framework} \insertFigure{PerformanceCostAnalysisEngine}{A high-level overview of algorithms in performance and cost analysis engines.} \insertWideFigure{AnalysisFramework}{An overview of \textsc{MAESTRO}\xspace's analysis framework.
For simplicity, we omit components other than analysis engines.} In this section, we present our approach to quantitatively estimating the runtime and energy efficiency of dataflows on a target DNN model and hardware configuration. Based on this approach, we implement an analysis framework, \textsc{MAESTRO}\xspace, which consists of five engines: tensor, cluster, data reuse, performance analysis, and cost analysis. \autoref{fig:AnalysisFramework} provides a high-level overview of the five engines. In the interest of space, we only discuss high-level algorithms without edge-case handling, multiple layers, and multiple cluster levels; full details are available in our open-source repository\footnote{https://github.com/georgia-tech-synergy-lab/MAESTRO}. \subsection{Tensor Analysis} \label{subsec:tensor_analysis} As described in~\autoref{fig:AnalysisFramework}, the tensor analysis engine identifies dimension coupling for each tensor based on the specified layer operation. For example, in depth-wise convolutions, the output activation is coupled not with a separate output-channel dimension but with the input channel. Note that depth-wise convolution can be modeled either this way or by eliminating the input-channel dimension (C); we select the former convention because it aligns with \textsc{MAESTRO}\xspace's input-centric cost model. \textsc{MAESTRO}\xspace allows users to specify tensors with arbitrary dimension coupling, and such coupling relationships are input to the rest of the engines, which provides generality to \textsc{MAESTRO}\xspace. \subsection{Cluster Analysis} \label{subsec:pe_cluster_hierarchy_analysis} A PE cluster refers to a group of PEs that processes one or more data dimensions in parallel, which is specified by the \textsc{Cluster}\xspace directive. ~\autoref{fig:HighLevelAlgorithm} (b) shows a brief description of the Cluster Analysis (CLA) engine.
The CLA engine analyzes a given dataflow description written in dataflow directives to identify the number of sub-clusters in each cluster level, extract cluster dataflow directives and data dimensions for each cluster level, and augment the given dataflow descriptions for missing directives, stride handling, and so on. \begin{comment} \betterparagraph{Cluster size analysis} The CHA engine first evaluates cluster sizes from the lower-most cluster level, which is the bottom-most cluster in a data-centric dataflow representation, to the top and calculates the total number of PEs, or unit clusters. Based on the total number of unit clusters (PEs) specified by users, the CHA engine infers the minimum number of active unit clusters to run an input dataflow. For example, in~\autoref{fig:EyerissDeepDive} (b), \textsc{Cluster}\xspace(3,L) directive exists, which implies we need at least three unit clusters to form the lowest level cluster and run this dataflow. If the number of PEs is not sufficient (e.g., 2 in this example), \textsc{MAESTRO}\xspace outputs an error message and terminates. If a user specifies 168 PEs, $168 \div 3 = 56$ lowest-level clusters are constructed. When the number of PEs is not even divisible by the number of unit clusters in the second top-most cluster, the remainder PEs are inactive. \betterparagraph{Cluster dataflow extraction} The CHA engine traverses through each directive in the data-centric representation and extracts dataflow directives for each cluster level with dimension size information. The data dimension size needs to be updated for each cluster level because a data dimension size in a cluster is the mapping size of its upper cluster (For the top-most cluster, it is the same as the layer dimension). That is, at a non top-most cluster level, the size of each data dimension is inherited from the mapping size of the corresponding dimension in the upper level cluster. 
For example, in~\autoref{fig:EyerissDeepDive} (b) has two cluster levels. In the lower cluster level, the size of data dimension Y is 3 because \textsc{TemporalMap}\xspace(3,1) Y is specified in the upper cluster. \betterparagraph{Dataflow Directive Augmentation} At each cluster level, mapping directives over all the dimensions need to be specified. If no directive is specified on a data dimension, \textsc{MAESTRO}\xspace automatically inserts \textsc{TemporalMap}\xspace (\textit{dimension size}, \textit{dimension size}), which effectively fully unrolls the corresponding dimension. Also, the CHA engine internally converts output-centric notation, a data-centric dataflow representation that uses output activation dimensions (X' and Y') instead of input activation dimensions (X and Y). In addition to the discussed augmentation, the CHA engine also updates the directives to reflect stride size, which enables users to specify the dataflow without tuning directives for strides in the same way a user deals with layers with unit stride size. \end{comment} \subsection{Data Reuse Analysis} \label{subsec:data_reuse_analysis} \autoref{fig:AnalysisFramework} (b) includes a high-level description of the analysis in the data reuse (DR) engine. The DR engine identifies the amount of temporal and spatial reuse across adjacent time steps, where a time step is the data iteration corresponding to the inner-most mapping directive that is not temporally or spatially unrolled. \begin{comment} To compute the exact amount of data reuse, we first analyze reuse opportunities by computing the number of temporally and spatially reused data points in each data structure during an accelerator run. The Data Reuse Analysis (DRA) engine identifies such reuse opportunities For simplicity, we discuss reuse analysis within a single cluster because we can repeatedly apply the same method for each cluster level in multi-level cluster cases.
To focus on core ideas, we omit various edge cases when the data dimension size to map is smaller than the mapping size (temporal edge), the number of sub-clusters in a cluster level (spatial edge), or both of those (spatio-temporal edge) although they are all dealt in the DRA engine. ~\autoref{fig:ReuseAnalysis} presents core algorithms in DRA engine. \betterparagraph{Prime Changing Dimension} When a data mapping moves at each unit time step, one of the dimensions proceed and rest of them either resets to the initial position or remain stationary. This is similar to the behavior of loop variables in a loop nest. For example, loop variable x in ~\autoref{fig:7DConv_New} (c) increments only if the iteration moves to the next step when (r,s) = (2,2) and they are updated to (r,s) = (0,0). When such an update occurs, all the loop variable above loop x remains stationary. In this example, the dimension x is the prime changing dimension. In data mapping iterations, we can observe the same behavior; a data dimension $\alpha$ coupled with a directive in the middle of dataflow directives is updated only if all the dimensions coupled with directives below the directive on $\alpha$. The inner-most data dimension neither initialized to its initial position nor fully unrolled is the prime changing dimension, which can be computed as presented in~\autoref{fig:ReuseAnalysis}. Unrolled directives are \textsc{TemporalMap}\xspace with mapping size larger than its corresponding dimension size or \textsc{SpatialMap}\xspace with the spatial coverage (mapping size + (number of sub-clusters) $times$ offset; when mapping size > offset). The prime changing dimension is critical to compute temporal reuse. \betterparagraph{Temporal Reuse Analysis} To compute the temporal reuse, the DRA engine tracks the changes in data mapping of each tensor across every possible two adjacent unit time steps, as shown in~\autoref{fig:ReuseAnalysis} (b). 
The amount of spatial reuse depends on where the data mapping traverses in the entire data space, which we term as data dimension iteration status. For each dimension, only three data dimension iteration states exist: \texttt{Init}, \texttt{Steady}, \texttt{Edge}. \texttt{Init} refers to the first position of the data dimension iteration, which corresponds to the initial value of a loop variable in a loop nest. \texttt{Steady} refers to all the non-init data iteration positions except edge cases. \texttt{Edge} refers to the last data iteration position when the remaining dimension size is less than the mapping size of a \textsc{TemporalMap}\xspace or the spatial coverage of \textsc{SpatialMap}\xspace. \HK{Algorithms to compute iteration cases and number of case occurrence is not added yet.} The DRA engine analyzes the dataflow and data dimension sizes and extracts all the possible combinations of iteration positions for each data dimension, which results in a maximum of $3^7$ cases (not likely to occur in practice unless an extremely inefficient mapping is selected). The DRA engine identifies how the mapping is updated for each tensor in each case, the number of case occurrences, and the amount of temporal data reuse during execution. \betterparagraph{Spatial Reuse Analysis} The key observation for spatial reuse analysis is that reuse analysis of temporal and spatial map is identical but spatial reuse needs to consider spatial steps (analogous to the concept of time steps) in a certain granularity, or number of sub-clusters (in the lowest cluster level, PEs) at the corresponding cluster level. 
That is, the DRA engine computes the spatial reuse in the same way as temporal reuse analysis, but groups up the number of sub-clusters spatial steps (time steps in the temporal reuse case) and identifies the reuse in this granularity, which makes the spatial reuse analysis presented in~\autoref{fig:ReuseAnalysis} (c) a super set of temporal reuse analysis in~\autoref{fig:ReuseAnalysis} (b). \end{comment} \subsection{Performance Analysis} \label{subsec:performance-analysis} \autoref{fig:AnalysisFramework} (a) presents a high-level overview of the performance and cost analysis engines, and~\autoref{fig:PerformanceCostAnalysisEngine} shows the high-level algorithm of the performance analysis (PA) engine. Utilizing the reuse information computed by the DR engine, the PA engine computes the runtime for all the possible cases based on the data dimensions and the dataflow. The computed runtime of each case is multiplied by the number of its occurrences and accumulated to compute the total runtime. The runtime of a DNN accelerator consists of communication delays (L2 to L1, L1 to L2, and local forwarding) and computation delay in each PE, which are directly related to the accelerator's hardware parameters. The PA engine accounts for double buffering when it computes the outstanding delay (the worst case of the communication and computation delays), which directly contributes to the runtime. To estimate communication delays, \textsc{MAESTRO}\xspace relies on its analytical network-on-chip (NoC) model, which is a pipe model similar to those in other analytic models~\cite{parashar2019timeloop}. The pipe model utilizes two parameters, the pipe width (bandwidth) and length (average delay), to estimate the communication delay via the NoC. The model incorporates a pipelining effect, as many packet-switching NoCs behave similarly. Various combinations of the bandwidth and average delay enable modeling NoC structures with reasonable accuracy.
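Such a pipe model can be sketched in a few lines (our own minimal illustration with hypothetical parameter names, not \textsc{MAESTRO}\xspace's actual interface): the first element of a transfer arrives after the average latency, and the remaining elements stream at the bandwidth rate.

```python
import math

# Minimal sketch (our own) of a two-parameter "pipe" NoC delay model:
# bandwidth in elements/cycle, average latency in cycles. A transfer is
# modeled as a pipelined stream, so per-element delays do not simply add.

def pipe_delay(num_elements, bandwidth, avg_latency):
    return avg_latency + math.ceil(num_elements / bandwidth)

print(pipe_delay(16, 4, 3))  # -> 7 cycles
print(pipe_delay(16, 8, 3))  # -> 5 cycles: more bandwidth shrinks the
                             #    streaming term, not the latency term
```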
For example, Eyeriss~\cite{eyeriss_isca} has a two-level hierarchical bus with a dedicated channel for each of the three data structures (inputs, weights, outputs). Therefore, a bandwidth of 3X properly models the top-level NoC. The average latency depends on implementation details; users should choose an appropriate value accordingly (e.g., either or both of the ingress/egress buffers may exist in a bus implementation, each adding one cycle of delay). For more complicated NoC architectures, users should select the bisection bandwidth and average latency assuming uniform communication from a global buffer to all the PEs. For example, in an $N \times N$ 2D mesh network with the injection point at one of the corners, the bisection bandwidth is $N$ and the average latency is $N$. Assuming that the user has access to the NoC implementation information, the NoC model is precise when the NoC is a bus or a crossbar. \begin{comment} \betterparagraph{Accelerator Model} \textsc{MAESTRO}\xspace receives the following parameters listed in~\autoref{fig:HWModel} as inputs: number of PEs, vector processing width per PE, interconnect latency, buffer sizes, and bandwidth. The hardware model is a hierarchy of PE clusters with buffers at the highest (L2 global scratch pad) and lowest (L1 local scratch pad) and interconnection between adjacent PE cluster levels. ~\autoref{fig:HWModel} shows the simplest case of the hardware model that has no additional PE cluster level. In multi-cluster level cases, the network-on-chip and PE array structure in~\autoref{fig:HWModel} is replicated. \betterparagraph{NoC Model} \textsc{MAESTRO}\xspace employs an analytic NoC model based on a pipe model, similar to other analytic models~\cite{parashar2019timeloop}. The pipe model utilizes two parameters, the pipe width (bandwidth) and length (average delay), to estimate the communication delay via NoC. 
The model incorporates a pipelining effect as many packet-switching NoCs have similar behavior. Various combinations of the bandwidth and average delay enables to model NoC structures with reasonable accuracy. For example, Eyeriss~\cite{eyeriss_isca} has a two-level hierarchical bus with a dedicated channel for three data structures (inputs, weights outputs). Therefore, a bandwidth of 3X properly models the top level buffer. The average latency depends on implementation details; users should choose an appropriate value considering implementation details (e.g., both of ingress/egress buffers or one of them might exist in a bus implementation, which adds one cycle delay each). For more complicated NoC architectures, users should select bisection bandwidth and average latency considering uniform communication to all the PEs from a global buffer. For example, a $N \times N$ 2D mesh network with the injection point at one of the corners, the bisection bandwidth is $N$ and the average latency is $N$. Assuming that the user has access to the NoC implementation information, the NoC model is precise when the NoC is a bus or a crossbar. \betterparagraph{Runtime Estimation} To evaluate the runtime, we perform a worst-case analysis to estimate the delay of each unit time step at each cluster level. That is, we compare computation, L2-to-L1 communication, L1-to-L2 communication, and PE to PE forwarding delays and select the longest delay as the computation delay of the corresponding cluster level since the worst case delay is the outstanding delay observed from the upper level cluster. The PA engine applies double-buffering and latency-hiding based on a user-specified parameter to enable users to explore the tradeoff between buffer size and performance. 
To compute both computation and communication delay, the PA engine identifies the total number of computations for each sub-cluster (or PE in the lowest cluster level), the amount of \texttt{unique} data of each tensor across adjacent unit time steps in each data dimension iteration case. In the lowest level cluster, the computation delay is the worst case number of computations for a PE divided by the vector width of the datapath in each PE. The L2 to L1 communication delay is the amount of unique input/filter data divided by the NoC bandwidth plus tail delay of NoC pipeline. We apply the same method to the L1 to L2 communication delay except that we use the amount of unique output data. For multi-level cluster cases, we compute from the lowest level cluster and apply the outstanding delay as the computation delay of the upper level cluster. \betterparagraph{Bottleneck Analysis} The PA engine also reports the bottleneck delay of each iteration case via an output log file. Users can review the output file and identify the source of the bottleneck and try updating either hardware parameters or dataflow to eliminate the bottleneck. \end{comment} \subsection{Cost Analysis} \label{subsec:cost-analysis} ~\autoref{fig:PerformanceCostAnalysisEngine} describes how the cost analysis (CA) engine computes the number of buffer accesses and estimates the buffer size requirement for each tensor, considering the data reuse computed in the DR engine and the data iteration cases. Utilizing the access counts and the number of MAC operations, \textsc{MAESTRO}\xspace computes the energy cost. \textsc{MAESTRO}\xspace includes an energy model based on those activity counts and Cacti~\cite{muralimanohar2009cacti} simulation, which can be replaced by any other energy model based on such activity counts (e.g., Accelergy~\cite{iccad_2019_accelergy}). 
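An activity-count-based energy model of the kind described above reduces to a weighted sum: each event count is multiplied by a per-event energy taken from a tool such as Cacti. The sketch below illustrates this; the dictionary keys and function name are hypothetical, not \textsc{MAESTRO}\xspace's actual interface.

```python
def energy_estimate(activity, unit_energy):
    """Total energy as activity counts times per-event energies (a sketch).

    `activity` maps event types (e.g., 'mac', 'l1_read', 'l2_write';
    hypothetical keys) to counts produced by the cost analysis;
    `unit_energy` maps the same keys to per-event energies, e.g., from
    a Cacti simulation of the target buffers.
    """
    return sum(count * unit_energy[event] for event, count in activity.items())

# Example: 1000 MACs at 1.0 pJ each plus 200 L1 reads at 2.5 pJ each
# gives 1000*1.0 + 200*2.5 = 1500.0 pJ
```

Because the model only consumes counts, swapping in a different energy backend (the text mentions Accelergy as an alternative) amounts to replacing the `unit_energy` table.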
\begin{comment} The major cost of DNN accelerators is data movement~\cite{eyeriss_isca} so modeling movement and reuse is the key to DNN accelerator cost estimation, which is the prime target of the Cost Analysis (CA) engine's analysis. \betterparagraph{Buffer Access and Computation Counts} The CA engine estimates the activity counts of buffer accesses at each hierarchy and computations, which is the basis of the energy estimation of \textsc{MAESTRO}\xspace. To precisely estimate the activity counts, the CA engine combines information reported by other engines including PE cluster hierarchy information reported by the CHA engine, cluster dataflow information reported by the CHA engine, and the data reuse information reported by the DRA engine. The CA engine counts all the buffer accesses within each cluster level and all the mapped unique data points across buffer hierarchies, which includes data reuse. \betterparagraph{Buffer Size Requirements} The CA engine estimates the buffer size requirement by identifying the worst-case mapping size of each buffer hierarchy and prints out warning messages if the user-specified buffer size is not sufficient to support the target dataflow. The mapping size of each tensor is the product of all the mapping sizes of coupled dimensions to the tensor. The total buffer size requirements are the sum of mapping size of all the tensors. Depending on the dataflow, the CA engine also reserves buffer space for partial sums in global buffers. \end{comment} \subsection{Complex Dataflow Analysis} \label{subsec:complex_dataflow_analysis} \betterparagraph{Multi-cluster Analysis} As discussed so far in this section, multi-cluster cases can be split into single-cluster cases with the data dimension size set as the mapping size of the corresponding mapping directive in the upper cluster. The outstanding delay of a cluster level becomes the computation delay of the next cluster level above. 
To handle various edge cases that affect all the lower cluster levels, \textsc{MAESTRO}\xspace recursively performs performance and cost analysis, as illustrated in~\autoref{fig:AnalysisFramework}. In the recursive analysis, the base case is the inner-most cluster whose sub-clusters are actual PEs. Although \textsc{MAESTRO}\xspace performs recursion, the complexity is not high because the number of PE cluster levels is typically two or three. In summary, the multi-cluster analysis is a recursive cost analysis whose base case is the lowest-level cluster, which has unit clusters (i.e., PEs) as sub-clusters. To deal with edge cases, each of the edge cases at each cluster level also needs to be recursively processed. However, in most cases, we observe that the number of edge cases across cluster levels is fewer than 20, which is still tractable. \betterparagraph{Other DNNs} Although we use dense convolution as our running example for simplicity, \textsc{MAESTRO}\xspace can model a variety of layers (LSTM hidden layer, pooling, fully-connected, transposed convolution, and so on) based on the generality of the data-centric approach. Our data-centric approach supports all the operations representable as a loop nest with two input tensors and one output tensor wherein all the tensor indices are coupled with only one or two data dimensions through affine functions. \textsc{MAESTRO}\xspace can also model uniformly distributed sparsity for any supported dataflow. 
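The recursive multi-cluster analysis sketched in the Multi-cluster Analysis paragraph can be illustrated as follows: the outstanding (worst-case) delay of one level becomes the "computation" delay seen by the level above, and the base case is the level whose sub-clusters are PEs. This is a minimal sketch with a hypothetical data layout, not \textsc{MAESTRO}\xspace's actual code.

```python
def cluster_runtime(levels):
    """Outstanding delay of a cluster hierarchy via recursion (a sketch).

    `levels` is ordered from the highest cluster level down to the PE
    level (hypothetical structure). Every entry holds that level's
    L2-to-L1 and L1-to-L2 communication delays; the last (PE-level)
    entry additionally holds the per-PE computation delay.
    """
    level, lower_levels = levels[0], levels[1:]
    if not lower_levels:
        # Base case: sub-clusters are actual PEs.
        compute = level["compute"]
    else:
        # The outstanding delay of the lower levels is observed as the
        # computation delay of this level.
        compute = cluster_runtime(lower_levels)
    # Outstanding delay: the slowest of computation and communication.
    return max(compute, level["l2_to_l1"], level["l1_to_l2"])
```

A two-level example: if the PE level has a 10-cycle compute delay and 2/1-cycle communication delays, and the upper level has 5/3-cycle communication delays, the overall outstanding delay is 10 cycles.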
\subsection{Model Validation} \label{subsec:validation} \insertFigure{Validation}{Runtime model validation against MAERI~\cite{kwon2018maeri} RTL simulation with 64 PEs and the Eyeriss~\cite{chen2017eyeriss_issc} runtime reported in the paper with 168 PEs.} We validated \textsc{MAESTRO}\xspace's performance model against RTL simulations of two accelerators, MAERI~\cite{kwon2018maeri} and Eyeriss~\cite{chen2017eyeriss_issc}, running VGG16 and AlexNet, respectively\footnote{MAERI RTL is open-source. For Eyeriss, we use the reported runtime for AlexNet because detailed mapping parameters are described only for AlexNet in the paper.}. \autoref{fig:Validation} shows that the runtimes estimated by \textsc{MAESTRO}\xspace are within 3.9\% absolute error of the cycle-accurate RTL simulation and the reported processing delay~\cite{chen2017eyeriss_issc} on average. \section{Case Studies} \label{sec:eval} \insertTableFigure{EvalDataflows}{Five example dataflows used for the evaluation. For conciseness, we omit redundant directives that are automatically inferred by \textsc{MAESTRO}\xspace. The YX-P, YR-P, and CK-P dataflows are motivated by Shidiannao~\cite{du2015shidiannao}, Eyeriss~\cite{eyeriss_isca}, and NVDLA~\cite{nvdla}, respectively. The name of each dataflow is based on its spatial dimensions from the upper-most cluster level. } \insertWideFigure{RuntimeEnergy_LayerType}{Plots in the top and bottom rows present the runtime and energy estimates of the five dataflows listed in the table, respectively. We apply 256 PEs and 32GBps NoC bandwidth. We evaluate all the dataflows using five DNN models: Resnet50~\cite{Resnet}, VGG16~\cite{VGGnet}, ResNeXt50~\cite{ResNeXt}, MobileNetV2~\cite{mobilenetv2}, and UNet~\cite{UNet}. 
The final column (f) presents the average results across models for each DNN operator type listed in~\autoref{table:DNN_Operators} and for the adaptive dataflow case.} \autoref{table:DNN_Operators} summarizes the features of frequently used DNN operators from state-of-the-art DNN models~\cite{Resnet, ResNeXt, mobilenet, mobilenetv2, UNet}. Early and late layers refer to layers with high-resolution activations and shallow channels, and vice versa, respectively. We label them early and late layers because such layers appear early and late in classification networks~\cite{Resnet, ResNeXt, mobilenetv2, VGGnet}. We compare the number of input channels and the input activation height to distinguish them\footnote{If C > Y, late layer; else, early layer.}. With \textsc{MAESTRO}\xspace, we perform deeper case studies of the costs and benefits of various dataflows when they are applied to the different DNN operations listed in~\autoref{table:DNN_Operators}. In~\autoref{subsec:dataflow_case_study}, we evaluate the five distinct dataflow styles listed in~\autoref{table:EvalDataflows} and the preference of each dataflow for different DNN operators. For energy estimation, we multiply activity counts by base energy values from Cacti~\cite{muralimanohar2009cacti} simulation (28nm, 2KB L1 scratchpad, and 1MB shared L2 buffer). We also present the distinct design spaces of an early layer (wide and shallow) and a late layer (narrow and deep) to show the dramatically different hardware preferences of different DNN operator styles and dataflows in~\autoref{subsec:dse_eval}. \subsection{Case study I: Dataflow Trade-offs} \label{subsec:dataflow_case_study} \insertTableFigure{DNN_Operators}{Operators in state-of-the-art DNNs and their features and implications. Bottleneck~\cite{Resnet} and depth-wise separable convolution~\cite{mobilenet} are listed as fine-grained operators (point-wise convolution, depth-wise convolution, and residual links). 
Examples are based on notable networks (VGGnet~\cite{VGGnet} and DCGAN~\cite{DCGAN}) and state-of-the-art networks (MobileNetV2~\cite{mobilenetv2}, ResNet50~\cite{Resnet}, and ResNeXt50~\cite{ResNeXt}).} \insertFigure{Dataflow_Analysis_Large_rev}{Reuse and NoC bandwidth requirements of the dataflows in~\autoref{table:EvalDataflows} with 256 PEs for four common DNN operators from~\autoref{table:DNN_Operators}. We select representative operators from state-of-the-art DNN models (early layer: CONV1 in Resnet50~\cite{Resnet}; late layer: CONV13 in VGG16~\cite{VGGnet}; depth-wise convolution (DWCONV): DWCONV of CONV2 in ResNeXt50~\cite{ResNeXt}; point-wise convolution: first conv of bottleneck1 in MobilenetV2~\cite{mobilenetv2}). C, X, YX, YR, and KC refer to the C-P, X-P, YX-P, YR-P, and KC-P dataflows; A refers to the algorithmic maximum reuse. } \insertFigure{NewEnergy}{The breakdown of energy consumption (MAC and L1/L2 scratchpad access energy) of the dataflows from~\autoref{table:EvalDataflows}. The access counts generated by \textsc{MAESTRO}\xspace are multiplied by appropriate energy values from Cacti~\cite{muralimanohar2009cacti}. The values are normalized to the MAC energy of C-P.} ~\autoref{fig:RuntimeEnergy_LayerType} shows the runtime and energy estimates of each dataflow at DNN-operator granularity across the five state-of-the-art DNN models listed in~\autoref{sec:eval}. Note that this should be considered a comparison of dataflows---not of actual designs, which can contain several low-level implementation differences, e.g., custom implementations of logic/memory blocks, process technology, and so on. We observe that the KC-P style dataflow provides overall low runtime and energy. However, its energy efficiency on VGG16 (\autoref{fig:RuntimeEnergy_LayerType} (b)) is worse than that of the YR-P (Eyeriss~\cite{eyeriss_isca} style) dataflow, and its runtime on UNet is worse than that of the YX-P (Shidiannao~\cite{du2015shidiannao} style) dataflow (~\autoref{fig:RuntimeEnergy_LayerType} (e)). 
This reflects the different dataflow preferences of the DNN operators. YX-P provides short runtime on segmentation networks like UNet, which has wide activations (e.g., 572x572 in the input layer) and recovers the original activation dimensions at the end via up-scaling convolutions (e.g., transposed convolution). Such a preference for the YX-P style is mainly due to its parallelization strategy: it exploits parallelism over both the row and column dimensions of the activation. The energy efficiency of the YR-P dataflow on VGG16 is due to its high reuse factor (the number of local accesses per fetch) in early layers, as shown in the red bars in~\autoref{fig:Dataflow_Analysis_Large_rev} (a) and (b) (note the log scale). The YR-P dataflow has 5.8$\times$ and 15.17$\times$ higher activation and filter reuse factors, respectively, in early layers. However, in late layers, the reuse factors of the YR-P and KC-P dataflows are similar (difference < 11\%), so the KC-P dataflow provides energy efficiency similar to YR-P in these cases. This can also be observed in the late layer (blue) bars in the bottom-row plots of~\autoref{fig:RuntimeEnergy_LayerType}. Although the KC-P and YX-P dataflows provide low runtime (\autoref{fig:RuntimeEnergy_LayerType}), this comes with a high NoC cost, as the high bandwidth requirements shown in~\autoref{fig:Dataflow_Analysis_Large_rev} (c) highlight. Depending on the operator type, some dataflows require dramatically higher NoC bandwidth than others. For example, YX-P requires high bandwidth for point-wise convolution, which has no convolutional reuse (i.e., overlapped activation data points among sliding windows) because of its 1x1 kernels, while YX-P is optimized to exploit convolutional reuse via spatial reuse. The diverse dataflow preferences of different DNN operators motivate us to employ the optimal dataflow for each DNN operator type. 
We refer to such an approach as adaptive dataflow and present its benefits in~\autoref{fig:RuntimeEnergy_LayerType} (f), the average-case analysis across entire models at DNN-operator granularity. By employing the adaptive approach, we observe a potential 37\% runtime and 10\% energy reduction. Such an optimization opportunity can be exploited by flexible accelerators like Flexflow~\cite{lu2017flexflow} and MAERI~\cite{kwon2018maeri} or via heterogeneous accelerators that employ multiple sub-accelerators with various dataflow styles in a single DNN accelerator chip. \subsection{Case study II: Hardware Design-Parameters and Implementation Analysis} \label{subsec:dse_eval} \insertWideFigure{DSE_ALL_HALF}{The design space of an accelerator with (a) the KC-P and (b) the YR-P dataflow. We highlight the design spaces of an early and a late layer to show their significantly different hardware preferences. We apply the reported area and power of Eyeriss~\cite{chen2017eyeriss_issc} (16mm$^2$, 450mW) as area/power constraints to the DSE. The color of each data point indicates the number of PEs. Design points with fewer PEs can be paired with larger buffer sizes, up to the area budget. We mark the throughput- and energy-optimized designs with a star and a cross. } \insertWideTableFigure{DesignTweaks}{The impact of multicasting capability, bandwidth, and buffer size. Design points are from the design space of ~\autoref{fig:DSE_ALL_HALF} (a), VGG16-CONV2.} Using \textsc{MAESTRO}\xspace, we implement a hardware design space exploration (DSE) tool that searches four hardware parameters (the number of PEs, L1 buffer size, L2 buffer size, and NoC bandwidth) optimized for either energy efficiency, throughput, or energy-delay product (EDP) within given hardware area and power constraints. The DSE tool receives the same set of inputs as \textsc{MAESTRO}\xspace, along with hardware area/power constraints and the area/power costs of building blocks synthesized with the target technology. 
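The core of such a DSE loop (sweep the four hardware parameters, discard candidates that violate the area/power budget, and keep the best design for the chosen objective) might look like the following sketch. Here `cost_model` is a stand-in for \textsc{MAESTRO}\xspace's analysis, all names are hypothetical, and the pruning optimization the tool uses to skip whole sub-spaces is omitted for clarity.

```python
import itertools

def dse_sweep(cost_model, pes, l1_sizes, l2_sizes, bandwidths,
              area_budget, power_budget):
    """Exhaustive four-parameter hardware sweep (a sketch).

    `cost_model(pe, l1, l2, bw)` returns (runtime, area, power) for a
    candidate design (hypothetical interface). Returns the lowest-runtime
    design that fits the area/power budget as (runtime, params), or None
    if no candidate is valid.
    """
    best = None
    for cand in itertools.product(pes, l1_sizes, l2_sizes, bandwidths):
        runtime, area, power = cost_model(*cand)
        if area > area_budget or power > power_budget:
            continue  # invalid design point: violates the constraints
        if best is None or runtime < best[0]:
            best = (runtime, cand)
    return best
```

Optimizing for energy or EDP instead of throughput amounts to changing the comparison key; the real tool additionally prunes invalid sub-spaces early rather than visiting every point.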
For the costs of building blocks, we implement floating/fixed-point multipliers and adders, buses, bus arbiters, and global/local scratchpads in RTL and synthesize them using a 28nm technology. For the bus and arbiter costs, we fit linear and quadratic models using regression, since bus cost increases linearly and arbiter cost increases quadratically (e.g., for a matrix arbiter). The DSE tool sweeps a target design space specified by the range of each parameter and the search granularity. However, at each iteration over a hardware parameter, it prunes the design space by checking the minimum area and power of all the possible design points from the inner loops of hardware parameters. This optimization allows it to skip invalid design points at various granularities, eliminating a large number of futile searches, which led to a high effective DSE rate ranging from 3.3K to 0.46M designs per second, as presented in~\autoref{fig:DSE_ALL_HALF} (c). ~\autoref{fig:DSE_ALL_HALF} (c) shows statistics of four DSE runs that explored the design space. We ran the DSEs on a machine with an i7-8700k CPU and 32GB memory running Linux Mint 19. We ran the four DSE sets on the machine at the same time, and all of them terminated within 24 minutes, with an effective DSE rate of 0.17M designs per second on average. \colonparagraph{Design Space Analysis} Using the DSE tool, we explore the design space of KC-P and YR-P dataflow accelerators. We set the area and power constraints to 16mm$^2$ and 450mW, the reported chip area and power of Eyeriss~\cite{chen2017eyeriss_issc}. We plot the entire design space we explored in~\autoref{fig:DSE_ALL_HALF}. Whether an accelerator can achieve peak throughput depends not only on the number of PEs but also on the NoC bandwidth. In particular, even if an accelerator has a sufficient number of PEs to exploit the maximum degree of parallelism a dataflow allows, the accelerator suffers a communication bottleneck in the NoC when the NoC does not provide sufficient bandwidth. 
Such design points can be observed in the area-throughput plot in~\autoref{fig:DSE_ALL_HALF} (a). The YR-P dataflow requires low NoC bandwidth, as shown in~\autoref{fig:Dataflow_Analysis_Large_rev} (c), so it does not show the same behavior as the KC-P dataflow. However, with more stringent area and power constraints, the YR-P dataflow would show the same behavior. During DSE runs, \textsc{MAESTRO}\xspace reports the buffer requirements for each dataflow, and the DSE tool places the exact amount of buffering \textsc{MAESTRO}\xspace reports. Contrary to intuition, larger buffer sizes do not always provide higher throughput, as shown in the buffer-throughput plots in~\autoref{fig:DSE_ALL_HALF} (plots in the second column). The optimal points regarding throughput per buffer size are in the top-left region of the buffer-throughput plots. The existence of such points indicates that the tiling strategy of the dataflow (mapping sizes in our directive representation) significantly affects the efficiency of buffer use. We also observe the impact of hardware support for each type of data reuse, discussed in~\autoref{table:HW_Impl_choices}. ~\autoref{table:DesignTweaks} shows such design points found in the design space of the KC-P dataflow on the VGG16-CONV2 layer presented in the first row of~\autoref{fig:DSE_ALL_HALF} (a). The first design point is the throughput-optimized design represented as a star in the first row of~\autoref{fig:DSE_ALL_HALF}. When the bandwidth gets smaller, the throughput drops significantly, but the energy remains similar. However, the lack of spatial multicast or reduction support results in approximately a 47\% energy increase, as the third and fourth design points show. We observe that the throughput-optimized designs have a moderate number of PEs and buffer sizes, implying that hardware resources need to be distributed not only to PEs but also to the NoC and buffers for high PE utilization. Likewise, we observe that the buffer amount does not directly increase throughput or energy efficiency. 
These results imply that all the components are intertwined and need to be well balanced to obtain a highly efficient accelerator. \section{Quantitative Dataflow Analysis} \label{sec:framework} \insertWideFigure{AnalysisFramework}{An overview of \textsc{MAESTRO}\xspace's analysis framework. For simplicity, we omit components other than the analysis engines.} \insertFigure{PerformanceCostAnalysisEngine}{A high-level overview of the algorithms in the performance and cost analysis engines.} In this section, we present our approach to quantitatively estimating the runtime and energy efficiency of dataflows on a target DNN model and hardware configuration. Based on this approach, we implement an analysis framework, \textsc{MAESTRO}\xspace, which consists of five engines: tensor, cluster, reuse, performance analysis, and cost analysis. ~\autoref{fig:AnalysisFramework} provides a high-level overview of the five engines. In the interest of space, we discuss only the high-level algorithms, omitting edge-case handling, multiple layers, and multiple cluster levels; details are available in our open-source repository~\cite{maestro_opensource}. \subsection{Preliminary Engines} \label{subsec:prelim_engines} \betterparagraph{Tensor Analysis} As described in~\autoref{fig:AnalysisFramework}, the tensor analysis engine identifies the dimension coupling of each tensor based on the specified layer operation. For example, in depth-wise convolutions, the output activation is coupled with the input-channel dimension rather than the output-channel dimension. Note that depth-wise convolution can be understood either in this manner or by eliminating the input-channel dimension (C). We select this convention because it aligns with \textsc{MAESTRO}\xspace's input-centric cost model. \textsc{MAESTRO}\xspace allows users to specify tensors with arbitrary dimension coupling, and the coupling relationships are input to the rest of the engines, which provides generality to \textsc{MAESTRO}\xspace. 
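The dimension coupling described in the Tensor Analysis paragraph can be pictured as a per-tensor list of the data dimensions that index it. The tables below are a hypothetical illustration (dimension names follow the common K/C/Y/X/R/S convention for convolutions), not \textsc{MAESTRO}\xspace's actual data structures; note how depth-wise convolution couples the output with C instead of K.

```python
# Hypothetical coupling tables: which data dimensions index each tensor.
NORMAL_CONV = {
    "input":  ["C", "Y", "X"],       # input channel + spatial dims
    "weight": ["K", "C", "R", "S"],  # output/input channels + kernel dims
    "output": ["K", "Y", "X"],       # output channel + spatial dims
}

# In depth-wise convolution, the output activation is coupled with the
# input-channel dimension C rather than an output-channel dimension K.
DEPTHWISE_CONV = {
    "input":  ["C", "Y", "X"],
    "weight": ["C", "R", "S"],
    "output": ["C", "Y", "X"],
}

def coupled(coupling, tensor, dim):
    """True if data dimension `dim` indexes `tensor` under `coupling`."""
    return dim in coupling[tensor]
```

Downstream engines only consume such coupling relationships, which is why arbitrary user-specified couplings (and hence operators beyond dense convolution) can be analyzed with the same machinery.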
\betterparagraph{Cluster Analysis} A PE cluster refers to a group of PEs that processes one or more data dimensions in parallel, specified by the \textsc{Cluster}\xspace directive. ~\autoref{fig:AnalysisFramework} (b) describes the analysis in the Cluster Analysis (CLA) engine. The CLA engine analyzes a given dataflow description written in dataflow directives to identify the number of sub-clusters, extract cluster dataflow directives and data dimensions, and augment the given dataflow description for missing directives, stride handling, and so on, for each cluster level. \betterparagraph{Reuse Analysis} ~\autoref{fig:AnalysisFramework} (b) includes a high-level description of the analysis in the data reuse analysis (RA) engine. The RA engine identifies the amount of temporal and spatial reuse across adjacent time steps, where a time step is the data iteration corresponding to the inner-most non-temporally/spatially unrolled mapping directive. \subsection{Performance Analysis} \label{subsec:performance-analysis} ~\autoref{fig:AnalysisFramework} (a) presents a high-level overview of the performance and cost analysis engines, and~\autoref{fig:PerformanceCostAnalysisEngine} shows the high-level algorithm of the performance analysis (PA) engine. Utilizing the reuse information computed in the RA engine, the PA engine computes the runtime for all the possible cases based on the data dimensions and dataflow. The computed runtime is multiplied by the number of each case's occurrences and accumulated to compute the total runtime. The runtime of a DNN accelerator consists of communication delay (L2 to L1, L1 to L2, local forwarding) and computation delay in each PE, which are directly related to the accelerator's hardware parameters. The PA engine considers double buffering when it computes the outstanding delay (the worst-case communication/computation delay) that directly contributes to the runtime. 
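The outstanding-delay computation with double buffering described above can be sketched as follows: with double buffering, the next tile's communication overlaps with computation, so only the slowest component is exposed per unit time step; without it, the delays serialize. This is a minimal illustration under those assumptions, with hypothetical names, not \textsc{MAESTRO}\xspace's actual code.

```python
def step_delay(compute, comm_in, comm_out, double_buffered=True):
    """Delay of one unit time step at a cluster level (a sketch).

    `compute` is the PE computation delay, `comm_in` the L2-to-L1 and
    `comm_out` the L1-to-L2 communication delay, in cycles. With double
    buffering, communication for the next tile overlaps computation, so
    the outstanding delay is the maximum of the three; without it, the
    delays are assumed to serialize.
    """
    if double_buffered:
        return max(compute, comm_in, comm_out)
    return compute + comm_in + comm_out
```

The total runtime estimate then follows the text: sum, over all data-dimension iteration cases, of the step delay of each case multiplied by its number of occurrences.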
To estimate communication delays, \textsc{MAESTRO}\xspace relies on its analytical network-on-chip (NoC) model, based on a pipe model similar to those of other analytical models~\cite{parashar2019timeloop}. The pipe model utilizes two parameters, the pipe width (bandwidth) and length (average delay), to estimate the communication delay via the NoC. The model incorporates a pipelining effect, as many packet-switching NoCs exhibit similar behavior. Various combinations of bandwidth and average delay enable modeling NoC structures with reasonable accuracy. For example, Eyeriss~\cite{eyeriss_isca} has a two-level hierarchical bus with dedicated channels for the input, weight, and output tensors. Therefore, a bandwidth of 3X properly models the top-level NoC. The average latency depends on implementation details; users should choose an appropriate value accordingly (e.g., the use of ingress/egress buffers, which add one cycle of delay each). For more complicated NoC architectures, users should select the bisection bandwidth and average latency assuming uniform communication from a global buffer to all the PEs. For example, in an $N \times N$ 2D mesh network with the injection point at one of the corners, the bisection bandwidth is $N$, and the average latency is $N$. Assuming that the user has access to the NoC implementation information, the NoC model is precise when the NoC is a bus or a crossbar. \subsection{Cost Analysis} \label{subsec:cost-analysis} ~\autoref{fig:PerformanceCostAnalysisEngine} describes how the cost analysis (CA) engine computes the number of buffer accesses and estimates the buffer size requirement for each tensor, considering the data reuse computed in the RA engine and the data iteration cases. Utilizing the access counts and the number of MAC operations, \textsc{MAESTRO}\xspace computes the energy cost. 
\textsc{MAESTRO}\xspace includes an energy model based on those activity counts and Cacti~\cite{muralimanohar2009cacti} simulation, which can be replaced by any other energy model based on such activity counts (e.g., Accelergy~\cite{iccad_2019_accelergy}). \subsection{Complex Dataflow Analysis} \label{subsec:complex_dataflow_analysis} \betterparagraph{Multi-cluster Analysis} Multi-cluster cases can be split into single-cluster cases with the data dimension size set as the mapping size of the corresponding mapping directive in the upper cluster. The outstanding delay of a cluster level becomes the computation delay of the next cluster level above. To handle various edge cases that affect all the lower cluster levels, \textsc{MAESTRO}\xspace recursively performs performance and cost analysis, as illustrated in~\autoref{fig:AnalysisFramework}. In the recursive analysis, the base case is the inner-most cluster whose sub-clusters are actual PEs. Although \textsc{MAESTRO}\xspace performs recursion, the complexity is not high because the number of PE cluster levels is typically two or three. Note that each of the edge cases at each cluster level also needs to be recursively processed. However, in most cases, we observe that the number of edge cases across cluster levels is fewer than 20, which is still tractable. \betterparagraph{Other DNNs} Although we use dense convolution as our running example for simplicity, \textsc{MAESTRO}\xspace can model a variety of layers (LSTM hidden layer, pooling, fully-connected, transposed convolution, and so on) based on the generality of the data-centric approach. Our data-centric approach supports all the operations representable as a loop nest with two input tensors and one output tensor wherein all the tensor indices are coupled with only one or two data dimensions through affine functions. \textsc{MAESTRO}\xspace can also model uniformly distributed sparsity for any supported dataflow. 
Support for more complex statistical sparsity distributions is future work. \subsection{Model Validation} \label{subsec:validation} \insertFigure{Validation}{Runtime model validation against MAERI~\cite{kwon2018maeri} RTL simulation with 64 PEs and the Eyeriss~\cite{chen2017eyeriss_issc} runtime reported in the paper with 168 PEs.} We validated \textsc{MAESTRO}\xspace's performance model against RTL simulations of two accelerators, MAERI~\cite{kwon2018maeri} and Eyeriss~\cite{chen2017eyeriss_issc}, running VGG16 and AlexNet, respectively\footnote{MAERI RTL is open-source. For Eyeriss, we use the reported runtime for AlexNet because detailed mapping parameters are described only for AlexNet in the paper.}. \autoref{fig:Validation} shows that the runtimes estimated by \textsc{MAESTRO}\xspace are within 3.9\% absolute error of the cycle-accurate RTL simulation and the reported processing delay~\cite{chen2017eyeriss_issc} on average.
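The validation metric above (average absolute error of estimated runtimes against a cycle-accurate reference) can be sketched as follows; this is an illustrative formulation, not the exact script used for the reported 3.9\% figure.

```python
def mean_abs_pct_error(estimated, measured):
    """Average absolute error (in %) of model estimates against
    reference runtimes, e.g., RTL-simulated cycle counts (a sketch)."""
    errors = [abs(est - ref) / ref * 100.0
              for est, ref in zip(estimated, measured)]
    return sum(errors) / len(errors)

# Example: estimates of 104 and 96 cycles against references of 100
# cycles each give an average absolute error of 4.0%
```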
\section{Introduction} \label{sec:intro} Audio source separation (ASS) has recently been intensively studied. Various approaches have been introduced, such as local Gaussian modeling \cite{DuongVG10, FitzgeraldLB16}, non-negative factorization \cite{LiutkusFB15, Roux15, MitsufujiKS16}, kernel additive modeling \cite{LiutkusFRPD14} and their combinations \cite{OzerovF10, LiutkusFR15, FitzgeraldLB162}. Recently, deep neural network (DNN) based ASS methods have shown a significant improvement over conventional methods. In \cite{Nugraha15, Uhlich15}, a standard feedforward fully connected network (FNN) was used to estimate source spectra. A common way to exploit temporal contexts is to concatenate multiple frames as the input. However, the number of frames that can be used is limited in practice to avoid an explosion of the model size. In \cite{Uhlich17}, long short-term memory (LSTM), a type of recurrent neural network (RNN), was used to model longer contexts. However, the model size tends to become excessively large and training becomes slow owing to the full connections between layers and the gate mechanism in an LSTM cell. Recently, convolutional neural networks (CNNs)~\cmmnt{~\cite{LeCun1998}} have been successfully applied to audio modeling of spectrograms \cite{Sercu2015, Takahashi2016, Korzeniowski16, Takahashi2017AENet}, although CNNs were originally introduced to exploit the translation-invariant property of images. A CNN significantly reduces the number of parameters and improves generalization by sharing parameters to model local patterns in the input. However, a standard CNN requires considerable depth to cover long contexts, making training difficult. 
To address this problem, a multi-scale structure was used to adapt a CNN to the ASS problem in \cite{Jansson17, Takahashi17MMDense}, where convolutional layers were applied on multiple {\it scales} obtained by downsampling feature maps, and low-resolution feature maps were progressively upsampled to recover the original resolution. Another problem in applying a two-dimensional convolution to a spectrogram is the biased distribution of the local structure in the spectrogram. Unlike an image, a spectrogram has different local structures depending on the frequency band. Complete sharing of convolutional kernels over the entire frequency range may reduce modeling flexibility. In \cite{Takahashi17MMDense}, we proposed a multi-band structure in which a CNN dedicated to each frequency band was used \cmmnt{which combined} along with a full-band CNN. The novel CNN architecture called DenseNet was extended to the multi-scale and multi-band structure called MMDenseNet, which outperformed an \cmmnt{bi-directional} LSTM system and achieved a state-of-the-art performance for the DSD100 dataset \cite{Liutkus17}. Although it has been suggested that CNNs often work better than RNNs even for sequential data modeling \cite{Bai18, Takahashi17MMDense}, RNNs can also benefit from multi-scale and multi-band modeling because they make it easier to capture long-term dependencies and can save parameters by omitting redundant connections between different bands. Moreover, blending two systems improves the performance even when one system consistently performs better than the other~\cite{Uhlich17}. Motivated by these observations, here we propose a novel network architecture called MMDenseLSTM. This combines LSTM and DenseNet at multiple scales and bands, improving separation quality while retaining a small model size. There have been several attempts to combine CNN and RNN architectures.
In \cite{Sainath15, Zhao18}, convolutional layers and LSTM layers were connected serially \cmmnt{and applied to automatic speech recognition}. Shi \textit{et al.}~proposed convolutional LSTM for the spatio-temporal sequence modeling of rainfall prediction \cite{Shi15}, where matrix multiplications in LSTM cells were replaced with convolutions. In contrast to these methods, in which convolution and LSTM operate at a single scale, we show that combining them at multiple low scales increases the performance and efficiency. Moreover, we systematically compare several architectures to search for the optimal strategy to combine DenseNet and LSTM. Experimental results show that the proposed method outperforms current state-of-the-art methods for the DSD100 and MUSDB18 datasets. Furthermore, MMDenseLSTM even outperforms an ideal binary mask (IBM), which is usually considered as an upper baseline, when we train the networks with a larger dataset. \section{Multi-scale multi-band DenseLSTM} In this section, we first summarize multi-scale multi-band DenseNet (MMDenseNet) as our base network architecture. Then, we introduce strategies to combine the {\it dense block} and LSTM at multiple scales and multiple bands. Finally, we discuss the architecture of MMDenseLSTM in detail. \subsection{MMDenseNet} \label{sec:MMDenseNet} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{MDenseArch2.pdf} \caption{MDenseNet architecture. Multi-scale dense blocks are connected through down- or upsampling layers or through inter-block skip connections. The figure shows the case $s=3$.} \label{fig:mdense} \end{figure} Among the various CNN architectures, DenseNet shows excellent performance in image recognition tasks \cite{Huang2016}.
The basic idea of DenseNet is to improve the information flow between layers by concatenating the outputs of all preceding layers, as $x_l = H_l([x_{l-1}, x_{l-2},\ldots,x_0])$, where $x_l$ and $[\ldots]$ denote the output of the $l$th layer and the concatenation operation, respectively. $H_l(\cdot)$ is a nonlinear transformation consisting of batch normalization (BN) followed by ReLU and convolution with $k$ feature maps. Such dense connectivity enables all layers to receive the gradient directly and also to reuse features computed in preceding layers. To cover the long context required for ASS, multi-scale DenseNet (MDenseNet) applies a dense block at multiple scales by progressively downsampling its output and then progressively upsampling the output to recover the original resolution, as shown in Fig.~\ref{fig:mdense}. Here, $s$ is the scale index, i.e., the feature maps at scale $s$ are downsampled $s-1$ times and have $2^{2(s-1)}$ times lower resolution than the original feature maps. To allow forward and backward signal flow without passing through lower-resolution blocks, an inter-block skip connection, which directly connects two dense blocks of the same scale, is also introduced. In contrast to an image, in an audio spectrogram, different patterns occur in different frequency bands, although a certain amount of translation of patterns exists for a relatively small pitch shift. Therefore, limiting the band that shares the kernels is suitable for efficiently capturing local patterns. MMDenseNet addresses this problem by splitting the input into multiple bands and applying a dedicated MDenseNet to each band. MMDenseNet has demonstrated state-of-the-art performance for the DSD100 dataset with about 16 times fewer parameters than the LSTM model, which obtained the best score in SiSEC 2016 \cite{Liutkus17}.
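The dense connectivity rule above can be sketched numerically; in this toy version (all names and sizes are illustrative) $H_l$ is reduced to a random linear map followed by ReLU, standing in for the BN--ReLU--convolution composite:

```python
import numpy as np

def dense_block(x0, num_layers=4, growth_rate=3, seed=0):
    """Toy dense block: layer l sees the concatenation of all preceding
    outputs, x_l = H_l([x_{l-1}, ..., x_0]); H_l is simplified here to
    ReLU(W @ features), producing `growth_rate` new feature maps."""
    rng = np.random.default_rng(seed)
    features = [x0]                                # (channels, time) maps
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=0)     # channel-wise concat
        W = rng.standard_normal((growth_rate, inp.shape[0])) * 0.1
        features.append(np.maximum(W @ inp, 0.0))  # k new maps per layer
    return np.concatenate(features, axis=0)

x0 = np.ones((2, 8))           # 2 input feature maps over 8 time steps
out = dense_block(x0)          # channels grow as 2 + 4 * 3 = 14
```

Because every layer's output is re-fed to all later layers, the channel count grows linearly with depth at rate $k$, which is why the growth rate dominates the parameter budget of each dense block.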
\subsection{Combining LSTM with MMDenseNet} Uhlich \textit{et al.}~have shown that blending two systems gives better performance even when one system consistently outperforms the other \cite{Uhlich17}. The improvement tends to be more significant when two very different architectures are blended, such as a CNN and an RNN, rather than the same architecture with different parameters. However, the blending of architectures increases the model size and computational cost in an additive manner, which is often undesirable when deploying the systems. Therefore, we propose combining the dense block and an {\it LSTM block} in a unified architecture. The {\it LSTM block} consists of a $1\times1$ convolution that reduces the number of feature maps to $1$, followed by a bi-directional LSTM layer, which treats the feature map as sequential data along the time axis, and finally a feedforward linear layer that transforms the $m^s$ LSTM outputs back to the input frequency dimension $f^s$. We consider three configurations with different combinations of the dense and LSTM blocks, as shown in Fig. \ref{fig:rmdensetypes}. The {\it Sa} and {\it Sb} configurations place the LSTM block after and before the dense block, respectively, while the dense block and LSTM block are placed in parallel and concatenated in the {\it P} configuration. We focus on the use of the {\it Sa} configuration since a CNN is effective at modeling the local structure and the LSTM block benefits from local pattern modeling as it covers the entire frequency range at once. This claim will be empirically validated in Sec. \ref{sec:arcvalid}. Naively inserting LSTM blocks at every scale greatly increases the model size. This is mostly due to the full connection between the input and output units of the LSTM block at the scale $s=1$. To address this problem, we propose the insertion of only a small number of LSTM blocks in the upsampling path for low scales ($s>1$).
This makes it easier for LSTM blocks to capture the global structure of the input with a much smaller number of parameters. On the other hand, a CNN is advantageous for modeling fine local structures; thus, placing only a dense block at $s=1$ is suitable. The multi-band structure is also beneficial for LSTM blocks since the compression from the input frequency dimension $f^s$ to $m^s$ LSTM units is relaxed, or the dimension can even be increased ($f^s<m^s$) while using fewer LSTM units, increasing the modeling capability as discussed in \cite{Zagoruyko16}. The entire proposed architecture is illustrated in Fig. \ref{fig:rmmdense}. To capture patterns that span the bands, an MDenseLSTM for the full band is also built in parallel with the band-dedicated MDenseLSTMs. The outputs of the MDenseLSTMs are concatenated and integrated by the final dense block, as in MMDenseNet. \begin{figure}[t] \centering \includegraphics[width=\linewidth]{MDenseLSTMtypes.pdf} \caption{Configurations with different combinations of dense and LSTM blocks. LSTM blocks are inserted at some of the scales.} \label{fig:rmdensetypes} \end{figure} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{RMMDenseNetArc.pdf} \caption{MMDenseLSTM architecture. Outputs of MDenseLSTMs dedicated to different frequency bands, including the full band, are concatenated and the final dense block integrates the features from these bands to create the final output.} \label{fig:rmmdense} \end{figure} \subsection{Architectural details} \label{sec:detailArch} Details of the proposed network architecture for ASS are described in Table \ref{tab:densearch}. We split the input into three bands at $4.1$kHz and $11$kHz. The LSTM blocks are only placed at bottleneck blocks and at some blocks at $s=2$ in the upsampling path, which greatly reduces the model size. The final dense block has three layers with growth rate $k=12$. The effective context size of the architecture is 356 frames.
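A shape-level sketch of the {\it LSTM block} ($1\times1$ convolution down to one map, a bi-directional recurrence along time, then a linear map back to $f^s$) is given below. A plain tanh RNN stands in for the LSTM cell and all weights are random, so only the tensor shapes, not the gating behaviour, match the block described above:

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_rnn(X, Wx, Wh):
    """Plain tanh recurrence standing in for an LSTM (shapes only)."""
    h, out = np.zeros(Wh.shape[0]), []
    for x_t in X:                                   # X: (T, f_s)
        h = np.tanh(Wx @ x_t + Wh @ h)
        out.append(h)
    return np.array(out)                            # (T, m_s)

def lstm_block(fmap, m_s=8):
    """fmap: (channels, f_s, T) feature maps at one scale."""
    c, f_s, T = fmap.shape
    w1 = rng.standard_normal(c) / c                 # 1x1 conv: c maps -> 1
    seq = np.einsum("c,cft->tf", w1, fmap)          # (T, f_s)
    Wx = rng.standard_normal((m_s, f_s)) * 0.1
    Wh = rng.standard_normal((m_s, m_s)) * 0.1
    fwd = simple_rnn(seq, Wx, Wh)                   # forward direction
    bwd = simple_rnn(seq[::-1], Wx, Wh)[::-1]       # backward direction
    h = np.concatenate([fwd, bwd], axis=1)          # (T, 2 * m_s)
    Wo = rng.standard_normal((f_s, 2 * m_s)) * 0.1  # linear layer back to f_s
    return (Wo @ h.T)[None]                         # (1, f_s, T)

out = lstm_block(np.ones((4, 16, 10)))
```

The point of the sketch is the dimension flow: the block consumes $(c, f^s, T)$ feature maps and returns a single $(1, f^s, T)$ map that can be concatenated with the dense block's output.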
Note that MMDenseLSTM can be applied to an input of arbitrary length since it consists of convolution and LSTM layers. \begin{table}[t] \caption{\label{tab:densearch} {\it The proposed architecture. All dense blocks are equipped with 3$\times$3 kernels with growth rate $k$. $l$ and $m^s$ denote the number of layers and the number of LSTM units in the LSTM block, respectively. $ds$ denotes scale $s$ in the downsampling path while $us$ is that in the upsampling path.}} \vspace{2mm} \centerline{ \small \tabcolsep=3px \begin{tabular}{ c | c | c c c c c c c c c c} \hline band & $k$ & scale & d1 & d2 & d3 & d4 & d5 & u4 & u3 & u2 & u1\\ \hline \multirow{2}{*}{1} & \multirow{2}{*}{14} & $l$ & 5 & 5 & 5 & 5 & - & - & 5 & 5 & 5 \\ & & $m^s$ & - & - & - & 128 & - & - & - & 128 & - \\ \hline \multirow{2}{*}{2} & \multirow{2}{*}{4} & $l$ & 4 & 4 & 4 & 4 & - & - & 4 & 4 & 4 \\ & & $m^s$ & - & - & - & 32 & - & - & - & - & - \\ \hline \multirow{2}{*}{3} & \multirow{2}{*}{2} & $l$ & 1 & 1 & - & - & - & - & - & 1 & 1 \\ & & $m^s$ & - & - & 8 & - & - & - & - & - & - \\ \hline \multirow{2}{*}{full} & \multirow{2}{*}{7} & $l$ & 3 & 3 & 4 & 5 & 5 & 5 & 4 & 3 & 3 \\ & & $m^s$ & - & - & - & 128 & - & - & - & 128 & - \\ \hline \end{tabular} } \end{table} \section{Experiments} \subsection{Setup} \label{sec:setup} We evaluated the proposed method on the DSD100 and MUSDB18 datasets, prepared for SiSEC 2016 \cite{Liutkus17} and SiSEC 2018 \cite{sisec2018}, respectively. MUSDB18 has 100 songs in the {\it Dev} set and 50 in the {\it Test} set, while DSD100 has 50 songs in each. In both datasets, a mixture and its four sources, {\it bass, drums, other} and {\it vocals}, recorded in stereo format at 44.1kHz, are available for each song. Short-time Fourier transform magnitude frames of the mixture, windowed at 4096 samples with 75\% overlap, with data augmentation \cite{Uhlich17} were used as inputs. The networks were trained to estimate the source spectrogram by minimizing the mean square error with the Adam optimizer.
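The input representation described above (magnitude STFT frames, 4096-sample windows, 75\% overlap, i.e. a 1024-sample hop) can be computed as follows; the Hann window is an assumption, since the paper does not state the window function:

```python
import numpy as np

def stft_mag(signal, win_len=4096, overlap=0.75):
    """Magnitude STFT frames: win_len-sample windows with the given
    fractional overlap (75% overlap -> hop of 1024 samples)."""
    hop = int(win_len * (1 - overlap))
    window = np.hanning(win_len)          # assumed window function
    n_frames = 1 + (len(signal) - win_len) // hop
    frames = np.stack([signal[i * hop:i * hop + win_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, win_len//2 + 1)

# one second of a 440 Hz tone at 44.1 kHz, as a stand-in for a mixture
x = np.sin(2 * np.pi * 440 / 44100 * np.arange(44100))
S = stft_mag(x)                           # 40 frames x 2049 frequency bins
```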
For the evaluation on MUSDB18, we used the {\it museval} package \cite{sisec2018}, while we used the BSSEval v3 toolbox \cite{Vincent06} for the evaluation on DSD100 for a fair comparison with previously reported results. The SDR values are the median of the average SDR of each song. \subsection{Architecture validation} \label{sec:arcvalid} \begin{figure}[t] \centering \includegraphics[width=0.84\linewidth]{LSTMscale.pdf} \caption{Effect of LSTM block at different scales.} \label{fig:lstmpos} \end{figure} \begin{table}[t] \caption{\label{tab:ex1} {\it Comparison of MMDenseLSTM configurations.}} \vspace{2mm} \centering{ \begin{tabular}{c | c c c } \hline type &Sa & Sb &P \\ \hline\hline SDR &{\bf 2.83} & 2.31 & 2.47 \\ \hline \end{tabular} } \end{table} In this section we validate the proposed architecture for the singing voice separation task on MUSDB18.\\ \textbf{Combination structure} \hspace{2mm} The SDR values obtained by the \textit{Sa-}, \textit{Sb-} and \textit{P-} type MMDenseLSTMs are tabulated in Table \ref{tab:ex1}. These results validate our claim (Sec. \ref{sec:MMDenseNet}) that the \textit{Sa} configuration performs the best because the LSTM layer can efficiently model the global modulations utilizing the local features extracted by the dense layers at this scale. Henceforth, all experiments use the \textit{Sa} configuration.\vspace{1mm}\\ \textbf{LSTM insertion scale} \hspace{3mm} The efficiency of inserting the LSTM block at lower scales was validated by comparing seven MMDenseLSTMs with a single 64-unit LSTM layer inserted at different scales in band 1 (all other LSTM layers in Table \ref{tab:densearch} are omitted). Figure \ref{fig:lstmpos} shows the percentage increase in the number of parameters compared with that of the base architecture and the mean square error (MSE) values for the seven networks.
It is evident that inserting LSTM layers at low scales in the up-scaling path gives the best performance.\vspace{1mm}\\ \textbf{Contribution of dense and LSTM layers} \hspace{3mm} We further compared the $l2$ norms of the feature maps (Fig.\ref{fig:l2norm}) in the LSTM block $d4$ of band 1. It can be seen that the norm of the LSTM feature map is similar to the highest norm among the dense feature maps. Even though some dense feature maps have low norms, we confirmed that they tend to learn sparse local features. \subsection{Comparison with state-of-the-art methods} \begin{table}[t] \caption{\label{tab:ex2} {\it Comparison of SDR on DSD100 dataset.}} \vspace{2mm} \centering{ \resizebox{\linewidth}{!}{ \begin{tabular}{ c | c c c c c} \hline \multicolumn{1}{c|}{} & \multicolumn{5}{c}{SDR in dB}\\ Method & Bass & Drums & Other & Vocals & Acco. \\ \hline\hline DeepNMF \cite{Roux15} & 1.88 & 2.11 & 2.64 & 2.75 & 8.90 \\ NUG \cite{Nugraha15}\ & 2.72 & 3.89 & 3.18 & 4.55 & 10.29 \\ BLSTM \cite{Uhlich17}\ & 2.89 & 4.00 & 3.24 & 4.86 & 11.26 \\ BLEND \cite{Uhlich17}\ & 2.98 & 4.13 & 3.52 & 5.23 & 11.70 \\ MMDenseNet \cite{Takahashi17MMDense}\ & {\bf 3.91} & 5.37 & 3.81 & 6.00 & 12.10 \\ {\bf MMDenseLSTM} & 3.73 & {\bf 5.46} & {\bf 4.33} & {\bf 6.31} & {\bf 12.73}\\ \hline \end{tabular} }} \end{table} \begin{figure} \centering \includegraphics[width=0.85\linewidth, height=0.2\textwidth]{1.pdf} \caption{Average $l2$ norm of feature maps.} \label{fig:l2norm} \end{figure} \begin{table}[t] \caption{\label{tab:ex2_1} {\it Comparison of SDR on MUSDB18 dataset.}} \vspace{2mm} \centering{ \resizebox{\linewidth}{!}{ \begin{tabular}{ c | c | c c c c c} \hline \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{\#params} & \multicolumn{5}{c}{SDR in dB}\\ Method & [$\times10^6$] & Bass & Drums & Other & Vocals & Acco. 
\\ \hline\hline IBM \ & - & 5.30 & 6.87 & 6.42 & 7.50 & 10.83 \\ \hline BLSTM \cite{Uhlich17}\ & 30.03 & 3.99 & 5.28 & 4.06 & 3.43 & 14.51 \\ MMDenseNet \cite{Takahashi17MMDense}\ & 0.33 & {\bf 5.19} & 6.27 & 4.64 & 3.87 & 15.41 \\ BLEND2 \ & 30.36 & 4.72 & 6.25 & 4.75 & 4.33 & 16.04 \\ {\bf MMDenseLSTM} & 1.22 & {\bf 5.19} & {\bf 6.62} & {\bf 4.93} & {\bf 4.94} & {\bf 16.40}\\ \hline \end{tabular} } } \end{table} We next compared the proposed method with five state-of-the-art methods, DeepNMF \cite{Roux15}, NUG \cite{Nugraha15}, BLSTM \cite{Uhlich17}, BLEND \cite{Uhlich17} and MMDenseNet \cite{Takahashi17MMDense}, on DSD100. The task was to separate the four sources and the accompaniment, which is the residual of the vocal extraction, from the mixture. Here, the multichannel Wiener filter was applied to the MMDenseLSTM outputs as in \cite{Uhlich17,Takahashi17MMDense}. Table \ref{tab:ex2} shows that the proposed method improves the SDRs by an average of $0.2$dB compared with MMDenseNet, showing that the MMDenseLSTM architecture further improves the performance for most sources. To further improve the capability of music source separation and utilize the full modeling capability of MMDenseLSTM, we next trained models with the MUSDB {\it dev} set and an internal dataset comprising 800 songs, resulting in a training set $14$ times larger than the DSD100 {\it dev} set. The proposed method was compared with BLSTM \cite{Uhlich17}, MMDenseNet \cite{Takahashi17MMDense} and a blend of these two systems (BLEND2) as in \cite{Uhlich17}. All baseline networks were trained with the same training set, namely 900 songs. For a fair comparison with MMDenseNet, we configured it with the same base architecture as in Table \ref{tab:densearch}, with an extra layer in the {\it dense blocks} corresponding to the {\it LSTM block} in our proposed method. We also included the IBM as an upper baseline since it uses oracle separation. Table \ref{tab:ex2_1} shows the result of this experiment.
We obtained average improvements of 0.43dB over MMDenseNet and 0.41dB over BLEND2, achieving state-of-the-art results in SiSEC2018 \cite{sisec2018}. The proposed method even outperformed the IBM for {\it accompaniment}. Table \ref{tab:ex2_1} also shows that MMDenseLSTM can efficiently utilize the sequence modeling capability of LSTMs in conjunction with MMDenseNet, having 24 times fewer parameters than the naive combination of BLSTM and MMDenseNet. \section{Conclusion} \label{sec:concl} We proposed an efficient way to combine DenseNet and LSTM to improve ASS performance. The proposed MMDenseLSTM achieves state-of-the-art results on the DSD100 and MUSDB18 datasets. MMDenseLSTM outperforms a naive combination of BLSTM and MMDenseNet despite having far fewer parameters, and even outperforms the IBM for a singing voice separation task when the networks are trained with 900 songs. The improvement over MMDenseNet is smaller for {\it bass}, which will be investigated further in future work. \bibliographystyle{IEEEbib}
\section{Introduction} \label{sec:intro} Studying the interaction between players on the court, in relation to team performance, is one of the most important issues in Sport Science, as team sports' managers have become increasingly aware of the potential of Data Analytics for better managing their teams. The advent of Information Technology Systems (ITS) has recently made it possible to collect, store, manipulate and process large amounts of data. On the one hand, a sequence of relevant events of the match, such as passes, shots and fouls (player-specific) and time-outs (team-specific), takes the name of play-by-play. On the other hand, information on the movement of players on the court has been captured with appropriate Geographical Positioning System (GPS) devices, for example the accelerometer, a device that measures proper acceleration and positioning. Analysing players' interaction, however, is a complex task, as the trajectory of a single player depends on a large number of factors related, among others, to the coach, the single players and the whole team. The trajectory of a player depends on the trajectories of all the other players on the court, both teammates and opponents. Players' interactions have been mainly studied in the new domain of ecological dynamics \cite{travassos2013performance,passos2016performance}. Typically, there are certain role definitions in a sports team that influence movements. Predefined strategies are used by the coach to achieve specific objectives. A common way to deal with this complexity in team sport analysis consists of segmenting a match into phases, as this facilitates the retrieval of significant moments of the game. For example, Perin et al. \cite{perin2013soccerstories} developed a system for the visual exploration of phases in football, while, to the same end, Metulini \cite{metulini2017spatio} proposed motion charts.
Cluster analysis is widely used in the team sports literature. To name a few examples, Sampaio and Janeira \cite{Sampaio2003statistical} applied cluster analysis to investigate the discriminatory power of game statistics between winning and losing teams in the Portuguese Professional Basketball League, using final score differences; Ross \cite{ross2007segmenting} used cluster analysis to segment team sport spectators, identifying potential similarities according to demographic variables; and Csataljay et al. \cite{Csataljay2009performance} used a cluster approach to identify the critical performance indicators that best distinguish between winning and losing performances. However, differently from the aforementioned papers, in this paper we cluster time instants to the aim of segmenting the game into phases. In doing so, we use GPS-tracked data. In this regard, Gon\c{c}alves \cite{gonccalves2018collective} applied a two-step cluster analysis to classify the regularity in teammate dyads' positioning. Metulini et al. \cite{metulini2017space} used cluster analysis on an amateur basketball game to split the match into a number of separate time periods, each identifying homogeneous spatial relations among the players on the court. They also adopted Multidimensional Scaling to visually characterize the clusters and analysed the switches from \textit{defense} to \textit{offense} clusters by means of transition probabilities. This paper aims to fill the gap in Metulini et al. by extending the analysis to multiple matches. Moreover: i) we apply our cluster analysis procedure to professional basketball games, ii) we use the data generated by the algorithm proposed in Metulini \cite{metulini2017filtering} in order to consider active game moments only, and iii) we use a more detailed labelling scheme introducing \textit{transition} moments, which permits a better interpretation of the transition probabilities.
Last, we characterize the clusters in terms of team performance, by retrieving shooting events through a video analysis. \section{Data and Methods} \label{sec:meth} Basketball is a sport generally played by two teams of five players each on a rectangular court. The objective is to shoot a ball through a hoop 46 centimeters in diameter mounted at a height of 3.05 meters on backboards at each end of the court. According to FIBA rules, the match lasts 40 minutes, divided into four periods of 10 minutes each. There is a 2-minute break after the first quarter and after the third quarter of the match, and a 10- to 20-minute break at half-time. In this paper we use tracked data from three games played by Italian professional basketball teams at the Italian Basketball Cup Final Eight. MYagonism (\url{https://www.myagonism.com/}) was in charge of setting up a system to capture these data during the games through accelerometer devices. Each player wore a microchip that, connected to machines installed around the court, collected the player's position (in pixels of 1 $cm^2$ size) on the $x$-axis (court length), the $y$-axis (court width), and the $z$-axis (height). The data, filtered with a Kalman approach, were recorded at the millisecond level and contain information on players' positioning, velocity and acceleration over the full game length. Throughout the text we will call the three games case study 1 (CS1), case study 2 (CS2) and case study 3 (CS3). As the initial dataset covers the full game length, we cleaned it by dropping the pre-match, the quarter- and half-time intervals and the post-match periods, as well as the time-outs and the moments when a player is shooting a free throw. More information on this filtering procedure can be found in Metulini \cite{metulini2017filtering}.
The final dataset for CS1 contains $206,332$ rows, each identifying a millisecond in which the system captured at least one player. The CS2 dataset contains $232,544$ rows, while CS3 contains a total of $201,651$ rows. We apply $k$-means cluster analysis, a method of grouping a set of objects in such a way that objects in the same group (cluster) are more similar to each other than to those in other groups. In our case, the objects are the time instants, expressed in milliseconds, while the similarity is expressed in terms of the distances between players' dyads. In the analyses that follow we only consider moments when a particular lineup is on the court. More specifically, we only consider lineups that played for at least 5 minutes. According to this criterion, we consider two lineups (\textit{p1, p3, p6, p7, p8} and \textit{p1, p4, p5, p7, p10}) for CS1, two (\textit{p1, p2, p4, p5, p6} and \textit{p1, p2, p5, p6, p8}) for CS2, and one lineup for CS3 (\textit{p2, p5, p6, p9, p10}; \textit{p} stands for player). We chose the number of clusters based on the value of the between deviance (BD) / total deviance (TD) ratio and the increments of this value when increasing the number of clusters by one. We consistently, and surprisingly, find $k$=6 (BD/TD of around 45\% across the different lineups, and relatively low increments for increasing $k$, for $k \ge 6$) for almost all the lineups considered. Specifically, increasing the number of clusters from 5 to 6 increases BD/TD by around 11-12\% in all five lineups, while increasing from 6 to 7 increases it by around 6-7\%. \section{Results} \label{sec:res} In this section we describe the clusters in terms of their size and their characteristics in terms of players' positioning patterns and team performance, for the five lineups. For the first lineup of CS1, the first cluster (C1) embeds 13.31\% of the observations (i.e.
13.31\% of the total game time); the other clusters, named C2, ..., C6, have sizes of 19.76\%, 3.40\%, 29.80\%, 6.41\% and 27.31\% of the total sample size, respectively. Consistently across all five lineups, we find a couple of small clusters with less than 10\% of the total observations, and 2-3 larger ones containing at least 20\% of the observations. Cluster profile plots have been used to better interpret the players' spacing structure in each group. Figure \ref{fig:profplot} reports the profile plot for the first lineup of CS1, characterizing the groups in terms of the average distances among players. In this case, we find that the smallest cluster (C3, 3.4\% of the observations) displays large average distances among players (the horizontal lines in Figure \ref{fig:profplot} represent the average values over the game time played by that lineup). On the contrary, the largest cluster (C4, 29.8\%) displays all average distances below the game average. These two facts are confirmed in the second lineup of CS1, where the largest cluster (C5, 40.4\%, not reported to save space) displays very small average distances, while its smallest cluster (C6, 3.2\%) reports large average distances. The same evidence is found in the other case studies. \begin{figure}[!htbp] \includegraphics[scale=0.6]{profileplotcs1p1.jpg} \caption{Profile plots representing, for each of the 6 clusters, the average distance among players' dyads.} \label{fig:profplot} \end{figure} To provide further visual evidence, we used Multidimensional Scaling (MDS), which plots the differences between the groups in terms of positioning in the court. With the MDS algorithm we aim to place each player in an $N$-dimensional space such that the between-player average distances are preserved as well as possible. Each player is then assigned coordinates in each of the $N$ dimensions. We choose $N$=2 and draw the related scatterplots.
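The analysis pipeline described in this and the previous section (pairwise dyad distances per time instant, $k$-means over the instants, the BD/TD criterion for choosing $k$, and classical MDS to map average distances back to court positions) can be sketched on synthetic data as follows; this is an illustrative reconstruction, not the authors' code:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(1)

def dyad_distances(pos):
    """pos: (T, 5, 2) player coordinates per time instant.
    Returns the 10 pairwise teammate distances per instant."""
    pairs = list(combinations(range(5), 2))
    return np.stack([np.linalg.norm(pos[:, i] - pos[:, j], axis=1)
                     for i, j in pairs], axis=1)          # (T, 10)

def kmeans(X, k, iters=50):
    """Plain Lloyd's algorithm (empty clusters keep their old center)."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None] - centers) ** 2).sum(-1).argmin(1)
        centers = np.stack([X[labels == c].mean(0) if (labels == c).any()
                            else centers[c] for c in range(k)])
    return labels, centers

def bd_td_ratio(X, labels, centers):
    """Between deviance / total deviance, the criterion used to pick k."""
    td = ((X - X.mean(0)) ** 2).sum()
    wd = sum(((X[labels == c] - centers[c]) ** 2).sum()
             for c in range(len(centers)))
    return (td - wd) / td

def classical_mds(D, dims=2):
    """Torgerson MDS: coordinates approximately preserving distances in D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:dims]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

pos = rng.normal(size=(500, 5, 2))               # synthetic trajectories
X = dyad_distances(pos)
labels, centers = kmeans(X, k=6)
ratio = bd_td_ratio(X, labels, centers)          # lies in [0, 1]

# MDS step for one cluster: average the dyad distances back into a
# 5x5 matrix and embed the five players in the plane
c0 = np.bincount(labels, minlength=6).argmax()   # largest cluster
D = np.zeros((5, 5))
for (i, j), d in zip(combinations(range(5), 2), X[labels == c0].mean(0)):
    D[i, j] = D[j, i] = d
coords = classical_mds(D)                        # (5, 2) player map
```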
Figure \ref{fig:mds} reports the scatterplot for the first lineup of CS1. We observe strong differences in the positioning patterns among the groups. The figure highlights the large space among players in C3, as also highlighted by the average distances in the profile plot. Moreover, moments in C4 are characterized by players that are close to each other. Although not reported here, the other lineups display similar MDS results: smaller clusters are characterized by large average distances and a large space among players, while larger clusters are characterized by small average distances and players that are close to each other. \begin{figure}[!htbp] \includegraphics[scale=0.6]{MDSmap2dcs1p1.jpg} \caption{Map representing, for each of the 6 clusters, the average position in the $x-y$ axes of the five players in the court, using MDS.} \label{fig:mds} \end{figure} The filtered datasets label each moment as \textit{offense}, \textit{defense} or \textit{transition} by looking at the average $x$-axis position of the five players on the court. A moment is labelled as \textit{transition} when the average $x$-axis position is within the interval [-4,+4], where 0 corresponds to the half court line. Using this information, we associate each cluster with offense, defense or transition, according to how many time instants in a specific cluster correspond to the specific label. \begin{table}[h!]
\centering \begin{tabular}{c|ccccccc} \textbf{Cluster}& & \textbf{1} &\textbf{2}&\textbf{ 3}& \textbf{4}& \textbf{5}& \textbf{6}\\ \hline &&&&&&&\\ \textbf{TR}& & 8.41& 21.76 &\textbf{82.11}& 7.08& \textbf{54.49}& 10.53\\ \textbf{D}&& 22.74& 10.28& 6.6& \textbf{70.48} &23.98& 17.95\\ \textbf{O}& &\textbf{68.85}& \textbf{67.97}& 11.29& 22.45& 21.53& \textbf{71.52}\\ \hline &&&&&&&\\ \textbf{Total} & &100.00 &100.00 &100.00 &100.00 &100.00 &100.00 \\ \end{tabular} \caption{Percentages of time instants classified as Transition (TR), Defense (D) or Offense (O), for each cluster.} \label{tab:lab} \end{table} Table \ref{tab:lab} reports the related percentages for the first lineup in CS1. Clusters C1, C2 and C6 mainly correspond to offense (respectively, 68.85\%, 67.97\% and 71.52\% of the time), C3 and C5 correspond to transition (82.11\% and 54.49\% of the time, respectively), while C4 corresponds to defense (70.48\%). It emerges that the large clusters with small average distances among players contain defensive moments, while the small cluster with large distances corresponds to transition. This result is consistent across all five considered lineups. For example, in the second lineup of CS1, the small cluster (C6) corresponds to transition moments 80.76\% of the time, while the large cluster with small distances (C5) contains moments classified as defense 72.99\% of the time. Table \ref{fig:transmat_ad} shows the transition matrix for the first lineup of CS1, which reports the relative frequency with which subsequent moments in time switch from one cluster to a different one. The main diagonal values of the matrix have been set to zero, so that each column's percentages sum to 100\% without counting subsequent moments in which there is no switch. \begin{table}[h!]
\centering \begin{tabular}{c|ccccccc} \textbf{Cluster label} & & \textbf{ C1} & \textbf{C2} & \textbf{C3} & \textbf{C4} & \textbf{C5} & \textbf{C6} \\ \hline &&&&&&&\\ \textbf{C1} & & 0.00 &11.27& 10 & 8.45 & 15 &10.34\\ \textbf{C2} &&31.03 & 0.00 & 10& 23.94 & 15 &35.34\\ \textbf{C3}&& 0.00 & 1.41 & 0 & 0.00& 0 &7.76\\ \textbf{C4} &&34.48 &21.13 & 0 & 0.00& 25 &35.34\\ \textbf{C5}& & 3.45 &4.23 & 0 & 4.23& 0 &11.21\\ \textbf{C6} &&31.03 &61.97 & 80& 63.38 & 45 & 0.00\\ \end{tabular} \caption{Transition matrix reporting the relative frequency with which subsequent moments ($t$, $t + 1$) switch from one group to a different one.} \label{fig:transmat_ad} \end{table} It emerges that, 34.48\% of the times C1 switches to a new cluster, it switches to C4. It also switches to C2 and C6, 31.03\% of the time each. We note that C2, C4 and C6 are the three largest clusters. C3, marked as \textit{transition}, switches 80\% of the time to C6 (an offensive cluster). Moreover, C2 switches most of the time (61.97\%) to C6. C2 and C6 are both marked as offensive clusters. Since the total number of switches for this lineup is equal to 309, and this lineup played for a total of 8 minutes and 21 seconds, on average there is a switch every 2 seconds. For this reason, there are frequent cluster switches during the same action. The switch from C2 to C6 is an example: players change their positioning pattern during the same action. Table \ref{fig:transmat_ad} highlights that offensive clusters often switch to another offensive cluster (besides C2, C1 switches 31.03\% of the time to C2 and 31.03\% of the time to C6, and C6 switches 35.34\% of the time to C2). This evidence is confirmed in the other case studies: in the second lineup of CS1, there are three offensive clusters (C1, C2 and C3); C1 switches to C2 33.33\% of the time and C3 switches to C2 40.91\% of the time.
In the first lineup of CS2, we find three offensive clusters (C3, C4 and C6); C4 switches to C6 for 75.86\% of the times. With this in mind, we cannot associate a cluster with a whole action played with a particular tactic; instead, we have to interpret offensive clusters as subsequent players' positioning configurations, with the aim of finding the best positioning for a good shot. In light of this, we collect the shooting events of the match. Since play-by-play data are not available for this tournament, we collect such events by watching the video of the game. Zuccolotto et al. \cite{zuccolotto2017big} analysed shooting performance under pressure. Here we study shooting performance with respect to different players' positioning patterns, by associating each shot to the cluster in which the team was at the moment of the shot. We take into consideration only shots from the court, disregarding free throws. During the 8 minutes and 21 seconds in which players \textit{p1, p3, p6, p7} and \textit{p8} were on the court together, the team attempted 15 shots from the court, with 7 made and 8 missed, for a percentage of 46.67\%. We find that most of these shots (8) have been attempted in moments that belong to cluster C6. During this cluster, the team scored 5 out of 8 total attempts, a remarkably high percentage of 62.5\%, well above the average of 46.67\%. Moreover, the team attempted only 4 shots (2 of them made) during cluster C1, only 2 shots (both missed) during cluster C2, and it missed a shot during cluster C5. So, 14 out of 15 shots have been attempted during the clusters labelled as offensive (i.e. C1, C2 and C6), while only one during a transition cluster (C5). Looking at the bottom-right chart in Figure \ref{fig:mds} (C6), we find player 3 far away from the others. We could suppose that the tactic of the team was to leave that player free to shoot on the weaker side of the court.
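The per-cluster shot accounting above reduces to a small tally. A minimal sketch (ours; the `shots` list simply re-encodes the counts quoted in the text) recovers the reported percentages:

```python
def shooting_by_cluster(shots):
    """shots: (cluster_label, made_flag) pairs -> per-cluster stats."""
    stats = {}
    for cluster, made in shots:
        attempts, makes = stats.get(cluster, (0, 0))
        stats[cluster] = (attempts + 1, makes + made)
    # (attempts, makes, shooting percentage) per cluster
    return {c: (a, m, 100.0 * m / a) for c, (a, m) in stats.items()}

# the 15 shots of the first lineup of CS1, as described in the text
shots = ([("C6", 1)] * 5 + [("C6", 0)] * 3 + [("C1", 1)] * 2 +
         [("C1", 0)] * 2 + [("C2", 0)] * 2 + [("C5", 0)])
stats = shooting_by_cluster(shots)   # stats["C6"] -> (8, 5, 62.5)
```

The same tally generalises directly to other lineups once each shot is annotated with the cluster active at its time instant.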
Results support the idea that C6 represents the cluster of (good) shooting moments. Furthermore, the other offensive (C1, C2) and transition (C3, C5) clusters often switch to cluster C6, which supports our hypothesis of subsequent game configurations aimed at finding the best positioning for a good shot: the best positioning to shoot is that of C6 moments. \section{Conclusions} \label{sec:conc} In recent years, the availability of ``big data'' in Sport Science has increased the possibility of extracting insights from games that are useful for managers and coaches, who are interested in improving their team's performance. In particular, with the advent of Information Technology Systems, the availability of players' trajectories permits the analysis of space-time patterns with a variety of approaches. With this paper we pursue the points raised by Metulini et al. \cite{metulini2017space} as suggestions for future research, by analyzing multiple professional games and relating clusters to team shooting performance. We segmented the game into phases of play and characterized each phase in terms of the spacing structure among players, the relative distances among them, and whether they represent an offensive, a defensive or a transition play, finding substantial differences among the phases. Moreover, we analysed this structure in terms of shooting performance, finding the cluster corresponding to the best shooting performance. These results shed light on the potential of data-mining methods for players' movement analysis in team sports. In future research we aim to better explain the relation between players' positioning and team performance, adding more play-by-play data and analysing this relationship over longer time spans and multiple matches. \section{Acknowledgement} Research carried out in collaboration with the Big\&Open Data Innovation Laboratory (BODaI-Lab), University of Brescia (project nr.
03-2016, title Big Data Analytics in Sports, \url{www.bodai.unibs.it/BDSports/}), granted by Fondazione Cariplo and Regione Lombardia. The authors would like to thank MYagonism (\url{https://www.myagonism.com/}) for having provided the data and Paola Zuccolotto (University of Brescia) for the useful discussions. \input{REF_SIS18_Palermo} \end{document}
\section{Introduction} Successful diagnosis and treatment of lung cancer is highly dependent on early detection of lung nodules. Radiologists are analyzing an ever-increasing amount of imaging data (CT scans) every day. Computer Aided Detection (CAD) systems are designed to help radiologists in the screening process. However, automatic detection of lung nodules with CADs remains a challenging task. One reason is the high variation in texture, shape, and position of nodules in CT scans, and their similarity with other nearby structures. Another reason is the discrepancy between the large search space (i.e., entire lung fields) and the comparatively tiny size of the nodules. Detection of tiny/small objects has remained a very challenging task in computer vision, which so far has only been solved using computationally expensive multi-stage frameworks. Current state-of-the-art methods for lung nodule detection follow the same multi-stage detection frameworks as in other computer vision areas. The literature for lung nodule detection and diagnosis is vast. To date, the common strategy for all available CAD systems for lung nodule detection is to use a candidate identification step (also known as region proposal). While some of these studies apply low-level appearance-based features as a prior to drive this identification task~\cite{lopez2015large}, others use shape and size information~\cite{krishnamurthy2016automatic}. Related to deep learning based methods, Ypsilantis et al. proposed to use recurrent neural networks in a patch-based strategy to improve nodule detection \cite{ypsilantis2016recurrent}. Krishnamurthy et al. proposed to detect candidates using a $2D$ multi-step segmentation process; a group of hand-crafted features was then extracted, followed by a two-stage classification of candidates \cite{krishnamurthy2016automatic}. In a similar fashion, Huang et al.
proposed a geometric-model-based candidate detection method, followed by a $3D$ CNN to reduce the number of false positives (FPs) \cite{huang2017lung}. Golan et al. used a deep $3D$ CNN with a small input patch of $5\times 20\times 20$ for lung nodule detection. The network was applied to the lung CT volume multiple times using a sliding window and an exhaustive search strategy to output a probability map over the volume \cite{golan2016lung}. There have also been detailed investigations of high-level discriminatory information extraction using deep networks to perform better FP reduction~\cite{setio2016pulmonary}. Setio et al. used $9$ separate $2D$ convolutional neural networks trained on $9$ different views of candidates, followed by a fusion strategy to perform FP reduction~\cite{setio2016pulmonary}. Another study used a modified version of Faster R-CNN, the state-of-the-art object detector at the time, for candidate detection and a patch-based $3D$ CNN for the FP reduction step \cite{ding2017accurate}. However, all these methods are computationally inefficient (e.g., exhaustive use of sliding windows over feature maps), and often computed in a 2D manner, not appreciating the 3D nature of the nodule space. It is worth mentioning that patch-based methods are 3D, but they suffer from the same computational burdens, and they miss the global notion of the 3D nodule space due to the limited information available in the patches. \textbf{Our Contributions:} We resolve the aforementioned issues by proposing a completely $3D$ deep network architecture designed to detect lung nodules in a single shot using a single-scale network. To the best of our knowledge, this is the first study to perform lung nodule detection in one step. Specific to the architecture design of the deep network, we make use of convolution blocks with dense connections for this problem, making one-step nodule detection computationally feasible.
We also investigate and justify the effect of different down-sampling methods in our network due to their important role in tiny object detection. Lastly, we argue that lung nodule detection, as opposed to object detection in natural images, can be done with high accuracy using only a single-scale network when the network and its hyper-parameters are carefully designed. \section{Method} Fig.~\ref{fig:sys} shows the overview of the proposed method for lung nodule detection in a single shot. The input to our network is a $3D$ volume of a lung CT scan. The proposed $3D$ densely connected Convolutional Neural Network (CNN) divides the input volume into a grid of size $S\times S\times T$ cells. We model lung nodule detection as a cell-wise classification problem, done simultaneously for all the cells. Unlike commonly used region proposal networks, our proposed network is able to reason about the presence of a nodule in a cell using global contextual information, based on the whole 3D input volume. \vspace{-.6cm} \begin{figure}[h] \centering \includegraphics[scale=0.48]{sys.pdf} \vspace{-.4cm} \caption{Our framework, named S4ND, models nodule detection as a cell-wise classification of the input volume. The input volume is divided by a $16\times 16\times 8$ grid and is passed through a newly designed $3D$ dense CNN. The output is a probability map indicating the presence of a nodule in each cell.\label{fig:sys}} \end{figure} \vspace{-.8cm} \subsection{Single-Scale Detection} As opposed to object detection in natural scenes, we show that lung nodule detection can be performed efficiently and with high accuracy at a single scale. The current literature reports that the most frequently observed nodule sizes fall between $3mm$ and $32mm$~\cite{LUNA16}, most of which are less than $9mm$ and are considered small (def. American Thoracic Society). Nodules less than $3mm$ in size are the most difficult to detect due to their tiny nature and high similarity to vessels.
Based on the statistics of nodule size and the evidence in the literature, we hypothesize that a single-scale framework with the grid size that we defined ($16\times 16\times 8$, leading to a cell size of $32\times 32\times 8$ on a volume of size $512\times 512\times 8$) is sufficient to fit all the expected nodule sizes and provide good detection results without the need to increase the algorithmic complexity to multi-scale. This has been partially proven in other multi-scale studies \cite{dou2017multilevel}. \vspace{-.4cm} \subsection{Dense and Deeper Convolution Blocks Improve Detection} The loss of low-level information throughout a network causes either a high number of false positives or low sensitivity. One efficient way to help the flow of information in a network and preserve this low-level information, combining it with high-level information, is the use of dense connections inside the convolution blocks. We empirically show that deeper densely-connected blocks provide better detection results. This, however, comes at the cost of more computation. In our experiments we found that dense blocks with $6$ convolution layers provide a good balance between detection accuracy and computational efficiency. \vspace{-.4cm} \subsection{Max-Pooling Improves Detection} As we go deeper in a CNN, it is desirable to pick the most descriptive features and pass only those to the next layers. Recently, architectures for object detection in natural images have preferred the use of convolutions with stride $2$ instead of pooling~\cite{liu2016ssd}. In the context of tiny object detection, this feature reduction plays an important role. Since our objects of interest are small, if we carelessly pick the features to propagate we can easily lose the objects of interest through the network and end up with a sub-optimal model. In theory, the goal is to have as little pooling as possible. Also, it is desirable to perform this feature sampling step in a way that minimizes information loss.
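The preference for max pooling when objects are tiny can be illustrated on a toy feature map (a deliberately simplified sketch of ours, not part of the network): a single strong activation the size of a small nodule survives max pooling but is diluted by average pooling.

```python
def pool2x2(fmap, reduce_fn):
    """Downsample a 2D list of numbers over non-overlapping 2x2 windows."""
    return [[reduce_fn([fmap[2 * i][2 * j], fmap[2 * i][2 * j + 1],
                        fmap[2 * i + 1][2 * j], fmap[2 * i + 1][2 * j + 1]])
             for j in range(len(fmap[0]) // 2)]
            for i in range(len(fmap) // 2)]

avg = lambda window: sum(window) / len(window)

# one weak, nodule-sized response in an otherwise flat feature map
fmap = [[0, 0, 0, 0],
        [0, 9, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]

by_max = pool2x2(fmap, max)[0][0]   # 9: the response is forwarded intact
by_avg = pool2x2(fmap, avg)[0][0]   # 2.25: diluted toward the background
```

A strided convolution would sit between these two extremes: like `avg` it mixes the window, but with learned rather than uniform weights.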
There are multiple approaches to sampling information through the network. Average pooling, max pooling and convolutions with stride $2$ are some of the options. In our experiments, we show that max pooling is the best choice of feature sampling for our task as it selects the most discriminative feature in the network. We also show that convolution layers with a stride of $2$ perform better than average pooling. The reason is that convolution with stride $2$ is very similar in nature to weighted averaging, with the weights being learned in a data-driven manner. \vspace{-.4cm} \subsection{Proposed 3D Deep Network Architecture} Our network architecture consists of $36$ $3D$ convolution layers, $4$ max-pooling layers and a sigmoid activation function at the end. $30$ of the convolution layers form $5$ blocks with dense connections and without pooling, which enhance low-level information along with high-level information, and the remainder form the transition layers. The details of our architecture can be seen in Fig.~\ref{fig:architecture}. The input to our network is $512\times 512\times 8$ and the output is a $16\times 16\times 8$ probability map. Each cell in the output corresponds to a cell of the original image divided by a $16\times 16\times 8$ grid and decides whether there is a nodule in that cell or not. \vspace{-.6cm} \begin{figure}[h] \centering \includegraphics[scale=0.47]{arch2.pdf} \caption{Input to the network is a $512\times 512\times 8$ volume and output is a $16\times 16\times 8$ probability map representing the likelihood of nodule presence. Our network has $5$ dense blocks, each having $6$ conv. layers. The growth rates of blocks $1$ to $5$ are $16, 16, 16, 32, 64$, respectively. The network has $4$ transition layers and $4$ max-pooling layers.
The last block is followed by a convolution layer with kernel size $1\times 1\times 1$ and an output channel of $1$, and a sigmoid activation function.\label{fig:architecture}} \end{figure} \vspace{-.6cm} \textbf{Densely connected convolution blocks:} As stated, our network consists of $5$ densely connected blocks, each block containing $6$ convolution layers with an output channel of $g$, which is the growth rate of that block. Inside the blocks, each layer receives all the preceding layers' feature maps as inputs. Fig.~\ref{fig:architecture} (top right) illustrates the layout of a typical dense block. Dense connections help the flow of information inside the network. Assume $x_{0}$ is the input volume to the block and $x_{i}$ is the output feature map of layer $i$ inside the block. Each layer is a non-linear function $F_{i}$, which in our case is a composition of convolution, batch normalization (BN) and rectified linear unit (ReLU). With dense connections, each layer receives a concatenation of all previous layers' feature maps as input, $x_{i}=F_{i}([x_{0},x_{1},...,x_{i-1}])$, where $x_{i}$ is the output feature map of layer $i$ and $[x_{0},x_{1},...,x_{i-1}]$ is the channel-wise concatenation of the previous layers' feature maps. \textbf{Growth rate (GR):} is the number of feature maps that each layer $F_{i}$ produces in the block. This number is fixed for each block but it can change from one block to the other. Assume the number of channels in the input layer of a block is $c_{0}$ and the block has $i$ convolution layers with a growth rate of $g$. Then the output of the block will have $c_{0}+ig$ channels, since each of the $i$ layers contributes $g$ feature maps to the concatenation. \textbf{Transition layers:} As can be seen from the above formulations, the number of feature maps inside each dense block increases dramatically. Transition layers are $1\times 1\times 1$ convolution layers with $4\times g$ output channels, where $g$ is the growth rate of the previous block.
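The channel bookkeeping implied by these formulas can be made explicit. A minimal sketch, under the standard DenseNet concatenation convention; the input width of $16$ channels is an illustrative assumption, not a figure from the paper:

```python
def dense_block_channels(c0, num_layers, g):
    """Input channel count seen by each layer of a dense block.

    Layer i receives the block input (c0 channels) concatenated with
    the g-channel outputs of the i preceding layers (DenseNet style).
    """
    return [c0 + i * g for i in range(num_layers)]

def transition_out_channels(g):
    """Output channels of the 1x1x1 transition layer after a block."""
    return 4 * g

# e.g. a 6-layer block with growth rate 16 and an assumed 16-channel input
per_layer_in = dense_block_channels(c0=16, num_layers=6, g=16)
compressed = transition_out_channels(16)   # 64 channels after transition
```

The transition layer thus caps the channel growth between blocks, keeping the fully 3D network small enough for single-pass inference.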
Using a convolution with kernel size $1\times 1\times 1$ compresses the information channel-wise and reduces the total number of channels throughout the network. \textbf{Training the network:} The ground truths created for training our network are $3D$ volumes of size $16\times 16\times 8$. Each element in this volume corresponds to a cell in the input image and has label $1$ if a nodule exists in that cell and $0$ otherwise. The design of our network allows for end-to-end training. We model detection as a cell-wise classification of the input, which is done in one feed-forward pass of the network in one shot. This formulation detects all the nodules in the given volume simultaneously. The loss function for training our network is a weighted cross-entropy defined as: \vspace{-.6cm} \begin{equation} L(Y^{(n)},f(X^{(n)}))=\sum_{i=1}^{k_{n}}-y_{i}\log(f(x_{i})), \end{equation} where the $Y$s are the labels and the $X$s are the inputs. \vspace{-.3cm} \section{Experiments and Results} \textbf{Data and evaluation:} To evaluate the detection performance of S4ND, we used the Lung Nodule Analysis (LUNA16) Challenge dataset (consisting of a total of $888$ chest CT scans, slice thickness $<2.5$ mm, with ground truth nodule locations). For training, we performed a simple data augmentation by shifting the images in $4$ directions by $32$ pixels. We sampled the 3D volumes for training so that nodules appear in random locations, to avoid bias toward the location of nodules. We performed $10$-fold cross validation to evaluate our method following the LUNA challenge guidelines. Free-Response Receiver Operating Characteristic (FROC) analysis was conducted to calculate sensitivity and specificity~\cite{kundel2008receiver}. As suggested by the challenge organizers, sensitivity at $7$ FP/scan rates (i.e. $0.125, 0.25, 0.5, 1, 2, 4, 8$) was computed. The overall \textit{score} of the system (Competition Performance Metric, CPM) was defined as the average sensitivity over these $7$ FP/scan rates.
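The cell-wise ground truth described above is a direct binning of nodule centres into grid cells. A minimal sketch (ours; the example centre coordinate is made up, and cells are assumed to tile the volume evenly):

```python
def grid_labels(nodule_centers, vol_shape=(512, 512, 8), grid=(16, 16, 8)):
    """Build the 16x16x8 binary ground-truth grid for one input volume.

    Each nodule centre (x, y, z), in voxel coordinates, marks with 1
    the grid cell that contains it; all other cells stay 0.
    """
    cell = tuple(v // g for v, g in zip(vol_shape, grid))
    labels = [[[0] * grid[2] for _ in range(grid[1])]
              for _ in range(grid[0])]
    for x, y, z in nodule_centers:
        labels[x // cell[0]][y // cell[1]][z // cell[2]] = 1
    return labels

lab = grid_labels([(100, 260, 3)])   # marks cell (3, 8, 3)
```

One such grid per training volume is all the supervision the cell-wise classification formulation needs.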
\textbf{Building blocks of S4ND and comparisons:} This subsection explains how we build the proposed S4ND network and provides a detailed comparison with several baseline approaches. We compared the performance of S4ND with state-of-the-art algorithms, including SSD (single-shot multi-box object detection)~\cite{liu2016ssd}, known to be very effective for object detection in natural scenes. We show that SSD suffers from low performance in lung nodule detection, even when trained from scratch on the LUNA dataset. A high degree of scale bias and the known difficulties of lung nodule detection (texture, shape, etc.) in CT data can be considered potential reasons. To address this poor performance, we propose to replace the convolution layers with \textit{dense} blocks to improve the information flow in the network. Further, we experimentally tested the effects of various down-sampling techniques. Table \ref{table:results} shows the results of different network architectures along with the number of parameters based on these combinations. We implemented the SSD-based architecture with $3$ different pooling strategies: (1) average pooling (2D Dense Avepool), (2) replacing pooling layers with convolution layers with kernel size $3\times 3$ and stride $2$ (2D Dense Nopool), and (3) max pooling (2D Dense Maxpool). Our experiments show that max pooling is the best choice of feature sampling for tiny object detection as it selects the most discriminating feature in each step. \textit{2D Dense Nopool} outperforms normal average pooling (\textit{2D Dense Avepool}) as it is, in concept, a learnable averaging over $3\times 3$ regions, based on the way we defined the kernel size and stride. \textbf{3D Networks, growth rate (GR), and comparisons:} We implemented S4ND in a completely 3D manner. The growth rate for all the blocks inside the network was initially fixed to $16$ (3D Dense).
However, we observed that increasing the growth rate in the last $2$ blocks of our network, where the computational expense is lowest (from $16$ to $32$ and $64$, respectively), improved the detection performance (3D Increasing GR in Table \ref{table:results}). Also, having deeper blocks, even with a fixed growth rate of $16$ for all the blocks, helped the information flow in the network and improved the results further (3D Deeper Blocks in Table \ref{table:results}). The final proposed method benefits from both deeper blocks and an increasing growth rate in its last two blocks. Fig.~\ref{fig:baselinecomparison} (left) shows the FROC comparison of the proposed method with the baselines. The 10-fold cross validation results were compared with the current state-of-the-art lung nodule detection method (3D DCNN, the best published result on the LUNA dataset)~\cite{ding2017accurate}. Our proposed method outperformed the best available results both in sensitivity and FROC score, while using as few as a third of its parameters, and without the need for multi-stage refinements.
\begin{table}[htb] \caption{Comparison of different models with varying conditions.} \vspace{.25cm} \scalebox{1.1}{ \begin{tabular}{|c|l|c|c|c|} \hline \rowcolor{lightgray} & Model & Sensitivity\% & Num of parameters & CPM\\ \cline{2-5} \parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{\tiny\textbf{Randomly selected 1-fold}}}}&2D SSD & 77.8\% & 59,790,787 & 0.649 \\ \cline{2-5} &2D Dense Avepool & 84.8\% & 67,525,635 & 0.653\\ \cline{2-5} &2D Dense Nopool & 86.4\% & 70,661,955 & 0.658 \\ \cline{2-5} &2D Dense Maxpool & 87.5\% & 67,525,635 & 0.672 \\ \cline{2-5} &3D Dense & 93.7\% & 694,467 & 0.882\\ \cline{2-5} &3D Increasing GR & 95.1\% & 2,429,827 & 0.890\\ \cline{2-5} &3D Deeper Blocks & 94.2\% & 1,234,179 & 0.913\\ \cline{2-5} &Proposed (S4ND) & \textbf{97.2\%} & 4,572,995 & \textbf{0.931}\\ \hline \parbox[t]{2mm}{\multirow{2}{*}{\rotatebox[origin=c]{90}{\tiny\textbf{10-fold}}}}&3D DCNN \cite{ding2017accurate} & 94.6\% & 11,720,032 & 0.891\\ \cline{2-5} &Proposed (S4ND) & \textbf{95.2\%} & 4,572,995 & \textbf{0.897}\\ \hline \end{tabular}} \label{table:results} \end{table} \vspace{-.3cm} \begin{figure}[h] \centering \includegraphics[scale=0.28]{froc_comparison_modified.pdf} \includegraphics[scale=0.28]{comparisonsoa_modified.pdf} \caption{Comparison with baselines as well as with the state of the art. The numbers after each method in the legend show the Competition Performance Metric (CPM). \label{fig:baselinecomparison}} \end{figure} \vspace{.35cm} \textbf{Major findings:} (1) We obtained a $0.897$ FROC rate in 10-fold cross validation, consistently outperforming the state-of-the-art methods as well as other alternatives. (2) SSD (the state of the art for object detection in natural images) resulted in the lowest accuracy in all experiments. The proposed S4ND, on the other hand, showed that a single-scale, single-shot algorithm performs better and is more suited to the tiny object detection problem.
(3) The proposed method achieved better sensitivity, specificity, and CPM in both the single-fold and 10-fold experiments, where S4ND used less than half the parameters of 3D DCNN (the current state of the art in lung nodule detection). (4) A careful organization of the architecture helps avoid computationally heavy processing. We have shown that max pooling is the best choice of feature selection throughout the network among currently available methods. (5) Similarly, dense and deeper connections improve the detection rates through better information flow across layers. It should be noted that the runtime of our algorithm for a whole scan, in the test phase, varies from $11~secs$ to $27~secs$ depending on the number of slices in the scan, on a single NVIDIA TITAN Xp GPU workstation with $64$ GB of RAM. \section{Conclusion} This paper introduces a single-shot, single-scale, fast lung nodule detection algorithm without the need for additional FP removal or user guidance for refinement of the detection process. Our proposed deep network structure is fully 3D and densely connected. We also critically analyzed the role of densely connected layers as well as max pooling, average pooling and fully convolutional down-sampling in the detection process. We present a fundamental solution to address the major challenges of current region-proposal-based lung nodule detection methods: the candidate detection and feature resampling stages. We experimentally validated the proposed network's performance both in terms of accuracy (high sensitivity/specificity) and efficiency (fewer parameters and higher speed) on the publicly available LUNA data set, with extensive comparison with natural object detector networks as well as the state-of-the-art lung nodule detection methods. A promising future direction will be to combine the diagnosis stage with detection. \bibliographystyle{splncs03}
\section{Introduction} A fundamental problem in complex algebraic and K{\"{a}}hler\xspace geometry is to determine the relationship between smooth projective varieties and compact K{\"{a}}hler\xspace manifolds. Since a compact complex manifold is projective if and only if it admits a K{\"{a}}hler\xspace form whose cohomology class is rational, the following question suggests itself. \begin{ques}[Kodaira problem] \label{Kodaira problem} Is it possible to make any compact K{\"{a}}hler\xspace manifold~$X$ projective by an arbitrarily small deformation $X_t$ of its complex structure? \end{ques} \noindent Such a deformation will be called an \emph{algebraic approximation} of $X$. See \cref{def alg approx} for the precise notion. Kodaira proved that every compact K{\"{a}}hler\xspace surface can be deformed to an algebraic surface~\cite[Thm.~16.1]{Kod63}. In higher dimensions, the Kodaira problem remained open until in~\cite{Voi04} Voisin gave counterexamples (of Kodaira dimension $\kappa = 0$) in any dimension $\ge 4$. In~\cite{Voi06}, she even constructed examples (uniruled and of even dimension $\ge 10$) of compact K{\"{a}}hler\xspace manifolds $X_{\mathrm{Voi}}$ such that no compact complex manifold $X'$ bimeromorphic to $X_{\mathrm{Voi}}$ admits an algebraic approximation. At first sight, this seems to provide a definite negative answer to the Kodaira problem. However, from the viewpoint of the Minimal Model Program (MMP), it is natural to take into account also singular bimeromorphic models. A most influential statement in this direction is Peternell's conjecture that minimal models of compact K{\"{a}}hler\xspace manifolds should admit an algebraic approximation. This conjecture has recently spawned substantial progress on the Kodaira problem in dimension three~\cite{AlgApprox, ClaudonHoeringKahlerGroups, Lin16, Lin17a, Lin17b}. 
That said, an obvious desire arises to revisit Voisin's example $X_{\mathrm{Voi}}$ and to investigate whether some singular model of it is approximable. By construction, $X_{\mathrm{Voi}}$ comes equipped with a bimeromorphic map to a mildly singular K{\"{a}}hler\xspace space $X$, and the map $X_{\mathrm{Voi}} \to X$ is a (composition of) $K_X$-negative extremal contractions. Our first result shows that this new space $X$ does not admit an algebraic approximation. Of course, one would then like to contract (or flip) further extremal rays, hoping to arrive at an approximable model. We show that this is impossible: $X$ is minimal in the sense that every run of the $K_X$-MMP immediately yields a Mori fibre space\xspace. Actually, we prove an even stronger statement---see~\labelcref{main.cont} below. In a slightly different direction, one might consider the Mori fibrations of a given uniruled space and ask whether approximability of the base of such a fibration implies approximability of the total space. Our example $X$ shows that this is likewise not the case. Summing up, what we prove is the following: \begin{thm}[Non-approximable minimal uniruled K{\"{a}}hler\xspace space] \label{main} For every even number $n \ge 10$, there exists an $n$-dimensional uniruled compact K{\"{a}}hler\xspace space $X$ with the following properties: \begin{enumerate} \item \label{main.sg} $X$ has only terminal quotient singularities. \item \label{main.cont} Any bimeromorphic map $X \to X'$ to a normal complex space $X'$ is an isomorphism. In particular, every run of the $K_X$-MMP immediately terminates with a Mori fibration. \item \label{main.mori} There is a Mori fibration $X \to Y$ such that $Y$ admits an algebraic approximation. \item \label{main.approx} $X$ does not admit an algebraic approximation. \end{enumerate} \end{thm} In~\cite{Lin17b}, Lin has shown that any uniruled K{\"{a}}hler\xspace \emph{threefold} is approximable. 
Our result shows that in higher dimensions, the situation becomes considerably more complicated. We are not aware of any natural condition on a uniruled K{\"{a}}hler\xspace space that would guarantee, at least conjecturally, the existence of an algebraic approximation. This suggests that higher-dimensional uniruled spaces are quite pathological from this point of view. \subsection*{Open questions} We cannot exclude the possibility that our example $X$ is bimeromorphic to an approximable K{\"{a}}hler\xspace space $X'$ in some haphazard way. But by~\labelcref{main.cont}, the existence of such an $X'$ would not be explained by general principles such as the MMP. Hence from a systematic viewpoint, we do not expect such an $X'$ to exist. Nevertheless, this is of course an interesting question. All we can say at the moment is that such an $X'$ would necessarily have non-rigid singularities. This follows from our proof of~\labelcref{main.approx}. \subsection*{Acknowledgements} This project was started during a stay at the Ma\-the\-ma\-tisch\-es Forschungs\-institut Ober\-wolfach, whose hospitality is unmatched. \section{Basic facts and definitions} \subsection*{Complex spaces} All complex spaces are assumed to be separated, connected and reduced, unless otherwise stated. An irreducible compact complex space $X$ is said to be \emph{of Fujiki class $\cC$} (or \emph{in $\cC$}, for short) if it is bimeromorphic to a compact K{\"{a}}hler\xspace manifold. We say that $X$ is \emph{Moishezon} if its field of meromorphic functions $\mathscr M} \newcommand{\sN}{\mathscr N} \newcommand{\sO}{\mathscr O(X)$ has maximal transcendence degree $\operatorname{trdeg}_\ensuremath{\mathbb C} \mathscr M} \newcommand{\sN}{\mathscr N} \newcommand{\sO}{\mathscr O(X) = \dim X$. Being Moishezon is equivalent to being bimeromorphic to a projective manifold. We say that a (not necessarily irreducible) compact complex space is Moishezon if each of its irreducible components is Moishezon. 
\subsection*{Resolution of singularities} A \emph{resolution of singularities} of a complex space $X$ is a proper bimeromorphic morphism $f \colon \widetilde X \to X$, where $\widetilde X$ is smooth. We say that the resolution is \emph{projective} if $f$ is a projective morphism. In this case, if $X$ is projective (resp.~compact K{\"{a}}hler\xspace) then so is $\widetilde X$. A resolution is said to be \emph{strong} if it is an isomorphism over the smooth locus of $X$. It will be important for us that resolving singularities is not only possible any-old-how, but there is a canonical way of doing so: \begin{thm}[Functorial resolutions] \label{funct res} There exists a \emph{resolution functor} which assigns to any complex space $X$ a strong projective resolution $\pi_X \colon \cR(X) \to X$, such that $\cR$ commutes with smooth maps in the following sense: For any smooth morphism $f \colon W \to X$, there is a unique smooth morphism $\cR(f) \colon \cR(W) \to \cR(X)$ such that the following diagram is a fibre product square. \[ \xymatrix{ \cR(W) \ar^-{\cR(f)}[rr] \ar_-{\pi_W}[d] & & \cR(X) \ar^-{\pi_X}[d] \\ W \ar^-f[rr] & & X. } \] \end{thm} \begin{proof} See~\cite[Thm.~3.45]{Kol07}. \end{proof} \subsection*{K{\"{a}}hler\xspace spaces} \label{sec kahler spaces} While we will not work directly with the definition of a singular K{\"{a}}hler\xspace space, we include the definition here for the reader's convenience. \begin{dfn}[K{\"{a}}hler\xspace space] \label{def kahler} Let $X$ be a normal complex space. 
A \emph{K{\"{a}}hler\xspace form} $\omega$ on $X$ is a K{\"{a}}hler\xspace form $\omega^\circ$ on the smooth locus $\Reg X \subset X$ such that $X$ can be covered by open sets $U_\alpha$ with the following property: there is an embedding $U_\alpha \hookrightarrow W_\alpha$ of $U_\alpha$ as an analytic subset of an open set $W_\alpha \subset \ensuremath{\mathbb C}^{n_\alpha}$ and a strictly plurisubharmonic $\sC^\infty$ function $f_\alpha \colon W_\alpha \to \ensuremath{\mathbb R}$ such that \[ \omega^\circ\big|_{U_\alpha \cap \Reg X} = \big( \mathrm i\partial\bar\partial f_\alpha \big) \big|_{U_\alpha \cap \Reg X}. \] A normal complex space $X$ is said to be \emph{K{\"{a}}hler\xspace} if there exists a K{\"{a}}hler\xspace form on $X$. \end{dfn} For example, the analytification of a normal complex projective variety is a K{\"{a}}hler\xspace space. \subsection*{Deformation theory} \label{sec def theory} We collect some notation and basic facts from deformation theory. \begin{dfn}[Deformations of complex spaces] A \emph{deformation} of a complex space $X$ is a proper flat morphism $\pi \colon \frX \to (S, 0)$ from a (not necessarily reduced) complex space $\frX$ to a complex space germ $(S, 0)$, together with the choice of an isomorphism $\frX_0 \cong X$, where we write $\frX_s \coloneqq \pi^{-1}(s)$ for the fibre over any $s\in S$. We usually suppress both the base point $0 \in S$ and the choice of the isomorphism from notation. \end{dfn} \begin{dfn}[Locally trivial deformations] A deformation $\pi \colon \frX \to S$ is called \emph{locally trivial} if for every $x \in \frX_0$ there exist open subsets $0 \in S^\circ \subset S$ and $x \in U \subset \pi^{-1}(S^\circ)$ and an isomorphism \[ \xymatrix{ U \ar^-\sim[rr] \ar_-\pi[dr] & & (\frX_0 \cap U) \times S^\circ \ar^-{\operatorname{pr}_2}[dl] \\ & S^\circ.
& } \] \end{dfn} \begin{dfn}[Rigid singularities] \label{def rigid sing} A compact complex space $X$ is said to have \emph{rigid singularities} if every deformation of $X$ is locally trivial. \end{dfn} \begin{dfn}[Algebraic approximations] \label{def alg approx} Let $X$ be a compact complex space and $\pi \colon \frX \to S$ a deformation of $X$. Consider the set of projective fibres \[ S^{\mathrm{alg}} \coloneqq \big\{ s \in S \;\big|\; \frX_s \text{ is projective} \big\} \subset S \] and its closure $\overline{S^{\mathrm{alg}}} \subset S$. We say that $\frX \to S$ is an \emph{algebraic approximation of $X$} if $0 \in \overline{S^{\mathrm{alg}}}$. \end{dfn} \section{Voisin's example: construction and properties} The aim of this section is threefold. First we recall Voisin's example from~\cite{Voi06} in order to fix notation and for the reader's convenience. Second, we investigate some of its properties which have not been discussed by Voisin. In particular, we take a closer look at the singularities arising in the construction. Third, we make the example as concrete as possible by providing an explicit example of Voisin's ``property (\textasteriskcentered)''. \begin{dfn}[Scenic\xspace tori] A \emph{scenic\xspace torus} is a pair $(T, \phi)$ consisting of an $n$-dimensional complex torus $T$ and an endomorphism $\phi \colon T \to T$ such that the induced map $\phi_* \colon \Hh1.T.\ensuremath{\mathbb C}. \to \Hh1.T.\ensuremath{\mathbb C}.$ has the following property: the eigenvalues $\mu_1, \dots, \mu_{2n}$ of $\phi_*$ are pairwise distinct, none of them are real, and the Galois group $\Gal \left( \factor{ \ensuremath{\mathbb Q}( \mu_1, \dots, \mu_{2n} ) }{ \ensuremath{\mathbb Q} } \right)$ is the full symmetric group $\mathfrak S_{2n}$.
\end{dfn} \subsection{Polynomials with large Galois group} In~\cite[\S1]{Voi04}, it is explained how to construct a scenic\xspace torus starting from a rank $2n$ lattice $\Gamma$ and an endomorphism $\phi_\ensuremath{\mathbb Z}$ of $\Gamma$ whose characteristic polynomial has full symmetric Galois group and no real roots, as above. So for us it only remains to give an example of such a lattice and endomorphism. We will see that such examples are abundant for any value of $n$. The following theorem gives a criterion for the characteristic polynomial $f$ of $\phi_\ensuremath{\mathbb Z}$ to have the desired Galois group. \begin{thm}[Polynomials with full symmetric Galois group] \label{vanderWaerden} Let $f \in \ensuremath{\mathbb Z}[x]$ be a monic polynomial of degree $d$ with the following properties: \begin{enumerate} \item The image of $f$ in $\mathbb F_2[x]$ is irreducible. \item The image of $f$ in $\mathbb F_3[x]$ splits into a linear factor and an irreducible factor of degree $d - 1$. \item The image of $f$ in $\mathbb F_5[x]$ splits into an irreducible quadratic factor and one or two irreducible factors of odd degree. \end{enumerate} Then the splitting field $K$ of $f$ has Galois group $\Gal(K/\ensuremath{\mathbb Q}) = \mathfrak S_d$. \end{thm} \begin{proof} See~\cite[\S66, p.~204]{vanderWaerdenAlgebra}. \end{proof} For any prime $p$, there exist irreducible polynomials over $\mathbb F_p$ of any given degree. Thus for any $d$ we can find monic polynomials $f_2, f_3, f_5 \in \ensuremath{\mathbb Z}[x]$ which over $\mathbb F_2, \mathbb F_3, \mathbb F_5$ split as described in \cref{vanderWaerden}. Then $f \coloneqq -15 f_2 + 10 f_3 + 6 f_5 + 30 k$ is, for any $k \in\ensuremath{\mathbb Z}$, a monic polynomial of degree $d$ with Galois group $\mathfrak S_d$. If $d = 2n$ is even and $k$ is sufficiently large, then this polynomial does not have any real roots.
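As a sanity check (our addition, not part of the paper's argument), this recipe can be verified mechanically. The following sketch assumes a Python environment with the sympy library; it assembles $f$ for $d = 8$ and $k = 4$ from three local factors with the required splitting behaviour and checks the hypotheses of \cref{vanderWaerden} together with the absence of real roots.

```python
# Sanity check (assumes sympy is installed): build f = -15*f2 + 10*f3 + 6*f5 + 30*k
# and verify the three factorization conditions and the absence of real roots.
from sympy import Poly, expand, real_roots, symbols

x = symbols("x")

# Local models, chosen so that f_p has the required splitting over F_p.
f2 = x**8 + x**4 + x**3 + x + 1                    # irreducible mod 2
f3 = (x - 1) * (x**7 + x**2 + 2)                   # linear times degree-7 factor mod 3
f5 = (x**2 + 2) * (x**3 + x + 1) * (x**3 + x + 4)  # quadratic times two cubics mod 5

# Since -15 = 1 mod 2, 10 = 1 mod 3, 6 = 1 mod 5, and each of these coefficients
# is divisible by the other two primes, f = f_p mod p for p = 2, 3, 5.
f = expand(-15 * f2 + 10 * f3 + 6 * f5 + 30 * 4)

def factor_degrees(poly, p):
    """Sorted degrees of the irreducible factors of poly over F_p."""
    _, factors = Poly(poly, x, modulus=p).factor_list()
    return sorted(q.degree() for q, e in factors for _ in range(e))

assert factor_degrees(f, 2) == [8]         # condition (1)
assert factor_degrees(f, 3) == [1, 7]      # condition (2)
assert factor_degrees(f, 5) == [2, 3, 3]   # condition (3)
assert real_roots(Poly(f, x)) == []        # no real roots, hence no real eigenvalues
```

The expanded polynomial agrees with the explicit degree-$8$ example worked out for the case $n = 4$.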
For a concrete example, consider the case $n = 4$, which is the smallest value to which~\cite{Voi06} applies. Then we may take \begin{align*} f & = -15 \underbrace{ (x^8 + x^4 + x^3 + x + 1) }_{\text{irreducible mod $2$}} + 10 (x - 1) \underbrace{ (x^7 + x^2 + 2) }_{\text{irred.~mod $3$}} \\[2ex] & \hspace{2em} + 6 \underbrace{(x^2 + 2) (x^3 + x + 1) (x^3 + x + 4)}_{\text{each factor irreducible mod $5$}} + 120 \\[2ex] & = x^8 - 10 x^7 + 24 x^6 + 30 x^5 + 15 x^4 + 85 x^3 + 26 x^2 + 65 x + 133 \in \ensuremath{\mathbb Z}[x]. \end{align*} For any $f$ as above, set $\Gamma \coloneqq \factor{\ensuremath{\mathbb Z}[x]}{(f)}$ and take $\phi_\ensuremath{\mathbb Z} \colon \Gamma \to \Gamma$ to be multiplication by $x$. Since $f$ is monic, $\Gamma$ is a lattice and by construction, the minimal polynomial of $\phi_\ensuremath{\mathbb Z}$ is $f$. For degree reasons, $f$ is then also the characteristic polynomial of $\phi_\ensuremath{\mathbb Z}$. \subsection{Voisin's construction} \label{voisin construction} Before we sum up the construction in~\cite{Voi06}, recall the following standard definitions. \begin{dfn}[Dual torus, Poincar\'e bundle, Kummer construction] Let $T$ be an $n$-dimensional complex torus. \begin{enumerate} \item \label{dual torus} The \emph{dual torus} of $T$ is defined as \[ T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \coloneqq \factor{\HH1.T.\O T.}{\HH1.T.\ensuremath{\mathbb Z}.}. \] By the exponential sequence on $T$, the map $\exp \colon \HH1.T.\O T. \to \HH1.T.\O T^*.$ induces an isomorphism \[ T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \overset\sim\lto \Picn(T) \coloneqq \ker \left( \HH1.T.\O T^*. \xrightarrow{\; \mathrm c_1 \;} \HH2.T.\ensuremath{\mathbb Z}. \right). \] This identifies $T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ with $\Picn(T)$, the group of topologically trivial holomorphic line bundles on $T$.
For a point $t \in T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$, we will denote the corresponding line bundle by $\sL_t$. \item \label{poincare} The \emph{Poincar\'e bundle} $\mathscr P$ on $T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ is the line bundle, unique up to isomorphism, with the following two properties: \begin{itemize} \item For all $t \in T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$, we have $\mathscr P \big|_{ T \times \{ t \} } \cong \sL_t$. \item $\mathscr P \big|_{ \{ 0 \} \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} } \cong \O{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}}$ is trivial. \end{itemize} If $\phi$ is an endomorphism of $T$, we define the \emph{twisted Poincar\'e bundle} on $T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ as $\mathscr P_\phi \coloneqq (\phi, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P$. In particular, we have $\mathscr P = \mathscr P_{\mathrm{id}_T}$. \item Consider the automorphism $i$ of $T$ given by $t \mapsto -t$. Its fixed points are exactly the $2^{2n}$ two-torsion points of $T$, the set of which we denote by $\tau_2(T)$. The \emph{(singular) Kummer variety} associated to $T$ is \[ K(T) \coloneqq \factor{T}{\langle i \rangle}. \] \end{enumerate} \end{dfn} \begin{lem}[Pulling back the Poincar\'e bundle] \label{poincare pullback} Let $T$ be a complex torus with an endomorphism $\phi$.
We have the following isomorphisms: \begin{align*} (-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P_\phi & \cong \mathscr P_\phi^{-1}, \\ (\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P_\phi & \cong \mathscr P_\phi^{-1}. \end{align*} These isomorphisms are unique if we require them to respect a choice of trivialization $\mathscr P \big|_{(0,0)} \cong \ensuremath{\mathbb C}$ fixed in advance. \end{lem} \begin{proof} The involution $-\mathrm{id}_T$ acts as $-\mathrm{id}$ on $\pi_1(T)$ and hence also on $\Picn(T)$. Therefore, for all $t \in T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ we have \begin{align*} (-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P \big|_{T \times \{ t \}} & \cong (-\mathrm{id}_T)^* \sL_t \cong \sL_t^{-1} \qquad \text{and} \\ (\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P \big|_{T \times \{ t \}} & \cong \mathscr P \big|_{T \times \{ -t \}} \cong \sL_{-t} \cong \sL_t^{-1}, \end{align*} as well as \begin{align*} (-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P \big|_{\{ 0 \} \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} & \cong
\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}}^* \O{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} \cong \O{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} \qquad \text{and} \\ (\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P \big|_{\{ 0 \} \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} & \cong (-\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \O{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} \cong \O{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}}. \end{align*} This shows that $(-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P^{-1}$ and $(\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})^* \mathscr P^{-1}$ both have the defining properties of the Poincar\'e bundle. By the uniqueness in~\labelcref{poincare}, we obtain the desired isomorphisms in case $\phi = \mathrm{id}_T$. These isomorphisms will only be unique up to a constant. But as $(0, 0)$ is a fixed point of both $(-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$ and $(\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$, there will be only one isomorphism of each kind respecting a fixed trivialization $\mathscr P \big|_{(0,0)} \cong \ensuremath{\mathbb C}$.
For the general case, note that pulling back by the map $(\phi, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$ commutes with both $(-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$ and $(\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$, as $\phi$ is an endomorphism. \end{proof} Let $T$ be a complex torus of dimension $n \ge 2$, equipped with an endomorphism $\phi$. We consider the rank $2$ vector bundle $\sE_\phi \coloneqq \mathscr P_{\phi} \oplus \mathscr P_{\phi}^{-1}$ and the $\P1$-bundle $p_\phi \colon \ensuremath{\mathbb P}(\sE_{\phi}) \to T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$. By \cref{poincare pullback}, the automorphisms $(-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$ and $(\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$ of $T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ induce automorphisms $i_\phi$ and $\hat i_\phi$ of $\ensuremath{\mathbb P}(\sE_{\phi})$. These automorphisms generate a finite group isomorphic to $\factor{\ensuremath{\mathbb Z}}{2\ensuremath{\mathbb Z}} \times \factor{\ensuremath{\mathbb Z}}{2\ensuremath{\mathbb Z}}$. We consider the quotient \[ \sQ_\phi \coloneqq \factor{\ensuremath{\mathbb P}(\sE_\phi)}{\langle i_\phi, \hat i_\phi \rangle}. \] Using this notation, we can finally outline Voisin's example. \begin{cons}[Voisin's example] \label{construction} Let $(T, \phi)$ be a scenic\xspace torus of dimension $n \ge 4$. We do the construction in the above paragraph for the endomorphisms $\mathrm{id}_T$ and $\phi$. For the sake of readability, we drop all indices referring to $\mathrm{id}_T$.
The automorphisms $i$, $\hat i$, $i_\phi$ and $\hat i_\phi$ induce automorphisms $(i, i_\phi)$ and $(\hat i, \hat i_\phi)$ of the fibre product \[ Z \coloneqq \ensuremath{\mathbb P}(\sE) \times_{T\times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} \ensuremath{\mathbb P}(\sE_{\phi}). \] These automorphisms generate a finite group $G$, which is isomorphic to ${\factor{\ensuremath{\mathbb Z}}{2\ensuremath{\mathbb Z}} \times \factor{\ensuremath{\mathbb Z}}{2\ensuremath{\mathbb Z}}}$. We denote the quotient by $X \coloneqq \factor Z G$. We get the following two commutative diagrams, where the second one is the quotient of the first one by the action of $G$: \begin{center} \begin{tikzpicture}[scale=2] \node (A) at (0,1){$Z$}; \node (B) at (1.5,1){$\ensuremath{\mathbb P}(\sE_\phi)$}; \node (C) at (0,0){$\ensuremath{\mathbb P}(\sE)$}; \node (D) at (1.5,0){$T\times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$}; \path[->,font=\scriptsize] (A) edge node[above]{$q_\phi$} (B) (A) edge node[left]{$q$} (C) (C) edge node[above]{$p$} (D) (B) edge node[right]{$p_\phi$} (D) (A) edge node[above]{$\pi$} (D); \end{tikzpicture} \hspace{5em} \begin{tikzpicture}[scale=2] \node (A) at (0,1){$X$}; \node (B) at (1.5,1){$\sQ_\phi$}; \node (C) at (0,0){$\sQ$}; \node (D) at (1.5,0){$K(T)\times K(T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}})$}; \path[->,font=\scriptsize] (A) edge node[above]{} (B) (A) edge node[left]{} (C) (C) edge node[above]{} (D) (B) edge node[right]{} (D) (A) edge node[above]{} (D); \end{tikzpicture} \end{center} \end{cons} The interest in this construction stems from the following result of Voisin. \begin{thm}[\protect{\cite[Theorem~4]{Voi06}}] \label{voi} Let $X'$ be any compact complex manifold bimeromorphically equivalent to $X$. Then $X'$ does not have the homotopy type of a complex projective manifold. In particular, it does not admit an algebraic approximation. 
\qed \end{thm} \subsection{Local description of the singularities} The aim of this subsection is to prove that $X$ has rigid singularities. To this end, we examine the singularities arising in the above construction more closely. \begin{lem}[Singularities of $\sQ$] \label{p1bundle} The spaces $\sQ$ and $\sQ_\phi$ are $(2n + 1)$-dimensional, with only terminal quotient singularities of codimension $n+1$. Locally analytically the singularities look like one of the following double points: \begin{enumerate} \item \label{p1bundle.1} $(\ensuremath{\mathbb C}^{n+1} / \pm) \times \ensuremath{\mathbb C}^n$, or \item \label{p1bundle.2} $(\ensuremath{\mathbb C}^{2n} / \pm) \times \ensuremath{\mathbb C}$, or \item \label{p1bundle.3} $\factor { (\ensuremath{\mathbb C}^n \times \ensuremath{\mathbb C}^n \times \ensuremath{\mathbb C}) }{ \big\langle (-\mathrm{id}, \mathrm{id}, -\mathrm{id}), (\mathrm{id}, -\mathrm{id}, -\mathrm{id}) \big\rangle }$. \end{enumerate} \end{lem} \begin{proof} The variety $\sQ_\phi$ is smooth except possibly for the image of points $x \in \ensuremath{\mathbb P}(\sE_\phi)$ with non-trivial stabilizer. Let $x$ be such a point and denote the fibre containing it by $F \coloneqq p_\phi^{-1} \big( p_\phi(x) \big)$. Then $p_\phi(x) \in T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ is a fixed point of $(-\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$, $(\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$ or $(-\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}})$. This means $p_\phi(x) \in \tau_2(T) \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \cup T \times \tau_2(T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}})$. 
Let $\psi_1 \colon U \times \ensuremath{\mathbb C} \to \mathscr P_\phi$ be a trivialization of $\mathscr P_{\phi}$ near $F$, where we may assume $U \subset T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ to be a symmetric neighbourhood of $p_{\phi}(x)$. Consider the map $(\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}}) \colon U \to U$. By \cref{poincare pullback}, there is an isomorphism $\mathscr P_\phi\big|_U \times_U U \cong \mathscr P_\phi^{-1}\big|_U$. Using this, we obtain trivializations \[ \begin{array}{rlcl} \psi_2 \colon & U \times \ensuremath{\mathbb C} & \to & \mathscr P_\phi^{-1}\big|_U = \mathscr P_\phi\big|_U \times_U U, \\[1ex] & (u, t) & \mapsto & \big( \psi_1 \big( ( \mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} )(u), \ t \big), \ u \big), \quad \text{and} \\[1ex] \psi \colon & U \times \ensuremath{\mathbb C}^2 & \to & \sE_\phi\big|_U, \\[1ex] & \big( u, \ (a, b) \big) & \mapsto & \big( \psi_1(u, a), \ \psi_2(u, b) \big), \end{array} \] of $\mathscr P_{\phi}^{-1}\big|_U$ and $\sE_\phi\big|_U$, respectively. Projectivizing gives a trivialization \[ \ensuremath{\mathbb P}(\psi) \colon U \times \P1 \overset\sim\lto \ensuremath{\mathbb P}(\sE_\phi)\big|_U.
\] In these coordinates the automorphisms $i_\phi, \hat{i}_\phi$ and their composition are given as \[ \begin{array}{rlcl} i_\phi \colon & \big( u, [a : b] \big) & \mapsto & \big( ( -\mathrm{id}_T, \mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} )(u), \ [b : a] \big), \\[1ex] \hat i_\phi \colon & \big( u, [a : b] \big) & \mapsto & \big( ( \mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} )(u), \ [b : a] \big), \qquad \text{and} \\[1ex] i_\phi \circ \hat i_\phi \colon & \big( u, [a : b] \big) & \mapsto & \big( ( -\mathrm{id}_T, -\mathrm{id}_{T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}} )(u), \ [a : b] \big). \end{array} \] Their fixed point sets are precisely \begin{align*} \operatorname{Fix}(i_\phi) & = \big( U \cap ( \tau_2(T) \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} ) \big) \times \big\{ [\pm 1 : 1] \big\}, \\ \operatorname{Fix}(\hat i_\phi) & = \big( U \cap ( T \times \tau_2(T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}) ) \big) \times \big\{ [\pm 1 : 1] \big\}, \quad \text{and} \\ \operatorname{Fix}(i_\phi \circ \hat i_\phi) & = \big( U \cap ( \tau_2(T) \times \tau_2(T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}) ) \big) \times \P1. \end{align*} Now, if $x$ is a fixed point of exactly one of $i_\phi$, $\hat{i}_\phi$, $i_\phi \circ \hat{i}_\phi$, then the above description in coordinates shows that locally at $x$, the quotient $\sQ_\phi$ looks like~\labelcref{p1bundle.1} or~\labelcref{p1bundle.2}, respectively. Otherwise $x$ is a common fixed point of all three automorphisms and we get the local description~\labelcref{p1bundle.3}. All these singularities are terminal by the Reid--Tai criterion~\cite[Theorem~3.21]{Kol13}. 
To be more precise, in our situation that criterion boils down to having the eigenvalue $-1$ with multiplicity $\ge 3$ in every non-identity element of $G$, and this is clearly satisfied. \end{proof} \begin{lem}[Singularities of $X$] \label{sing} The space $X$ is $(2n + 2)$-dimensional, with only terminal quotient singularities of codimension $n+1$. Locally analytically the singularities look like one of the following double points: \begin{enumerate} \item \label{sing.1} $(\ensuremath{\mathbb C}^{n+2} / \pm) \times \ensuremath{\mathbb C}^n$, or \item \label{sing.2} $(\ensuremath{\mathbb C}^{2n} / \pm) \times \ensuremath{\mathbb C}^2$, or \item \label{sing.3} $\factor { (\ensuremath{\mathbb C}^n \times \ensuremath{\mathbb C}^n \times \ensuremath{\mathbb C}^2) }{ \big\langle (-\mathrm{id}, \mathrm{id}, -\mathrm{id}), (\mathrm{id}, -\mathrm{id}, -\mathrm{id}) \big\rangle }$. \end{enumerate} \end{lem} \begin{proof} The space $Z$ is a $\P1 \times \P1$-bundle over $T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$. As $G$ is finite, the quotient ${X = \factor Z G}$ is also a $(2n+2)$-dimensional complex space with only quotient singularities, contained in the image of the fixed point set of the automorphisms $g \in G \setminus \{ \mathrm{id} \}$. The action of these $g$ can be described in local analytic coordinates, analogously to \cref{p1bundle}. This gives the above local analytic description of the singularities, and the Reid--Tai criterion shows again that they are terminal. The singular locus consists of a section over $\tau_2(T) \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \cup T \times \tau_2(T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}})$, together with the fibres over $\tau_2(T) \times \tau_2(T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}})$. \end{proof} \begin{cor}[Local rigidity] \label{rigid} $X$ has rigid singularities.
\end{cor} \begin{proof} According to \cref{sing}, the variety $X$ has only quotient singularities of codimension $n + 1 \ge 5$. Such singularities are rigid by~\cite[p.~72]{Ste03}. (Actually it suffices that the codimension is $\ge 3$.) \end{proof} \section{$X$ does not admit an algebraic approximation} In this section, we prove~\labelcref{main.approx}: \begin{thm} \label{approximation} The space $X$ from \cref{construction} does not admit an algebraic approximation. \end{thm} We begin with an auxiliary lemma. It says that for locally trivial deformations, the functorial resolution of the total space is a deformation of a resolution of the central fibre. This obviously fails if local triviality is dropped (consider e.g.~a deformation $f \colon \sX \to (S, 0)$ where $\sX$ is smooth but $f^{-1}(0)$ is not). \begin{lem}[Resolving lt deformations] \label{ltresolution} Let $f \colon \sX \to S$ be a locally trivial deformation of a compact complex space $X \cong \sX_0$ over a smooth base $S$, and ${\pi_\sX \colon \cR(\sX) \to \sX}$ the functorial resolution of $\sX$, as in \cref{funct res}. Then, after shrinking $S$ around~$0$, the composition $f \circ \pi_\sX \colon \cR(\sX) \to S$ is a locally trivial deformation of its central fibre. Furthermore, that central fibre is a resolution of $X$. \end{lem} \begin{proof} As the deformation $f$ is locally trivial, for every point $x \in X$ there are open neighbourhoods $0 \in S_x \subset S$ and $x \in U_x \subset \sX$ such that $U_x$ is isomorphic to ${(U_x \cap X) \times S_x}$ over $S$. As the fibres of $f$ are compact, after shrinking $S$ we can assume $S_x = S$ for all $x \in X$. Let $x \in X$ and $U \coloneqq U_x \cap X$. The projection $U \times S \to U$ and the open embedding $U \times S \hookrightarrow \sX$ are smooth. Hence we get for the functorial resolutions \[ \cR(U \times S) = \cR(U) \times_U (U \times S) = \cR(U) \times S \] and that $\cR(U\times S) \hookrightarrow \cR(\sX)$ is also an open embedding. 
By definition it follows that $\cR(\sX) \to S$ is a locally trivial deformation. It is also clear that the central fibre has to be smooth. Hence it is a resolution of $X$, via the restriction of $\pi_\sX$. \end{proof} \begin{proof}[Proof of \cref{approximation}] Let $f \colon \sX \to S$ be an arbitrary deformation of $X \cong \sX_0$. Pulling back the deformation to a resolution of $S$, we may assume that $S$ is smooth. As $X$ has rigid singularities by \cref{rigid}, the deformation $f$ is locally trivial. After shrinking $S$, the map $\cR(\sX) \to S$ is a deformation of some resolution $\widetilde X$ of $X$, by \cref{ltresolution}. According to \cref{voi}, no fibre of $\cR(\sX) \to S$ can be projective. Then the same holds for the fibres of $\sX \to S$, because the functorial resolution is a projective morphism. Therefore $f$ is not an algebraic approximation of $X$. \end{proof} \section{$\sQ$ does admit an algebraic approximation} Keeping notation from \cref{construction}, in this section we will prove a substantial part of~\labelcref{main.mori}. \begin{thm} \label{admitsapprox} The space $\sQ$ admits an algebraic approximation. \end{thm} An approximation of $\sQ$ will be constructed out of an approximation of $T$, which is well-known to exist. To this end, we will show that the construction of $\sQ$ can be done in families. \subsection{The Poincar\'e bundle in families} We show in this auxiliary section that for a family $\sX$ of complex tori, the Poincar\'e bundles belonging to the fibres $\sX_s$ locally glue together to a line bundle on the total space of the induced family $(\sX_s \times \sX^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}_s)_s$. \begin{prp}[Deformations of the Poincar\'e bundle] \label{deformpoincare} Let $\pi \colon \sX \to S$ be a deformation of a complex torus $T \cong \sX_0$. 
Then: \begin{enumerate} \item \label{dualfamily} Each fibre $\sX_s$ is a complex torus, and there is a deformation $p \colon \mathscr Y \to S$ with fibres $\mathscr Y_s = \sX_s \times \sX_s^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$. \item \label{relpoincare} After shrinking $S$ around $0$, there is a line bundle $\sL$ on $\mathscr Y$ whose restriction $\sL_s \coloneqq \sL\big|_{\mathscr Y_s}$ is isomorphic to $\mathscr P_s$, the Poincar\'e bundle on $\mathscr Y_s$, for each $s \in S$. \end{enumerate} \end{prp} The proof is based on the following computational lemma. To fix notation, let ${T = \factor V \Lambda}$ be a complex torus. Then we have $\pi_1(T) = \Lambda$ and consequently $\HH1.T.\ensuremath{\mathbb Z}. = \Lambda^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \coloneqq \Hom\Lambda.\ensuremath{\mathbb Z}.$. By~\labelcref{dual torus}, it follows that $\HH1.T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}.\ensuremath{\mathbb Z}. = \Lambda^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft} \hspace{-.5em} \rotatebox{90}{\textup\guilsinglleft}}}} = \Lambda$. \begin{lem} \label{id11} The identity map of $\Lambda$, viewed as an element of $\HH2.T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}.\ensuremath{\mathbb Z}.$ via the natural maps \begin{equation} \label{kunneth} \End\Lambda. = \Lambda^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \otimes \Lambda = \HH1.T.\ensuremath{\mathbb Z}. \otimes \HH1.T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}.\ensuremath{\mathbb Z}.
\hookrightarrow \HH2.T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}.\ensuremath{\mathbb Z}., \end{equation} is equal to $\cc1\mathscr P$, the first Chern class of the Poincar\'e bundle on $T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$. In particular, it is of Hodge type $(1, 1)$. \end{lem} \begin{proof} The inclusion map in~\labelcref{kunneth} is given by the K\"unneth formula, that is, by pulling back and taking cup product. Furthermore, $\HH2.T \times T^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}.\ensuremath{\mathbb Z}.$ is naturally identified with the set of alternating integral $2$-forms on $\Lambda \times \Lambda^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$. Spelled out, this means that an element $g \otimes \mu \in \Lambda^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \otimes \Lambda$ is sent to the following $2$-form on $\Lambda \times \Lambda^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$: \[ \big( (\lambda_1, f_1), (\lambda_2, f_2) \big) \mapsto g(\lambda_1) f_2(\mu) - g(\lambda_2) f_1(\mu). \] Now, choose a basis $\gamma_1, \dots, \gamma_{2n}$ of $\Lambda$ and let $\gamma_1^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}, \dots, \gamma_{2n}^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$ be the dual basis of $\Lambda^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}}$. Then $\mathrm{id}_\Lambda = \sum_{i=1}^{2n} \gamma_i^{\smash{\scalebox{.7}[1.4]{\rotatebox{90}{\textup\guilsinglleft}}}} \otimes \gamma_i$ and by the above formula, under~\labelcref{kunneth} this gets sent to \[ \big( (\lambda_1, f_1), (\lambda_2, f_2) \big) \mapsto f_2(\lambda_1) - f_1(\lambda_2).
\] According to~\cite[Thm.~2.5.1]{BL04}, this form represents $\cc1{\mathscr P}$. This proves the first claim. The second claim is then clear, since the first Chern class of any line bundle is of type $(1, 1)$. \end{proof} \begin{proof}[Proof of \cref{deformpoincare}] Any deformation of a complex torus is a complex torus, so all fibres $\sX_s$ are complex tori by~\cite[Theorem~4.1]{Cat02}. Now we consider the total space of the sheaf $\sX^{\vee} \coloneqq \factor{\RR1.\pi.\sO_{\sX}.}{\RR1.\pi.\ensuremath{\mathbb Z}_{\sX}.}$ on $S$. Since $\RR1.\pi.\sO_{\sX}.$ is a vector bundle and in each fibre we are dividing out a lattice, it is clear that $\pi^{\vee} \colon \sX^{\vee} \to S$ is a flat family of complex tori. By definition, the fibres $(\sX^{\vee})_s$ are the dual tori $(\sX_s)^{\vee}$. Hence $\pi^{\vee}$ is a deformation of $T^{\vee}$, called the \emph{dual family} of $\sX$. The fibre product ${\mathscr Y} \coloneqq \sX \times_S \sX^{\vee}$ fits into a commutative diagram \[ \xymatrix{ {\mathscr Y} \ar^{r'}[rr] \ar_r[d] \ar^p[drr] & & \sX^{\vee} \ar^{\pi^{\vee}}[d] \\ \sX \ar^\pi[rr] & & S, } \] where $p \colon {\mathscr Y} \to S$ is a deformation of $T \times T^{\vee}$.
For each $s \in S$, the fibre ${\mathscr Y}_s$ is the complex torus $\sX_s \times \sX_s^{\vee}$. This proves~\labelcref{dualfamily}. In order to fix the group structure on the complex tori $\sX_s$, we pick an arbitrary section $\sigma \colon S \to \sX$ of $\pi$ and regard it as the zero section. The family $\sX^{\vee} \to S$ already comes equipped with a zero section $\tau \colon S \to \sX^{\vee}$. Pulling back induces sections $j = ( \sigma \circ \pi^{\vee}, \mathrm{id}_{\sX^{\vee}} ) \colon \sX^{\vee} \to {\mathscr Y}$ of $r'$ and $i = ( \mathrm{id}_\sX, \tau \circ \pi ) \colon \sX \to {\mathscr Y}$ of $r$. After shrinking $S$, we may assume that $S$ is Stein and contractible, and hence in particular that the sheaf $\RR2.p.{\ensuremath{\mathbb Z}_{\mathscr Y}}.$ is trivial. Consider the cohomology class $\cc1{\mathscr P}$ on the central fibre ${\mathscr Y}_0 = T \times T^{\vee}$. By the triviality of $\RR2.p.{\ensuremath{\mathbb Z}_{\mathscr Y}}.$, this class extends to a global section $\phi$ of the latter sheaf, and by \cref{id11}, for all $s \in S$ the class $\phi(s) \in \HH2.{\mathscr Y}_s.\ensuremath{\mathbb Z}.$ continues to be the first Chern class of the Poincar\'e bundle ${\mathscr P}_s$ on ${\mathscr Y}_s$.
In particular, $\phi(s)$ is of type $(1, 1)$ for all $s \in S$. The pushforward of the exponential sequence on ${\mathscr Y}$, more precisely the exact sequence \[ \RR1.p.{\O{\mathscr Y}^\times}. \longrightarrow \RR2.p.\ensuremath{\mathbb Z}_{\mathscr Y}. \longrightarrow \RR2.p.\O{\mathscr Y}., \] then shows that $\phi$ lifts to a section $\widetilde\phi \in \HH0.S.{\RR1.p.\O{\mathscr Y}^\times.}.$, at least after shrinking $S$. The space $S$ being Stein and contractible, the sheaf cohomology groups $\HH i.S.\sO_S.$ and $\HH i.S.\ensuremath{\mathbb Z}_S.$ vanish for $i > 0$. By the exponential sequence on $S$, also $\HH i.S.\O S^\times.$ vanishes for $i > 0$. Hence the five-term exact sequence associated to the Leray spectral sequence for $p$ and $\O{\mathscr Y}^\times$ induces an isomorphism $\Pic({\mathscr Y}) \cong \HH0.S.{\RR1.p.\O{\mathscr Y}^\times.}.$. This shows that the germ $\widetilde\phi$ comes from a line bundle $\sL$ on ${\mathscr Y}$. By construction, $\sL$ has the property that $\cc1{\sL_s} = \cc1{{\mathscr P}_s}$ for each $s \in S$, where $\sL_s \coloneqq \sL\big|_{{\mathscr Y}_s}$. We normalize $\sL$ by replacing it with \[ \sL \otimes r^* \big( i^* \sL^{-1} \big) \otimes r'^* \big( j^* \sL^{-1} \big). \] Then, by the uniqueness in~\labelcref{poincare}, we have $\sL_s \cong {\mathscr P}_s$ for each $s \in S$. This is the statement of~\labelcref{relpoincare}. \end{proof} \subsection{Proof of \cref{admitsapprox}} Consider the miniversal deformation of $T = \sX_0$, \[ \pi \colon \sX \to S \coloneqq \Def(T).
\] Using notation from \cref{deformpoincare}, let $p \colon {\mathscr Y} \to S$ be the deformation of $T \times T^{\vee}$ with fibres ${\mathscr Y}_s = \sX_s \times \sX_s^{\vee}$, and let $\sL$ be the line bundle on ${\mathscr Y}$ restricting to the Poincar\'e bundle on each fibre. Consider the rank two vector bundle $\sE_S \coloneqq \sL \oplus \sL^{-1}$ on ${\mathscr Y}$, as well as its projectivization $\ensuremath{\mathbb P}(\sE_S) \to {\mathscr Y}$. It is clear that $\ensuremath{\mathbb P}(\sE_S) \to S$ is a deformation of $\ensuremath{\mathbb P}(\sE)$. Furthermore, the action of $G = \factor{\ensuremath{\mathbb Z}}{2\ensuremath{\mathbb Z}} \times \factor{\ensuremath{\mathbb Z}}{2\ensuremath{\mathbb Z}}$ on the central fibre described in \cref{voisin construction} extends to all of $\ensuremath{\mathbb P}(\sE_S)$, since the other fibres are built in the same way. We denote by $\sQ_S$ the quotient of $\ensuremath{\mathbb P}(\sE_S)$ by $G$. Then $\sQ_S \to S$ is a deformation of its central fibre $(\sQ_S)_0 \cong \sQ$. It is well known that $\pi \colon \sX \to S$ is an algebraic approximation of $T$, see~\cite[Ch.~5, Ex.~1]{Voi03}. Also, $\Picn$ of any projective variety is again projective~\cite[Prop.~7.16]{Voi02}. Finally, projectivized vector bundles over projective varieties and finite quotients thereof remain projective~\cite{Laz04a},~\cite[Ch.~IV, Prop.~1.5]{Knu71}. Taken together, this shows that $\sQ_S \to S$ is an algebraic approximation of $\sQ$, as desired. \qed \section{$X$ cannot be contracted further} The purpose of this section is to prove~\labelcref{main.cont}. \begin{thm}[MMP for $X$] \label{MMP X} \label{contr} Let $X$ be as in \cref{construction}.
\begin{enumerate} \item\label{bimcontraction} Any bimeromorphic map $X \to X'$ onto a normal complex space $X'$ is an isomorphism. \item\label{morifiberspace} Every run of the $K_X$-MMP immediately terminates with one of the Mori fibre spaces $X \to \sQ$ or $X \to \sQ_\phi$. \end{enumerate} \end{thm} \subsection{Auxiliary results} The following proposition is a strengthening of~\cite[Ch.~III, \S4.3,~Lemma]{Sha13} in the analytic setting. If $\pi$ is a submersion and $Z$ is compact K\"ahler, then the claim follows easily from the fact that all fibres of $\pi$ have the same homology class. However, for the applications we have in mind, $Z$ can only be assumed to be of class $\cC$, and then it may contain curves that are homologous to zero. \begin{prp}[Maps contracting fibres of another map] \label{fibercontr} Let $\pi \colon E \to S$ be a proper surjective morphism with connected fibres between complex spaces $E$ and $S$. Furthermore let $f \colon E \to Z$ be any holomorphic map to another complex space $Z$. \begin{enumerate} \item \label{fibercontr.1} If for some $s_0 \in S$, the map $f$ contracts the fibre $\pi^{-1}(s_0)$ to a point, then it contracts all fibres $\pi^{-1}(s)$ for $s$ in a non-empty Zariski-open subset of $S$. \item \label{fibercontr.2} If moreover $\pi$ is equidimensional and $S$ is locally irreducible and connected (e.g.~if $S$ is normal and irreducible), then $f$ contracts each fibre of $\pi$ to a point. \end{enumerate} \end{prp} \begin{proof} We denote the fibres of $\pi$ as $E_s \coloneqq \pi^{-1}(s)$. For~\labelcref{fibercontr.1}, we want to show that the set \[ S_0 \coloneqq \big\{ s\in S \;\big|\; f(E_s) \text{ is a point} \big\} \subset S \] is Zariski-open in $S$. We consider the graph $\Gamma$ of $f$, which is closed in $E \times Z$. The map $\pi \times \mathrm{id}_Z$ is closed because $\pi$ is proper~\cite[Ch.~III, Cor.~4.3]{GPR94} and maps $\Gamma$ onto the image $\Gamma'$ of $\pi \times f$.
Thus $\Gamma'$ is an analytic subspace of $S \times Z$. The projection $p \colon \Gamma' \to S$ has fibres $p^{-1}(s) = \{ s \} \times f(E_s)$, and by assumption $p^{-1}(s_0)$ is a point. Hence the subset of $S$ where the fibres of $p$ are zero-dimensional is non-empty, and it is Zariski-open by~\cite[Ch.~II, Thm.~1.16]{GPR94}. This set equals $S_0$ because the fibres $E_s$ are connected. For~\labelcref{fibercontr.2}, we assume additionally that $S$ is locally irreducible and connected. Then equidimensionality of $\pi$ is equivalent to $\pi$ being an open map~\cite[Ch.~II, Thm.~1.18]{GPR94}. We will show that $S_0$ is also closed in $S$. Then connectedness of $S$ implies $S = S_0$. If $S \setminus S_0 \ne \emptyset$, let $s \in S \setminus S_0$ be arbitrary. Then $f(E_s)$ contains at least two distinct points $x, y \in Z$. As $Z$ is Hausdorff, we can separate these points by disjoint open neighborhoods $U_x, U_y \subset Z$. The preimages $f^{-1}(U_x), f^{-1}(U_y) \subset E$ are disjoint and open in $E$. As $\pi$ is an open map, the set \[ U \coloneqq \pi \big( f^{-1}(U_x) \big) \cap \pi \big( f^{-1}(U_y) \big) \] is an open neighborhood of $s$ in $S$. Note that for any $t \in U$, the set $f(E_t)$ contains at least two distinct points. Hence $U \subset S \setminus S_0$, i.e.~$S \setminus S_0$ is open in $S$. \end{proof} \begin{prp}[Bimeromorphic maps contract curves] \label{exc moishezon} Let $f \colon X \to Y$ be a proper bimeromorphic morphism of normal complex spaces. Then for every $y \in Y$, the fibre $f^{-1}(y)$ is Moishezon. In particular, if $f$ is not an isomorphism then there exists a compact curve $C \subset X$ which is mapped to a point by $f$. \end{prp} \begin{proof} By Hironaka's Chow Lemma~\cite[Cor.~2]{HironakaFlattening}, there exists a projective bimeromorphic morphism $g \colon Y' \to Y$ which factors through $f$ via a morphism $h \colon Y' \to X$.
Then $h$ is automatically a bimeromorphism and closed; hence, for any $y \in Y$, it maps the fibre $g^{-1}(y)$ surjectively onto the fibre $f^{-1}(y)$. As $g^{-1}(y)$ is projective, the fibre $f^{-1}(y)$ is Moishezon. If $f$ is not an isomorphism, then some fibre $f^{-1}(y_0)$ is positive-dimensional. Being Moishezon, it must contain a curve, which is then mapped to the point $y_0$. \end{proof} \subsection{Proof of \cref{MMP X}} Let $\rho \colon Z \to X = \factor ZG$ be the quotient map. Let $f \colon X \to X'$ be a bimeromorphic map onto a normal complex space $X'$. As $f \circ \rho$ is proper, we can consider the Stein factorization $f \circ \rho = \rho' \circ f_Z$, where $f_Z \colon Z \to Z'$ is bimeromorphic, $\rho'$ is finite and $Z'$ is normal. If $f$ is not an isomorphism, then by \cref{exc moishezon} it contracts a curve $C \subset X$. Let $C_Z \subset Z$ be any curve contained in $\rho^{-1}(C)$. Then $f_Z$ contracts $C_Z$ and in particular $f_Z$ is not an isomorphism. So we have reduced~\labelcref{bimcontraction} to showing that every bimeromorphic map $g \colon Z \to Z'$ with $Z'$ normal is an isomorphism. If such a $g$ is not an isomorphism, then by \cref{exc moishezon} it contracts a curve $C \subset Z$. The image $\pi(C)$ has to be a point, as $T \times T^{\vee}$ does not contain any curves by~\cite[Lemma~7]{Voi06}. Hence $C$ is contained in the fibre $\pi^{-1} \big( \pi(C) \big)$. This fibre is isomorphic to $\P1 \times \P1$, and the restrictions of $q$ and $q_\phi$ to it are nothing but the projections onto the first and second factor, respectively. Any curve $C$ in $\P1 \times \P1$ is numerically equivalent to an effective linear combination of the horizontal and the vertical fibre. Hence any morphism from $\P1 \times \P1$ contracting $C$ contracts at least a horizontal or a vertical fibre. So $g$ contracts a fibre of $q$ or a fibre of $q_\phi$.
If $g$ contracts a fibre of the $\P1$-bundle $q \colon Z \to \ensuremath{\mathbb P}(\sE)$, then by \cref{fibercontr} every fibre of $q$ is contracted by $g$. In particular, $g$ factors through $q$, contradicting the assumption that $g$ is bimeromorphic. Analogously, if $g$ contracts a fibre of $q_\phi$, then it factors through $q_\phi$ and we get a similar contradiction. This proves~\labelcref{bimcontraction}. Concerning~\labelcref{morifiberspace}, let us first note that both $X \to \sQ$ and $X \to \sQ_\phi$ are Mori fibre spaces since $-K_X$ is relatively ample and the relative Picard numbers are $\rho(X / \sQ) = \rho(X / \sQ_\phi) = 1$. Conversely, let $\psi \colon X \to W$ be the first map produced by the $K_X$-MMP. By~\labelcref{bimcontraction}, $\psi$ can be neither a divisorial nor a small contraction. Hence $\psi$ is a Mori fibre space. By an argument completely analogous to the proof of~\labelcref{bimcontraction}, we see that $\psi$ has to factor through either $X \to \sQ$ or $X \to \sQ_\phi$. In the first case, it has to be equal to $X \to \sQ$, since otherwise $\rho(X / W)$ would be at least two. In the second case, $\psi$ is equal to $X \to \sQ_\phi$ for the same reason. The proof of~\labelcref{morifiberspace} is thus finished. \qed \section{Proof of \cref{main}} Let $n \ge 10$ be an arbitrary even integer. Pick a scenic torus $(T, \phi)$ of dimension $(n - 2) / 2 \ge 4$, and do \cref{construction} for this choice of $T$. The resulting space $X$ will be our example: Using notation from \cref{construction}, we have $X = \factor Z G$, where $Z$ is obviously uniruled and K\"ahler. Hence also $X$ is uniruled, and it is K\"ahler by~\cite[Ch.~IV, Cor.~1.2]{Var89}. Now, \cref{sing} implies~\labelcref{main.sg}, and~\labelcref{bimcontraction} is~\labelcref{main.cont}.
By~\labelcref{morifiberspace}, our variety $X$ admits the Mori fibre space $X \to \sQ$, where the base $\sQ$ admits an algebraic approximation by \cref{admitsapprox}. This proves~\labelcref{main.mori}, with $Y = \sQ$. However, we showed in \cref{approximation} that $X$ itself does not admit an algebraic approximation. This is~\labelcref{main.approx}.
\section{Introduction and preliminaries} The linear programming approach to control systems is based on the fact that the occupational measures generated by admissible controls and the corresponding solutions of a dynamical system satisfy certain linear equations that represent the system's dynamics in an integral form. Such \lq\lq linearization'' proved to be an efficient tool for dealing with various problems of control, and it has been explored extensively in both deterministic and stochastic settings (see, e.g., \cite{Redbook}, \cite{BhaBor}, \cite{Vivek}, \cite{BorGai}, \cite{BGQ}, \cite{F-V}, \cite{Jean}, \cite{Kurtz}, \cite{Stockbridge1} and, respectively, \cite{Gai8}, \cite{GQ}, \cite{GQ-1}, \cite{GR}, \cite{Goreac-Serea}, \cite{Her-Her-Lasserre}, \cite{Adelman-1}, \cite{Lass-Trelat}, \cite{QS}, \cite{Rubio}, \cite{Vinter} as well as references therein). In the present paper, we continue this line of research by studying the infinite dimensional (ID) linear programming (LP) problem which, along with its dual, allows one to characterize the optimal value of the deterministic long-run average optimal control problem\footnote{Note that infinite time horizon optimal control problems have been traditionally studied with other (not IDLP related) techniques; see, e.g., \cite{Arisawa-1}, \cite{Arisawa-2}, \cite{Arisawa-3}, \cite{Bardi}, \cite{CHL}, \cite{GruneSIAM98}, \cite{GruneJDE98}, \cite{Sorin92}, \cite{QR-2012}, \cite{Z14}, \cite{Z06a} and references therein.} in the general case when the latter may depend on the initial conditions of the system. Note that, while the form and the properties of the IDLP problem related to the ergodic case (that is, the case when the optimal value is independent of the initial conditions) are well understood, the linear programming formulation of the long-run average optimal control problem in the non-ergodic case has not been discussed in the literature.
In fact, a justification of such an LP formulation presents a significant mathematical challenge, and (to the best of our knowledge) this is the first paper aimed at addressing this matter. We consider the optimal control of the system \begin{equation}\label{e-CSO} y'(t)=f(y(t),u(t)), \ \ \ \ \ u(t)\in U, \ \ \ \ \ t\in [0,\infty), \end{equation} where $\ f(\cdot,\cdot): \R^m\times U \to \R^m$ is continuous in $(y,u) $ and satisfies a Lipschitz condition in $y$ uniformly in $u \in U$ ($U$ is assumed to be a compact metric space). The controls $u(\cdot) $ are measurable functions $u(\cdot): [0,\infty)\to U $, with the set of all controls being denoted as $\U$. Given $u(\cdot) \in \U $ and an initial condition $y(0)=y_0 $, the solution of system (\ref{e-CSO}) obtained with this control and this initial condition is denoted as $y(t,y_0,u) $. Let $Y\subset \R^m$ be a compact domain, i.e., a compact set which is the closure of its interior. We denote by $\U_T(y_0)$, $\ \U(y_0)$ the sets of controls such that \begin{equation}\label{e-CSO-1} y(t,y_0,u)\in Y \ \ \end{equation} for any $\ t\in [0,T]$, respectively, for any $\ t\in [0,\infty)$. (The inclusion (\ref{e-CSO-1}) can be interpreted as a state constraint.) Consider the two optimal control problems \begin{equation}\label{Cesaro} \frac{1}{T} \inf_{u(\cdot)\in \U_T(y_0)}\int_0^T k(y(t,y_0,u),u(t))dt:=v_T(y_0) \end{equation} and \begin{equation}\label{Abel} \lambda \inf_{u(\cdot)\in \U(y_0)}\int_0^{\infty}e^{-\lambda t} k(y(t,y_0,u),u(t))dt:=h^{\lambda}(y_0), \end{equation} where $T>0 $, $\lambda>0 $ and $k(y,u): \R^m\times U \to \R^1 $ is a continuous function.
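For intuition, the following numerical sketch (all choices illustrative, not from the paper) evaluates the Ces\`aro-type average in (\ref{Cesaro}) and the Abel-type average in (\ref{Abel}) along a single fixed control for the scalar system $y'=-y+u$ with running cost $k(y,u)=y^2$ and control $u\equiv 0$; for this control $y(t)=y_0e^{-t}$, so both averages tend to the same limit $0$ as $T\to\infty$ and $\lambda\to 0+$.

```python
import numpy as np

# Toy illustration (choices not from the paper): scalar system y' = -y + u on
# Y = [-2, 2], running cost k(y, u) = y^2, constant control u(t) = 0, y(0) = y0.
# Then y(t) = y0 * exp(-t), and both averages below tend to the same limit 0.

def _trapz(vals, t):
    # plain trapezoidal rule, to avoid relying on a particular numpy version
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(t)) / 2)

def cesaro_average(T, y0=1.0, n=200_000):
    # v-type average: (1/T) int_0^T k(y(t), u(t)) dt along the fixed control u = 0
    t = np.linspace(0.0, T, n)
    return _trapz((y0 * np.exp(-t)) ** 2, t) / T

def abel_average(lam, y0=1.0, T_cut=300.0, n=200_000):
    # h-type average: lambda int_0^infty e^{-lambda t} k(y(t), u(t)) dt,
    # with the (negligible) tail beyond t = T_cut truncated
    t = np.linspace(0.0, T_cut, n)
    return lam * _trapz(np.exp(-lam * t) * (y0 * np.exp(-t)) ** 2, t)

for T, lam in [(10.0, 0.1), (100.0, 0.01), (1000.0, 0.001)]:
    print(f"T = {T:6.0f}: Cesaro average {cesaro_average(T):.6f};   "
          f"lambda = {lam:5.3f}: Abel average {abel_average(lam):.6f}")
```

In closed form, the two averages here are $y_0^2(1-e^{-2T})/(2T)$ and $\lambda y_0^2/(\lambda+2)$, which makes the common limit $0$ explicit; of course, $v_T$ and $h^{\lambda}$ themselves involve an infimum over all admissible controls, which this single-control computation only bounds from above.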
The main contribution of this paper is the introduction of an IDLP problem such that the limits $\ \lim_{T\rightarrow\infty}v_T(y_0)$ and $\ \lim_{\lambda\rightarrow 0+}h^{\lambda}(y_0)$ (if they exist) are bounded from above by the optimal value of this IDLP problem and are bounded from below by the optimal value of its corresponding dual (a corollary of this being the fact that the limits $\ \lim_{T\rightarrow\infty}v_T(y_0)$ and $\ \lim_{\lambda\rightarrow 0+}h^{\lambda}(y_0)$ are equal to the optimal value of the IDLP problem provided that there is no duality gap). An extensive literature is devoted to matters related to the existence and equality of the limits $\ \lim_{T\rightarrow\infty}v_T(y_0)$ and $\ \lim_{\lambda\rightarrow 0+}h^{\lambda}(y_0)$. The ergodic case when these limits are constants (that is, when they do not depend on the initial condition $y_0$) was studied, for example, in \cite{Redbook}, \cite{Arisawa-1}, \cite{Arisawa-2}, \cite{Arisawa-3}, \cite{Bardi}, \cite{BhaBor}, \cite{BorGai}, \cite{GQ} (see also references therein). Results for the non-ergodic case were obtained in \cite{BQR-2015}, \cite{GruneSIAM98}, \cite{GruneJDE98}, \cite{Sorin92}, \cite{OV-2012} and \cite{QR-2012} (of particular importance for our consideration being a nice representation for $\ \lim_{T\rightarrow\infty}v_T(y_0)$ and $\ \lim_{\lambda\rightarrow 0+}h^{\lambda}(y_0)$ established in \cite{BQR-2015}). 
In the framework of the linear programming approach, it has been shown (see \cite{GQ} and \cite{GQ-1})\footnote{Extensions of these results to degenerate diffusions appear in \cite{BhaBor}; see also \cite{Redbook}.} that in the ergodic case, the limits $\ \lim_{T\rightarrow\infty}v_T(y_0)$ and $\ \lim_{\lambda\rightarrow 0+}h^{\lambda}(y_0)$ are equal to the optimal value of the IDLP problem \begin{equation}\label{limits-ergodic} k^*:= \min_{\gamma\in W}\int_{Y\times U}k(y,u)\gamma(dy,du), \end{equation} where \begin{equation}\label{limits-ergodic-W} W:= \left\{\gamma\in \mathcal{P}(Y\times U) \ : \ \int_{Y\times U}\nabla \phi(y)^Tf(y,u)\gamma(dy,du)=0 \ \ \forall \phi(\cdot)\in C^1 \right\}, \end{equation} with $\ \mathcal{P}(Y\times U)$ standing for the space of probability measures defined on Borel subsets of $\ Y\times U $ and $\ C^1 $ standing for the space of continuously differentiable functions. The IDLP problem that we are introducing in this paper is obtained by narrowing the feasible set $W$ with the help of additional constraints that allow one to capture the dependence of the limits $\ \lim_{T\rightarrow\infty}v_T(y_0)$ and $\ \lim_{\lambda\rightarrow 0+}h^{\lambda}(y_0)$ on the initial conditions. Note that our results, which establish that it is this IDLP problem and its dual that characterize the limits of the optimal values, are consistent with a celebrated result of the controlled Markov chain theory establishing that additional constraints are needed to characterize the limit long-run average optimal value in the non-ergodic case (see \cite{HK-1} and \cite{HK-2}). Note that this result was obtained in the context of Markov chains with finite state/action spaces and the corresponding finite-dimensional LP problems, for which there is no duality gap (such a gap being certainly a possibility in the IDLP setting; see \cite{And-1} and \cite{And-2}). The paper is organized as follows.
After introducing key notation below, section 2 establishes lower bounds for long run average control viewed as a limiting case of finite horizon or discounted infinite horizon control problems. Section 3 derives matching upper bounds under suitable hypotheses. Together they yield the desired linear program. Section 4 considers a special case, in which there is no duality gap. Section 5 gives some longer proofs, specifically, of a duality result and another allied result used in the foregoing. We conclude this section with some notation and definitions that are used in the sequel. First of all, $\ \mathcal{P}(Y\times U)$, $\ \mathcal{M}_+(Y\times U)$ and $\ \mathcal{M}(Y\times U)$ will stand for the space of probability measures, the space of non-negative measures and the space of all finite measures (respectively) defined on the Borel subsets of $Y\times U$. The convergence in these spaces will always be understood in the weak$^*$ sense, with $\gamma^k \in \mathcal{M}(Y\times U), k =1,2,... ,$ converging to $\gamma \in \mathcal{M}(Y\times U)$ if and only if $\ \lim_{k\rightarrow \infty}\int_{Y\times U} \phi(y,u) \gamma^k (dy,du) \ = \ \int_{Y\times U} \phi(y,u) \gamma (dy,du) \ $ for any continuous $\phi(y,u): Y\times U \rightarrow \R^1$. The set $\mathcal{P}(Y\times U)$ will always be treated as a compact metric space with a metric $\rho$, which is consistent with its weak$^*$ convergence topology. 
Using this metric $\rho$, one can define the Hausdorff metric $\rho_H$ on the set of subsets of $\mathcal{P}(Y\times U)$ as follows: $\forall \Gamma_i \subset \mathcal{P}(Y\times U) \ , \ i=1,2 \ ,$ \vspace{-0.15cm} \begin{equation}\label{e:intro-3} \rho_H(\Gamma_1, \Gamma_2) := \max \{\sup_{\gamma \in \Gamma_1} \rho(\gamma,\Gamma_2), \sup_{\gamma \in \Gamma_2} \rho(\gamma,\Gamma_1)\}, \ \end{equation} where $\rho(\gamma, \Gamma_i) := \inf_{\gamma' \in \Gamma_i} \rho(\gamma,\gamma') \ .$ Note that, although, by some abuse of terminology, we refer to $\rho_H(\cdot,\cdot)$ as a metric on the set of subsets of ${\cal P} (Y \times U)$, it is, in fact, a semi-metric on this set (note that $\rho_H(\Gamma_1, \Gamma_2)=0$ implies $\Gamma_1 = \Gamma_2$ if $\Gamma_1$ and $\Gamma_2$ are closed, but the equality may not be true if at least one of these sets is not closed). Let $u(\cdot) \in \U_T(y_0)$ and $y(t) = y(t,y_0,u(\cdot)), \ t\in [0,T] $. A probability measure $\gamma_{u(\cdot),T} \in {\cal P} (Y \times U)$ is called the {\it occupational measure} generated by the pair $(y(\cdot),u(\cdot) )$ on the interval $[0,T]$ if, for any Borel set $Q \subset Y \times U$, \begin{equation}\label{e:occup-meas-def-S} \gamma _{u(\cdot),T} (Q) = \frac{1}{T}\int _0 ^ T 1_Q (y(t),u(t)) dt ,\end{equation} where $1_Q (\cdot)$ is the indicator function of $Q$. This definition is equivalent to the statement that the equality \begin{equation}\label{e:occup-meas-def-eq-S} \int_{Y\times U} q(y,u)\gamma_{u(\cdot),T} (dy,du) = \frac{1}{T} \int _0 ^ T q (y(t),u(t)) dt \end{equation} is valid for any $q(\cdot)\in C(Y\times U)$ (the space of continuous functions defined on $Y\times U $). Let $u(\cdot) \in {\cal U}(y_0)$ and $y(t)=y(t,y_0,u(\cdot)), \ t\in [0,\infty) $.
A probability measure $\gamma^{\lambda}_{u(\cdot)} \in {\cal P} (Y \times U)$ is called the {\it discounted occupational measure} generated by the pair $(y(\cdot),u(\cdot) )$ if for any Borel set $Q \subset Y \times U$, \begin{equation}\label{e:occup-meas-def} \gamma ^{\lambda}_{u(\cdot)} (Q) = \lambda \int _0 ^ \infty e^{-\lambda t} 1_Q (y(t),u(t)) dt ,\end{equation} the latter definition being equivalent to the equality \begin{equation}\label{e:occup-meas-def-eq} \int_{Y\times U} q(y,u)\gamma ^{\lambda}_{u(\cdot)} (dy,du) = \lambda \int _0 ^ \infty e^{-\lambda t} q (y(t),u(t)) dt \end{equation} for any $q(\cdot)\in C(Y\times U)$. Let $\Gamma_T(y_0)$ and $\Theta^{\lambda}(y_0) $ stand for the set of attainable occupational, respectively, discounted occupational measures: \begin{equation}\label{e:occup-meas-def-eq-1} \Gamma_T(y_0):= \bigcup_{u(\cdot)\in\U_T(y_0)}\{\gamma _{u(\cdot),T}\}, \ \ \ \ \ \ \ \ \Theta^{\lambda}(y_0):= \bigcup_{u(\cdot)\in\U(y_0)}\{\gamma^{\lambda}_{u(\cdot)}\}. 
\end{equation} Note that, due to (\ref{e:occup-meas-def-eq-S}) and (\ref{e:occup-meas-def-eq}), problems (\ref{Cesaro}) and (\ref{Abel}) can be reformulated in terms of occupational (resp., discounted occupational) measures as follows: \begin{equation}\label{e:occup-meas-def-eq-2} \inf_{\gamma\in \Gamma_T(y_0) }\int_{Y\times U}k(y,u)\gamma(dy,du) := v_T(y_0) \end{equation} and \begin{equation}\label{e:occup-meas-def-eq-3} \inf_{\gamma\in \Theta^{\lambda}(y_0) }\int_{Y\times U}k(y,u)\gamma(dy,du) := h^{\lambda}(y_0). \end{equation} \section{IDLP problems and estimates of the limit optimal values from below}\label{Section-Main-1} Consider the IDLP problem \begin{equation}\label{limits-non-ergodic} \inf_{(\gamma, \xi)\in \Omega(y_0)}\int_{Y\times U}k(y,u)\gamma(dy,du):= k^*(y_0), \end{equation} where $$ \Omega(y_0):= \{(\gamma, \xi)\in \mathcal{P}(Y\times U)\times \mathcal{M}_{+}(Y\times U) \ : \ \gamma\in W, $$ \begin{equation}\label{non-ergodic-Omega} \int_{Y\times U}(\phi(y_0)-\phi(y))\gamma(dy,du) + \int_{Y\times U}\nabla \phi(y)^Tf(y,u)\xi(dy,du) =0 \ \ \forall \phi(\cdot)\in C^1 \}. \end{equation} Consider also the IDLP problem \begin{equation}\label{limits-non-ergodic-dual} \sup_{(\mu , \psi(\cdot), \eta(\cdot) )\in \mathcal{D}}\mu :=d^*(y_0)\ \ \ \ \ \ \ \ \ \ \ \end{equation} where $\mathcal{D}$ is the set of triplets $(\mu , \psi(\cdot), \eta(\cdot) )\in \R^1\times C^1\times C^1$ that satisfy the inequalities \begin{equation}\label{limits-non-ergodic-dual-1} k(y,u)+ (\psi (y_0)- \psi (y)) + \nabla \eta (y)^T f(y,u)-\mu \geq 0 \ \ \ \ \ \ \forall\ (y,u)\in Y\times U, \end{equation} \begin{equation}\label{limits-non-ergodic-dual-2} \nabla \psi (y)^T f(y,u)\geq 0 \ \ \ \ \ \ \forall\ (y,u)\in Y\times U. \end{equation} Problem (\ref{limits-non-ergodic-dual}) is dual to (\ref{limits-non-ergodic}) (see \cite{And-1}, \cite{And-2} and Section \ref{Section-Duality-proofs} below).
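To see what membership of $\Omega(y_0)$ amounts to in a concrete case, the following sketch (all choices illustrative, not from the paper: $f(y,u)=u-y$ with $Y=U=[-1,1]$, $y_0=1/2$, control $u\equiv 0$) numerically verifies the constraints in (\ref{non-ergodic-Omega}) for the pair formed by the Dirac measure $\gamma=\delta_{(0,0)}$ at the steady state, which lies in $W$, and the infinite-horizon occupation measure $\xi$ of the trajectory $y(t)=y_0e^{-t}$, tested against the monomials $\phi(y)=y^j$, $j=1,\dots,5$.

```python
import numpy as np

# Illustrative sketch (choices not from the paper): f(y, u) = u - y, u(t) = 0,
# y0 = 1/2, trajectory y(t) = y0 * exp(-t) -> steady state (0, 0).
# Candidate element of Omega(y0):
#   gamma = Dirac measure at (y, u) = (0, 0)  (an element of W, since f(0,0) = 0),
#   xi    = occupation measure of the trajectory: <xi, q> = int_0^inf q(y(t), 0) dt.

y0 = 0.5
t = np.linspace(0.0, 60.0, 600_000)   # the tail beyond t = 60 is negligible
y = y0 * np.exp(-t)

def integral(vals):
    # plain trapezoidal rule for int_0^60 vals(t) dt
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(t)) / 2)

def phi(z, j):  return z ** j         # test functions phi(y) = y^j
def dphi(z, j): return j * z ** (j - 1)

residuals = []
for j in range(1, 6):
    # constraint "gamma in W": grad phi(0)^T f(0, 0) = 0 because f(0, 0) = 0
    w_term = dphi(0.0, j) * (0.0 - 0.0)
    # constraint (non-ergodic-Omega): the gamma-integral of phi(y0) - phi(y)
    # plus the xi-integral of grad phi^T f must vanish
    lhs = (phi(y0, j) - phi(0.0, j)) + integral(dphi(y, j) * (0.0 - y))
    residuals.append(abs(w_term) + abs(lhs))

print("max constraint residual:", max(residuals))   # ~ 0: (gamma, xi) in Omega(y0)
```

With, say, $k(y,u)=y^2+u^2$, this pair has cost $k(0,0)=0$, while $(\mu,\psi(\cdot),\eta(\cdot))=(0,0,0)$ is feasible for (\ref{limits-non-ergodic-dual}) because $k\ge 0$; the resulting inequality $0\ge 0$ is the weak duality $k^*(y_0)\ge d^*(y_0)$ in miniature.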
In particular, the following result is valid.\\ {\bf Lemma 2.1.} {\it The optimal values of the problems (\ref{limits-non-ergodic}) and (\ref{limits-non-ergodic-dual}) are related by the inequality} \begin{equation}\label{limits-non-ergodic-dual-4} k^*(y_0) \geq d^*(y_0). \end{equation} \bigskip {\it Proof.} Take an arbitrary $(\gamma, \xi)\in \Omega(y_0)$ and an arbitrary $(\mu, \psi(\cdot), \eta(\cdot))\in \mathcal{D}$. By integrating (\ref{limits-non-ergodic-dual-1}) over $\gamma$ and taking into account the fact that $\gamma\in W$ (so that the term involving $\eta$ vanishes), we obtain $$ \int_{Y\times U}k(y,u)\gamma(dy,du) + \int_{Y\times U}(\psi (y_0)- \psi (y))\gamma(dy,du)\geq \mu. $$ Also, since $(\gamma, \xi)\in \Omega(y_0)$ and since (\ref{limits-non-ergodic-dual-2}) is satisfied, $$ \int_{Y\times U}(\psi (y_0)- \psi (y))\gamma(dy,du) = - \int_{Y\times U}\nabla\psi (y)^Tf(y,u) \xi(dy,du)\leq 0. $$ Thus $$ \int_{Y\times U}k(y,u)\gamma(dy,du) \geq \mu. $$ Due to the fact that $(\gamma, \xi)\in \Omega(y_0)$ and $(\mu, \psi(\cdot), \eta(\cdot))\in \mathcal{D} $ are arbitrary, the latter implies (\ref{limits-non-ergodic-dual-4}). $\ \Box$ \bigskip As can be readily seen, problem (\ref{limits-non-ergodic}) can be rewritten in the form \begin{equation}\label{limits-non-ergodic-1} \inf_{\gamma\in W_1(y_0)}\int_{Y\times U}k(y,u)\gamma(dy,du)=k^*(y_0), \end{equation} where $$ W_1(y_0):= \{\gamma \ : \ (\gamma, \xi)\in \Omega(y_0)\} =\{\gamma\in W \ : \ \exists \ \xi\in \mathcal{M}_{+}(Y\times U)\ \ \ {\rm such\ that} \ $$ \begin{equation}\label{limits-non-ergodic-2} \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma(dy,du) \ =\ \int_{Y\times U}\nabla \phi(y)^Tf(y,u)\xi(dy,du) \ \forall\ \phi(\cdot)\in C^1\}.
\end{equation} Along with problem (\ref{limits-non-ergodic-1}), let us consider the problem \begin{equation}\label{limits-non-ergodic-3} \min_{\gamma\in W_2(y_0)}\int_{Y\times U}k(y,u)\gamma(dy,du), \end{equation} where $$ W_2(y_0):=\{\gamma\in W \ : \ \exists \ \xi_l\in \mathcal{M}_{+}(Y\times U), \ l=1,2,..., \ \ \ {\rm such\ that} $$ \begin{equation}\label{limits-non-ergodic-4} \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma(dy,du) \ =\ \lim_{l\rightarrow\infty}\int_{Y\times U}\nabla \phi(y)^Tf(y,u)\xi_l(dy,du) \ \forall\ \phi(\cdot)\in C^1\}. \end{equation} It can be readily verified that the set $W_2(y_0) $ is closed (and hence compact, since $W$ is compact). Also, both $W_1(y_0) $ and $W_2(y_0)$ are convex, with \begin{equation}\label{limits-non-ergodic-5} cl W_1(y_0)\subset W_2(y_0), \end{equation} where $cl$ stands for the closure of the corresponding set. \bigskip {\bf Lemma 2.2.} {\it If $W_2(y_0)\neq\emptyset$, then the optimal value of the dual problem (\ref{limits-non-ergodic-dual}) is bounded and equal to the optimal value of problem (\ref{limits-non-ergodic-3}). That is,} \begin{equation}\label{limits-non-ergodic-3-1} d^*(y_0) =\min_{\gamma\in W_2(y_0)}\int_{Y\times U}k(y,u)\gamma(dy,du). \end{equation} \bigskip {\it Proof.} The proof of the lemma is given in Section \ref{Section-Duality-proofs}. $\ \Box $ \bigskip {\bf Proposition 2.3.} {\it Assume that $\ \U(y_0)\neq\emptyset$ (that is, $y_0$ belongs to the viability kernel of (\ref{e-CSO}) in $Y$; see \cite{Aub}). Then} \begin{equation}\label{e-main-1} \limsup_{T\rightarrow\infty}\Gamma_T(y_0)\subset W_2(y_0),\ \ \ \ \ \ \ \ \ \ \ \liminf_{T\rightarrow\infty}v_T(y_0)\geq d^*(y_0), \end{equation} and \begin{equation}\label{e-main-2} \limsup_{\lambda\rightarrow 0}\Theta^{\lambda}(y_0)\subset W_2(y_0),\ \ \ \ \ \ \ \ \ \ \ \liminf_{\lambda\rightarrow 0}h^{\lambda}(y_0)\geq d^*(y_0).
\end{equation} \bigskip {\it Proof.} Note that, due to our assumption that $\ \U(y_0)\neq\emptyset$, the sets $$ \limsup_{T\rightarrow\infty}\Gamma_T(y_0) \ \mbox{and} \ \limsup_{\lambda\rightarrow 0}\Theta^{\lambda}(y_0) $$ are not empty. Also, as can be readily verified (see, e.g., Propositions 2.2-2.4 in \cite{GQ}), \begin{equation}\label{e-main-3-1} \limsup_{T\rightarrow\infty}\Gamma_T(y_0)\subset W, \ \ \ \ \ \ \limsup_{\lambda\rightarrow 0}\Theta^{\lambda}(y_0)\subset W. \end{equation} By (\ref{e:occup-meas-def-eq-2}) and (\ref{e:occup-meas-def-eq-3}), $$ \liminf_{T\rightarrow\infty}v_T(y_0)=\inf\left\{\int_{Y\times U}k(y,u)\gamma(dy,du)\ | \ \gamma\in \limsup_{T\rightarrow\infty}\Gamma_T(y_0)\right\} $$ and $$ \liminf_{\lambda\rightarrow 0}h_{\lambda}(y_0)= \inf\left\{ \int_{Y\times U}k(y,u)\gamma(dy,du)\ | \ \gamma\in \limsup_{\lambda\rightarrow 0}\Theta^{\lambda}(y_0) \right\}. $$ Therefore, by (\ref{limits-non-ergodic-3-1}), the second relationship in (\ref{e-main-1}) and the second relationship in (\ref{e-main-2}) follow from the corresponding first ones. To prove the first relationship in (\ref{e-main-1}), take any $\ \gamma\in \limsup_{T\rightarrow\infty}\Gamma_T(y_0)$. By definition, this means that there exist sequences $T_l\rightarrow\infty $ and $\gamma_l\in \Gamma_{T_l}(y_0)$ such that $\gamma_l\rightarrow\gamma $ as $l\rightarrow\infty$. The fact that the measure $\gamma_l$ belongs to the set $\Gamma_{T_l}(y_0) $ means that this measure is generated by some control $u_l(\cdot)\in \U_{T_l}(y_0) $ and the corresponding solution $y_l(t)=y(t,y_0,u_l) $ of system (\ref{e-CSO}). Consequently, for any $\phi\in C^1 $, $$ \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma_l(dy,du)= \frac{1}{T_l}\int_0^{T_l}(\phi(y_l(t))-\phi(y_0))dt $$ \vspace{-0.4cm} \begin{equation}\label{e-main-4} = \frac{1}{T_l}\int_0^{T_l}\left(\int_0^t\nabla \phi(y_l(s))^Tf(y_l(s),u_l(s))ds\right)dt.
\end{equation} Define $\xi_l\in C(Y\times U)^*$ by the equation $$ \langle \xi_l, q \rangle = \frac{1}{T_l}\int_0^{T_l}\left(\int_0^t q(y_l(s),u_l(s))ds\right)dt \ \ \ \forall q(\cdot, \cdot)\in C(Y\times U). $$ Note that $\ \langle \xi_l, q \rangle\geq 0 $ if $q(\cdot, \cdot)\geq 0 $. Hence, by the Riesz representation theorem (Theorem 4.3.9, p.\ 181 in \cite{Ash}), there exists a measure $\xi_l\in \M_+(Y\times U) $ (denoted by the same symbol) such that $$ \langle \xi_l, q \rangle = \int_{Y\times U} q(y,u)\xi_l(dy,du)\ \ \ \forall q(\cdot, \cdot)\in C(Y\times U). $$ Taking these relationships into consideration, one can rewrite (\ref{e-main-4}) in the form $$ \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma_l(dy,du) = \int_{Y\times U} \nabla \phi(y)^Tf(y,u)\xi_l(dy,du). $$ Passing to the limit in the expression above, one obtains $$ \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma(dy,du) = \lim_{l\rightarrow\infty}\int_{Y\times U} \nabla \phi(y)^Tf(y,u)\xi_l(dy,du). $$ Since, by (\ref{e-main-3-1}), $\gamma\in W$, the latter implies that $\gamma\in W_2(y_0) $. Thus the first relationship in (\ref{e-main-1}) is established. To prove the first relationship in (\ref{e-main-2}), note that \begin{equation}\label{e-main-5-0} \Theta^{\lambda}(y_0)\subset W(\lambda, y_0), \end{equation} where $$ W(\lambda, y_0)= \{\gamma\in \mathcal{P}(Y\times U) \ : $$ \begin{equation}\label{e-main-5} \ \int_{Y\times U}\left(\nabla \phi(y)^Tf(y,u)+ \lambda(\phi(y_0)-\phi(y))\right)\gamma(dy,du)=0 \ \ \forall \phi(\cdot)\in C^1\}; \end{equation} see, e.g., Proposition 2.2 in \cite{GQ}. (In fact, under certain non-restrictive conditions, the closed convex hull of $\Theta^{\lambda}(y_0)$ is equal to $W(\lambda, y_0)$; see \cite{GQ} and \cite{GQ-1}.) By (\ref{e-main-5-0}), to prove that $\ \limsup_{\lambda\rightarrow 0}\Theta^{\lambda}(y_0)\subset W_2(y_0) $, it is sufficient to prove that \begin{equation}\label{e-main-6} \limsup_{\lambda\rightarrow 0}W(\lambda, y_0)\subset W_2(y_0) .
\end{equation} Note that it can be readily verified (see, e.g., Lemma 2.4 in \cite{GQ}) that \begin{equation}\label{e-main-7} \limsup_{\lambda\rightarrow 0}W(\lambda, y_0)\subset W . \end{equation} Take now an arbitrary $\gamma\in \limsup_{\lambda\rightarrow 0}W(\lambda, y_0) $. By definition, this means that there exist sequences $\lambda_l\rightarrow 0 $ and $\ \gamma_l\in W(\lambda_l, y_0) $ such that $\ \gamma_l\rightarrow \gamma $ as $l\rightarrow\infty$. Since $\gamma_l\in W(\lambda_l, y_0) $, we have \begin{equation}\label{e-main-8} \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma_l(dy,du)= \frac{1}{\lambda_l}\int_{Y\times U}\nabla \phi(y)^Tf(y,u)\gamma_l(dy,du). \end{equation} Define $\xi_l\in C(Y\times U)^*$ by the equation $$ \langle \xi_l, q \rangle = \frac{1}{\lambda_l}\int_{Y\times U}q(y,u)\gamma_l(dy,du) \ \ \ \forall q(\cdot, \cdot)\in C(Y\times U). $$ Note that $\ \langle \xi_l, q \rangle\geq 0 $ if $q(\cdot, \cdot)\geq 0 $. Hence, by the Riesz representation theorem (Theorem 4.3.9, p. 181 in \cite{Ash}), there exists a measure $\xi_l\in \M_+(Y\times U) $ (denoted by the same symbol) such that $$ \langle \xi_l, q \rangle = \int_{Y\times U} q(y,u)\xi_l(dy,du)\ \ \ \forall q(\cdot, \cdot)\in C(Y\times U). $$ Thus (\ref{e-main-8}) can be rewritten in the form $$ \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma_l(dy,du)= \int_{Y\times U}\nabla \phi(y)^Tf(y,u)\xi_l(dy,du). $$ Passing to the limit in this expression, one obtains $$ \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma(dy,du)= \lim_{l\rightarrow\infty}\int_{Y\times U}\nabla \phi(y)^Tf(y,u)\xi_l(dy,du). $$ The latter, along with the fact that $\gamma\in W $ (see (\ref{e-main-7})), establishes the validity of the first relationship in (\ref{e-main-2}). $\ \Box$ \bigskip Let $u_{\mathcal{T}}(\cdot)\in \U(y_0)$ be $\mathcal{T}$-periodic (for some $\mathcal{T}>0$). Assume that, corresponding to this periodic control, there exists a $\mathcal{T}$-periodic solution $y_{\mathcal{T}}(t)= y(t, y_0, u_{\mathcal{T}}) $ of system (\ref{e-CSO}).
The pair $(y_{\mathcal{T}}(\cdot), u_{\mathcal{T}}(\cdot)) $ will be referred to as a {\it $y_0 $-admissible $\mathcal{T}$-periodic pair}. Consider the optimal control problem (commonly referred to as the periodic optimization problem) \begin{equation}\label{e-main-8-1} \inf_{\mathcal{T},\left(y_{\mathcal{T}}(\cdot),u_{\mathcal{T}}(\cdot)\right) }\left\{\frac{1}{\mathcal{T}}\int_0^{\mathcal{T}} k(y_{\mathcal{T}}(t),u_{\mathcal{T}}(t))dt \right\} :=v_{per}(y_0), \end{equation} where the infimum is over all $\mathcal{T}>0$ and over all $y_0$-admissible $\mathcal{T}$-periodic pairs $(y_{\mathcal{T}}(\cdot), u_{\mathcal{T}}(\cdot)) $. Similarly to (\ref{e:occup-meas-def-eq-2}), this problem can be reformulated in terms of occupational measures: \begin{equation}\label{e:occup-meas-def-eq-per} \inf_{\gamma\in \Gamma_{per}(y_0) }\int_{Y\times U}k(y,u)\gamma(dy,du)=v_{per}(y_0), \end{equation} where $\Gamma_{per}(y_0)$ is the set of occupational measures generated by all $y_0 $-admissible periodic pairs. Note that \begin{equation}\label{e:occup-meas-def-eq-per-11} \Gamma_{per}(y_0)\subset \limsup_{T\rightarrow\infty}\Gamma_T(y_0) \end{equation} and, therefore, \begin{equation}\label{e:occup-meas-def-eq-per-1} v_{per}(y_0)\geq \liminf_{T\rightarrow\infty}v_T(y_0). \end{equation} \bigskip {\bf Proposition 2.4.} {\it The following relationships are valid:} \begin{equation}\label{e-main-8-2} \Gamma_{per}(y_0)\subset W_1(y_0), \ \ \ \ \ \ v_{per}(y_0)\geq k^*(y_0). \end{equation} \bigskip {\it Proof.} Due to (\ref{limits-non-ergodic-1}) and (\ref{e:occup-meas-def-eq-per}), it is sufficient to prove only the first relationship. Note that from (\ref{e-main-3-1}) and (\ref{e:occup-meas-def-eq-per-11}) it follows that \begin{equation}\label{e-main-8-2-5} \Gamma_{per}(y_0)\subset W. \end{equation} Take now an arbitrary $\gamma\in \Gamma_{per}(y_0)$.
By definition, this means that $\gamma$ is generated by some $y_0 $-admissible $\mathcal{T}$-periodic pair $(y_{\mathcal{T}}(\cdot), u_{\mathcal{T}}(\cdot)) $. That is, for any continuous function $q(y,u) $, $$ \int_{(y,u)\in Y\times U}q(y,u)\gamma(dy,du)= \frac{1}{\mathcal{T}}\int_0^{\mathcal{T}}q(y_{\mathcal{T}}(t), u_{\mathcal{T}}(t))dt. $$ Consequently, for any $\phi\in C^1 $, $$ \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma(dy,du)= \frac{1}{\mathcal{T}}\int_0^{\mathcal{T}}(\phi(y_{\mathcal{T}}(t))-\phi(y_0))dt $$ \vspace{-0.4cm} \begin{equation}\label{e-main-4-per} = \frac{1}{\mathcal{T}}\int_0^{\mathcal{T}}\left(\int_0^t\nabla \phi(y_{\mathcal{T}}(s))^Tf(y_{\mathcal{T}}(s),u_{\mathcal{T}}(s))ds\right)dt. \end{equation} Define $\xi\in C(Y\times U)^*$ by the equation $$ \langle \xi, q \rangle = \frac{1}{\mathcal{T}}\int_0^{\mathcal{T}}\left(\int_0^t q(y_{\mathcal{T}}(s),u_{\mathcal{T}}(s))ds\right)dt \ \ \ \forall q(\cdot, \cdot)\in C(Y\times U). $$ As one can see, $\ \langle \xi, q \rangle\geq 0 $ if $q(\cdot, \cdot)\geq 0 $. Hence, by the Riesz representation theorem (Theorem 4.3.9, p. 181 in \cite{Ash}), there exists a measure $\xi\in \M_+(Y\times U) $ (denoted by the same symbol) such that $$ \langle \xi, q \rangle = \int_{Y\times U} q(y,u)\xi(dy,du)\ \ \ \forall q(\cdot, \cdot)\in C(Y\times U). $$ Taking these relationships into consideration, one can rewrite (\ref{e-main-4-per}) in the form $$ \int_{Y\times U}(\phi(y)-\phi(y_0))\gamma(dy,du) = \int_{Y\times U} \nabla \phi(y)^Tf(y,u)\xi(dy,du). $$ Since $\gamma\in W$ (by (\ref{e-main-8-2-5})), the latter implies that $\gamma\in W_1(y_0) $. Thus the first relationship in (\ref{e-main-8-2}) is established. $\ \Box $ \bigskip {\bf Corollary 2.5.} {\it If \begin{equation}\label{e-main-8-3} v_{per}(y_0) = \liminf_{T\rightarrow\infty}v_T(y_0), \end{equation} then} \begin{equation}\label{e-main-8-4} \liminf_{T\rightarrow\infty}v_T(y_0)\geq k^*(y_0).
\end{equation} \section{Estimate of the limit optimal values from above}\label{Section-Main-2} Let us introduce the following assumptions: \bigskip ASSUMPTION I. The set $Y$ is forward invariant with respect to solutions of system (\ref{e-CSO}). That is, $\U(y_0)= \U\ \forall y_0\in Y$. \medskip ASSUMPTION II. For any $y_0\in \mathcal{N}$, where $\mathcal{N}$ is an open neighbourhood of $Y$, there exists a limit \begin{equation}\label{eq-est-from-above-0} \lim_{T\rightarrow\infty}v_T(y_0):=v^*(y_0), \end{equation} the convergence being uniform with respect to $y_0\in \mathcal{N}$ and the limit function $v^*(\cdot) $ being Lipschitz continuous on $\mathcal{N}$. (Note that from this assumption it follows that $\lim_{\lambda\rightarrow 0}h_{\lambda}(y_0)=v^*(y_0) $; see \cite{OV-2012}.) \medskip {\bf Proposition 3.1.} {\it Let problem (\ref{limits-non-ergodic}) be consistent (that is, $\Omega(y_0)\neq\emptyset$)\footnote{Note that $\Omega(y_0)\neq\emptyset$ if $\Gamma_{per}(y_0)\neq\emptyset$; see Proposition 2.4. } and let Assumptions I and II be valid. Then the limit optimal value $v^*(y_0)$ does not exceed the optimal value of the IDLP problem (\ref{limits-non-ergodic}). That is,} \begin{equation}\label{eq-est-from-above-3} v^*(y_0)\leq k^*(y_0)\ \ \ \forall \ y_0\in Y. \end{equation} \bigskip {\it Proof.} In \cite{BQR-2015}, it was established that, under Assumptions I and II, the limit function $v^*(y_0) $ allows the following representation for $y_0\in Y $: \begin{equation}\label{eq-est-from-above-1} v^*(y_0)=\sup_{w(\cdot)\in \mathcal{H}}w(y_0)\ \ \ \ \ \forall \ y_0\in Y, \end{equation} where $\mathcal{H}$ is the set of functions that are Lipschitz continuous on $\mathcal{N}$\footnote{In \cite{BQR-2015}, the functions in $\mathcal{H}$ were assumed to be just continuous (not Lipschitz continuous).
However, if $v^*(\cdot) $ is Lipschitz continuous, then the representation (\ref{eq-est-from-above-1}) is valid with $\mathcal{H}$ consisting of Lipschitz continuous functions.} such that $w(\cdot)\in \mathcal{H} $ if and only if: $(i)$ For any $y_0\in \mathcal{N} $ and any $u(\cdot)\in \U $, the function $w(y(t,y_0,u))$ is nondecreasing in $t$ on any interval $t\in [0,T)\ (T>0) $ such that $y(t,y_0,u)\in \mathcal{N}\ \forall t\in [0,T]$; \medskip $(ii) $ The following inequality is valid: \begin{equation}\label{eq-est-from-above-2} \int_{Y\times U}w(y)\gamma(dy,du)\leq \int_{Y\times U}k(y,u)\gamma(dy,du)\ \ \ \ \forall \ \gamma\in W. \end{equation} Since any function $w(\cdot)\in \mathcal{H}$ is Lipschitz continuous, it is almost everywhere differentiable on $\mathcal{N}$ (by Rademacher's theorem; see \cite{EG92}). Moreover, due to property $(i)$ of the set $\mathcal{H}$, \begin{equation}\label{eq-est-from-above-3-1} \nabla w(y)^T f(y,u)\geq 0 \ \ \ \forall \ u\in U \end{equation} at any point $y\in \mathcal{N}$ where $\nabla w(y)$ exists. Let the set $D^*_{w}(y) $ be defined as follows: \begin{equation}\label{eq-est-from-above-3-2} D^*_{w}(y):= \{p\in \R^m \ | \ p = \lim_{l\rightarrow\infty}\nabla w(y_l) \ \ {\rm for \ some}\ \ y_l\rightarrow y \ {\rm such \ that}\ \nabla w(y_l)\ {\rm exists} \}. \end{equation} The set $D^*_{w}(y) $ is non-empty and compact for any $y\in \mathcal{N} $, and by (\ref{eq-est-from-above-3-1}), \begin{equation}\label{eq-est-from-above-3-3} p^T f(y,u)\geq 0 \ \ \ \forall \ u\in U , \ \ \forall \ p\in D^*_{w}(y) . \end{equation} According to a well-known result in non-smooth analysis, $$ {\rm co}\, D^*_{w}(y)= \partial w(y), $$ where $\partial w(y) $ is Clarke's generalized gradient (see, e.g., p. 63 in \cite{Bardi}). Therefore, \begin{equation}\label{eq-est-from-above-3-4} p^T f(y,u)\geq 0 \ \ \ \forall \ u\in U , \ \ \forall \ p\in \partial w(y) \end{equation} for any $y\in \mathcal{N} $.
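For completeness, the passage from (\ref{eq-est-from-above-3-3}) to (\ref{eq-est-from-above-3-4}) is nothing more than the linearity of $p\mapsto p^Tf(y,u)$, spelled out as a one-line derivation:

```latex
% Any p \in \partial w(y) = {\rm co}\, D^*_w(y) is a convex combination
% p = \sum_i \lambda_i p_i, with \lambda_i \geq 0, \sum_i \lambda_i = 1
% and p_i \in D^*_w(y); hence, by (eq-est-from-above-3-3),
p^T f(y,u) \;=\; \sum_i \lambda_i \, p_i^T f(y,u) \;\geq\; 0
\qquad \forall\, u \in U .
```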
Let now $\epsilon > 0$ be small enough so that \begin{equation}\label{eq-est-from-above-3-5-1} Y + \epsilon B\subset \mathcal{N}, \end{equation} where $B$ is the open unit ball in $\R^m $. Due to (\ref{eq-est-from-above-1}), there exists $w_{\epsilon}(\cdot)\in \mathcal{H} $ such that \begin{equation}\label{eq-est-from-above-4} w_{\epsilon}(y_0)\geq v^*(y_0) - \epsilon. \end{equation} By Theorem 2.2 in \cite{CZAR}, there exists $\psi_{\epsilon}(\cdot)\in C^1 $ such that \begin{equation}\label{eq-est-from-above-5} \max_{y\in Y}|\psi_{\epsilon}(y) - w_{\epsilon}(y)|\leq \epsilon, \end{equation} and such that \begin{equation}\label{u-2} \displaystyle{ \nabla \psi _\epsilon (y) \in \bigcup _{y' \in y +\epsilon B} \partial w_{\epsilon} (y') + \epsilon B\ \ \ \ \forall \ y\in Y. }\end{equation} Note that from (\ref{eq-est-from-above-4}) and (\ref{eq-est-from-above-5}) it follows that \begin{equation}\label{eq-est-from-above-6} \psi_{\epsilon}(y_0)\geq v^*(y_0) - 2 \epsilon \end{equation} and also (since $w_{\epsilon}(\cdot)\in \mathcal{H} $; see (\ref{eq-est-from-above-2})) that \begin{equation}\label{eq-est-from-above-7} \int_{Y\times U}\psi_{\epsilon}(y)\gamma(dy,du)\leq \int_{Y\times U}(k(y,u)+\epsilon)\gamma(dy,du)\ \ \ \ \forall \ \gamma\in W. \end{equation} From (\ref{u-2}), on the other hand, it follows that, for an arbitrary $y \in Y $, there exist $\ y _{\epsilon} \in y + \epsilon B $, $ \ d_{\epsilon} \in \partial w_{\epsilon} (y _{\epsilon}) $ and $h_{\epsilon} \in \epsilon B$ such that \begin{equation}\label{eq-est-from-above-8-1} \nabla \psi_{\epsilon} (y)= d_{\epsilon}+ h_{\epsilon}, \end{equation} with $\ d_{\epsilon}^Tf(y,u)\geq 0\ \forall u\in U$ (due to (\ref{eq-est-from-above-3-4}) and (\ref{eq-est-from-above-3-5-1})) and with $\ || h_{\epsilon} ||\leq \epsilon $. Therefore, $$ \nabla \psi_{\epsilon}(y)^Tf(y,u)= d_{\epsilon}^T f(y,u) + h_{\epsilon}^T f(y,u)\geq -\epsilon ||f(y,u)||.
$$ Consequently, \begin{equation}\label{eq-est-from-above-9} \nabla \psi_{\epsilon}(y)^Tf(y,u)\geq - \epsilon M \ \ \forall (y,u)\in Y\times U, \ \ \ {\rm where} \ \ M:=\max_{(y,u)\in Y\times U}||f(y,u)||. \end{equation} Let us now rewrite inequality (\ref{eq-est-from-above-7}) in the form $$ \int_{Y\times U}(k(y,u)+\epsilon - \psi_{\epsilon}(y))\gamma(dy,du) \geq 0\ \ \ \ \forall \ \gamma\in W, $$ implying that \begin{equation}\label{eq-est-from-above-10} \min_{\gamma\in W} \int_{Y\times U}(k(y,u)+\epsilon - \psi_{\epsilon}(y))\gamma(dy,du) \geq 0. \end{equation} The problem on the left-hand side of (\ref{eq-est-from-above-10}) is of the IDLP class, the dual of which is \begin{equation}\label{eq-est-from-above-11} \sup_{\eta(\cdot)\in C^1}\min_{(y,u)\in Y\times U}\left\{ k(y,u)+\epsilon - \psi_{\epsilon}(y) + \nabla \eta(y)^T f(y,u)\right\}. \end{equation} The optimal values of the former and the latter are equal (see Theorem 4.1 in \cite{FinGaiLeb} or Theorem 3.1 in \cite{GQ}). Therefore, the inequality (\ref{eq-est-from-above-10}) can be rewritten in the form \begin{equation}\label{eq-est-from-above-12} \sup_{\eta(\cdot)\in C^1}\min_{(y,u)\in Y\times U}\left\{ k(y,u)+\epsilon - \psi_{\epsilon}(y) + \nabla \eta(y)^T f(y,u)\right\} \geq 0. \end{equation} From (\ref{eq-est-from-above-12}), it follows that there exists a function $\eta_{\epsilon}(\cdot)\in C^1 $ such that $$ \min_{(y,u)\in Y\times U}\left\{ k(y,u)+\epsilon - \psi_{\epsilon}(y) + \nabla \eta_{\epsilon}(y)^T f(y,u)\right\} \geq -\epsilon. $$ That is, \begin{equation}\label{eq-est-from-above-13} k(y,u) - \psi_{\epsilon}(y) + \nabla \eta_{\epsilon}(y)^T f(y,u) \geq -2\epsilon\ \ \ \forall (y,u)\in Y\times U .
\end{equation} Consider the following IDLP problem \begin{equation}\label{eq-est-from-above-14} \sup_{(\psi(\cdot),\eta(\cdot))\in \mathcal{Q}(\epsilon)}\psi(y_0):=\bar d^*(y_0,\epsilon) \end{equation} where $\mathcal{Q}(\epsilon)$ is the set of pairs $(\psi(\cdot), \eta(\cdot) )\in C^1\times C^1$ that satisfy the inequalities \begin{equation}\label{limits-non-ergodic-dual-1-eps} k(y,u)- \psi (y) + \nabla \eta (y)^T f(y,u) \geq -2\epsilon \ \ \ \ \ \ \forall\ (y,u)\in Y\times U, \end{equation} \begin{equation}\label{limits-non-ergodic-dual-2-eps} \nabla \psi (y)^T f(y,u)\geq -M\epsilon \ \ \ \ \ \ \forall\ (y,u)\in Y\times U. \end{equation} Note that, by (\ref{eq-est-from-above-9}) and (\ref{eq-est-from-above-13}), $(\psi_{\epsilon}(\cdot), \eta_{\epsilon}(\cdot))\in \mathcal{Q}(\epsilon)$. Consequently (and also due to (\ref{eq-est-from-above-6})), \begin{equation}\label{eq-est-from-above-15} \bar d^*(y_0,\epsilon)\geq \psi_{\epsilon}(y_0)\geq v^*(y_0) - 2 \epsilon . \end{equation} Consider also the problem \begin{equation}\label{limits-non-ergodic-dual-eps-d} \sup_{(\mu , \psi(\cdot), \eta(\cdot) )\in \mathcal{D}(\epsilon)}\mu :=d^*(y_0,\epsilon)\ \ \ \ \ \ \ \ \ \ \ \end{equation} where $\mathcal{D}(\epsilon)$ is the set of triplets $(\mu , \psi(\cdot), \eta(\cdot) )\in \R^1\times C^1\times C^1$ that satisfy the inequalities \begin{equation}\label{limits-non-ergodic-dual-eps-1-d} k(y,u)+ (\psi (y_0)- \psi (y)) + \nabla \eta (y)^T f(y,u)-\mu \geq - 2 \epsilon \ \ \ \ \ \ \forall\ (y,u)\in Y\times U, \end{equation} \begin{equation}\label{limits-non-ergodic-dual-2-eps-d} \nabla \psi (y)^T f(y,u)\geq - M \epsilon \ \ \ \ \ \ \forall\ (y,u)\in Y\times U. \end{equation} Let us show that the optimal values of (\ref{eq-est-from-above-14}) and (\ref{limits-non-ergodic-dual-eps-d}) are equal. That is, \begin{equation}\label{eq-est-from-above-165} \bar d^*(y_0,\epsilon)= d^*(y_0,\epsilon) . 
\end{equation} First, note that $\bar d^*(y_0,\epsilon)\leq d^*(y_0,\epsilon) $ (since, for any pair $(\psi(\cdot), \eta(\cdot))\in \mathcal{Q}(\epsilon) $, the triplet $(\mu, \psi(\cdot), \eta(\cdot))\in \mathcal{D}(\epsilon) $, where $\mu=\psi(y_0)$; see (\ref{limits-non-ergodic-dual-1-eps})-(\ref{limits-non-ergodic-dual-2-eps}) and (\ref{limits-non-ergodic-dual-eps-1-d})-(\ref{limits-non-ergodic-dual-2-eps-d})). Let us prove the converse inequality. Let a triplet $(\mu', \psi'(\cdot), \eta'(\cdot))\in \mathcal{D}(\epsilon) $ be such that $\mu'\geq d^*(y_0,\epsilon)- \delta $, with $\delta > 0 $ being arbitrarily small. Then the pair $(\tilde \psi'(\cdot), \eta'(\cdot))\in \mathcal{Q}(\epsilon) $, where $\tilde \psi'(y):= \psi'(y)- \psi'(y_0)+\mu' $. Since $\tilde \psi'(y_0) = \mu' $, this leads to the inequality $\bar d^*(y_0,\epsilon)\geq d^*(y_0,\epsilon)- \delta$ and, consequently, to the inequality $\bar d^*(y_0,\epsilon)\geq d^*(y_0,\epsilon) $ (since $\delta > 0$ is arbitrarily small). Thus (\ref{eq-est-from-above-165}) is proved. Problem (\ref{limits-non-ergodic-dual-eps-d}) is dual to the IDLP problem \begin{equation}\label{limits-non-ergodic-pert-eps} \inf_{(\gamma, \xi)\in \Omega(y_0)}\left\{\int_{Y\times U}(k(y,u)+ 2\epsilon)\gamma(dy,du)+ M\epsilon \int_{Y\times U} \xi(dy,du) \right\}:= k^*(y_0, \epsilon). \end{equation} As established in Lemma 5.1 (see Section \ref{Section-Duality-proofs}), the optimal values of (\ref{limits-non-ergodic-dual-eps-d}) and (\ref{limits-non-ergodic-pert-eps}) are equal for any $\epsilon > 0$. That is, \begin{equation}\label{eq-est-from-above-16} d^*(y_0, \epsilon) = k^*(y_0, \epsilon)\ \ \ \forall \ \epsilon > 0. \end{equation} Therefore, by (\ref{eq-est-from-above-15}), (\ref{eq-est-from-above-165}) and (\ref{eq-est-from-above-16}), \begin{equation}\label{eq-est-from-above-17} k^*(y_0, \epsilon)\geq v^*(y_0) - 2\epsilon.
\end{equation} Note that problem (\ref{limits-non-ergodic-pert-eps}) is a perturbed version of problem (\ref{limits-non-ergodic}) and that (due to (\ref{eq-est-from-above-17})), to prove (\ref{eq-est-from-above-3}), it is sufficient to establish that \begin{equation}\label{eq-est-from-above-18} \lim_{\epsilon\rightarrow 0}k^*(y_0, \epsilon)= k^*(y_0). \end{equation} As can be easily seen, $\ k^*(y_0, \epsilon)$ is monotonically non-decreasing in $\epsilon $, with $\ k^*(y_0, \epsilon)\geq k^*(y_0) \ \forall \ \epsilon > 0$. Hence, the limit $\lim_{\epsilon\rightarrow 0}k^*(y_0, \epsilon)$ exists and $$ \lim_{\epsilon\rightarrow 0}k^*(y_0, \epsilon)\geq k^*(y_0). $$ Let $\delta > 0 $ be arbitrarily small and let $(\gamma', \xi')\in \Omega(y_0) $ be $\delta$-near-optimal for (\ref{limits-non-ergodic}). That is, $$ \int_{Y\times U}k(y,u)\gamma'(dy,du)\leq k^*(y_0) + \delta. $$ Then $$ k^*(y_0, \epsilon)\leq \int_{Y\times U}(k(y,u)+ 2\epsilon)\gamma'(dy,du)+ M\epsilon \int_{Y\times U} \xi'(dy,du) $$ $$ \leq k^*(y_0) + \delta +2\epsilon \int_{Y\times U}\gamma'(dy,du) + M\epsilon \int_{Y\times U}\xi'(dy,du), $$ $$ \Rightarrow\ \ \ \ \lim_{\epsilon\rightarrow 0}k^*(y_0, \epsilon)\leq k^*(y_0) + \delta \ \ \ \Rightarrow \ \ \ \lim_{\epsilon\rightarrow 0}k^*(y_0, \epsilon)\leq k^*(y_0). $$ (The latter inequality is valid due to the fact that $\delta > 0 $ can be arbitrarily small.) Thus (\ref{eq-est-from-above-18}) is established and the proof of the proposition is complete. $\ \Box$ \medskip \medskip {\bf Corollary 3.2.} {\it Let the assumptions of Proposition 3.1 be satisfied. Then the equality \begin{equation}\label{e-SC-1} v^*(y_0)=k^*(y_0) \end{equation} is valid if \begin{equation}\label{e-SC-2} d^*(y_0)=k^*(y_0) \end{equation} or if the equality (\ref{e-main-8-3}) is true.} {\it Proof. } The proof follows from Proposition 2.3 and Proposition 3.1, or from Corollary 2.5 and Proposition 3.1. $\ \Box $ \medskip In the next section, we consider a class of systems for which the equalities (\ref{e-SC-1}) and (\ref{e-SC-2}) are shown to be valid.
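The equality of optimal values invoked between (\ref{eq-est-from-above-10}) and (\ref{eq-est-from-above-11}), as well as the weak-duality inequality of Lemma 2.1, are infinite-dimensional counterparts of ordinary linear programming duality. As a purely illustrative finite-dimensional sketch (randomly generated data; the availability of `numpy` and `scipy` is assumed), one can check numerically that the optimal values of a primal-dual pair of finite linear programs coincide:

```python
import numpy as np
from scipy.optimize import linprog

# Finite-dimensional analogue of the duality used above:
#   primal  min c^T x  s.t.  A x = b,  x >= 0,
#   dual    max b^T y  s.t.  A^T y <= c.
# Weak duality (b^T y <= c^T x for every feasible pair) mirrors Lemma 2.1;
# equality of the two optimal values mirrors Lemma 2.2 and Lemma 5.1.

rng = np.random.default_rng(0)
m, n = 3, 6
A = rng.standard_normal((m, n))
x_feas = rng.random(n)            # build b so that the primal is feasible
b = A @ x_feas
c = rng.random(n) + 1.0           # c > 0, so the primal is bounded below

primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * n)
# the dual in linprog form:  max b^T y  <=>  min (-b)^T y,  A^T y <= c, y free
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * m)

print(primal.fun)    # primal optimal value
print(-dual.fun)     # dual optimal value (equal, up to solver tolerance)
```

The same phenomenon, with the equality constraint $Ax=b$ replaced by the moment-type constraints defining $\Omega(y_0)$, underlies the proofs given in Section \ref{Section-Duality-proofs}.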
\section{One special class of \lq\lq non-ergodic" systems}\label{Section-examples} Let us introduce the following assumptions. \medskip ASSUMPTION III. There exists a continuously differentiable vector function $F(y)=(F_i(y)), \ i=1,...,k,$ such that \begin{equation}\label{e-CSO-3} \nabla F_i(y)^Tf(y,u)= 0 ,\ i=1,...,k, \ \ \ \ \forall \ (y,u)\in \hat Y\times U, \end{equation} where $\hat Y$ is a sufficiently large compact set. \medskip Define the set $Y_z$ by the equation \begin{equation}\label{e-CSO-4} Y_z:=\{y\in \R^m \ : \ F(y)=z\} \end{equation} and define the set $Y$ as the union \begin{equation}\label{e-SC-3} Y=\bigcup_{z\in Z}Y_z, \end{equation} where $Z $ is some compact subset of $\R^k $. We assume that $Y $ is contained in $\hat Y $; that is, (\ref{e-CSO-3}) is satisfied for all $(y,u)\in Y\times U $. Therefore, each of the sets $Y_z$, $z\in Z$, as well as the set $Y$ itself, is forward invariant with respect to system (\ref{e-CSO}). In addition to Assumption III, let us also introduce the following assumption. \medskip ASSUMPTION IV. For any $z\in Z$ and for any $y^1,y^2\in Y_z $, there exists a control $u(\cdot)$ ($u(t)\in U$) that steers system (\ref{e-CSO}) from $y^1$ to $y^2$ in finite time $ \mathcal{T}(y^1, y^2,z)\leq \mathcal{T}_0 $ ($\mathcal{T}_0$ being some positive constant). \medskip Due to Assumption III, system (\ref{e-CSO}) is not ergodic on $Y$. However, Assumption IV makes it ergodic on each of the sets $Y_z$, $z\in Z$. To illustrate these assumptions, let us consider the following elementary example, in which they are readily verifiable. \medskip {\it Example.} Let $y(\tau)= (y_1(\tau), y_2(\tau))\in \R^2 $, \ $u(\tau)\in [-1,1]\subset \R^1$, $\ f(y,u) = (f_1(y,u), f_2(y,u)) $ with $\ f_1(y,u) = uy_2 $ and $\ f_2(y,u) = -uy_1 $. That is, system (\ref{e-CSO}) is of the form $$ y_1'(t)= u(t)y_2(t), \ \ \ \ \ \ \ \ \ \ \ \ y_2'(t)= -u(t)y_1(t) .
$$ It can be seen that Assumption III is satisfied in this case with $F(y)=y_1^2 + y_2^2$ \ ($k=1$) and $$ Y_z = \{(y_1,y_2) \ | \ y_1^2 + y_2^2 = z\} \ \ \forall \ z\geq 0. $$ Assuming that $Z= [a,b] $, where $0<a<b$ are some constants, one can also see that, with the use of the control $u(t)=1 $, any point in the set $Y_z$ ($z\in Z $) can be reached from any other point of this set within a time interval that is less than or equal to $2\pi $. Thus, Assumption IV is satisfied as well in this case. \medskip The following proposition establishes that the equalities (\ref{e-SC-1}) and (\ref{e-SC-2}) are valid for the class of systems satisfying Assumptions III and IV.\\ {\bf Proposition 4.1.} {\it Let Assumptions III and IV be satisfied. Then \begin{equation}\label{e-CSO-further-6-0} \lim_{T\rightarrow\infty}v_T(y_0)=k^*(y_0)= d^*(y_0)= \tilde{k}^*(z) \ \ \ \ \ \forall y_0\in Y_z, \end{equation} where $\tilde k^*(z)$ is the optimal value of the IDLP problem \begin{equation}\label{e-CSO-further-6-1} \tilde{k}^*(z)= \min_{\gamma\in \mathcal{W}(z)}\int_{Y_z\times U}k(y,u)\gamma(dy,du), \end{equation} in which the minimization is over the set $\mathcal{W}(z)$ defined by the equation \begin{equation}\label{limits-ergodic-W-Y-z} \mathcal{W}(z):= \left\{\gamma\in W \ : \ supp(\gamma)\subset Y_z\times U \right\} \end{equation} ($W$ being defined in (\ref{limits-ergodic-W}) and $ supp(\cdot)$ standing for the support of the corresponding measure). Also, \begin{equation}\label{e-CSO-further-6-2} cl W_1(y_0)= W_2(y_0)= \mathcal{W}(z) \ \ \ \ \ \ \forall y_0\in Y_z, \end{equation} where $W_1(y_0) $ and $W_2(y_0) $ are defined in (\ref{limits-non-ergodic-2}) and (\ref{limits-non-ergodic-4})}.
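Before turning to the proof, note that the claims made in the example above are also easy to confirm numerically. The following is a minimal sketch (a standard fourth-order Runge-Kutta integration with an arbitrarily chosen step size; `numpy` is assumed available) showing that $F(y)=y_1^2+y_2^2$ stays constant along the trajectory generated by $u(t)\equiv 1$ and that this control brings the state back to its initial point after time $2\pi$, in line with Assumptions III and IV:

```python
import numpy as np

# Dynamics of the example: y1' = u*y2, y2' = -u*y1 (a controlled rotation).
def f(y, u):
    return np.array([u * y[1], -u * y[0]])

def rk4_step(y, u, h):
    """One classical Runge-Kutta step for y' = f(y, u) with constant u."""
    k1 = f(y, u)
    k2 = f(y + 0.5 * h * k1, u)
    k3 = f(y + 0.5 * h * k2, u)
    k4 = f(y + h * k3, u)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

y0 = np.array([1.0, 0.0])          # a point of Y_z with z = 1
h = 1e-3                           # step size (chosen arbitrarily)
n = int(round(2 * np.pi / h))      # enough steps to cover one period 2*pi
y, drift = y0.copy(), 0.0
for _ in range(n):
    y = rk4_step(y, 1.0, h)        # u(t) = 1 throughout
    drift = max(drift, abs(y @ y - 1.0))   # deviation of F(y) from F(y0)

print(drift)                       # F is (numerically) conserved: Assumption III
print(np.linalg.norm(y - y0))      # state returns near y0 after time ~2*pi
```

The conserved quantity plays the role of the first integral $F$ of Assumption III, while the return of the trajectory to its starting point within one period illustrates the uniform-time controllability on $Y_z$ required by Assumption IV.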
\medskip {\it Proof.} Note that we do not need to distinguish between the set $ \mathcal{W}(z)$ defined in (\ref{limits-ergodic-W-Y-z}) and the set defined by the equation \begin{equation}\label{limits-ergodic-W-Y-z-1} \left\{\gamma\in \mathcal{P}(Y_z\times U) \ : \ \int_{Y_z\times U}\nabla \phi(y)^Tf(y,u)\gamma(dy,du)=0 \ \ \forall \phi(\cdot)\in C^1 \right\}, \end{equation} where $\ \mathcal{P}(Y_z\times U) $ stands for the space of probability measures defined on the Borel subsets of $Y_z\times U $. This set will also be denoted by $\mathcal{W}(z) $. It can be established that Assumptions III and IV imply the validity of the following statement (see, e.g., Proposition 3.3 in \cite{GR}): {\it For any $z\in Z$, given two arbitrary initial conditions $y_0^1, y_0^2\in Y_z $ and an arbitrary control $u^1(\cdot)\in \U$, there exists a control $u^2(\cdot)\in \U$ such that, for any continuous $q(y,u)$, \begin{equation}\label{e-CSO-further-5} \left|\frac{1}{T}\int_0^T q(y(t, y_0^1, u^1), u^1(t))dt - \frac{1}{T}\int_0^T q(y(t, y_0^2, u^2), u^2(t))dt \right|\leq \beta_q(T) \end{equation} for some $\beta_q(T) $ such that $\ \lim_{T\rightarrow\infty}\beta_q(T) = 0$.} Due to the validity of this statement, from Theorem 2.1(iii) and Proposition 4.1 in \cite{Gai8} it follows that \begin{equation}\label{e-CSO-further-6} \rho_H\left(\Gamma_T(y_0), \mathcal{W}(z)\right)\leq \beta(T)\ \ \ \ \forall y_0\in Y_z \end{equation} for some $\beta(T) $ such that $\ \lim_{T\rightarrow\infty}\beta(T) = 0$. By (\ref{e:occup-meas-def-eq-2}), the latter implies that \begin{equation}\label{e-CSO-further-8} \lim_{T\rightarrow\infty}v_T(y_0)=\tilde{k}^*(z)\ \ \ \ \forall y_0\in Y_z. \end{equation} Let us prove that \begin{equation}\label{e-CSO-extra-2} W_2(y_0)= \mathcal{W}(z) \ \ \ \ \ \ \forall y_0\in Y_z. \end{equation} To this end, let us first show that \begin{equation}\label{e-CSO-extra-1} W_2(y_0)\subset \mathcal{W}(z) \ \ \ \ \ \ \forall y_0\in Y_z.
\end{equation} Define the function \begin{equation}\label{e-CSO-further-10} \Psi_z(y):= \sum_{i=1}^k (F_i(y)-z_i)^2. \end{equation} Note that, according to this definition, \begin{equation}\label{e-SC-5} \Psi_z(y)= 0\ \forall\ y\in Y_z , \ \ \ \ \ \ \Psi_z(y)> 0\ \forall\ y\in Y\setminus Y_z \end{equation} and also \begin{equation}\label{e-SC-4} \nabla \Psi_z(y)^T f(y,u)= 2\sum_{i=1}^k (F_i(y)-z_i)\left(\nabla F_i(y)^T f(y,u)\right)= 0\ \ \ \ \forall \ (y,u)\in Y\times U. \end{equation} Take an arbitrary $\gamma\in W_2(y_0) $. By the definition of $W_2(y_0) $ (see (\ref{limits-non-ergodic-4})), this implies that $\gamma\in W$ and that there exists a sequence $\xi_l\in \mathcal{M}_{+}(Y\times U), \ l=1,2,..., $ such that $$ \int_{Y\times U}(\Psi_z(y)-\Psi_z(y_0))\gamma(dy,du) \ =\ \lim_{l\rightarrow\infty}\int_{Y\times U}\nabla \Psi_z(y)^Tf(y,u)\xi_l(dy,du) \ = 0, $$ where the equality to $0$ follows from (\ref{e-SC-4}). From this equality and from (\ref{e-SC-5}) it also follows that $\ supp(\gamma)\subset Y_z\times U $. Thus $\gamma\in \mathcal{W}(z) $ and the inclusion (\ref{e-CSO-extra-1}) is proved. Take now an arbitrary $\gamma \in \mathcal{W}(z) $. By (\ref{e-CSO-further-6}), there exist $T_l\rightarrow\infty $ and $\gamma_l\in \Gamma_{T_l}(y_0)$ such that $\gamma_l\rightarrow\gamma $ as $l\rightarrow\infty$. The fact that the measure $\gamma_l$ belongs to the set $\Gamma_{T_l}(y_0) $ means that this measure is generated by some control $u_l(\cdot)\in \U_{T_l}(y_0) $ and the corresponding solution $y_l(t)=y(t,y_0,u_l) $ of system (\ref{e-CSO}). Thus, the equality (\ref{e-main-4}) is valid for any $\phi(\cdot)\in C^1 $. Proceeding now in exactly the same way as in the proof of Proposition 2.3, we obtain that $\gamma\in W_2(y_0) $. Consequently, $\ \mathcal{W}(z)\subset W_2(y_0) $ and, by (\ref{e-CSO-extra-1}), the equality (\ref{e-CSO-extra-2}) is valid.
From (\ref{e-CSO-extra-2}) and from (\ref{limits-non-ergodic-3-1}), (\ref{e-CSO-further-6-1}) it follows that \begin{equation}\label{e-CSO-further-7} d^*(y_0) = \tilde{k}^*(z) \ \ \ \ \forall y_0\in Y_z. \end{equation} To finalize the proof of (\ref{e-CSO-further-6-0}), we now only need to show that \begin{equation}\label{e-SC-4-1-5} k^*(y_0) = \tilde{k}^*(z) \ \ \ \ \forall y_0\in Y_z. \end{equation} From (\ref{limits-non-ergodic-dual-4}) and (\ref{e-CSO-further-7}) it follows that $k^*(y_0) \geq\tilde{k}^*(z) $. On the other hand, by Lemma 4.2 (see below), the equality (\ref{e-main-8-3}) is true. Therefore, by Corollary 2.5 (and thanks to (\ref{e-CSO-further-8})), $\tilde{k}^*(z)\geq k^*(y_0) $. Thus, (\ref{e-SC-4-1-5}) is true and (\ref{e-CSO-further-6-0}) is proved. The equality $cl W_1(y_0) = \mathcal{W}(z) \ \forall \ y_0\in Y_z $ follows from the fact that (\ref{e-SC-4-1-5}) remains valid for an arbitrary choice of $k(y,u)$ in (\ref{limits-non-ergodic-1}) and (\ref{e-CSO-further-6-1}), and from the fact that the sets $W_1(y_0) $ and $\mathcal{W}(z) $ are convex. Since (\ref{e-CSO-extra-2}) has already been established, the proof of the proposition is complete. $\ \Box$ {\bf Lemma 4.2.} {\it If Assumptions III and IV are satisfied, then the equality (\ref{e-main-8-3}) is true.} {\it Proof.} By (\ref{e:occup-meas-def-eq-per-1}), (\ref{e-main-8-2}) and (\ref{e-CSO-further-8}), $\ v_{per}(y_0)\geq \tilde{k}^*(z) $. Thus, to prove (\ref{e-main-8-3}), it is sufficient to prove that \begin{equation}\label{e-CSO-further-17} \tilde{k}^*(z)\geq v_{per}(y_0)\ \ \ \forall y_0\in Y_z. \end{equation} Let $y_0\in Y_z$. By (\ref{e-CSO-further-8}), for any sequence $T_l\rightarrow\infty $, $\ l=1,2,...$, there exist $\ u^l(\cdot)\in \U_{T_l}(y_0) $, $\ y^l(t)= y(t,y_0,u^l) $ such that \begin{equation}\label{e-CSO-further-13-1} \lim_{l\rightarrow\infty}\frac{1}{T_l}\int_0^{T_l}k(y^l(t), u^l(t))dt = \tilde{k}^*(z).
\end{equation} By Assumption IV, there exists a control $\ u(t)\in U$ defined on an interval $ \ t\in [T_l, T_l+\Delta_l]$ (with $\ 0\leq \Delta_l\leq \mathcal{T}_0 $) such that, with the use of this control, system (\ref{e-CSO}) is steered from the point $y^l(T_l) $ at $t=T_l$ to the point $y_0$ at $t=T_l+\Delta_l$. Denote by $\tilde{u}^l(\cdot) $ the control that is equal to $u^l(\cdot)$ on the interval $[0, T_l) $ and equal to the \lq\lq steering control" on the interval $[T_l, T_l + \Delta_l] $. Denote also by $\tilde{y}^l(\cdot) $ the corresponding solution of (\ref{e-CSO}). The definition of the pair $(\tilde{y}^l(\cdot),\tilde{u}^l(\cdot)) $ can be extended to the infinite time horizon as a $\mathcal{T}_l$-periodic pair with $\mathcal{T}_l= T_l+\Delta_l$. Therefore, by (\ref{e-main-8-1}), \begin{equation}\label{e-CSO-further-13-2} \frac{1}{\mathcal{T}_l}\int_0^{\mathcal{T}_l}k(\tilde y^l(t), \tilde u^l(t))dt\geq v_{per}(y_0) , \ \ l=1,2,... \end{equation} On the other hand, $$ \left|\frac{1}{T_l}\int_0^{T_l}k(y^l(t), u^l(t))dt - \frac{1}{\mathcal{T}_l}\int_0^{\mathcal{T}_l}k(\tilde y^l(t), \tilde u^l(t))dt\right| $$ $$ \leq \left|\frac{1}{T_l}\int_0^{T_l}k(y^l(t), u^l(t))dt - \frac{1}{\mathcal{T}_l}\int_0^{T_l}k( y^l(t), u^l(t))dt\right| $$ $$ + \frac{1}{\mathcal{T}_l}\int_{T_l}^{T_l+\Delta_l}|k(\tilde y^l(t), \tilde u^l(t))|dt\leq \frac{2M_k\mathcal{T}_0}{\mathcal{T}_l}, \ \ \ {\rm where} \ \ \ M_k:=\max_{(y,u)\in Y\times U}|k(y,u)|. $$ The latter and (\ref{e-CSO-further-13-1}) imply $$ \lim_{l\rightarrow\infty}\frac{1}{\mathcal{T}_l}\int_0^{\mathcal{T}_l}k(\tilde y^l(t), \tilde u^l(t))dt = \tilde{k}^*(z), $$ which, in turn, implies (\ref{e-CSO-further-17}) (due to (\ref{e-CSO-further-13-2})).
\ $\ \Box $ \section{Proofs of some duality results}\label{Section-Duality-proofs} Let $(C^1)^* $ stand for the space of continuous linear functionals on the space of smooth functions $C^1$, the latter being considered as the normed vector space with norm $||\phi(\cdot)||:= \max_{y\in Y}|\phi(y)| + \max_{y\in Y}||\nabla \phi(y)|| $ for any $\phi(\cdot)\in C^1 $. Define a linear operator $\A(\cdot): \M(Y\times U) \times \M(Y\times U)\rightarrow \R^1\times (C^1)^* \times (C^1)^* $ as follows: for any $(\gamma, \xi)\in \M(Y\times U)\times \M(Y\times U) $, \begin{equation}\label{e-Duality-1} \A(\gamma, \xi):= \left(\int_{Y\times U}\gamma(dy,du), \ a_{(\gamma, \xi)}, \ b_{\gamma}\right), \end{equation} where $a_{(\gamma, \xi)}, \ b_{\gamma}\in (C^1)^*$ are defined by the equations: $\ \forall \ \phi(\cdot)\in C^1$, $$ \ a_{(\gamma, \xi)}(\phi) := -\left\{\int_{Y\times U} (\phi(y_0)-\phi(y))\gamma(dy,du) + \int_{Y\times U}\nabla\phi (y)^Tf(y,u)\xi(dy,du)\right\}, $$ \vspace{-0.4cm} \begin{equation}\label{e-Duality-2-1} \ b_{\gamma}(\phi):= - \left\{\int_{Y\times U}\nabla\phi (y)^Tf(y,u)\gamma(dy,du)\right\}. \end{equation} In this notation, the set $\Omega(y_0)$ (defined in (\ref{non-ergodic-Omega})) can be rewritten as follows: \begin{equation}\label{e-Duality-2-0-11} \Omega(y_0)= \{(\gamma , \xi)\in \M_+(Y\times U)\times \M_+(Y\times U)\ : \ \A(\gamma, \xi)= (1, {\bf 0}, {\bf 0})\}, \end{equation} where ${\bf 0} $ stands for the zero element of $(C^1)^*$. Also, problem (\ref{limits-non-ergodic}) takes the form \begin{equation}\label{e-Duality-2-0} \inf_{(\gamma , \xi)\in \Omega(y_0)}\langle k, \gamma \rangle\ = k^*(y_0), \end{equation} where $\langle \cdot, \gamma \rangle $ (and, in the sequel, $\langle \cdot, \xi \rangle $) denotes the integral of the corresponding function over $\gamma$ (respectively, over $\xi$).
Note that for any $(\mu, \psi(\cdot), \eta(\cdot))\in \R^1\times C^1\times C^1 $, $$ \langle \A(\gamma, \xi), (\mu, \psi, \eta)\rangle = \mu \int_{Y\times U}\gamma(dy,du) + a_{(\gamma, \xi)}(\psi) + b_{\gamma}(\eta) $$ $$ = \int_{Y\times U}\left(\mu - (\psi(y_0)-\psi(y)) - \nabla \eta(y)^Tf(y,u)\right)\gamma(dy,du) $$ \begin{equation}\label{e-Duality-2-2-0} - \int_{Y\times U} \nabla \psi(y)^Tf(y,u)\xi(dy,du). \end{equation} Define now the linear operator $\A^*(\cdot): \R^1\times C^1\times C^1\rightarrow C(Y\times U)\times C(Y\times U)\subset \M^*(Y\times U)\times \M^*(Y\times U) $ in such a way that, for any $(\mu, \psi(\cdot),\eta(\cdot))\in \R^1\times C^1\times C^1 $, \begin{equation}\label{e-Duality-3-1} \A^*(\mu, \psi, \eta)(y,u):= \left(\mu - (\psi(y_0)-\psi(y)) - \nabla \eta(y)^Tf(y,u), \ -\nabla \psi(y)^Tf(y,u)\right). \end{equation} Thus $$ \langle (\gamma, \xi), \A^*(\mu, \psi, \eta) \rangle = \int_{Y\times U}\left(\mu - (\psi(y_0)-\psi(y)) - \nabla \eta(y)^Tf(y,u)\right)\gamma(dy,du) $$ $$ - \int_{Y\times U} \nabla \psi(y)^Tf(y,u)\xi(dy,du) = \langle \A(\gamma, \xi), (\mu, \psi, \eta)\rangle . $$ That is, the operator $ \A^*(\cdot) $ is the adjoint of $ \A(\cdot) $. The problem dual to (\ref{e-Duality-2-0}) is of the form (see \cite{And-1} and \cite{And-2}) \begin{equation}\label{e-Duality-4-0} \sup_{(\mu, \psi(\cdot), \eta(\cdot))\in \R^1\times C^1\times C^1} \mu = d^*(y_0) \end{equation} $$ \ s.\ t. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ \begin{equation}\label{e-Duality-4-0-1} - \A^*(\mu, \psi, \eta)(y,u) + (k(y,u), 0)\geq (0,0) \ \ \forall (y,u)\in Y\times U, \end{equation} the latter being equivalent to (\ref{limits-non-ergodic-dual}).
\bigskip {\it Proof of Lemma 2.2.} Let $$ H:= \Big\{\left( \A(\gamma, \xi), \int_{Y\times U}k(y,u)\gamma(dy,du) + r\right)\ : $$ \vspace{-0.6cm} \begin{equation}\label{limits-non-ergodic-pert-dual-4} \ (\gamma, \xi)\in \M_+(Y\times U)\times \M_+(Y\times U), \ r\geq 0\Big\}\subset \R^1\times (C^1)^*\times (C^1)^*\times \R^1 , \end{equation} and let $\bar{H}$ stand for the closure of $H$ in the weak$^*$ topology of $\R^1\times (C^1)^*\times (C^1)^*\times \R^1$. Consider the problem \begin{equation}\label{limits-non-ergodic-pert-dual-4-1} \inf \{\theta \ | \ (1, {\bf 0}, {\bf 0}, \theta)\in \bar{H} \}=: k_{sub}^*(y_0). \end{equation} Its optimal value $k_{sub}^*(y_0)$ is called the subvalue of the IDLP problem (\ref{e-Duality-2-0}). Let us show that the optimal value of (\ref{limits-non-ergodic-3}) is equal to the subvalue. In fact, as can be readily seen, $\left(1, {\bf 0}, {\bf 0}, \int_{Y\times U}k(y,u)\gamma(dy,du)\right)\in \bar{H} $ if $\gamma\in W_2(y_0)$. Consequently, \begin{equation}\label{limits-non-ergodic-pert-dual-5} k_{sub}^*(y_0)\leq \min_{\gamma\in W_2(y_0)}\int_{Y\times U}k(y,u)\gamma(dy,du). \end{equation} Since $k_{sub}^*(y_0)$ is defined as the optimal value in (\ref{limits-non-ergodic-pert-dual-4-1}), there exists a sequence $(\gamma_l,\xi_l)\in \M_+(Y\times U)\times \M_+(Y\times U) $ such that $\A(\gamma_l,\xi_l) $ converges (in the weak$^*$ topology) to $(1, {\bf 0}, {\bf 0})$, with $\int_{Y\times U}k(y,u)\gamma_l(dy,du) $ converging to $k_{sub}^*(y_0)$ as $l$ tends to infinity. That is (see (\ref{e-Duality-1})), $$ \int_{Y\times U}\gamma_l(dy,du)\rightarrow 1, \ \ a_{(\gamma_l, \xi_l)}\rightarrow {\bf 0},\ \ b_{\gamma_l}\rightarrow {\bf 0}, $$ \vspace{-0.4cm} $$ \ \ \int_{Y\times U}k(y,u)\gamma_l(dy,du)\rightarrow k_{sub}^*(y_0).
$$ Without loss of generality, one may assume that $\gamma_l$ converges in the weak$^*$ topology to a measure $\gamma$ that satisfies the relationships $$ \int_{Y\times U}\gamma(dy,du)=1, \ \ b_{\gamma}= {\bf 0}\ \ \ \ \Rightarrow \ \ \ \ \gamma\in W. $$ Also, $ \ a_{(\gamma, \xi_l)}\rightarrow {\bf 0} $ and $\int_{Y\times U}k(y,u)\gamma(dy,du)= k_{sub}^*(y_0) $. That is, $\gamma\in W_2(y_0)$ and therefore, \begin{equation}\label{limits-non-ergodic-pert-dual-5-1} \min_{\gamma\in W_2(y_0)}\int_{Y\times U}k(y,u)\gamma(dy,du)\leq k_{sub}^*(y_0). \end{equation} Thus the optimal value of (\ref{limits-non-ergodic-3}) is equal to the subvalue. To complete the proof, it is sufficient to note that the subvalue of an IDLP problem is equal to the optimal value of its dual provided that the former is bounded (see, e.g., Theorem 3 in \cite{And-1}). That is, $k_{sub}^*(y_0) = d^*(y_0). $ $\ \Box$ \bigskip In the notation of this section, problem (\ref{limits-non-ergodic-pert-eps}) takes the form \begin{equation}\label{e-Duality-3} \inf_{(\gamma,\xi) \in \Omega(y_0)}\langle (k+2\epsilon, M\epsilon ), (\gamma , \xi) \rangle\ = k^*(y_0,\epsilon), \end{equation} while its dual (\ref{limits-non-ergodic-dual-eps-d}) takes the form \begin{equation}\label{e-Duality-4-0-2-0} \sup_{(\mu, \psi(\cdot), \eta(\cdot))\in \R^1\times C^1\times C^1} \mu = d(y_0,\epsilon) \end{equation} $$ \ s.\ t. \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ $$ \begin{equation}\label{e-Duality-4-0-2} - \A^*(\mu, \psi, \eta)(y,u) + (k(y,u)+2\epsilon, M\epsilon)\geq (0,0) \ \ \forall (y,u)\in Y\times U . \end{equation} \bigskip {\bf Lemma 5.1.} {\it Let $\Omega(y_0)\neq\emptyset $. Then the optimal values of (\ref{e-Duality-3}) and (\ref{e-Duality-4-0-2-0})-(\ref{e-Duality-4-0-2}) are equal for any $\epsilon > 0 $. That is, (\ref{eq-est-from-above-16}) is valid.
} \bigskip {\it Proof.} By Theorem 6 in \cite{And-1}, to prove the validity of (\ref{eq-est-from-above-16}), it is sufficient to establish that the set $D$, \begin{equation}\label{Lemma-Anderson-1} D:=\{ \left(\mathcal{A}(\gamma , \xi),\ \langle (k+2\epsilon, M\epsilon ), (\gamma , \xi) \rangle\ \right) \ : \ (\gamma , \xi)\in {\cal M}_+(Y\times U)\times {\cal M}_+(Y\times U)\} \end{equation} is closed in the weak$^*$ topology of $ \R^1\times (C^1)^*\times (C^1)^*\times \R^1$. The proof of this is similar to the proof of Theorem 12 in \cite{And-1}. It is based on the following two properties of the problem. {\it Property 1.} The set ${\cal M}_+(Y\times U)\times {\cal M}_+(Y\times U) $ has a compact base. That is (see \cite{And-1}), \begin{equation}\label{Lemma-Anderson-2} {\cal M}_+(Y\times U)\times {\cal M}_+(Y\times U) = \{\lambda (\gamma , \xi) : \ (\gamma , \xi)\in \mathcal{L}, \ \lambda\geq 0 \}, \end{equation} where \begin{equation}\label{Lemma-Anderson-2-1} \mathcal{L}:= \{(\gamma , \xi)\in {\cal M}_+(Y\times U)\times {\cal M}_+(Y\times U) \ : \end{equation} $$ \int_{Y\times U} \gamma(dy,du) + \int_{Y\times U} \xi(dy,du) = 1 \}, $$ with $\mathcal{L}$ being a weak$^*$ compact subset of ${\cal M}(Y\times U)\times {\cal M}(Y\times U)$. \medskip {\it Property 2.} For $(\gamma, \xi)\in {\cal M}_+(Y\times U)\times {\cal M}_+(Y\times U)$, the equalities \begin{equation}\label{Lemma-Anderson-3} \mathcal{A}(\gamma , \xi) = (0, {\bf 0}, {\bf 0}), \ \ \ \ \langle (k+2\epsilon, M\epsilon ), (\gamma , \xi) \rangle = 0 \end{equation} can be valid only if $\gamma = 0 $ and $\xi = 0$. This is readily verifiable because the relationships (\ref{Lemma-Anderson-3}) include the following equalities: $$ \int_{Y\times U} \gamma(dy,du) = 0, \ \ \ \ \ \ \int_{Y\times U}(k(y,u)+2\epsilon) \gamma(dy,du) + M\epsilon \int_{Y\times U} \xi(dy,du) = 0. $$ Let us now prove that $D$ is closed.
Let $(\gamma_l , \xi_l)\in {\cal M}_+(Y\times U)\times {\cal M}_+(Y\times U) $ be such that \begin{equation}\label{Lemma-Anderson-4} \mathcal{A}(\gamma_l , \xi_l) - {\bf z}\rightarrow (0, {\bf 0}, {\bf 0}), \ \ \ \ \langle (k+2\epsilon, M\epsilon ), (\gamma_l , \xi_l) \rangle\ - \beta\rightarrow 0 \ \ \ {\rm as} \ \ \ l\rightarrow\infty, \end{equation} where ${\bf z}\in \R^1\times (C^1)^*\times (C^1)^* $ and $\beta\in \R^1$. By (\ref{Lemma-Anderson-2}), $(\gamma_l , \xi_l) = \lambda_l (\bar \gamma_l , \bar \xi_l) $, where $\lambda_l\geq 0 $ and $\ (\bar \gamma_l , \bar \xi_l)\in \mathcal{L} $. Due to the compactness of $\mathcal{L} $, one may assume (without loss of generality) that $\ (\bar \gamma_l , \bar \xi_l)\rightarrow (\bar \gamma , \bar \xi) \in \mathcal{L} $. Note that, due to Property 2, the sequence $\lambda_l$ is bounded. Indeed, assuming that this is not the case and there exists a subsequence $\{l'\} $ of $\{l\} $ such that $\lambda_{l'}\rightarrow\infty $ as $l'\rightarrow\infty$, one would obtain (via substitution of $\ \lambda_{l'} (\bar \gamma_{l'} , \bar \xi_{l'}) $ into (\ref{Lemma-Anderson-4}) and passing to the limit with $l'\rightarrow\infty $) that $$ \ \mathcal{A}(\bar \gamma_{l'} ,\bar \xi_{l'})- \frac{1}{\lambda_{l'}}{\bf z}\rightarrow (0, {\bf 0}, {\bf 0}), \ \ \ \langle (k+2\epsilon, M\epsilon ), (\bar \gamma_{l'} , \bar \xi_{l'}) \rangle\ - \frac{1}{\lambda_{l'}}\beta\ \rightarrow 0 \ \ \ {\rm as} \ \ \ l'\rightarrow\infty. $$ $$ \Rightarrow \ \ \ \ \ \mathcal{A}(\bar\gamma , \bar \xi) = (0, {\bf 0}, {\bf 0}), \ \ \ \ \langle (k+2\epsilon, M\epsilon ), (\bar\gamma , \bar \xi) \rangle = 0. $$ According to Property 2, the latter implies that $\bar\gamma = 0 $ and $\bar\xi = 0$. This contradicts the fact that $\ (\bar \gamma , \bar \xi)\in \mathcal{L} $. Thus the sequence $\{\lambda_l\} $ is bounded and therefore one may assume (without loss of generality) that $\lambda_l\rightarrow \lambda $ as $l\rightarrow\infty $.
Consequently, $(\gamma_l, \xi_l)\rightarrow \lambda (\bar\gamma , \bar \xi) $ as $l\rightarrow\infty $. Denoting $(\gamma , \xi):=(\lambda\bar\gamma , \lambda\bar \xi) $, one obtains (by (\ref{Lemma-Anderson-4})) $$ \mathcal{A}(\gamma , \xi) = {\bf z}, \ \ \ \langle (k+2\epsilon, M\epsilon ), (\gamma , \xi) \rangle = \beta \ \ \ \ \Rightarrow \ \ \ \ ({\bf z},\beta)\in D. $$ This proves that $D$ is closed. $\ \Box$
\section{Introduction} \IEEEPARstart{I}{n} the presence of missing data, the representativeness of data samples may be significantly reduced, and inference about the data is therefore seriously distorted. Given this pressing circumstance, it is crucially important to devise computational methods that can restore unseen data from available observations. As data in practice is often organized in matrix form, it is of considerable significance to study the problem of \emph{matrix completion}~\cite{tao:2009:mc,CandesPIEEE,Mohan:2010:isit,rahul:jlmr:2010,akshay:2013:nips,william:2014:nips,raghunandan:jmlr:2010,raghunandan:tit:2010,troy:2013:nips}, which aims to fill in the missing entries of a partially observed matrix. \begin{prob}[Matrix Completion]\label{pb:mc} Denote by $[\cdot]_{ij}$ the $(i,j)$th entry of a matrix. Let $L_0\in\Re^{m\times{}n}$ be an unknown matrix of interest. The rank of $L_0$ is also unknown. Given a sampling of the entries in $L_0$ and a 2D sampling set $\Omega\subseteq{}\{1,\cdots,m\} \times\{1,\cdots,n\}$ consisting of the locations of observed entries, i.e., given \begin{eqnarray*} \Omega\quad\textrm{and}\quad\{[L_0]_{ij} |(i,j)\in\Omega\}, \end{eqnarray*} can we identify the target $L_0$? If so, under which conditions? \end{prob} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{missing.pdf}\vspace{-0.15in} \caption{The unseen future values of time series are essentially a special type of missing data.}\label{fig:miss}\vspace{-0.25in} \end{center} \end{figure} In general, matrix completion is an ill-posed problem, as the missing entries can take arbitrary values. Thus, some assumptions are necessary for studying Problem~\ref{pb:mc}.
Cand{\`e}s and Recht~\cite{Candes:2009:math} proved that the target $L_0$, with high probability, is exactly restored by convex optimization, provided that $L_0$ is \emph{low rank} and \emph{incoherent} and the set $\Omega$ of locations corresponding to the observed entries is sampled \emph{uniformly at random} (i.e., uniform sampling). This pioneering work provides several useful tools for investigating matrix completion and many other related problems. Its assumptions, including low-rankness, incoherence and uniform sampling, are now standard and widely used in the literature, e.g.,~\cite{Candes:2009:JournalACM,xu:2012:tit,sun:2016:tit,tpami_2013_lrr,Jain:2014:nips,liu:tpami:2016,zhao:nips:2015,ge:nips:2016}. However, the assumption of uniform sampling is often invalid in practice: \begin{itemize} \item[$\bullet$] A ubiquitous type of missing data is the unseen future data, e.g., the next few values of a time series as shown in Figure~\ref{fig:miss}. Certainly, the (missing) future data is not randomly selected, let alone sampled uniformly at random. In this case, as will be shown in Section~\ref{sec:exp:rcn}, the theories built upon uniform sampling are no longer applicable. \item[$\bullet$] Even when the underlying regime of the missing data pattern is a probabilistic model, the reasons for different observations being missing could be correlated rather than independent. In fact, most real-world datasets cannot satisfy the uniform sampling assumption, as pointed out by~\cite{ruslan:2010:nips,Meka:2009:MCP}. \end{itemize} Research in the direction of deterministic or nonuniform sampling remains sparse, e.g.,~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,Negahban:2012:JMLR,ruslan:2010:nips,Meka:2009:MCP,JMLR:v16:chen15b,daniel:2016:jstsp}.
For example, Negahban and Wainwright~\cite{Negahban:2012:JMLR} studied the case of weighted entrywise sampling, which is more general than the setup of uniform sampling but still a special form of random sampling. In particular, Kir\'{a}ly et al.~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr} treated matrix completion as an algebraic problem and proposed deterministic conditions to decide whether a particular entry of a \emph{generic} matrix can be restored. Pimentel{-}Alarc{\'{o}}n et al.~\cite{daniel:2016:jstsp} built deterministic sampling conditions for ensuring that, \emph{almost surely}, there are only finitely many matrices that agree with the observed entries. However, strictly speaking, those conditions ensure only the recoverability of a special kind of matrix and cannot guarantee the identifiability of an arbitrary $L_0$. This gap is indeed striking, as the data matrices arising from modern applications often have complicated structures and need not be generic. Moreover, the sampling conditions given in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp} are not very interpretable and thus not easy to use when applied to other related problems such as \emph{matrix recovery} (which is matrix completion with $\Omega$ being unknown)~\cite{Candes:2009:JournalACM}. To break through the limits of random sampling, we propose in this work two deterministic conditions, the \emph{isomeric condition}~\cite{liu:nips:2017} and \emph{relative well-conditionedness}, for guaranteeing that an \emph{arbitrary} matrix is recoverable from a sampling of its entries. The isomeric condition is a mixed concept that combines the rank and coherence of $L_0$ with the locations and amount of the observed entries.
In general, isomerism (noun of isomeric) ensures that the \emph{sampled submatrices} (see Section \ref{sec:notation}) are not \emph{rank deficient}\footnote{In this paper, rank deficiency means that a submatrix does not have the largest possible rank. Specifically, suppose that $M'$ is a submatrix of some matrix $M$; then $M'$ is rank deficient iff (i.e., if and only if) $\rank{M'}<\rank{M}$. Note here that rank deficiency in this sense does not necessarily mean that the submatrix lacks full rank; in fact, a submatrix of full rank could still be rank deficient.}. Remarkably, it is provable that isomerism is \emph{necessary} for the identifiability of $L_0$: Whenever the isomeric condition is violated, there exist infinitely many matrices that fit the observed entries no worse than $L_0$ does. Hence, logically speaking, the conditions given in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp} should suffice to ensure isomerism. While necessary, isomerism unfortunately does not suffice to guarantee the identifiability of $L_0$ in a deterministic fashion. This is because isomerism does not exclude the unidentifiable cases where the sampled submatrices are severely ill-conditioned. To compensate for this weakness, we further propose the so-called \emph{relative well-conditionedness}, which encourages the smallest singular values of the sampled submatrices to be away from 0. Equipped with these new tools, isomerism and relative well-conditionedness, we prove a set of theorems pertaining to \emph{missing data recovery}~\cite{Zhang06} and matrix completion. In particular, we prove that the exact solutions that identify the target matrix $L_0$ are strict local minima of the commonly used bilinear programs. Although theoretically sound, the classic bilinear programs suffer from the weakness that the rank of $L_0$ has to be known.
To fix this flaw, we further consider a method termed \emph{isomeric dictionary pursuit} (IsoDP), the formula of which can be derived from Schatten quasi-norm minimization~\cite{rahul:jlmr:2010}, and we show that IsoDP is superior to the traditional bilinear programs. In summary, the main contribution of this work is to establish deterministic sampling conditions for ensuring the success in completing arbitrary matrices from a subset of the matrix entries, producing some theoretical results useful for understanding the completion regimes of arbitrary missing data patterns. \section{Summary of Main Notations}\label{sec:notation} Capital and lowercase letters are used to represent (real-valued) matrices and vectors, respectively, except that some lowercase letters, such as $i,j,k,m,n,l,p,q,r,s$ and $t$, are used to denote integers. For a matrix $M$, $[M]_{ij}$ is the $(i,j)$th entry of $M$, $[M]_{i,:}$ is its $i$th row, and $[M]_{:,j}$ is its $j$th column. Let $\omega_1=\{i_1,i_2,\cdots,i_k\}$ and $\omega_2=\{j_1,j_2,\cdots,j_s\}$ be two 1D sampling sets. Then $[M]_{\omega_1,:}$ denotes the submatrix of $M$ obtained by selecting the rows with indices $i_1,i_2,\cdots,i_k$, $[M]_{:,\omega_2}$ is the submatrix constructed by choosing the columns at $j_1,j_2,\cdots,j_s$, and similarly for $[M]_{\omega_1,\omega_2}$. For a 2D sampling set $\Omega\subseteq{}\{1,\cdots,m\} \times\{1,\cdots,n\}$, we imagine it as a sparse matrix and define its ``rows'', ``columns'' and ``transpose'' as follows: the $i$th row $\Omega_i = \{j_1 | (i_1,j_1)\in\Omega, i_1 = i\}$, the $j$th column $\Omega^j = \{i_1 | (i_1,j_1)\in\Omega, j_1 = j\}$, and the transpose $\Omega^T = \{(j_1,i_1) | (i_1,j_1)\in\Omega\}$. These notations are important for understanding the proposed conditions. For the ease of presentation, we shall call $[M]_{\omega,:}$ a \emph{sampled submatrix} of $M$ (see Figure~\ref{fig:sub}), where $\omega$ is a 1D sampling set.
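The row, column and transpose conventions for a 2D sampling set can be made concrete with a short sketch (illustrative code of our own, with hypothetical helper names; not part of the paper):

```python
# Sketch of the notation above: a 2D sampling set Omega is a set of index
# pairs, and its "rows", "columns" and "transpose" are derived as follows.

def omega_row(Omega, i):
    """The i-th row: Omega_i = {j | (i, j) in Omega}."""
    return {j1 for (i1, j1) in Omega if i1 == i}

def omega_col(Omega, j):
    """The j-th column: Omega^j = {i | (i, j) in Omega}."""
    return {i1 for (i1, j1) in Omega if j1 == j}

def omega_transpose(Omega):
    """Omega^T = {(j, i) | (i, j) in Omega}."""
    return {(j1, i1) for (i1, j1) in Omega}

Omega = {(1, 1), (1, 2), (1, 3), (2, 1), (3, 1)}
assert omega_row(Omega, 1) == {1, 2, 3}
assert omega_col(Omega, 1) == {1, 2, 3}
assert omega_transpose(Omega) == Omega  # this particular set is symmetric
```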
\begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{submatrix.pdf}\vspace{-0.15in} \caption{Illustrations of the sampled submatrices.}\label{fig:sub}\vspace{-0.25in} \end{center} \end{figure} Three types of matrix norms are used in this paper: 1) the operator norm or 2-norm denoted by $\|M\|$, 2) the Frobenius norm denoted by $\|M\|_F$ and 3) the nuclear norm denoted by $\|M\|_*$. The only used vector norm is the $\ell_2$ norm, which is denoted by $\|\cdot\|_2$. Particularly, the symbol $|\cdot|$ is reserved for the cardinality of a set. The special symbol $(\cdot)^+$ is reserved to denote the Moore-Penrose pseudo-inverse of a matrix. More precisely, for a matrix $M$ with SVD\footnote{In this paper, SVD always refers to skinny SVD. For a rank-$r$ matrix $M\in\mathbb{R}^{m\times{}n}$, its SVD is of the form $U_M\Sigma_MV_M^T$, where $U_M\in\Re^{m\times{}r},\Sigma_M\in\Re^{r\times{}r}$ and $V_M\in\Re^{n\times{}r}$.} $M=U_M\Sigma_MV_M^T$, its pseudo-inverse is given by $M^+=V_M\Sigma_M^{-1}U_M^T$. For convenience, we adopt the conventions of using $\mathrm{span}\{M\}$ to denote the linear space spanned by the columns of a matrix $M$, using $y\in\mathrm{span}\{M\}$ to denote that a vector $y$ belongs to the space $\mathrm{span}\{M\}$, and using $Y\in\mathrm{span}\{M\}$ to denote that all the column vectors of a matrix $Y$ belong to $\mathrm{span}\{M\}$. \section{Identifiability Conditions}\label{sec:setting} In this section, we introduce the so-called \emph{isomeric condition}~\cite{liu:nips:2017} and \emph{relative well-conditionedness}. \subsection{Isomeric Condition}\label{sec:setting:iso} For the ease of understanding, we shall begin with a concept called \emph{$k$-isomerism} (or \emph{$k$-isomeric} in adjective form), which can be regarded as an extension of low-rankness. \begin{defn}[$k$-isomeric]\label{def:iso:k} A matrix $M\in\Re^{m\times{}l}$ is called $k$-isomeric iff any $k$ rows of $M$ can linearly represent all rows in $M$. 
That is, \begin{align*} &\rank{[M]_{\omega,:}} = \rank{M}, \forall{}\omega\subseteq\{1,\cdots,m\}, |\omega| = k, \end{align*} where $|\cdot|$ is the cardinality of a sampling set and $[M]_{\omega,:}\in\mathbb{R}^{|\omega|\times{}l}$ is called a ``sampled submatrix'' of $M$. \end{defn} In short, a matrix $M$ being $k$-isomeric means that the sampled submatrix $[M]_{\omega,:}$ (with $|\omega|=k$) is not rank deficient\footnote{Here, the largest possible rank is $\rank{M}$. So $\rank{[M]_{\omega,:}} = \rank{M}$ gives that the submatrix $[M]_{\omega,:}$ is not rank deficient.}. According to the above definition, $k$-isomerism has a nice monotonicity property: if $M$ is $k_1$-isomeric, then $M$ is also $k_2$-isomeric for any $k_2\geq{}k_1$. So, to verify whether a matrix $M$ is $k$-isomeric for an unknown $k$, one just needs to find the smallest $\bar{k}$ such that $M$ is $\bar{k}$-isomeric. Generally, $k$-isomerism is somewhat similar to \emph{Spark}~\cite{Donoho:spark:2003}, which defines the smallest linearly dependent subset of the rows of a matrix. For a matrix $M$ to be $k$-isomeric, it is necessary, but not sufficient, that $\rank{M}\leq{}k$. In fact, $k$-isomerism is also related to the concept of \emph{coherence}~\cite{Candes:2009:math,liu:tsp:2016}. For a rank-$r$ matrix $M\in\mathbb{R}^{m\times{}n}$ with SVD $U_M\Sigma_MV_M^T$, its coherence is denoted as $\mu(M)$ and given by \begin{align*} \mu(M)= \max(\max_{1\leq{}i\leq{}m}\frac{m}{r}\|[U_M]_{i,:}\|_F^2, \max_{1\leq{}j\leq{}n}\frac{n}{r}\|[V_M]_{j,:}\|_F^2). \end{align*} When the coherence of a matrix $M\in\Re^{m\times{}l}$ is not too high, $M$ could be $k$-isomeric with a small $k$, e.g., $k=\rank{M}$. Whenever the coherence of $M$ is very high, one may need a large $k$ to satisfy the $k$-isomeric property. For example, consider an extreme case where $M$ is a rank-1 matrix with one row being 1 and everywhere else being 0. In this case, we need $k=m$ to ensure that $M$ is $k$-isomeric.
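Definition~\ref{def:iso:k} and the coherence formula can be checked by brute force on the extreme example just mentioned. The following sketch (our own illustration with hypothetical helper names; not part of the paper) does exactly that:

```python
import numpy as np
from itertools import combinations

# Sketch (ours, not from the paper): brute-force check of k-isomerism,
# plus the coherence mu(M) computed from the skinny SVD.

def is_k_isomeric(M, k):
    """True iff every k-row submatrix of M preserves rank(M)."""
    r = np.linalg.matrix_rank(M)
    return all(np.linalg.matrix_rank(M[list(w), :]) == r
               for w in combinations(range(M.shape[0]), k))

def coherence(M):
    """mu(M) from the skinny SVD M = U S V^T of a rank-r matrix."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    r = int(np.sum(s > 1e-10))
    U, V = U[:, :r], Vt[:r, :].T
    m, n = M.shape
    return max((m / r) * max(np.sum(U[i] ** 2) for i in range(m)),
               (n / r) * max(np.sum(V[j] ** 2) for j in range(n)))

# Extreme case from the text: rank-1, nonzero in a single row (m = 4).
M = np.zeros((4, 3))
M[0, :] = 1.0
assert not is_k_isomeric(M, 3)           # 3 rows may miss the nonzero row
assert is_k_isomeric(M, 4)               # k = m is required here
assert abs(coherence(M) - 4.0) < 1e-8    # mu(M) = m / r, i.e., maximal
```

The brute-force check is exponential in $m$ and is meant only to illustrate the definition on tiny examples.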
However, the connection between isomerism and coherence is not absolute. A counterexample is the Hadamard matrix with $2^m$ rows and 2 columns. In this case, the matrix has an optimal coherence of 1, but the matrix is not $k$-isomeric for any $k\leq{}2^{m-1}$. While Definition~\ref{def:iso:k} involves all 1D sampling sets of cardinality $k$, we often need the isomeric property to be associated with a certain 2D sampling set $\Omega$. To this end, we define below a concept called \emph{$\Omega$-isomerism} (or \emph{$\Omega$-isomeric}). \begin{defn}[$\Omega$-isomeric]\label{def:iso:omg} Let $M\in\Re^{m\times{}l}$ and $\Omega\subseteq\{1,\cdots,$ $m\}\times\{1,\cdots,n\}$. Suppose that $\Omega^j\neq\emptyset$ (the empty set), $\forall{}1\leq{}j\leq{}n$. Then the matrix $M$ is called $\Omega$-isomeric iff \begin{align*} &\rank{[M]_{\Omega^j,:}} = \rank{M}, \forall{}j = 1,\cdots,n. \end{align*} Note here that $\Omega^j$ (i.e., $j$th column of $\Omega$) is a 1D sampling set and $l\neq{}n$ is allowed. \end{defn} Similar to $k$-isomerism, $\Omega$-isomerism also assumes that the sampled submatrices, $\{[M]_{\Omega^j,:}\}_{j=1}^n$, are not rank deficient. The main difference is that $\Omega$-isomerism requires the rank of $M$ to be preserved by the submatrices sampled according to a \emph{specific} sampling set $\Omega$, whereas $k$-isomerism assumes that \emph{every} submatrix consisting of $k$ rows of $M$ has the same rank as $M$. Hence, $\Omega$-isomerism is less strict than $k$-isomerism. More precisely, provided that $|\Omega^j|\geq{}k,\forall{}1\leq{}j\leq{}n$, the $k$-isomerism of a matrix $M$ ensures that $M$ is $\Omega$-isomeric as well, but not vice versa. In the extreme case where $M$ is nonzero at only one row, interestingly, $M$ can be $\Omega$-isomeric as long as the locations of the nonzero entries are included in $\Omega$.
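Per Definition~\ref{def:iso:omg}, $\Omega$-isomerism only inspects the submatrices sampled by the columns of the given $\Omega$. A small sketch of our own (0-based indices, hypothetical helper names; not part of the paper) makes the contrast with $k$-isomerism concrete:

```python
import numpy as np

# Sketch (ours, not from the paper): Omega-isomerism per the definition
# above -- each column Omega^j of the 2D sampling set must sample a
# submatrix whose rank equals rank(M). Indices are 0-based.

def is_omega_isomeric(M, Omega, n):
    r = np.linalg.matrix_rank(M)
    for j in range(n):
        rows = sorted({i1 for (i1, j1) in Omega if j1 == j})
        if not rows or np.linalg.matrix_rank(M[rows, :]) != r:
            return False
    return True

# Rank-1 matrix, nonzero only in row 0 (so it is not 1-isomeric):
M = np.array([[1.0, 1.0],
              [0.0, 0.0],
              [0.0, 0.0]])
Omega = {(0, 0), (0, 1), (0, 2)}       # every column samples row 0
assert is_omega_isomeric(M, Omega, 3)
Omega_bad = {(0, 0), (1, 1), (0, 2)}   # column 1 samples only a zero row
assert not is_omega_isomeric(M, Omega_bad, 3)
```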
For example, the following rank-1 matrix $M$ is not 1-isomeric but still $\Omega$-isomeric for some $\Omega$ with $|\Omega^j|=1,\forall{}1\leq{}j\leq{}n$: \begin{align*} \Omega = \{(1,1),(1,2), (1,3)\} \textrm{ and } M =\left[\begin{array}{cc} 1 &1\\ 0&0\\ 0&0 \end{array}\right], \end{align*} where it is configured that $m=n=3$ and $l=2$. With the notation of $\Omega^T = \{(j_1,i_1) | (i_1,j_1)\in\Omega\}$, the isomeric property can also be defined on the column vectors of a matrix, as shown in the following definition. \begin{defn}[$\Omega/\Omega^T$-isomeric]\label{def:iso:omgt} Let $M\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Suppose $\Omega_i\neq\emptyset$ and $\Omega^j\neq\emptyset$, $\forall{}i,j$. Then the matrix $M$ is called $\Omega/\Omega^T$-isomeric iff $M$ is $\Omega$-isomeric and $M^T$ is $\Omega^T$-isomeric as well. \end{defn} To solve Problem~\ref{pb:mc} without the assumption that entries are missing at random, as will be shown later, it is necessary to assume that $L_0$ is $\Omega/\Omega^T$-isomeric. This condition excludes the unidentifiable cases where some rows or columns of $L_0$ are wholly missing. Moreover, $\Omega/\Omega^T$-isomerism partially accounts for the cases where $L_0$ is of high coherence: For the extreme case where $L_0$ is 1 at only one entry and 0 everywhere else, $L_0$ cannot be $\Omega/\Omega^T$-isomeric unless the index of the nonzero element is included in $\Omega$. In general, there are numerous reasons for the target matrix $L_0$ to be isomeric. For example, the standard assumptions of low-rankness, incoherence and uniform sampling are indeed sufficient, but not necessary, to ensure isomerism. \begin{theo}\label{thm:iso} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote $n_1 = \max(m,n)$, $n_2=\min(m,n)$, $\mu_0=\mu(L_0)$ and $r_0=\rank{L_0}$.
Suppose that $\Omega$ is a set sampled uniformly at random, namely $\mathrm{Pr}((i,j)\in\Omega)=\rho_0$ and $\mathrm{Pr}((i,j)\notin\Omega)=1-\rho_0$. If $\rho_0>c\mu_0r_0(\log{n_1})/n_2$ for some numerical constant $c$ then, with probability at least $1-n_1^{-10}$, $L_0$ is $\Omega/\Omega^T$-isomeric. \end{theo} Notice that the isomeric condition can also be established without the uniform sampling assumption, using only the concept of coherence (see Theorem~\ref{thm:iso:rcn}). Furthermore, the isomeric condition can even hold in the case of high coherence. For example, \begin{align}\label{eq:example:1} \hspace{-0.03in}\Omega \hspace{-0.03in}= \hspace{-0.03in}\{(1,1),\hspace{-0.03in} (1,2), \hspace{-0.03in}(1,3),\hspace{-0.03in} (2,1), \hspace{-0.03in}(3, 1)\} \textrm{ and } L_0 \hspace{-0.03in}=\hspace{-0.03in}\setlength\arraycolsep{0.1cm}\left[\hspace{-0.03in}\begin{array}{ccc} 1 &0&0\\ 0&0&0\\ 0&0&0 \end{array}\hspace{-0.03in}\right]\hspace{-0.03in}, \end{align} where $L_0$ is not incoherent and the sampling is not uniform either, but it can be verified that $L_0$ is $\Omega/\Omega^T$-isomeric. In fact, the isomeric condition is \emph{necessary} for the identifiability of $L_0$, as shown in the following theorem. \begin{theo}\label{thm:iso:necessary} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. If either $L_0$ is not $\Omega$-isomeric or $L_0^T$ is not $\Omega^T$-isomeric then there exist infinitely many matrices (denoted as $L\in\Re^{m\times{}n}$) that fit the observed entries no worse than $L_0$ does: \begin{align*} L\neq{}L_0,\textrm{ } \rank{L}\leq\rank{L_0},\textrm{ }[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega. \end{align*} \end{theo} In other words, for any partial matrix $M'$ with sampling set $\Omega$, if there exists a completion $M$ that is not $\Omega/\Omega^T$-isomeric, then there are infinitely many completions that are different from $M$ and have a rank not greater than that of $M$.
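The claim that the high-coherence example in (\ref{eq:example:1}) is $\Omega/\Omega^T$-isomeric can be verified numerically. The sketch below (our own code with a hypothetical helper name, indices shifted to 0-based; not part of the paper) checks both $L_0$ against $\Omega$ and $L_0^T$ against $\Omega^T$:

```python
import numpy as np

# Sketch (ours, not from the paper): confirming that the example with
# Omega = {(1,1),(1,2),(1,3),(2,1),(3,1)} and L0 nonzero at a single
# entry is Omega/Omega^T-isomeric. Indices are 0-based here.

def columnwise_isomeric(M, Omega, n):
    """Omega-isomerism: every column of Omega preserves rank(M)."""
    r = np.linalg.matrix_rank(M)
    return all(
        np.linalg.matrix_rank(
            M[sorted({i1 for (i1, j1) in Omega if j1 == j}), :]) == r
        for j in range(n))

L0 = np.zeros((3, 3))
L0[0, 0] = 1.0
Omega = {(0, 0), (0, 1), (0, 2), (1, 0), (2, 0)}
Omega_T = {(j1, i1) for (i1, j1) in Omega}
assert columnwise_isomeric(L0, Omega, 3)       # L0 is Omega-isomeric
assert columnwise_isomeric(L0.T, Omega_T, 3)   # L0^T is Omega^T-isomeric
```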
Put differently, isomerism is also necessary for the so-called \emph{finitely completable property} explored in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp}. As a consequence, logically speaking, the deterministic sampling conditions established in~\cite{Kiraly:2012:icml,Kiraly:2015:jmlr,daniel:2016:jstsp} should suffice to ensure isomerism. The above theorem illustrates that the isomeric condition is indeed necessary for the identifiability of the completions of any partial matrices, no matter how the observed entries are chosen. \subsection{Relative Well-Conditionedness} While necessary, the isomeric condition is unfortunately unable to guarantee the identifiability of $L_0$ for sure. More concretely, consider the following example: \begin{align}\label{eq:example:2} \Omega = \{(1, 1), (2, 2)\} \textrm{ and } L_0 =\left[\begin{array}{cc} 1 &\frac{10}{9}\\ \frac{9}{10} &1 \end{array}\right]. \end{align} It can be verified that $L_0$ is $\Omega/\Omega^T$-isomeric. However, there still exist infinitely many rank-1 completions different from $L_0$, e.g., $L_*=[1 ,1; 1, 1]$, which is a matrix of all ones. For this particular example, $L_*$ is the optimal rank-1 completion in the sense of coherence. In general, isomerism is only a condition for the sampled submatrices to be not rank deficient, but there is no guarantee that the sampled submatrices are well-conditioned. To compensate for this weakness, we further propose an additional hypothesis called \emph{relative well-conditionedness}, which encourages the smallest singular value of the sampled submatrices to be far from 0. Again, we shall begin with a simple concept called \emph{$\omega$-relative condition number}, with $\omega$ being a 1D sampling set. \begin{defn}[$\omega$-relative condition number]\label{def:rcn:1} Let $M\in\Re^{m\times{}l}$ and $\omega\subseteq\{1,\cdots,m\}$. Suppose that $[M]_{\omega,:}\neq0$.
Then the $\omega$-relative condition number of the matrix $M$ is denoted as $\gamma_{\omega}(M)$ and given by \begin{align*} \gamma_{\omega}(M) = 1/\|M([M]_{\omega,:})^+\|^2, \end{align*} where $(\cdot)^+$ and $\|\cdot\|$ are the pseudo-inverse and operator norm of a matrix, respectively. \end{defn} Regarding the bound of the $\omega$-relative condition number $\gamma_{\omega}(M)$, simple calculations yield \begin{align*} \sigma_{min}^2/\|M\|^2\leq\gamma_{\omega}(M)\leq1, \end{align*} where $\sigma_{min}$ is the smallest singular value of $[M]_{\omega,:}$. Hence, a large smallest singular value of the sampled submatrix $[M]_{\omega,:}$ is sufficient, though not necessary, for $\gamma_{\omega}(M)$ to be large. Roughly, the value of $\gamma_{\omega}(M)$ measures how much information about the matrix $M$ is contained in the sampled submatrix $[M]_{\omega,:}$. The more information $[M]_{\omega,:}$ contains, the larger $\gamma_{\omega}(M)$ is (this will become clearer later). For example, $\gamma_{\omega}(M)=1$ whenever $\omega=\{1,\cdots,m\}$. The concept of the $\omega$-relative condition number can be extended to the case of 2D sampling sets, as shown below. \begin{defn}[$\Omega$-relative condition number]\label{def:rcn:2} Let $M\in\Re^{m\times{}l}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Suppose that $[M]_{\Omega^j,:}\neq0$, $\forall{}1\leq{}j\leq{}n$. Then the $\Omega$-relative condition number of $M$ is denoted as $\gamma_{\Omega}(M)$ and given by \begin{align*} \gamma_{\Omega}(M) = \min_{1\leq{}j\leq{}n}\gamma_{\Omega^j}(M), \end{align*} where $\Omega^j$ is the 1D sampling set corresponding to the $j$th column of $\Omega$. Again, note here that $l\neq{}n$ is allowed. \end{defn} Using the notation $\Omega^T$, we can define the concept of the $\Omega/\Omega^T$-relative condition number as follows.
\begin{defn}[$\Omega/\Omega^T$-relative condition number]\label{def:rcn:3} Let $M\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Suppose that $[M]_{\Omega^j,:}\neq0$ and $[M]_{:,\Omega_i}\neq0$, $\forall{}1\leq{}i\leq{}m,1\leq{}j\leq{}n$. Then the $\Omega/\Omega^T$-relative condition number of $M$ is denoted as $\gamma_{\Omega,\Omega^T}(M)$ and given by \begin{align*} \gamma_{\Omega,\Omega^T}(M) = \min(\gamma_{\Omega}(M), \gamma_{\Omega^T}(M^T)). \end{align*} \end{defn} To make sure that an arbitrary matrix $L_0$ is recoverable from a subset of the matrix entries, we need to assume that $\gamma_{\Omega,\Omega^T}(L_0)$ is reasonably large; this is the so-called \emph{relative well-conditionedness}. Under the standard settings of uniform sampling and incoherence, we have the following theorem to bound $\gamma_{\Omega,\Omega^T}(L_0)$. \begin{theo}\label{thm:rcn:bound} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,$ $n\}$. Denote $n_1 = \max(m,n)$, $n_2=\min(m,n)$, $\mu_0=\mu(L_0)$ and $r_0=\rank{L_0}$. Suppose that $\Omega$ is a set sampled uniformly at random, namely $\mathrm{Pr}((i,j)\in\Omega)=\rho_0$ and $\mathrm{Pr}((i,j)\notin\Omega)=1-\rho_0$. For any $\alpha>1$, if $\rho_0>\alpha{}c\mu_0r_0(\log{n_1})/n_2$ for some numerical constant $c$ then, with probability at least $1-n_1^{-10}$, $\gamma_{\Omega,\Omega^T}(L_0)>(1-1/\sqrt{\alpha})\rho_0$. \end{theo} The above theorem illustrates that, under the setting of uniform sampling \emph{plus} incoherence, the relative condition number approximately corresponds to the fraction of the observed entries. Actually, the relative condition number can be bounded from below without the assumption of uniform sampling. \begin{theo}\label{thm:iso:rcn} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote $\mu_0=\mu(L_0)$ and $r_0=\rank{L_0}$. 
Denote by $\rho$ the smallest fraction of observed entries over all columns and rows of $L_0$; namely, \begin{align*} \rho = \min(\min_{1\leq{}i\leq{}m}\frac{|\Omega_{i}|}{n}, \min_{1\leq{}j\leq{}n}\frac{|\Omega^{j}|}{m}). \end{align*} For any $0\leq\alpha<1$, if $\rho>1-(1-\alpha)/(\mu_0r_0)$ then the matrix $L_0$ is $\Omega/\Omega^T$-isomeric and $\gamma_{\Omega,\Omega^T}(L_0)>\alpha$. \end{theo} It is worth noting that the relative condition number can be large even if the coherence of $L_0$ is extremely high. For the example shown in~\eqref{eq:example:1}, it can be calculated that $\gamma_{\Omega,\Omega^T}(L_0)=1$. \section{Theories and Methods}\label{sec:mainbody} In this section, we shall prove some theorems pertaining to matrix completion as well as missing data recovery. In addition, we suggest a method termed IsoDP for matrix completion, which possesses some remarkable features that are absent from the traditional bilinear programs. \subsection{Missing Data Recovery}\label{sec:clue} Before exploring the matrix completion problem, we would like to consider a missing data recovery problem studied by~\cite{Zhang06}, which is described as follows: Let $y_0\in\Re^m$ be a data vector drawn from some low-dimensional subspace, denoted as $y_0\in\mathcal{S}_0\subset\Re^m$. Suppose that $y_0$ contains some available observations in $y_b\in\Re^k$ and some missing entries in $y_u\in\Re^{m-k}$. Namely, after a permutation, \begin{align}\label{eq:y} y_0 = \left[\begin{array}{c} y_b\\ y_u\\ \end{array}\right], y_b\in\Re^k, y_u\in\Re^{m-k}. \end{align} Given the observations in $y_b$, we seek to restore the unseen entries in $y_u$. To do this, we consider the prevalent idea of representing a data vector as a linear combination of the bases of a given dictionary: \begin{align}\label{eq:ax} y_0 = Ax_0, \end{align} where $A\in\Re^{m\times{}p}$ is a dictionary constructed in advance and $x_0\in\Re^{p}$ is the representation of $y_0$.
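To make this setup concrete, the following small numpy sketch (the dimensions, dictionary, and data are hypothetical illustrations, not from the paper) draws a vector from a 3-dimensional subspace and restores its unseen entries from the observed ones through the dictionary:

```python
import numpy as np

rng = np.random.default_rng(0)
m, p, k = 8, 3, 5                     # ambient dim, dictionary size, observed entries
A = rng.standard_normal((m, p))       # dictionary; span(A) is a 3-dim subspace S_0
x0 = rng.standard_normal(p)
y0 = A @ x0                           # authentic sample y_0 = A x_0

A_b, A_u = A[:k, :], A[k:, :]         # partition rows: observed vs. missing
y_b = y0[:k]
x_star = np.linalg.pinv(A_b) @ y_b    # minimal-l2 representation from observations
y_u_hat = A_u @ x_star                # restore the unseen entries
print(np.allclose(y_u_hat, y0[k:]))   # True
```

Since $A_b$ here generically has full column rank ($k=5\geq p=3$), the least-squares representation coincides with $x_0$ and the missing entries are recovered exactly.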
Utilizing the same permutation used in~\eqref{eq:y}, we can partition the rows of $A$ into two parts according to the locations of the observed and missing entries: \begin{align}\label{eq:A} A = \left[\begin{array}{c} A_b\\ A_u\\ \end{array}\right], A_b\in\Re^{k\times{}p}, A_u\in\Re^{(m-k)\times{}p}. \end{align} In this way, the equation in~\eqref{eq:ax} gives \begin{align*} y_b = A_bx_0\quad\text{and}\quad{}y_u = A_ux_0. \end{align*} As can now be seen, the unseen data $y_u$ is exactly restored as long as the representation $x_0$ can be retrieved from the available observations in $y_b$ alone. In general, there are infinitely many representations that satisfy $y_0 = Ax_0$, e.g., $x_0=A^+y_0$, where $(\cdot)^+$ is the pseudo-inverse of a matrix. Since $A^+y_0$ is the representation of minimal $\ell_2$ norm, we revisit the traditional $\ell_2$ program: \begin{align}\label{eq:l2} \min_{x} \frac{1}{2}\norm{x}_2^2,\quad\textrm{s.t.}\quad{}y_b = A_bx, \end{align} where $\|\cdot\|_2$ is the $\ell_2$ norm of a vector. The above problem has a closed-form solution given by $A_b^+y_b$. Under some verifiable conditions, the above $\ell_2$ program is indeed \emph{consistently successful} in the following sense: For any $y_0\in\mathcal{S}_0$ with an arbitrary partition $y_0=[y_b;y_u]$ (i.e., arbitrarily missing entries), the desired representation $x_0=A^+y_0$ is the unique minimizer to the problem in~\eqref{eq:l2}. That is, the unseen data $y_u$ is exactly recovered by first computing $x_*=A_b^+y_b$ and then calculating $y_u=A_ux_*$. \begin{theo}\label{thm:l2} Let $y_0=[y_b;y_u]\in\Re^m$ be an authentic sample drawn from some low-dimensional subspace $\mathcal{S}_0$. Denote by $k$ the number of available observations in $y_b$. Then the convex program~\eqref{eq:l2} is consistently successful, as long as $\mathcal{S}_0\subseteq\mathrm{span}\{A\}$ and the given dictionary $A$ is $k$-isomeric.
\end{theo} The above theorem says that, in order to recover an $m$-dimensional vector sampled from some subspace determined by a given $k$-isomeric dictionary $A$, one only needs to observe $k$ entries of the vector. \subsection{Convex Matrix Completion} Low rank matrix completion concerns the problem of seeking a matrix that not only attains the lowest rank but also satisfies the constraints given by the observed entries: \begin{eqnarray*} \min_{L} \rank{L},\quad\textrm{s.t.}\quad{}[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega. \end{eqnarray*} Unfortunately, this idea is of little practical use, because the problem above is essentially NP-hard and cannot be solved in polynomial time~\cite{Chistov:1984}. To achieve practical matrix completion, Cand{\`e}s and Recht~\cite{Candes:2009:math,Recht2008} suggested an alternative that minimizes the nuclear norm instead; namely, \begin{eqnarray}\label{eq:numin} \min_{L} \|L\|_*,\quad\textrm{s.t.}\quad{}[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega, \end{eqnarray} where $\|\cdot\|_*$ denotes the nuclear norm, i.e., the sum of the singular values of a matrix. Under the context of uniform sampling, it has been proved that the above convex program succeeds in recovering the target $L_0$. Although its theory is built upon the assumption of missing at random, the convex program~\eqref{eq:numin}, as widely observed in the literature, actually works even when the locations of the missing entries are distributed in a correlated and nonuniform fashion. This phenomenon can be explained by the following theorem, which states that the solution to the problem in~\eqref{eq:numin} is \emph{unique} and \emph{exact}, provided that the isomeric condition is obeyed and the relative condition number of $L_0$ is large enough. \begin{theo}\label{thm:convex} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$.
If $L_0$ is $\Omega/\Omega^T$-isomeric and $\gamma_{\Omega,\Omega^T}(L_0)>0.75$ then $L_0$ is the unique minimizer to the problem in~\eqref{eq:numin}. \end{theo} Roughly speaking, the assumption $\gamma_{\Omega,\Omega^T}(L_0)>0.75$ requires that more than three quarters of the information in $L_0$ be observed. Such an assumption is seemingly restrictive but technically difficult to weaken in general. \subsection{Nonconvex Matrix Completion}\label{sec:mainres} The problem of missing data recovery is closely related to matrix completion, which amounts to restoring the missing entries of multiple data vectors simultaneously. Hence, we shall transfer the spirit of the $\ell_2$ program~\eqref{eq:l2} to the case of matrix completion. Following~\eqref{eq:l2}, one may consider Frobenius norm minimization for matrix completion: \begin{align}\label{eq:fnorm} \min_{X} \frac{1}{2}\norm{X}_F^2,\textrm{ s.t. }[AX]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega, \end{align} where $A\in\Re^{m\times{}p}$ is a dictionary matrix assumed to be given. Similar to~\eqref{eq:l2}, the convex program~\eqref{eq:fnorm} can also exactly recover the desired representation matrix $A^+L_0$, as shown in the theorem below. \begin{theo}\label{thm:fnorm} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Provided that $L_0\in\mathrm{span}\{A\}$ and the given dictionary $A$ is $\Omega$-isomeric, the desired representation $X_0=A^+L_0$ is the unique minimizer to the problem in~\eqref{eq:fnorm}. \end{theo} Theorem~\ref{thm:fnorm} tells us that, in general, even when the locations of the missing entries are chosen arbitrarily, the target $L_0$ is restored as long as we have a proper dictionary $A$. This motivates us to consider the commonly used bilinear program that seeks both $A$ and $X$ simultaneously: \begin{align}\label{eq:isodp:f} \hspace{-0.05in}\min_{A,X}\frac{1}{2} (\norm{A}_F^2\hspace{-0.02in}+\hspace{-0.02in} \norm{X}_F^2),\textrm{ s.t. 
}[AX]_{ij} \hspace{-0.02in}= \hspace{-0.02in} [L_0]_{ij},\forall{}(i,j)\hspace{-0.02in}\in\hspace{-0.02in}\Omega, \end{align} where $A\in\Re^{m\times{}p}$ and $X\in\Re^{p\times{}n}$. The problem above is bilinear and therefore nonconvex, so it is hard to obtain performance guarantees as strong as those available for convex programs, e.g.,~\cite{Candes:2009:math,liu:tsp:2016}. What is more, the setup of deterministic sampling calls for a deterministic recovery guarantee, which is much more difficult to prove than a probabilistic one. Interestingly, under the very mild condition of isomerism, the problem in~\eqref{eq:isodp:f} is proven to admit the exact solutions, which identify the target matrix $L_0$, as critical points. Furthermore, when the relative condition number of $L_0$ is sufficiently large, the local optimality of the exact solutions is guaranteed. \begin{theo}\label{thm:isodp:f} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote the rank and the SVD of $L_0$ as $r_0$ and $U_0\Sigma_0V_0^T$, respectively. Define \begin{align*} &A_0 = U_0\Sigma_0^{\frac{1}{2}}Q^T, X_0= Q\Sigma_0^{\frac{1}{2}}V_0^T, \forall{}Q\in\Re^{p\times{}r_0}, Q^TQ = \mathtt{I}. \end{align*} Then we have the following: \begin{itemize} \item[1.]If $L_0$ is $\Omega/\Omega^T$-isomeric then the exact solution, denoted as $(A_0, X_0)$, is a critical point to the problem in~\eqref{eq:isodp:f}. \item[2.]If $L_0$ is $\Omega/\Omega^T$-isomeric, $\gamma_{\Omega,\Omega^T}(L_0)>0.5$ and $p=r_0$ then $(A_0, X_0)$ is a local minimum to the problem in~\eqref{eq:isodp:f}, and the local optimality is strict if we ignore the differences among the exact solutions that equally recover $L_0$. \end{itemize} \end{theo} The condition $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, roughly, demands that more than half of the information in $L_0$ be observed.
Unless some extra assumptions are imposed, this condition is not reducible, because counterexamples do exist when $\gamma_{\Omega,\Omega^T}(L_0)<0.5$. Consider a concrete case with \begin{align}\label{eq:example:3} \Omega = \{(1, 1), (2, 2)\} \textrm{ and } L_0 =\left[\begin{array}{cc} 1 &\sqrt{\alpha^2-1}\\ \frac{1}{\sqrt{\alpha^2-1}} &1 \end{array}\right], \end{align} where $\alpha>\sqrt{2}$. Then it can be verified that $L_0$ is $\Omega/\Omega^T$-isomeric. Via some calculations, we have (assume $p=r_0$) \begin{align*} &\gamma_{\Omega,\Omega^T}(L_0) = \min(1-\frac{1}{\alpha^2},\frac{1}{\alpha^2})=\frac{1}{\alpha^2} < 0.5,\\ &A_0 = \left[\begin{array}{c} (\alpha^2-1)^{\frac{1}{4}}\\ \frac{1}{(\alpha^2-1)^{\frac{1}{4}}}\\ \end{array}\right]\textrm{ and } X_0 = \left[\frac{1}{(\alpha^2-1)^{\frac{1}{4}}}, (\alpha^2-1)^{\frac{1}{4}}\right]. \end{align*} Now, construct \begin{align*} &A_{\epsilon} = \left[\begin{array}{c} \frac{(\alpha^2-1)^{\frac{1}{4}}}{1+\epsilon}\\ 1/(\alpha^2-1)^{\frac{1}{4}}\\ \end{array}\right]\textrm{ and } X_{\epsilon} = \left[\frac{1+\epsilon}{(\alpha^2-1)^{\frac{1}{4}}}, (\alpha^2-1)^{\frac{1}{4}}\right], \end{align*} where $\epsilon>0$. It is easy to see that $(A_{\epsilon},X_{\epsilon})$ is a feasible solution to~\eqref{eq:isodp:f}. However, as long as $0<\epsilon<\sqrt{\alpha^2-1}-1$, it can be verified that \begin{align*} \|A_{\epsilon}\|_F^2 + \|X_{\epsilon}\|_F^2 < \|A_0\|_F^2 + \|X_0\|_F^2, \end{align*} which implies that $(A_0,X_0)$ is not a local minimum to~\eqref{eq:isodp:f}. In fact, for the particular example shown in~\eqref{eq:example:3}, it can be proven that a global minimum to~\eqref{eq:isodp:f} is given by $(A_*=[1 ;1], X_*=[1,1])$, which cannot correctly reconstruct $L_0$. \subsection{Isomeric Dictionary Pursuit} Theorem~\ref{thm:isodp:f} illustrates that program~\eqref{eq:isodp:f} relies on the assumption of $p=\rank{L_0}$. 
This is consistent with the widely observed phenomenon that program~\eqref{eq:isodp:f} may not work well when the parameter $p$ is far from the true rank of $L_0$. To overcome this drawback, we again recall Theorem~\ref{thm:fnorm}. Notice that the $\Omega$-isomeric condition imposed on the dictionary matrix $A$ requires that \begin{align*} \rank{A}\leq|\Omega^j|,\forall{}j=1,\cdots,n. \end{align*} This, together with the condition $L_0\in\mathrm{span}\{A\}$, motivates us to combine the formulation~\eqref{eq:fnorm} with the popular idea of nuclear norm minimization, resulting in a bilinear program termed IsoDP, which estimates both $A$ and $X$ by minimizing a mixture of the nuclear and Frobenius norms: \begin{align}\label{eq:isodp} \hspace{-0.05in}\min_{A,X}\norm{A}_*\hspace{-0.03in}+\hspace{-0.03in}\frac{1}{2}\norm{X}_F^2,\textrm{ s.t. }[AX]_{ij} \hspace{-0.03in}= \hspace{-0.03in} [L_0]_{ij},\hspace{-0.02in}\forall{}(i,j)\hspace{-0.02in}\in\hspace{-0.02in}\Omega, \end{align} where $A\in\Re^{m\times{}p}$ and $X\in\Re^{p\times{}n}$. The above formulation can also be derived from the framework of Schatten quasi-norm minimization~\cite{rahul:jlmr:2010,Shang:2016:SAT,xu:2017:aai}. It has been proven in~\cite{Shang:2016:SAT,xu:2017:aai} that, for any rank-$r$ matrix $L\in\Re^{m\times{}n}$ with singular values $\sigma_1,\cdots,\sigma_r$, the following holds: \begin{align}\label{eq:snorm} \frac{1}{q}\|L\|_{q}^q = \min_{A,X}\frac{1}{q_1} \|A\|_{q_1}^{q_1} + \frac{1}{q_2}\|X\|_{q_2}^{q_2}, \textrm{ s.t. } AX = L, \end{align} as long as $p\geq{}r$ and $1/q = 1/q_1+1/q_2$ ($q,q_1,q_2>0$), where $\|L\|_q = (\sum_{i=1}^r\sigma_i^q)^{1/q}$ is the Schatten-$q$ (quasi-)norm. In that sense, the IsoDP program~\eqref{eq:isodp} is related to the following Schatten-$q$ quasi-norm minimization problem with $q = 2/3$: \begin{align}\label{eq:stmin} \min_{L} \frac{3}{2}\|L\|_{2/3}^{2/3} ,\quad\textrm{s.t.}\quad{}[L]_{ij} = [L_0]_{ij},\forall{}(i,j)\in\Omega. 
\end{align} Nevertheless, programs~\eqref{eq:stmin} and~\eqref{eq:isodp} are not equivalent to each other; this is obvious if $p<m$ (assume $m\leq{}n$). In fact, even when $p\geq{}m$, the conclusion~\eqref{eq:snorm} only implies that the global minima of~\eqref{eq:stmin} and~\eqref{eq:isodp} are equivalent; their local minima and critical points could still be different. More precisely, any local minimum to~\eqref{eq:stmin} certainly corresponds to a local minimum to~\eqref{eq:isodp}, but not vice versa\footnote{Suppose that $L_1$ is a local minimum to the problem in~\eqref{eq:stmin}. Let $(A_1,X_1) = \arg\min_{A,X} \norm{A}_*+0.5\norm{X}_F^2$, s.t. $AX=L_1$. Then $(A_1,X_1)$ has to be a local minimum to~\eqref{eq:isodp}. This can be proven by contradiction. Assume that $(A_1,X_1)$ is not a local minimum to~\eqref{eq:isodp}. Then there exists some feasible solution, denoted as $(A_2, X_2)$, that is arbitrarily close to $(A_1, X_1)$ and satisfies $\norm{A_2}_*+0.5\norm{X_2}_F^2 < \norm{A_1}_*+0.5\norm{X_1}_F^2$. Taking $L_2=A_2X_2$, we have that $L_2$ is arbitrarily close to $L_1$ and $\frac{3}{2}\|L_2\|_{2/3}^{2/3}\leq\norm{A_2}_*+0.5\norm{X_2}_F^2 < \norm{A_1}_*+0.5\norm{X_1}_F^2=\frac{3}{2}\|L_1\|_{2/3}^{2/3}$, which contradicts the premise that $L_1$ is a local minimum to~\eqref{eq:stmin}. So, a local minimum to~\eqref{eq:stmin} also gives a local minimum to~\eqref{eq:isodp}. The converse of this statement may not be true, however, and~\eqref{eq:isodp} might have more local minima than~\eqref{eq:stmin}.}. For the same reason, the bilinear program~\eqref{eq:isodp:f} is not equivalent to the convex program~\eqref{eq:numin}. Regarding the recovery performance of the IsoDP program~\eqref{eq:isodp}, we establish the following theorem, which reproduces Theorem~\ref{thm:isodp:f} without the assumption of $p=r_0$. \begin{theo}\label{thm:isodp} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$.
Denote the rank and the SVD of $L_0$ as $r_0$ and $U_0\Sigma_0V_0^T$, respectively. Define \begin{align*} &A_0 = U_0\Sigma_0^{\frac{2}{3}}Q^T, X_0= Q\Sigma_0^{\frac{1}{3}}V_0^T,\forall{}Q\in\Re^{p\times{}r_0}, Q^TQ = \mathtt{I}. \end{align*} Then we have the following: \begin{itemize} \item[1.]If $L_0$ is $\Omega/\Omega^T$-isomeric then the exact solution $(A_0, X_0)$ is a critical point to the problem in~\eqref{eq:isodp}. \item[2.]If $L_0$ is $\Omega/\Omega^T$-isomeric and $\gamma_{\Omega,\Omega^T}(L_0)>0.5$ then $(A_0, X_0)$ is a local minimum to the problem in~\eqref{eq:isodp}, and the local optimality is strict if we ignore the differences among the exact solutions that equally recover $L_0$. \end{itemize} \end{theo} Thanks to the advantages of the nuclear norm, the above theorem no longer requires the assumption $p=\rank{L_0}$. Empirically, unlike~\eqref{eq:isodp:f}, which exhibits superior performance only if $p$ is close to $\rank{L_0}$ and the initial solution is chosen carefully, IsoDP can work well by simply choosing $p=m$ and using $A=\mathtt{I}$ as the initial solution. \subsection{Optimization Algorithm}\label{sec:opt} Considering the fact that real-world observations are often contaminated by noise, we shall instead investigate the following bilinear program, which can also approximately solve the problem in~\eqref{eq:isodp}: \begin{align}\label{eq:isodp:noisy} &\hspace{-0.05in}\min_{A,X} \lambda(\norm{A}_*\hspace{-0.02in}+\hspace{-0.02in}\frac{1}{2}\norm{X}_F^2)\hspace{-0.02in}+\hspace{-0.02in}\frac{1}{2}\sum_{(i,j)\in\Omega}([AX]_{ij}\hspace{-0.02in}-\hspace{-0.02in}[L_0]_{ij})^2, \end{align} where $A\in\Re^{m\times{}m}$ (i.e., $p=m$), $X\in\Re^{m\times{}n}$ and $\lambda>0$ is a parameter. The optimization problem in~\eqref{eq:isodp:noisy} can be solved by any of the many first-order methods established in the literature. For the sake of simplicity, we choose to use the proximal methods of~\cite{proximal:2009:mp,Bolte2014}.
Let $(A_t,X_t)$ be the solution estimated at the $t$th iteration. Define a function $g_t(\cdot)$ as \begin{align*} g_t(A) = \frac{1}{2}\sum_{(i,j)\in\Omega}([AX_{t+1}]_{ij}-[L_0]_{ij})^2. \end{align*} Then the solution to~\eqref{eq:isodp:noisy} is updated by iterating the following two procedures: \begin{align}\label{eq:proximal} &\hspace{-0.05in}X_{t+1}\hspace{-0.02in}= \hspace{-0.02in}\arg\min_{X} \frac{\lambda}{2}\|X\|_F^2\hspace{-0.02in}+\hspace{-0.02in}\frac{1}{2}\sum_{(i,j)\in\Omega}([A_tX]_{ij}-[L_0]_{ij})^2,\\\nonumber &\hspace{-0.05in}A_{t+1}\hspace{-0.02in}=\hspace{-0.02in} \arg\min_{A} \frac{\lambda}{\mu_t}\|A\|_*+\frac{1}{2}\|A - (A_t-\frac{\partial{}g_t(A_t)}{\mu_t})\|_F^2, \end{align} where $\mu_t>0$ is a penalty parameter and $\partial{}g_t(A_t)$ is the gradient of the function $g_t(A)$ at $A=A_t$. According to~\cite{proximal:2009:mp}, the penalty parameter $\mu_t$ can be set as $\mu_t = \|X_{t+1}\|^2$. The two optimization problems in~\eqref{eq:proximal} both have closed-form solutions. To be more precise, the $X$-subproblem is a least squares regression problem: \begin{align}\label{eq:x-sub} [X_{t+1}]_{:,j} = (A_j^TA_j+\lambda\mathtt{I})^{-1}A_j^Ty_j, \forall{1\leq{}j\leq{}n}, \end{align} where $A_j = [A_t]_{\Omega^j,:}$ and $y_j=[L_0]_{\Omega^j,j}$. The $A$-subproblem is solved by Singular Value Thresholding (SVT)~\cite{svt:cai:2008}: \begin{align}\label{eq:a-sub} A_{t+1}=U\mathcal{H}_{\lambda/\mu_t}(\Sigma)V^T, \end{align} where $U\Sigma{}V^T$ is the SVD of $A_t-\partial{}g_t(A_t)/\mu_t$ and $\mathcal{H}_{\lambda/\mu_t}(\cdot)$ denotes the shrinkage operator with parameter $\lambda/\mu_t$. The whole optimization procedure is summarized in Algorithm~\ref{alg1}. Without loss of generality, assume that $m\leq{}n$. Then the computational complexity of each iteration of Algorithm~\ref{alg1} is $O(m^2n)+O(m^3)$.
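The two updates above can be sketched in a few lines of numpy. This is a simplified rendition of the alternating procedure (fixed iteration count, no convergence test; the function name and parameter values are our own choices, not from the paper):

```python
import numpy as np

def isodp(L_obs, mask, lam=0.1, iters=200):
    """Alternate the column-wise ridge regression of eq. (x-sub) with the
    singular value thresholding step of eq. (a-sub).
    L_obs: m x n matrix holding the observed entries; mask: boolean m x n."""
    m, n = L_obs.shape
    A = np.eye(m)                        # p = m, initialized with A = I
    X = np.zeros((m, n))
    for _ in range(iters):
        # X-subproblem: for each column j, regress on the observed rows
        for j in range(n):
            rows = mask[:, j]
            Aj, yj = A[rows, :], L_obs[rows, j]
            X[:, j] = np.linalg.solve(Aj.T @ Aj + lam * np.eye(m), Aj.T @ yj)
        # A-subproblem: SVT applied to a gradient step of g_t
        mu = np.linalg.norm(X, 2) ** 2               # mu_t = ||X_{t+1}||^2
        grad = (mask * (A @ X - L_obs)) @ X.T        # gradient of g_t at A
        U, s, Vt = np.linalg.svd(A - grad / mu, full_matrices=False)
        A = U @ np.diag(np.maximum(s - lam / mu, 0.0)) @ Vt
    return A @ X
```

In this sketch a smaller `lam` fits the observed entries more tightly, at the price of weaker low-rank shrinkage; completing a well-observed low-rank matrix with a small `lam` typically reproduces the observed entries closely.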
\begin{algorithm}[htb] \caption{Solving problem~\eqref{eq:isodp:noisy} by alternating proximal} \label{alg1} \begin{algorithmic}[1] \STATE \textbf{Input}: $\{[L_0]_{ij} |(i,j)\in\Omega\}$. \STATE \textbf{Output}: the dictionary $A$ and the representation $X$. \STATE \textbf{Initialization}: $A=\mathtt{I}$. \REPEAT \STATE Update the representation matrix $X$ by~\eqref{eq:x-sub}. \STATE Update the dictionary matrix $A$ by~\eqref{eq:a-sub}. \UNTIL{convergence} \end{algorithmic} \end{algorithm} \section{Mathematical Proofs}\label{sec:proof} This section presents the detailed proofs of the theorems proposed in this work. \subsection{Notations} Besides the notations presented in Section~\ref{sec:notation}, some other notations are used throughout the proofs. The letters $U$, $V$, $\Omega$ and their variants (complements, subscripts, etc.) are reserved for left singular vectors, right singular vectors and support sets, respectively. For convenience, we shall abuse the notation $U$ (resp. $V$) to denote the linear space spanned by the columns of $U$ (resp. $V$), i.e., the column space (resp. row space). The orthogonal projection onto the column space $U$ is denoted by $\mathcal{P}_U$ and given by $\mathcal{P}_U(M)=UU^TM$, and similarly for the row space: $\mathcal{P}_V(M)=MVV^T$. Also, we denote by $\mathcal{P}_T$ the projection onto the sum of the column space $U$ and the row space $V$, i.e., $\mathcal{P}_T(\cdot) = UU^T(\cdot)+(\cdot)VV^T-UU^T(\cdot)VV^T$. The same notation is also used to represent a subspace of matrices (i.e., the image of an operator); e.g., we say that $M\in\mathcal{P}_{U}$ for any matrix $M$ which satisfies $\mathcal{P}_{U}(M)=M$. The symbol $\mathcal{P}_{\Omega}$ denotes the orthogonal projection onto $\Omega$: \begin{align*} [\mathcal{P}_\Omega(M)]_{ij}=\left\{\begin{array}{cc} [M]_{ij},&\text{if }(i,j)\in\Omega,\\ 0, &\text{otherwise.}\\ \end{array}\right. 
\end{align*} Similarly, the symbol $\mathcal{P}_{\Omega}^{\bot}$ denotes the orthogonal projection onto the complement space of $\Omega$; that is, $\mathcal{P}_{\Omega}+\mathcal{P}_{\Omega}^{\bot}=\mathcal{I}$, where $\mathcal{I}$ is the identity operator. \vspace{-0.1in}\subsection{Basic Lemmas} Although its definitions are associated with a certain matrix, the isomeric condition actually characterizes properties of a space, as shown in the lemma below. \begin{lemm}\label{lem:basic:L02U} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Denote the SVD of $L_0$ as $U_0\Sigma_0V_0^T$. Then we have: \begin{itemize} \item[1.] $L_0$ is $\Omega$-isomeric iff $U_0$ is $\Omega$-isomeric. \item[2.] $L_0^T$ is $\Omega^T$-isomeric iff $V_0$ is $\Omega^T$-isomeric. \end{itemize} \end{lemm} \begin{proof} Direct calculation gives \begin{align*} [L_0]_{\Omega^j,:} = ([U_0]_{\Omega^j,:})\Sigma_0V_0^T, \forall{}j=1,\cdots, n. \end{align*} Since $\Sigma_0V_0^T$ has full row rank, we have \begin{align*} \rank{[L_0]_{\Omega^j,:}} = \rank{[U_0]_{\Omega^j,:}},\forall{}j=1,\cdots,n. \end{align*} As a consequence, $L_0$ being $\Omega$-isomeric is equivalent to $U_0$ being $\Omega$-isomeric. The second claim is proven similarly. \end{proof} The isomeric property is indeed subspace successive, as shown in the next lemma. \begin{lemm}\label{lem:basic:subsucc} Let $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$ and $U_0\in\Re^{m\times{}r}$ be the basis matrix of a subspace embedded in $\Re^m$. Suppose that $U$ is a subspace of $U_0$, i.e., $U = U_0U_0^TU$. If $U_0$ is $\Omega$-isomeric then $U$ is $\Omega$-isomeric as well. \end{lemm} \begin{proof}By $U = U_0U_0^TU$ and the $\Omega$-isomerism of $U_0$, \begin{align*} &\rank{[U]_{\Omega^j,:}} = \rank{([U_0]_{\Omega^j,:})U_0^TU}=\rank{U_0^TU}\\ &=\rank{U_0U_0^TU}=\rank{U}, \forall{}1\leq{}j\leq{}n. 
\end{align*} \end{proof} The following lemma reveals that the isomeric property is related to the invertibility of certain matrices. \begin{lemm}\label{lem:basic:positive} Let $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$ and $U_0\in\Re^{m\times{}r}$ be the basis matrix of a subspace of $\Re^m$. Denote by $u_i^T$ the $i$th row of $U_0$, i.e., $U_0 = [u_1^T;\cdots;u_m^T]$. Define $\delta_{ij}$ as \begin{align}\label{eq:delta} \delta_{ij}=\left\{\begin{array}{cc} 1,&\text{if }(i,j)\in\Omega,\\ 0, &\text{otherwise.}\\ \end{array}\right. \end{align} Then the matrices $\sum_{i=1}^{m}\delta_{ij}u_iu_i^T$, $\forall{}1\leq{}j\leq{}n$, are all invertible iff $U_0$ is $\Omega$-isomeric. \end{lemm} \begin{proof} Note that \begin{align*} &([U_0]_{\Omega^j,:})^T([U_0]_{\Omega^j,:})=\sum_{i=1}^{m}(\delta_{ij})^2u_iu_i^T=\sum_{i=1}^{m}\delta_{ij}u_iu_i^T. \end{align*} Now it is easy to see that the invertibility of the matrix $\sum_{i=1}^{m}\delta_{ij}u_iu_i^T$ is equivalent to the positive definiteness of the matrix $([U_0]_{\Omega^j,:})^T([U_0]_{\Omega^j,:})$, which is further equivalent to $\rank{[U_0]_{\Omega^j,:}}=\rank{U_0}$, $\forall{}j=1,\cdots,n$. \end{proof} The following lemma gives some insight into the relative condition number. \begin{lemm}\label{lem:basic:rcn} Let $M\in\Re^{m\times{}l}$ and $\omega\subseteq\{1,\cdots,m\}$. Define $\{\delta_i\}_{i=1}^m$ with $\delta_i = 1$ if $i\in\omega$ and 0 otherwise. Define a diagonal matrix $D\in\mathbb{R}^{m\times{}m}$ as $D=\diag{\delta_1,\delta_2,\cdots,\delta_m}$. Denote the SVD of $M$ as $U\Sigma{}V^T$. If $\rank{[M]_{\omega,:}} = \rank{M}$ then \begin{align*} \gamma_{\omega}(M) = \sigma_{min}, \end{align*} where $\sigma_{min}$ is the smallest singular value (or eigenvalue) of the matrix $U^TDU$. \end{lemm} \begin{proof} First note that $[M]_{\omega,:}$ can be equivalently written as $DU\Sigma{}V^T$. By the assumption $\rank{[M]_{\omega,:}} = \rank{M}$, $DU$ has full column rank.
Thus, \begin{align*} &M([M]_{\omega,:})^+ = U\Sigma{}V^T(DU\Sigma{}V^T)^+ = U\Sigma{}V^T(\Sigma{}V^T)^+(DU)^+\\ &=U(DU)^+=U(U^TDU)^{-1}U^TD, \end{align*} which gives that \begin{align*} &M([M]_{\omega,:})^+(M([M]_{\omega,:})^+)^T = U(U^TDU)^{-1}U^T. \end{align*} As a result, we have $\|M([M]_{\omega,:})^+\|^2 = 1/\sigma_{min}$, and thereby \begin{align*} \gamma_{\omega}(M) = 1/\|M([M]_{\omega,:})^+\|^2 = \sigma_{min}. \end{align*} \end{proof} It has been proven in~\cite{siam_2010_minirank} that $\|L\|_*=\min_{A,X}\frac{1}{2}(\|A\|_F^2+\|X\|_F^2), \textrm{ s.t. }AX=L$. We have an analogous result, which has also been proven by~\cite{rahul:jlmr:2010,Shang:2016:SAT,xu:2017:aai}. \begin{lemm}\label{lem:basic:ax} Let $L\in\Re^{m\times{}n}$ be a rank-$r$ matrix with $r\leq{}p$. Denote the SVD of $L$ as $U\Sigma{}V^T$. Then we have the following: \begin{align*} \frac{3}{2}\trace{\Sigma^{\frac{2}{3}}} = \min_{A\in\Re^{m\times{}p},X\in\Re^{p\times{}n}}\|A\|_*+\frac{1}{2}\|X\|_F^2, \textrm{ s.t. }AX = L, \end{align*} where $\trace{\cdot}$ is the trace of a square matrix. \end{lemm} \begin{proof} Denote the singular values of $L$ as $\sigma_1\geq\cdots\geq\sigma_r>0$. We first consider the case that $\rank{A}=\rank{L}=r$. Since $AX=L$, the SVD of $A$ must have a form of $UQ\Sigma_AV_A^T$, where $Q$ is an orthogonal matrix of size $r\times{}r$ and $\Sigma_A = \diag{\alpha_1,\cdots,\alpha_r}$ with $\alpha_1\geq\cdots\geq\alpha_r>0$. Since $A^+L = \arg\min_{X} \|X\|_F^2, \textrm{ s.t. } AX=L$, we have \begin{align*} &\|A\|_* + \frac{1}{2}\|X\|_F^2 \geq \|A\|_* + \frac{1}{2}\|A^+L\|_F^2\\ &=\trace{\Sigma_A}+\frac{1}{2}\trace{\Sigma_A^{-1}Q^T\Sigma^2Q\Sigma_A^{-1}}. \end{align*} It can be proven that the eigenvalues of $\Sigma_A^{-1}Q^T\Sigma^2Q\Sigma_A^{-1}$ are given by $\{\sigma_i^2/\alpha_{\pi_i}^2\}_{i=1}^r$, where $\{\alpha_{\pi_i}\}_{i=1}^r$ is a permutation of $\{\alpha_i\}_{i=1}^r$. 
By the rearrangement inequality, \begin{align*} \trace{\Sigma_A^{-1}Q^T\Sigma^2Q\Sigma_A^{-1}}=\sum_{i=1}^r\frac{\sigma_i^2}{\alpha_{\pi_i}^2}\geq\sum_{i=1}^r \frac{\sigma_i^2}{\alpha_i^2}. \end{align*} As a consequence, we have \begin{align*} &\|A\|_*\hspace{-0.02in} + \hspace{-0.02in}\frac{1}{2}\|X\|_F^2 \hspace{-0.02in}\geq\hspace{-0.02in} \sum_{i=1}^r \left(\alpha_i+\frac{\sigma_i^2}{2\alpha_i^2}\right)\hspace{-0.02in}=\hspace{-0.02in} \sum_{i=1}^r \left(\frac{1}{2}\alpha_i\hspace{-0.02in}+\hspace{-0.02in}\frac{1}{2}\alpha_i\hspace{-0.02in}+\hspace{-0.02in}\frac{\sigma_i^2}{2\alpha_i^2}\right)\\ &\geq{}\sum_{i=1}^r \frac{3}{2}\sigma_i^{\frac{2}{3}}=\frac{3}{2}\trace{\Sigma^{\frac{2}{3}}}, \end{align*} where the last inequality follows from the AM--GM inequality. Regarding the general case $\rank{A}\geq{}\rank{L}$, we can construct $A_1 = UU^TA$. By $AX=L$, we have $A_1X=L$. Since $\rank{A_1} = \rank{L}$, we have \begin{align*} &\|A\|_* + \frac{1}{2}\|X\|_F^2 \geq{}\|A_1\|_* + \frac{1}{2}\|X\|_F^2\\ &\geq{}\|A_1\|_* + \frac{1}{2}\|A_1^+L\|_F^2\geq\frac{3}{2}\trace{\Sigma^{\frac{2}{3}}}. \end{align*} Finally, the optimal value $\frac{3}{2}\trace{\Sigma^{\frac{2}{3}}}$ is attained by $A_*=U\Sigma^{\frac{2}{3}}H^T$ and $X_*=H\Sigma^{\frac{1}{3}}V^T$, $\forall{}H^TH=\mathtt{I}$. \end{proof} The next lemma will be used multiple times in the proofs presented in this paper. \begin{lemm}\label{lem:basic:inverse} Let $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$ and $\mathcal{P}$ be an orthogonal projection onto some subspace of $\Re^{m\times{}n}$. Then the following are equivalent: \begin{itemize} \item[1.] $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$ is invertible. \item[2.] $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|<1$. \item[3.] $\mathcal{P}\cap{}\mathcal{P}_{\Omega}^\bot=\{0\}$. \end{itemize} \end{lemm} \begin{proof} \textbf{1$\rightarrow$2:} Let $\mathrm{vec}(\cdot)$ denote the vectorization of a matrix formed by stacking the columns of the matrix into a single column vector.
Suppose that the basis matrix associated with $\mathcal{P}$ is given by $P\in\Re^{mn\times{}r}, P^TP=\mathtt{I}$; namely, \begin{align*} \mathrm{vec}(\mathcal{P}(M)) = PP^T\mathrm{vec}(M),\forall{}M\in\Re^{m\times{}n}. \end{align*} Denote $\delta_{ij}$ as in~\eqref{eq:delta} and define a diagonal matrix $D$ as \begin{align*} D = \mathrm{diag}(\delta_{11},\delta_{21},\cdots,\delta_{ij},\cdots,\delta_{mn})\in\Re^{mn\times{}mn}. \end{align*} Notice that \begin{align*} &\mathcal{P}(M) = \mathcal{P}(\sum_{i,j}\langle{}M,e_ie_j^T\rangle{}e_ie_j^T)=\sum_{i,j}\langle{}M,e_ie_j^T\rangle{}\mathcal{P}(e_ie_j^T), \end{align*} where $e_i$ is the $i$th standard basis vector and $\langle\cdot,\cdot\rangle$ denotes the inner product between two matrices. With this notation, it is easy to see that \begin{align*} &[\mathrm{vec}(\mathcal{P}(e_1e_1^T)),\mathrm{vec}(\mathcal{P}(e_2e_1^T)),\cdots,\mathrm{vec}(\mathcal{P}(e_me_n^T))]=PP^T. \end{align*} Similarly, we have \begin{align*} \mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M) = \sum_{i,j}\langle\mathcal{P}(M),e_ie_j^T\rangle(\delta_{ij}\mathcal{P}(e_ie_j^T)), \end{align*} and therefore \begin{align*} &\mathrm{vec}(\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M))= PP^TD\mathrm{vec}(\mathcal{P}(M))\\ &=PP^TDPP^T\mathrm{vec}(M). \end{align*} For $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$ to be invertible, the matrix $P^TDP$ must be positive definite: whenever $P^TDP$ is singular, there exists a nonzero $z\in\Re^{r}$ satisfying $P^TDPz=0$, and thus a nonzero $M\in\mathcal{P}$ (namely, $\mathrm{vec}(M)=Pz$) such that $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M)=0$; this contradicts the assumption that $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$ is invertible. Denote the minimal singular value of $P^TDP$ as $0<\sigma_{min}\leq1$.
Since $P^TDP$ is positive definite, we have \begin{align*} &\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F = \|\mathrm{vec}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M))\|_2\\ &= \|(\mathtt{I}-P^TDP)P^T\mathrm{vec}(M)\|_2\leq(1-\sigma_{min})\|P^T\mathrm{vec}(M)\|_2 \\ &= (1-\sigma_{min})\|\mathcal{P}(M)\|_F, \end{align*} which gives that $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|\leq1-\sigma_{min}<1$. \textbf{2$\rightarrow$3:} Suppose that $M\in{}\mathcal{P}\cap{}\mathcal{P}_{\Omega}^\bot$, i.e., $M =\mathcal{P}(M)= \mathcal{P}_{\Omega}^\bot(M)$. Then we have $M=\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)$ and thus \begin{align*} &\|M\|_F = \|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F\leq\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|\|M\|_F\leq\|M\|_F. \end{align*} Since $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|<1$, the first equality above can hold only when $M=0$. \textbf{3$\rightarrow$1:} Consider a nonzero matrix $M\in\mathcal{P}$. Then we have \begin{align*} &\|M\|_F^2 = \|\mathcal{P}(M)\|_F^2 = \|\mathcal{P}_{\Omega}\mathcal{P}(M)+\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2\\ &=\|\mathcal{P}_{\Omega}\mathcal{P}(M)\|_F^2+\|\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2, \end{align*} which gives that \begin{align*} &\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2\leq\|\mathcal{P}_{\Omega}^\bot\mathcal{P}(M)\|_F^2=\|M\|_F^2 - \|\mathcal{P}_{\Omega}\mathcal{P}(M)\|_F^2. \end{align*} By $\mathcal{P}\cap{}\mathcal{P}_{\Omega}^\bot=\{0\}$, $\mathcal{P}_{\Omega}\mathcal{P}(M)\neq0$. Thus, \begin{align*} &\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|^2 \leq 1 - \inf_{M\in\mathcal{P},\|M\|_F=1}\|\mathcal{P}_{\Omega}\mathcal{P}(M)\|_F^2<1. \end{align*} Provided that $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\|<1$, $\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i$ is well defined.
Notice that, for any $M\in\mathcal{P}$, the following holds: \begin{align*} &\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)(M)\\ &=\mathcal{P}(\mathcal{I}-\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)(M)\\ &=\mathcal{P}(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i-\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}-\sum_{i=2}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)(M)\\ &=\mathcal{P}(M) = M. \end{align*} Similarly, it can also be proven that $(\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i)$ $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}(M)=M$. Hence, $\mathcal{I}+\sum_{i=1}^{\infty}(\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P})^i$ is indeed the inverse operator of $\mathcal{P}\mathcal{P}_{\Omega}\mathcal{P}$. \end{proof} The lemma below is adapted from the arguments in~\cite{siam:stewart:1969}. \begin{lemm}\label{lem:basic:pinv} Let $A\in\mathbb{R}^{m\times{}p}$ be a matrix with column space $U$, and let $A_1 = A +\Delta$. If $\Delta\in{}U$ and $\|\Delta\|<1/\|A^+\|$ then \begin{align*} \rank{A_1} = \rank{A} \textrm{ and } \|A_1^+\|\leq{}\frac{\|A^+\|}{1 - \|A^+\|\|\Delta\|}. \end{align*} \end{lemm} \begin{proof}By $\Delta\in{}U$, \begin{align*} A_1 = A + UU^T\Delta= A + AA^+\Delta = A(\mathtt{I} + A^+\Delta). \end{align*} By $\|\Delta\|<1/\|A^+\|$, $\mathtt{I} + A^+\Delta$ is invertible and thus $\rank{A_1} = \rank{A}$. To prove the second claim, we denote by $V_1$ the row space of $A_1$. Then we have \begin{align*} V_1V_1^T = A_1^+A_1 = A_1^+A(\mathtt{I} + A^+\Delta), \end{align*} which gives that $A_1^+A = V_1V_1^T(\mathtt{I} + A^+\Delta)^{-1}$. Since $A_1\in{}U$, we have \begin{align*} A_1^+ = A_1^+UU^T = A_1^+AA^+ = V_1V_1^T(\mathtt{I} + A^+\Delta)^{-1}A^+, \end{align*} from which the conclusion follows.
\end{proof} \subsection{Critical Lemmas} The following lemma plays a critical role in the proofs. \begin{lemm}\label{lem:critical:inverse} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$ and $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$. Then we have the following: \begin{itemize} \item[1.] $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is invertible iff $U_0$ is $\Omega$-isomeric. \item[2.] $\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}$ is invertible iff $V_0$ is $\Omega^T$-isomeric. \end{itemize} \end{lemm} \begin{proof} The two claims are proven in the same way, so we present only the proof of the first one. Since the operator $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is linear and $\mathcal{P}_{U_0}$ is a linear space of finite dimension, the sufficiency can be proven by showing that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is injective. That is, we need to prove that the following linear system has no nonzero solution: \begin{align*} \mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}(M) = 0, \textrm{ s.t. }M\in\mathcal{P}_{U_0}. \end{align*} Assume that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}(M) = 0$. Then we have \begin{align*} U_0^T\mathcal{P}_{\Omega}(U_0U_0^TM) = 0. \end{align*} Denote the $i$th row of $U_0$ and the $j$th column of $U_0^TM$ as $u_i^T$ and $b_j$, respectively; that is, $U_0 = [u_1^T;u_2^T;\cdots;u_m^T]$ and $U_0^TM = [b_1,b_2,\cdots,b_n]$. Define $\delta_{ij}$ as in~\eqref{eq:delta}. Then the $j$th column of $U_0^T\mathcal{P}_{\Omega}(U_0U_0^TM)$ is given by $(\sum_{i=1}^{m}\delta_{ij}u_iu_i^T)b_j$. By Lemma~\ref{lem:basic:positive}, the matrix $\sum_{i=1}^{m}\delta_{ij}u_iu_i^T$ is invertible. Hence, $U_0^T\mathcal{P}_{\Omega}(U_0U_0^TM) = 0$ implies that \begin{align*} b_j=0,\forall{}j=1,\cdots,n, \end{align*} i.e., $U_0^TM=0$.
By the assumption of $M\in\mathcal{P}_{U_0}$, $M=0$. It remains to prove the necessity. Assume $U_0$ is not $\Omega$-isomeric. By Lemma~\ref{lem:basic:positive}, there exists $j_1$ such that the matrix $\sum_{i=1}^{m}\delta_{ij_1}u_iu_i^T$ is singular and therefore has a nonzero null space. So, there exists $M_1\neq{}0$ such that $U_0^T\mathcal{P}_{\Omega}(U_0M_1)=0$. Let $M=U_0M_1$. Then we have $M\neq0$, $M\in\mathcal{P}_{U_0}$ and \begin{align*} \mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}(M) = 0. \end{align*} This contradicts the assumption that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is invertible. As a consequence, $U_0$ must be $\Omega$-isomeric. \end{proof} The next four lemmas establish some connections between the relative condition number and the operator norm. \begin{lemm}\label{lem:critical:rnc2optnorm} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$, and let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$ and $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$. If $L_0$ is $\Omega/\Omega^T$-isomeric then \begin{align*} &\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\| = 1 - \gamma_{\Omega}(L_0),\textrm{ }\|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\| = 1 - \gamma_{\Omega^T}(L_0^T). \end{align*} \end{lemm} \begin{proof} We only need to prove the first claim. Denote $\delta_{ij}$ as in~\eqref{eq:delta} and define a set of diagonal matrices $\{D_j\}_{j=1}^n$ as $D_j = \mathrm{diag}(\delta_{1j},\delta_{2j},\cdots,\delta_{mj})\in\Re^{m\times{}m}$. Denote the $j$th column of $\mathcal{P}_{U_0}(M)$ as $b_j$. Then we have \begin{align*} &\|[\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(M)]_{:,j}\|_2 = \|U_0U_0^Tb_j - U_0(U_0^TD_jU_0)U_0^Tb_j\|_2\\ &=\|(\mathtt{I}-U_0^TD_jU_0)U_0^Tb_j\|_2\leq\|(\mathtt{I}-U_0^TD_jU_0)\|\|U_0^Tb_j\|_2. \end{align*} By Lemma~\ref{lem:basic:positive}, $U_0^TD_jU_0$ is positive definite. 
As a consequence, $\sigma_j\mathtt{I}\preccurlyeq{}U_0^TD_jU_0\preccurlyeq\mathtt{I}$, where $\sigma_j>0$ is the minimal eigenvalue of $U_0^TD_jU_0$. By Lemma~\ref{lem:basic:rcn} and Definition~\ref{def:rcn:2}, $\sigma_j\geq{}\gamma_{\Omega}(L_0)$, $\forall{}1\leq{}j\leq{}n$. Thus, \begin{align*} &\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(M)\|_F^2\leq\sum_{j=1}^{n}(1-\sigma_j)^2\|b_j\|_2^2\\ &\leq{}(1-\gamma_{\Omega}(L_0))^2\|\mathcal{P}_{U_0}(M)\|_F^2, \end{align*} which gives that $\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|\leq1-\gamma_{\Omega}(L_0)$. It remains to prove that the value of $1-\gamma_{\Omega}(L_0)$ is attainable. Without loss of generality, assume that $j_1 = \arg\min_j\sigma_j$, i.e., $\sigma_{j_1} = \gamma_{\Omega}(L_0)$. Construct an $r_0\times{}r_0$ matrix $B$ with the $j_1$th column being the eigenvector corresponding to the smallest eigenvalue of $U_0^TD_{j_1}U_0$ and all other entries being zero. Let $M_1 = U_0B$. Then it can be verified that $\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(M_1)\|_F = (1-\gamma_{\Omega}(L_0))\|M_1\|_F$. \end{proof} \begin{lemm}\label{lem:critical:optnorm:big} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$ and $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$. If $L_0$ is $\Omega/\Omega^T$-isomeric then: \begin{align*} &\|(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot\| =\sqrt{\frac{1}{\gamma_{\Omega}(L_0)}-1},\\ &\|(\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0})^{-1}\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}^\bot\|=\sqrt{\frac{1}{\gamma_{\Omega^T}(L_0^T)} - 1}. \end{align*} \end{lemm} \begin{proof} We shall prove the first claim. Let $M\in\Re^{m\times{}n}$.
Denote the $j$th column of $M$ and $(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot(M)$ as $b_j$ and $y_j$, respectively. Denote $\delta_{ij}$ as in~\eqref{eq:delta} and define a set of diagonal matrices $\{D_j\}_{j=1}^n$ as $D_j = \mathrm{diag}(\delta_{1j},\delta_{2j},\cdots,\delta_{mj})\in\Re^{m\times{}m}$. Then we have \begin{align*} &y_j = [(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot(M)]_{:,j}\\ &= U_0(U_0^TD_jU_0)^{-1}U_0^TD_j(\mathtt{I} - U_0U_0^T)b_j. \end{align*} It can be calculated that \begin{align*} &\|y_j\|_2^2 \leq \|(U_0^TD_jU_0)^{-1}U_0^TD_j(\mathtt{I} - U_0U_0^T)\|^2\|b_j\|_2^2=\\ &\|(U_0^TD_jU_0)^{-1}U_0^TD_j(\mathtt{I} - U_0U_0^T)D_jU_0(U_0^TD_jU_0)^{-1}\|\|b_j\|_2^2\\ &=\|(U_0^TD_jU_0)^{-1} - \mathtt{I}\|\|b_j\|_2^2\leq\left(\frac{1}{\gamma_{\Omega}(L_0)}-1\right)\|b_j\|_2^2, \end{align*} which gives that \begin{align*} \|(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot\|\leq\sqrt{\frac{1}{\gamma_{\Omega}(L_0)}-1}. \end{align*} Using a similar argument as in the proof of Lemma~\ref{lem:critical:rnc2optnorm}, it can be proven that the value of $\sqrt{1/\gamma_{\Omega}(L_0)-1}$ is attainable. To be more precise, assume without loss of generality that $j_1 = \arg\min_{j}\sigma_{j}$, where $\sigma_j$ is the smallest singular value of $U_0^TD_{j}U_0$. Denote by $\sigma^*$ and $v^*$ the largest singular value and the corresponding right singular vector of $(U_0^TD_{j_1}U_0)^{-1}U_0^TD_{j_1}(\mathtt{I} - U_0U_0^T)$, respectively. Then the above justifications have already proven that $\sigma^*=\sqrt{1/\gamma_{\Omega}(L_0)-1}$. Construct an $m\times{}n$ matrix $M$ with the $j_1$th column being $v^*$ and all other entries being zero.
Then it can be verified that $\|(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot(M)\|_F = \sqrt{1/\gamma_{\Omega}(L_0)-1}\|M\|_F$. \end{proof} \begin{lemm}\label{lem:critical:optnorm:ptpo} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$, and let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{T_0}(\cdot)=U_0U_0^T(\cdot)+(\cdot)V_0V_0^T-U_0U_0^T(\cdot)V_0V_0^T$. If $L_0$ is $\Omega/\Omega^T$-isomeric then \begin{align*} \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| \leq{}2(1-\gamma_{\Omega,\Omega^T}(L_0)). \end{align*} \end{lemm} \begin{proof} Using the same arguments as in the proof of Lemma~\ref{lem:basic:inverse}, it can be proven that $\|\mathcal{P}\mathcal{P}_{\Omega}^\bot\mathcal{P}\| = \|\mathcal{P}\mathcal{P}_{\Omega}^\bot\|^2$, with $\mathcal{P}$ being any orthogonal projection onto a subspace of $\mathbb{R}^{m\times{}n}$. Thus, we have the following \begin{align*} & \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| = \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\|^2 = \sup_{\|M\|_F=1}\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 \\ &= \sup_{\|M\|_F=1}\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot(M) + \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2\\ &= \sup_{\|M\|_F=1}(\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 + \|\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2)\\ &\leq\sup_{\|M\|_F=1}\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 + \sup_{\|M\|_F=1}\|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot(M)\|_F^2 \\ &= \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\|^2 + \|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\|^2, \end{align*} which, together with Lemma~\ref{lem:critical:rnc2optnorm}, gives that \begin{align*} &\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| \leq 
\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\| + \|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\|\\ &= 1 - \gamma_{\Omega} (L_0) + 1 - \gamma_{\Omega^T} (L_0^T)\leq2(1 - \gamma_{\Omega,\Omega^T}(L_0)). \end{align*} \end{proof} \begin{lemm}\label{lem:critical:optnorm:invpt} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$, and let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{T_0}(\cdot)=U_0U_0^T(\cdot)+(\cdot)V_0V_0^T-U_0U_0^T(\cdot)V_0V_0^T$. If the operator $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible, then we have \begin{align*} \|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\| = \sqrt{\frac{1}{1-\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|}-1}. \end{align*} \end{lemm} \begin{proof} We again use the two notations $\mathrm{vec}(\cdot)$ and $D$ defined in the proof of Lemma~\ref{lem:basic:inverse}. Let $P\in\mathbb{R}^{mn\times{}r}$ be a matrix with orthonormal columns such that $\mathrm{vec}(\mathcal{P}_{T_0}(M)) = PP^T\mathrm{vec}(M)$, $\forall{}M$. Since $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible, it follows that $P^TDP$ is positive definite. Denote by $\sigma_{min}(\cdot)$ the smallest singular value of a matrix. Then we have the following: \begin{align*} &\|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\|^2 \\ &= \|P(P^TDP)^{-1}P^TD(\mathtt{I}-PP^T)\|^2\\ &=\|P(P^TDP)^{-1}P^TD(\mathtt{I}-PP^T)DP(P^TDP)^{-1}P^T\| \\ &= \|(P^TDP)^{-1}-\mathtt{I}\| = \frac{1}{\sigma_{min}(P^TDP)} -1 \\ &= \frac{1}{1 - \|P^T(\mathtt{I} - D)P\|} - 1=\frac{1}{1 - \|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|} - 1. \end{align*} \end{proof} The following lemma is more general than Theorem~\ref{thm:fnorm}.
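The matrix identities used in the proof of Lemma~\ref{lem:critical:optnorm:invpt} can be checked numerically. The sketch below is ours and only illustrative (it assumes NumPy is available; all variable names are our own): it draws a random matrix $P$ with orthonormal columns and a random $0/1$ diagonal matrix $D$, then verifies that $\sigma_{min}(P^TDP)=1-\|P^T(\mathtt{I}-D)P\|$ and that $\|P(P^TDP)^{-1}P^TD(\mathtt{I}-PP^T)\|^2=1/\sigma_{min}(P^TDP)-1$.

```python
import numpy as np

rng = np.random.default_rng(0)
N, r = 12, 3
# Random matrix with orthonormal columns, playing the role of P.
P, _ = np.linalg.qr(rng.standard_normal((N, r)))
# Random 0/1 diagonal sampling matrix D, dense enough that P^T D P > 0.
d = (rng.random(N) < 0.7).astype(float)
D = np.diag(d)

G = P.T @ D @ P                       # the matrix P^T D P
sigma_min = np.linalg.eigvalsh(G).min()
assert sigma_min > 0                  # P^T D P is positive definite

# sigma_min(P^T D P) = 1 - ||P^T (I - D) P||  (spectral norm)
assert np.isclose(sigma_min, 1 - np.linalg.norm(P.T @ (np.eye(N) - D) @ P, 2))

# ||P (P^T D P)^{-1} P^T D (I - P P^T)||^2 = 1/sigma_min - 1
B = P @ np.linalg.inv(G) @ P.T @ D @ (np.eye(N) - P @ P.T)
assert np.isclose(np.linalg.norm(B, 2) ** 2, 1 / sigma_min - 1)
```

The second assertion is exactly the computation $BB^T$-step of the proof: since $D^2=D$, one has $B B^T = P((P^TDP)^{-1}-\mathtt{I})P^T$, whose spectral norm is $1/\sigma_{min}-1$.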
\begin{lemm}\label{lem:critical:uinorm} Let $L_0\in\Re^{m\times{}n}$ and $\Omega\subseteq\{1,\cdots,m\}\times\{1,\cdots,n\}$. Consider the following convex problem: \begin{align}\label{eq:uinorm} \min_{X} \norm{X}_{UI},\textrm{ s.t. }\mathcal{P}_{\Omega}(AX-L_0)=0, \end{align} where $\norm{\cdot}_{UI}$ denotes an arbitrary convex unitarily invariant norm and $A\in\Re^{m\times{}p}$ is given. If $L_0\in\mathrm{span}\{A\}$ and $A$ is $\Omega$-isomeric then $X_0=A^+L_0$ is the unique minimizer of the convex optimization problem in~\eqref{eq:uinorm}. \end{lemm} \begin{proof} Denote the SVD of $A$ as $U_A\Sigma_AV_A^T$. Then it follows from $\mathcal{P}_{\Omega}(AX-L_0)=0$ and $L_0\in\mathrm{span}\{A\}$ that \begin{align*} \mathcal{P}_{U_A}\mathcal{P}_{\Omega}\mathcal{P}_{U_A}(AX-L_0) = 0. \end{align*} By Lemma~\ref{lem:basic:L02U} and Lemma~\ref{lem:critical:inverse}, $\mathcal{P}_{U_A}\mathcal{P}_{\Omega}\mathcal{P}_{U_A}$ is invertible and thus $AX = L_0$. Hence, $\mathcal{P}_{\Omega}(AX-L_0)=0$ is equivalent to $AX=L_0$. Notice that Theorem 4.1 of~\cite{tpami_2013_lrr} actually holds for any convex unitarily invariant norm. That is, \begin{align*} A^+L_0 = \arg\min_{X} \|X\|_{UI}, \textrm{ s.t. } AX = L_0, \end{align*} which implies that $A^+L_0$ is the unique minimizer of the problem in~\eqref{eq:uinorm}. \end{proof} \subsection{Proofs of Theorems~\ref{thm:iso},~\ref{thm:iso:necessary} and~\ref{thm:rcn:bound}} We shall use the following notation. Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot)$, $\mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T$ and $\mathcal{P}_{T_0}(\cdot) = \mathcal{P}_{U_0}(\cdot)+\mathcal{P}_{V_0}(\cdot)-\mathcal{P}_{U_0}\mathcal{P}_{V_0}(\cdot)$.
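Lemma~\ref{lem:critical:uinorm}, specialized to the Frobenius norm, admits a simple numerical illustration. The sketch below is ours and only illustrative (it assumes NumPy; all names are our own): it builds a rank-deficient dictionary $A$, a matrix $L_0\in\mathrm{span}\{A\}$, and a sampling pattern observing several rows per column (which generically makes $A$ $\Omega$-isomeric), then checks that the minimum-Frobenius-norm solution of $\mathcal{P}_{\Omega}(AX-L_0)=0$ coincides with $A^+L_0$.

```python
import numpy as np

rng = np.random.default_rng(1)
m, p, n, r = 8, 4, 5, 2
A = rng.standard_normal((m, r)) @ rng.standard_normal((r, p))  # rank-2 dictionary
L0 = A @ rng.standard_normal((p, n))                           # L0 lies in span{A}
# Observe 5 random rows in each column; generically, A is then Omega-isomeric.
mask = np.zeros((m, n), dtype=bool)
for j in range(n):
    mask[rng.choice(m, size=5, replace=False), j] = True

# Write P_Omega(A X - L0) = 0 as a linear system C vec(X) = b (column-major vec),
# using vec(A X) = (I_n kron A) vec(X) and keeping only the observed entries.
C = np.kron(np.eye(n), A)[mask.T.reshape(-1)]
b = L0.T.reshape(-1)[mask.T.reshape(-1)]
# pinv gives the least-2-norm solution, i.e., the least-Frobenius-norm X.
X_min = (np.linalg.pinv(C) @ b).reshape(n, p).T

X0 = np.linalg.pinv(A) @ L0
assert np.allclose(X_min, X0)                  # the minimizer is A^+ L0
assert np.allclose(mask * (A @ X0 - L0), 0.0)  # and it is feasible
```

This mirrors the proof: under the isomeric condition the partial constraint $\mathcal{P}_{\Omega}(AX-L_0)=0$ pins down the same solution set as $AX=L_0$, whose minimum-Frobenius-norm element is $A^+L_0$.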
\begin{proof}({\bf proof of Theorem~\ref{thm:iso}}) Define an operator $\mathcal{H}$ in the same way as in~\cite{Candes:2009:math}: \begin{align*} \mathcal{H} = \mathcal{P}_{T_0} - \frac{1}{\rho_0}\mathcal{P}_{T_0}\mathcal{P}_{\Omega_{\mathcal{A}}}\mathcal{P}_{T_0}. \end{align*} According to Theorem 4.1 of~\cite{Candes:2009:math}, there exists some numerical constant $c>0$ such that the inequality, \begin{align*} \|\mathcal{H}\|\leq\sqrt{\frac{c\mu_0r_0\log{n_1}}{\rho_0n_2}}, \end{align*} holds with probability at least $1-n_1^{-10}$ provided that the right-hand side is smaller than 1. So, $\|\mathcal{H}\|<1$ provided that \begin{align*} \rho_0>\frac{c\mu_0r_0\log{n_1}}{n_2}. \end{align*} When $\|\mathcal{H}\|<1$, we have \begin{align*} &\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\| = \|\rho_0\mathcal{H}+(1-\rho_0)\mathcal{P}_{T_0}\|\\ &\leq{}\rho_0\|\mathcal{H}\|+(1-\rho_0)\|\mathcal{P}_{T_0}\|<1. \end{align*} Since $\mathcal{P}_{U_0}(\cdot)=\mathcal{P}_{U_0}\mathcal{P}_{T_0}(\cdot)=\mathcal{P}_{T_0}\mathcal{P}_{U_0}(\cdot)$, we have \begin{align*} &\|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\| = \|\mathcal{P}_{U_0}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\mathcal{P}_{U_0}\|\leq\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|<1. \end{align*} By Lemma~\ref{lem:basic:inverse}, Lemma~\ref{lem:critical:inverse} and Lemma~\ref{lem:basic:L02U}, it can be concluded that $L_0$ is $\Omega$-isomeric with probability at least $1-n_1^{-10}$. In a similar way, it can also be proven that $L_0^T$ is $\Omega^T$-isomeric with probability at least $1-n_1^{-10}$. \end{proof} \begin{proof}({\bf proof of Theorem~\ref{thm:iso:necessary}}) When $L_0$ is not $\Omega$-isomeric, Lemma~\ref{lem:basic:L02U} and Lemma~\ref{lem:critical:inverse} give that $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is not invertible.
By Lemma~\ref{lem:basic:inverse}, $\mathcal{P}_{U_0}\cap{}\mathcal{P}_\Omega^\bot\neq\{0\}$. Thus, there exists $\Delta\neq0$ that satisfies $\Delta\in\mathcal{P}_{U_0}$ and $\Delta\in\mathcal{P}_\Omega^\bot$. Now construct $L = L_0 + \Delta$. Then we have $L\neq{}L_0$, $\mathcal{P}_\Omega(L) = \mathcal{P}_\Omega(L_0)$ and $\rank{L} = \rank{\mathcal{P}_{U_0}(L_0+\Delta)}\leq\rank{L_0}$. Since $\mathcal{P}_{U_0}\cap{}\mathcal{P}_\Omega^\bot$ is a nontrivial linear space, there are indeed infinitely many choices for $L$. \end{proof} \begin{proof}({\bf proof of Theorem~\ref{thm:rcn:bound}}) Using the same arguments as in the proof of Theorem~\ref{thm:iso}, we conclude that the following holds with probability at least $1-n_1^{-10}$: \begin{align*} \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|<1-\rho_0+\frac{\rho_0}{\sqrt{\alpha}}, \end{align*} which, together with Lemma~\ref{lem:critical:rnc2optnorm}, gives that $\gamma_{\Omega}(L_0)>(1-1/\sqrt{\alpha})\rho_0$. Similarly, it can also be proven that $\gamma_{\Omega^T}(L_0^T)>(1-1/\sqrt{\alpha})\rho_0$ with probability at least $1-n_1^{-10}$. \end{proof} \subsection{Proof of Theorem~\ref{thm:iso:rcn}} Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote the $i$th row of $U_0$ as $u_i^T$, i.e., $U_0 = [u_1^T;u_2^T;\cdots;u_{m}^T]$. Define $\delta_{ij}$ as in~\eqref{eq:delta}, and define a collection of diagonal matrices $\{D_j\}_{j=1}^{n}$ as $D_j = \mathrm{diag}(\delta_{1j},\delta_{2j},\cdots,\delta_{mj})\in\mathbb{R}^{m\times{}m}$. With these notations, we shall show that the operator norm of $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}$ can be bounded from above.
For any $X$ and any $j$, considering the $j$th column of $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(X)$, we have \begin{align*} &[\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(X)]_{:,j} = U_0U_0^T(\mathtt{I}-D_j)U_0U_0^T[X]_{:,j}, \end{align*} which gives that \begin{align*} \|[\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}(X)]_{:,j}\|_2\leq\|U_0U_0^T(\mathtt{I} - D_j)U_0U_0^T\|\|[X]_{:,j}\|_2. \end{align*} Since the diagonal of $D_j$ has at most $(1-\rho)m$ zeros, \begin{align*} &\|U_0U_0^T(\mathtt{I} - D_j)U_0U_0^T\|= \|\sum_{i=1}^{m}(1-\delta_{ij})u_iu_i^T\|\\ &\leq\sum_{i=1}^{m}(1-\delta_{ij})\|u_iu_i^T\|\leq(1-\rho)\mu_0r_0, \end{align*} where the last inequality follows from the definition of coherence. Thus, we have \begin{align*} \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|\leq(1-\rho)\mu_0r_0. \end{align*} Similarly, based on the assumption that at least $\rho{}n$ entries in each row of $L_0$ are observed, we have \begin{align*} \|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\|\leq(1-\rho)\mu_0r_0. \end{align*} By the assumption $\rho>1 - (1-\alpha)/(\mu_0r_0)$, \begin{align*} \|\mathcal{P}_{U_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{U_0}\|<1-\alpha\quad\textrm{and}\quad\|\mathcal{P}_{V_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{V_0}\|<1-\alpha. \end{align*} By Lemma~\ref{lem:basic:inverse} and Lemma~\ref{lem:critical:inverse}, $L_0$ is $\Omega/\Omega^T$-isomeric. In addition, it follows from Lemma~\ref{lem:critical:rnc2optnorm} that $\gamma_{\Omega,\Omega^T}(L_0)>\alpha$. \subsection{Proofs of Theorems~\ref{thm:l2} and~\ref{thm:fnorm}} Theorem~\ref{thm:fnorm} is indeed an immediate corollary of Lemma~\ref{lem:critical:uinorm}, so we only prove Theorem~\ref{thm:l2}. \begin{proof}By $y_0\in\mathcal{S}_0\subseteq\mathrm{span}\{A\}$, $y_0=AA^+y_0$ and therefore $y_b = A_bA^+y_0$. That is, $x_0=A^+y_0$ is a feasible solution to the problem in~\eqref{eq:l2}.
Provided that $y_b\in\Re^k$ and the dictionary matrix $A$ is $k$-isomeric, Definition~\ref{def:iso:k} gives that $\rank{A_b} = \rank{A}$, which implies that \begin{align*} \mathrm{span}\{A_b^T\}=\mathrm{span}\{A^T\}. \end{align*} On the other hand, it is easy to see that $A^+y_0\in\mathrm{span}\{A^T\}$. Hence, there exists a dual vector $w\in\Re^p$ that obeys \begin{align*} A_b^Tw = A^+y_0, \textrm{ i.e., } A_b^Tw \in\partial\frac{1}{2}\|A^+y_0\|_2^2. \end{align*} By standard convexity arguments~\cite{book:convex}, $x_0=A^{+}y_0$ is an optimal solution to the problem in~\eqref{eq:l2}. Since the squared $\ell_2$ norm is a strongly convex function, it follows that the optimal solution to~\eqref{eq:l2} is unique. \end{proof} \subsection{Proof of Theorem~\ref{thm:convex}} \begin{proof} Let the SVD of $L_0$ be $U_0\Sigma_0V_0^T$. Denote $\mathcal{P}_{T_0}(\cdot)=U_0U_0^T(\cdot)+(\cdot)V_0V_0^T-U_0U_0^T(\cdot)V_0V_0^T$. Since $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, it follows from Lemma~\ref{lem:critical:optnorm:ptpo} that $\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|$ is strictly smaller than 1. By Lemma~\ref{lem:basic:inverse}, $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible and $T_0\cap{}\Omega^\bot = \{0\}$. Given $\gamma_{\Omega,\Omega^T}(L_0)>0.75$, Lemma~\ref{lem:critical:optnorm:invpt} and Lemma~\ref{lem:critical:optnorm:ptpo} imply that \begin{align*} &\|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\| = \sqrt{\frac{1}{1-\|\mathcal{P}_{T_0}\mathcal{P}_{\Omega}^\bot\mathcal{P}_{T_0}\|}-1}\\ &\leq\sqrt{\frac{1}{2\gamma_{\Omega,\Omega^T}(L_0)-1}-1}<1. \end{align*} Next, we shall consider a feasible solution $L=L_0+\Delta$ and show that the objective strictly increases unless $\Delta=0$. By $\mathcal{P}_{\Omega}(\Delta) = 0$, $\mathcal{P}_{\Omega}\mathcal{P}_{T_0}(\Delta) = -\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot(\Delta)$. 
Since the operator $\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}$ is invertible, we have \begin{align*} \mathcal{P}_{T_0}(\Delta) = -(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot(\Delta). \end{align*} By $\|(\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0})^{-1}\mathcal{P}_{T_0}\mathcal{P}_{\Omega}\mathcal{P}_{T_0}^\bot\|<1$, $\|\mathcal{P}_{T_0}(\Delta)\|_*<\|\mathcal{P}_{T_0}^\bot(\Delta)\|_*$ holds unless $\mathcal{P}_{T_0}^\bot(\Delta)=0$. By the convexity of the nuclear norm, \begin{align*} &\|L_0+\Delta\|_* - \|L_0\|_*\geq{}\langle{}\Delta,U_0V_0^T+W\rangle, \end{align*} where $W\in{}\mathcal{P}_{T_0}^\bot$ and $\|W\|\leq1$. Due to the duality between the nuclear norm and the operator norm, we can construct a $W$ such that $\langle{}\Delta,W\rangle=\|\mathcal{P}_{T_0}^\bot(\Delta)\|_*$. Thus, \begin{align*} &\|L_0+\Delta\|_* - \|L_0\|_*\geq{}\|\mathcal{P}_{T_0}^\bot(\Delta)\|_* - \|\mathcal{P}_{U_0}\mathcal{P}_{V_0}(\Delta)\|_*\\ &\geq\|\mathcal{P}_{T_0}^\bot(\Delta)\|_* - \|\mathcal{P}_{T_0}(\Delta)\|_*. \end{align*} Hence, $\|L_0+\Delta\|_*$ is strictly greater than $\|L_0\|_*$ unless $\Delta\in{}T_0$. Since $T_0\cap\Omega^\bot=\{0\}$, it follows that $L_0$ is the unique minimizer of the problem in~\eqref{eq:numin}. \end{proof} \subsection{Proof of Theorem~\ref{thm:isodp:f}} \begin{proof} Since $A_0 = U_0\Sigma_0^{\frac{1}{2}}Q^T$ and $X_0= Q\Sigma_0^{\frac{1}{2}}V_0^T$, we have the following: 1) $A_0X_0=L_0$; 2) $L_0\in\mathrm{span}\{A_0\}$ and $A_0$ is $\Omega$-isomeric; 3) $L_0^T\in\mathrm{span}\{X_0^T\}$ and $X_0^T$ is $\Omega^T$-isomeric. Hence, according to Lemma~\ref{lem:critical:uinorm}, we have \begin{align*} &X_0 = A_0^+L_0=\arg\min_{X} \|X\|_F^2,\textrm{ s.t. }\mathcal{P}_{\Omega}(A_0X - L_0)=0,\\ &A_0 = L_0X_0^+=\arg\min_{A} \|A\|_F^2,\textrm{ s.t. }\mathcal{P}_{\Omega}(AX_0 - L_0)=0.
\end{align*} Hence, $(A_0,X_0)$ is a critical point of the problem in~\eqref{eq:isodp:f}. It remains to prove the second claim. Suppose that $(A=A_0+\Delta_0, X = X_0+E_0)$ with $\|\Delta_0\|\leq\varepsilon$ and $\|E_0\|\leq\varepsilon$ is a feasible solution to~\eqref{eq:isodp:f}. We want to prove that \begin{align*} \frac{1}{2}(\|A\|_F^2+\|X\|_F^2) \geq \frac{1}{2}(\|A_0\|_F^2+\|X_0\|_F^2) \end{align*} holds for all sufficiently small $\varepsilon$, and show that the equality can hold only if $AX=L_0$. Denote \begin{align}\label{eq:temp:notation} &\mathcal{P}_{U_0}(\cdot)=U_0U_0^T(\cdot), \mathcal{P}_{V_0}(\cdot)=(\cdot)V_0V_0^T,\\\nonumber &\mathcal{P}_1 = (\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}^\bot,\\\nonumber &\mathcal{P}_2 = (\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0})^{-1}\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}^\bot. \end{align} Define \begin{align}\label{eq:temp:notation:1} &\bar{A}_0 = A_0 + \mathcal{P}_{U_0}(\Delta_0) \textrm{ and } \bar{X}_0 = X_0 + \mathcal{P}_{V_0}(E_0). \end{align} Provided that $\varepsilon<\min(1/\|A_0^+\|, 1/\|X_0^+\|)$, it follows from Lemma~\ref{lem:basic:pinv} that \begin{align}\label{eq:temp:notation:pseinv} &\rank{\bar{A}_0} = \rank{\bar{X}_0} = r_0,\\\nonumber &\|\bar{A}_0^+\|\leq\frac{\|A_0^+\|}{1-\|A_0^+\|\varepsilon}\textrm{ and }\|\bar{X}_0^+\|\leq\frac{\|X_0^+\|}{1-\|X_0^+\|\varepsilon}. \end{align} By $\mathcal{P}_{\Omega}(AX-L_0)=0$, \begin{align*} \mathcal{P}_{\Omega}(A_0E_0+\Delta_0X_0+\Delta_0E_0)=0. \end{align*} Then direct manipulation gives \begin{align*} &\mathcal{P}_{\Omega}(\bar{A}_0E_0) \\ &= -\mathcal{P}_{\Omega}(\Delta_0\bar{X}_0- \mathcal{P}_{U_0}\mathcal{P}_{V_0}(\Delta_0E_0) + \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0)).
\end{align*} Since $\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0}$ is invertible, we have \begin{align}\label{eq:temp:p1} &\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) = -\mathcal{P}_{V_0}^\bot(\mathcal{P}_{U_0}\mathcal{P}_{\Omega}\mathcal{P}_{U_0})^{-1}\mathcal{P}_{U_0}\mathcal{P}_{\Omega}(\Delta_0\bar{X}_0\\\nonumber &-\mathcal{P}_{U_0}\mathcal{P}_{V_0}(\Delta_0E_0) + \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0)) \\\nonumber &= -\mathcal{P}_{V_0}^\bot\mathcal{P}_1\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0) - \mathcal{P}_{V_0}^\bot\mathcal{P}_1\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0) \end{align} Similarly, by the invertibility of $\mathcal{P}_{V_0}\mathcal{P}_{\Omega}\mathcal{P}_{V_0}$, \begin{align}\label{eq:temp:p2} &\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\\\nonumber &= -\mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) - \mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0). \end{align} The combination of~\eqref{eq:temp:p1} and~\eqref{eq:temp:p2} gives that \begin{align*} &\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) = \mathcal{P}_{V_0}^\bot\mathcal{P}_1\mathcal{P}_2\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0) + \\\nonumber &\mathcal{P}_{V_0}^\bot(\mathcal{P}_1\mathcal{P}_2-\mathcal{P}_1)\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0). \end{align*} By $\rank{\bar{A}_0}=r_0=p$, \begin{align*} \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0) = \mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0\bar{A}_0^+\bar{A}_0E_0). \end{align*} By Lemma~\ref{lem:critical:optnorm:big} and the assumption of $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, $\|\mathcal{P}_1\|<1$ and $\|\mathcal{P}_2\|<1$. 
Thus, \begin{align*} &\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|\leq\|\mathcal{P}_1\mathcal{P}_2\|\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|\\ &+\varepsilon(\|\mathcal{P}_1\mathcal{P}_2\|+\|\mathcal{P}_1\|)\|\bar{A}_0^+\|\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|\\ &\leq\left(\frac{1}{\gamma_{\Omega,\Omega^T}(L_0)}-1+\frac{2\varepsilon\|A_0^+\|}{1-\|A_0^+\|\varepsilon}\right)\|\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)\|. \end{align*} Let \begin{align*} \varepsilon < \min\left(\frac{1}{2\|A_0^+\|}, \frac{2\gamma_{\Omega,\Omega^T}(L_0)-1}{4\|A_0^+\|\gamma_{\Omega,\Omega^T}(L_0)}\right). \end{align*} Then the coefficient on the right-hand side is strictly smaller than $1$, so the inequality can hold only if $\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)=0$. Since $\rank{\bar{A}_0}=r_0=p$, $\mathcal{P}_{V_0}^\bot(\bar{A}_0E_0)=0$ simply leads to $E_0\in\mathcal{P}_{V_0}$. Hence, \begin{align*} A_0E_0+\Delta_0X_0+\Delta_0E_0 \in\mathcal{P}_{V_0}\cap\mathcal{P}_{\Omega}^\bot = \{0\}, \end{align*} which implies that $AX = L_0$. Thus, we finally have \begin{align*} \frac{1}{2}(\|A\|_F^2+\|X\|_F^2)\geq\|L_0\|_*=\frac{1}{2}(\|A_0\|_F^2+\|X_0\|_F^2), \end{align*} where the inequality follows from $\|AX\|_*=\min_{A,X}\frac{1}{2}(\|A\|_F^2+\|X\|_F^2)$~\cite{siam_2010_minirank}. \end{proof} \subsection{Proof of Theorem~\ref{thm:isodp}} \begin{proof} Since $A_0 = U_0\Sigma_0^{\frac{2}{3}}Q^T$ and $X_0= Q\Sigma_0^{\frac{1}{3}}V_0^T$, we have the following: 1) $A_0X_0=L_0$; 2) $L_0\in\mathrm{span}\{A_0\}$ and $A_0$ is $\Omega$-isomeric; 3) $L_0^T\in\mathrm{span}\{X_0^T\}$ and $X_0^T$ is $\Omega^T$-isomeric. Due to Lemma~\ref{lem:critical:uinorm}, we have \begin{align*} &X_0 = A_0^+L_0=\arg\min_{X} \|X\|_F^2,\textrm{ s.t. }\mathcal{P}_{\Omega}(A_0X - L_0)=0,\\ &A_0 = L_0X_0^+=\arg\min_{A} \|A\|_*,\textrm{ s.t. }\mathcal{P}_{\Omega}(AX_0 - L_0)=0. \end{align*} Hence, $(A_0,X_0)$ is a critical point of the problem in~\eqref{eq:isodp}.
Regarding the second claim, we consider a feasible solution $(A=A_0+\Delta_0, X = X_0+E_0)$, with $\|\Delta_0\|\leq\varepsilon$ and $\|E_0\|\leq\varepsilon$. Define $\mathcal{P}_{U_0}$, $\mathcal{P}_{V_0}$, $\mathcal{P}_1$, $\mathcal{P}_2$, $\bar{A}_0$ and $\bar{X}_0$ in the same way as in~\eqref{eq:temp:notation} and~\eqref{eq:temp:notation:1}. Note that the statements in~\eqref{eq:temp:notation:pseinv} still hold in the general case of $p\geq{}r_0$. Denote the SVD of $\bar{X}_0$ as $\bar{Q}\bar{\Sigma}\bar{V}_0^T$. Then we have $V_0V_0^T = \bar{V}_0\bar{V}_0^T$. Denote \begin{align*} P_{\bar{Q}} = \bar{Q}\bar{Q}^T \textrm{ and }P_{\bar{Q}}^\bot = \mathtt{I} - \bar{Q}\bar{Q}^T. \end{align*} Denote the condition number of $X_0$ as $\tau_0$. With these notations, we shall finish the proof by exploring two cases. \subsubsection{Case 1: $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\geq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$} Denote the SVD of $L_0\bar{X}_0^+$ as $\tilde{U}_0\tilde{\Sigma}\tilde{Q}^T$. Then we have \begin{align*} \tilde{U}_0\tilde{U}_0^T = U_0U_0^T \textrm{ and }\tilde{Q}\tilde{Q}^T = \bar{Q}\bar{Q}^T. \end{align*} By the convexity of the nuclear norm, \begin{align}\label{eq:temp:ded:1} &\|A\|_* - \|L_0\bar{X}_0^+\|_*=\|A_0+\Delta_0\|_* - \|L_0\bar{X}_0^+\|_*\\\nonumber &\geq{}\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T+W\rangle, \end{align} where $W\in\mathbb{R}^{m\times{}p}$, $\tilde{U}_0^TW = 0$, $W\tilde{Q} = 0$ and $\|W\|\leq1$. Due to the duality between the nuclear norm and operator norm, we can construct a $W$ such that \begin{align}\label{eq:temp:ded:2} \langle{}\Delta_0,W\rangle=\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*. 
\end{align} We also have \begin{align*} &\langle{}A_0\hspace{-0.02in}+\hspace{-0.02in}\Delta_0\hspace{-0.02in}-\hspace{-0.02in}L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle\hspace{-0.02in}=\hspace{-0.02in}\langle{}\Delta_0+A_0E_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle\\ &=\langle{}\Delta_0\bar{X}_0\bar{X}_0^++A_0E_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle, \end{align*} which gives that \begin{align}\label{eq:temp:ded:3} &\mathrm{abs}(\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle)\leq\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}\mathcal{P}_{V_0}\\\nonumber &(\Delta_0\bar{X}_0+A_0E_0)\|_*\leq\|\bar{X}_0^+\|\|\Delta_0\bar{X}_0+\mathcal{P}_{V_0}(A_0E_0)\|_*, \end{align} where we denote by $\mathrm{abs}(\cdot)$ the absolute value of a real number. By $\mathcal{P}_{\Omega}(A_0E_0+\Delta_0X_0+\Delta_0E_0)=0$, \begin{align*} &\Delta_0\bar{X}_0+\mathcal{P}_{V_0}(A_0E_0) = -\mathcal{P}_2\mathcal{P}_{V_0}^\bot(A_0E_0) - \mathcal{P}_2\mathcal{P}_{V_0}^\bot(\Delta_0E_0)\\ &=-\mathcal{P}_2(-\mathcal{P}_{V_0}^\bot\mathcal{P}_1(\Delta_0X_0+\Delta_0E_0) - \mathcal{P}_{V_0}^\bot\mathcal{P}_{U_0}(\Delta_0E_0))\\ &- \mathcal{P}_2\mathcal{P}_{V_0}^\bot(\Delta_0E_0)=\mathcal{P}_2\mathcal{P}_1(\Delta_0X_0+\Delta_0E_0) - \mathcal{P}_2\mathcal{P}_{U_0}^\bot(\Delta_0E_0)\\ &= \mathcal{P}_2\mathcal{P}_1(\Delta_0\bar{X}_0) + \mathcal{P}_2\mathcal{P}_1\mathcal{P}_{V_0}^\bot(\Delta_0E_0) - \mathcal{P}_2\mathcal{P}_{U_0}^\bot(\Delta_0E_0). \end{align*} By Lemma~\ref{lem:critical:optnorm:big} and the assumption of $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, $\|\mathcal{P}_1\|<1$ and $\|\mathcal{P}_2\|<1$. As a result, we have \begin{align}\label{eq:temp:ded:3b} &\|\Delta_0\bar{X}_0+\mathcal{P}_{V_0}(A_0E_0)\|_* \\\nonumber &\leq \|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*+2\|\mathcal{P}_{U_0}^\bot(\Delta_0E_0)\|_*. \end{align} Let \begin{align*} \varepsilon < \min\left(\frac{0.1\|X_0\|}{1+1.1\tau_0},\frac{0.175}{\|X_0^+\|}\right). 
\end{align*} Due to~\eqref{eq:temp:ded:3},~\eqref{eq:temp:ded:3b} and the assumption of $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\geq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$, it can be calculated that \begin{align}\label{eq:temp:ded:4} &\mathrm{abs}(\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle)\\\nonumber &\leq\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*+2\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0(P_{\bar{Q}}+P_{\bar{Q}}^\bot)E_0)\|_*\\\nonumber &\leq\|\bar{X}_0^+\|\|\bar{X}_0\|\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*+2\varepsilon\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*\\\nonumber &+2\varepsilon\|\bar{X}_0^+\|\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\leq1.1\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*\\\nonumber &+0.2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*+0.35\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\\\nonumber &\leq(0.65+0.35)\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*=\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*. \end{align} Now, combining~\eqref{eq:temp:ded:1},~\eqref{eq:temp:ded:2} and~\eqref{eq:temp:ded:4}, we have \begin{align*} &\|A\|_* - \|L_0\bar{X}_0^+\|_*\geq\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\\ &-\mathrm{abs}(\langle{}A_0+\Delta_0-L_0\bar{X}_0^+,\tilde{U}_0\tilde{Q}^T\rangle)\geq0, \end{align*} which, together with Lemma~\ref{lem:basic:ax}, simply leads to \begin{align*} &\|A\|_* + \frac{1}{2}\|X\|_F^2 = (\|A\|_*-\|L_0\bar{X}_0^+\|_*)\\ &+(\|L_0\bar{X}_0^+\|_*+\frac{1}{2}\|X\|_F^2)\geq\|L_0\bar{X}_0^+\|_*+\frac{1}{2}\|\bar{X}_0\|_F^2\\ &\geq{}\frac{3}{2}\trace{\Sigma_0^{\frac{2}{3}}}=\|A_0\|_* + \frac{1}{2}\|X_0\|_F^2. \end{align*} For the equality of $\|A\|_* + 0.5\|X\|_F^2=\|A_0\|_* + 0.5\|X_0\|_F^2$ to hold, at least, $\|X\|_F = \|\bar{X}_0\|_F$ must be obeyed, which implies that $E_0\in\mathcal{P}_{V_0}$. 
Hence, we have $A_0E_0+\Delta_0X_0+\Delta_0E_0 \in\mathcal{P}_{V_0}\cap\mathcal{P}_{\Omega}^\bot = \{0\}$, which gives that $AX = L_0$. \subsubsection{Case 2: $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\leq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$} Using a similar manipulation as in the proof of Theorem~\ref{thm:isodp:f}, we have \begin{align*} &\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0) = \mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_1\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)+ \\ &\mathcal{P}_{U_0}^\bot(\mathcal{P}_2\mathcal{P}_1-\mathcal{P}_2)\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0E_0)=\mathcal{P}_{U_0}^\bot\mathcal{P}_2\mathcal{P}_1\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\\ &+\mathcal{P}_{U_0}^\bot(\mathcal{P}_2\mathcal{P}_1-\mathcal{P}_2)\mathcal{P}_{U_0}^\bot\mathcal{P}_{V_0}^\bot(\Delta_0P_{\bar{Q}}E_0 + \Delta_0P_{\bar{Q}}^\bot{}E_0) . \end{align*} Due to Lemma~\ref{lem:critical:optnorm:big} and the assumption of $\gamma_{\Omega,\Omega^T}(L_0)>0.5$, we have $\|\mathcal{P}_1\|<1$ and $\|\mathcal{P}_2\|<1$. By the assumption of $\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot\|_*\leq2\tau_0\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*$, \begin{align*} &\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*\leq\|\mathcal{P}_2\mathcal{P}_1\|\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*\\ &+(4\tau_0+2)\varepsilon\|\mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}\|_*= \|\mathcal{P}_2\mathcal{P}_1\|\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*\\ &+(4\tau_0+2)\varepsilon\|\mathcal{P}_{U_0}^\bot(\Delta_0)\bar{X}_0\bar{X}_0^+\|_* \\ &\leq\left(\frac{1}{\gamma_{\Omega,\Omega^T}(L_0)}-1+\frac{(4\tau_0+2)\varepsilon\|X_0^+\|}{1-\|X_0^+\|\varepsilon}\right)\|\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)\|_*. \end{align*} Let \begin{align*} \varepsilon < \min\left(\frac{1}{2\|X_0^+\|}, \frac{2\gamma_{\Omega,\Omega^T}(L_0)-1}{(8\tau_0+4)\|X_0^+\|\gamma_{\Omega,\Omega^T}(L_0)}\right). 
\end{align*} Then the coefficient on the right-hand side is strictly smaller than $1$, so the inequality can hold only if $\mathcal{P}_{U_0}^\bot(\Delta_0\bar{X}_0)=0$. That is, \begin{align*} \mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}} = 0 \textrm{ and thus } \mathcal{P}_{U_0}^\bot(\Delta_0)P_{\bar{Q}}^\bot = 0. \end{align*} Hence, we have $\mathcal{P}_{U_0}^\bot(\Delta_0)=0$, which simply leads to \begin{align*} A_0E_0+\Delta_0X_0+\Delta_0E_0 \in\mathcal{P}_{U_0}\cap\mathcal{P}_{\Omega}^\bot = \{0\}, \end{align*} and which gives that $AX = L_0$. By Lemma~\ref{lem:basic:ax}, \begin{align*} &\|A\|_*+\frac{1}{2}\|X\|_F^2 \geq{}\frac{3}{2}\trace{\Sigma_0^{\frac{2}{3}}}=\|A_0\|_* + \frac{1}{2}\|X_0\|_F^2. \end{align*} \end{proof} \begin{figure} \begin{center} \includegraphics[width=0.48\textwidth]{rcn.pdf}\vspace{-0.15in} \caption{Left: The relative condition number $\gamma_{\Omega,\Omega^T}(L_0)$ vs the missing rate $1-\rho_0$ at $m=500$. Middle: The relative condition number vs the matrix size $m$. Right: The recovery performance of convex optimization as a function of the missing rate.}\label{fig:rcn}\vspace{-0.2in} \end{center} \end{figure} \section{Experiments}\label{sec:exp} \subsection{Investigating the Relative Condition Number}\label{sec:exp:rcn} To study the properties of the relative condition number, we generate a vector $x\in\mathbb{R}^{m}$ according to the model $[x]_t = \sin(2t\pi/m)$, $t=1,\cdots,m$. That is, $x$ is a univariate time series of dimension $m$. We consider the forecasting tasks of recovering $x$ from a collection of $l$ observations, $\{[x]_t\}_{t=1}^{l}$, where $l=\rho_0m$ varies from $0.1m$ to $0.9m$ with step size $0.1m$. Let $y\in\mathbb{R}^{m}$ be the mask vector of the sampling operator, i.e., $[y]_t$ is 1 if $[x]_t$ is observed and 0 otherwise. In order to recover $x$, it suffices to recover its \emph{convolution matrix}~\cite{liu:tip:2014}.
Thus, the forecasting tasks here can be converted to matrix completion problems, with \begin{align*} L_0 = \mathcal{A}(x)\quad\textrm{and}\quad\Omega=\mathrm{supp}(\mathcal{A}(y)), \end{align*} where $\mathcal{A}(\cdot)$ is the convolution matrix of a tensor\footnote{Unlike~\cite{liu:tip:2014}, we adopt here the circulant boundary condition. Thus, the $j$th column of $\mathcal{A}(x)$ is simply the vector obtained by circularly shifting the elements in $x$ by $j-1$ positions.}, and $\mathrm{supp}(\cdot)$ is the support set of a matrix. In this example, $L_0\in\mathbb{R}^{m\times{}m}$ is a circulant matrix that is perfectly incoherent and low rank; namely, $\rank{L_0}\equiv2$ and $\mu(L_0)\equiv1$, $\forall{}m>2$. Moreover, each column and each row of $\Omega$ contain exactly $\rho_0m$ observed entries. We use the convex program~\eqref{eq:numin} to restore $L_0$ from the given observations. The results are shown in Figure~\ref{fig:rcn}. It can be seen that the relative condition number is independent of the matrix size and monotonically decreases as the missing rate grows. As we can see from the right-hand side of Figure~\ref{fig:rcn}, the recovery performance visibly declines when the missing rate exceeds $30\%$ (i.e., $\rho_0<0.7$), which approximately corresponds to $\gamma_{\Omega,\Omega^T}(L_0)<0.55$. When $\rho_0<0.3$ (which corresponds approximately to $\gamma_{\Omega,\Omega^T}(L_0)<0.15$), matrix completion totally breaks down. These results illustrate that relative well-conditionedness is important for guaranteeing the success of matrix completion in practice. Of course, the lower bound on $\gamma_{\Omega,\Omega^T}(L_0)$ would depend on the characteristics of the data, and the condition $\gamma_{\Omega,\Omega^T}(L_0)>0.75$ proven in Theorem~\ref{thm:convex} is just a universal bound for guaranteeing exact recovery in the worst case.
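As a side note, the circulant construction above is easy to reproduce numerically. The following sketch (a minimal illustration assuming NumPy; the variable names are ours) builds $\mathcal{A}(x)$ for a small $m$ and confirms that its rank is $2$:

```python
import numpy as np

m = 16
t = np.arange(1, m + 1)
x = np.sin(2 * np.pi * t / m)  # the series [x]_t = sin(2 t pi / m)

# Circulant convolution matrix: under the circulant boundary condition,
# the j-th column is x circularly shifted by j-1 positions.
A = np.column_stack([np.roll(x, j) for j in range(m)])

# The DFT diagonalizes circulant matrices, and a pure sinusoid has
# exactly two nonzero DFT coefficients, hence rank(A) = 2 for m > 2.
print(np.linalg.matrix_rank(A))  # 2
```

The mask $\Omega=\mathrm{supp}(\mathcal{A}(y))$ is obtained in the same way, with the 0/1 mask vector $y$ in place of $x$.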
In addition, the estimate given in Theorem~\ref{thm:iso:rcn} is accurate only when the missing rate is low, as shown in the left part of Figure~\ref{fig:rcn}. Among other things, it is worth noting that the sampling complexity does not decrease as the matrix size $m$ grows. This phenomenon is in conflict with the uniform-sampling-based matrix completion theories, which prove that a small fraction of $O((\log{m})^2/m)$ entries should suffice to recover $L_0$~\cite{Chen:2015:tit}, and which imply that the sampling complexity should decrease to zero when the matrix size $m$ goes to infinity. Hence, as aforementioned, the theories built upon uniform sampling are no longer applicable to deterministic missing data patterns. \subsection{Results on Randomly Generated Matrices} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{location.pdf}\vspace{-0.15in} \caption{Visualizing the configurations of $\Omega$ used in our simulations. The white points correspond to the locations of the observed entries. In these two examples, 90\% of the matrix entries are missing.}\label{fig:location}\vspace{-0.2in} \end{center} \end{figure} \begin{figure} \begin{center} \subfigure[nonuniform]{\includegraphics[width=0.48\textwidth]{nonuniform.pdf}} \subfigure[uniform]{\includegraphics[width=0.48\textwidth]{uniform.pdf}}\vspace{-0.15in} \caption{Comparing IsoDP with convex optimization and LRFD. The numbers plotted in the above figures are the success rates over 20 random trials. The white and black areas mean ``succeed" and ``fail", respectively.
Here, success means $\mathrm{PSNR_{dB}}\geq40$.}\label{fig:cmp}\vspace{-0.2in} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{iso.pdf}\vspace{-0.15in} \caption{Visualizing the regions in which the isomeric condition holds.}\label{fig:iso}\vspace{-0.2in} \end{center} \end{figure} To evaluate the performance of various matrix completion methods, we generate a collection of $m\times{}n$ ($m=n=100$) target matrices according to $L_0=BC$, where $B\in\Re^{m\times{}r_0}$ and $C\in\Re^{r_0\times{}n}$ are $\mathcal{N}(0,1)$ matrices. The rank of $L_0$, i.e., $r_0$, is configured as $r_0=1, 5, 10, \cdots, 90, 95$. Regarding the sampling set $\Omega$ consisting of the locations of the observed entries, we consider two settings: one is to create $\Omega$ by using a Bernoulli model to randomly sample a subset from $\{1,\cdots,m\}\times\{1,\cdots,n\}$ (referred to as ``uniform''), and the other is to let the locations of the observed entries be centered around the main diagonal of a matrix (referred to as ``nonuniform''). Figure~\ref{fig:location} shows what the sampling set $\Omega$ looks like. The observation fraction is set as $|\Omega|/(mn)=0.01,0.05,\cdots,0.9, 0.95$. To show the advantages of IsoDP, we include for comparison two prevalent methods: convex optimization~\cite{Candes:2009:math} and Low-Rank Factor Decomposition (LRFD)~\cite{liu:tsp:2016}. Like IsoDP, these two methods do not assume that the rank of $L_0$ is known. When $p = m$ and the identity matrix is used to initialize the dictionary $A$, the bilinear program~\eqref{eq:isodp:f} does not outperform convex optimization, and we therefore exclude it from the comparison. The accuracy of recovery, i.e., the similarity between $L_0$ and $\hat{L}_0$, is measured by Peak Signal-to-Noise Ratio ($\mathrm{PSNR_{dB}}$). Figure~\ref{fig:cmp} compares IsoDP to convex optimization and LRFD.
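Before turning to the comparison, the data-generation protocol just described can be sketched as follows (NumPy assumed; the band half-width of the ``nonuniform'' mask is our own choice, since the text only states that the observed entries are centered around the main diagonal):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 100
r0 = 5

# Target matrix L0 = BC with N(0,1) factors, so rank(L0) = r0.
L0 = rng.standard_normal((m, r0)) @ rng.standard_normal((r0, n))

rho = 0.3  # observation fraction |Omega| / (m * n)

# "uniform": Bernoulli sampling of the entry locations.
Omega_uniform = rng.random((m, n)) < rho

# "nonuniform": observed entries concentrated around the main diagonal
# (the band half-width here is a hypothetical choice matching rho).
i, j = np.indices((m, n))
Omega_nonuniform = np.abs(i - j) <= int(rho * m / 2)
```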
It can be seen that IsoDP works distinctly better than the competing methods. On the nonuniformly missing data, the number of matrices successfully restored by IsoDP is 102\% and 71\% higher than that of convex optimization and LRFD, respectively. When the missing entries are chosen uniformly at random, IsoDP restores 44\% more matrices than both convex optimization and LRFD. These results verify the effectiveness of IsoDP. Figure~\ref{fig:iso} plots the regions where the isomeric condition is valid. By comparing Figure~\ref{fig:cmp} to Figure~\ref{fig:iso}, it can be seen that the recovery performance of IsoDP has not reached the upper limit defined by isomerism. That is, there is still some room left for improvement. \subsection{Results on Motion Data} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{dinosaur.pdf} \vspace{-0.15in}\caption{An example image from the Oxford dinosaur sequence and the locations of the observed entries in the data matrix of trajectories. In this dataset, 74.29\% of the trajectory-matrix entries are missing.}\label{fig:din}\vspace{-0.2in} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{dinosaur-res.pdf} \vspace{-0.15in}\caption{Some examples of the originally incomplete and fully restored trajectories. (a) The original incomplete trajectories. (b) The trajectories restored by convex optimization~\cite{Candes:2009:math}. (c) The trajectories restored by LRFD~\cite{liu:tsp:2016}. (d) The trajectories restored by IsoDP.}\label{fig:dinosaur:res}\vspace{-0.2in} \end{center} \end{figure} We now consider the Oxford dinosaur sequence\footnote{Available at http://www.robots.ox.ac.uk/$\sim$vgg/data1.html}, which contains 4983 track points, each observed in at least 2 of 36 views; every view contributes two rows (the image coordinates) to the trajectory matrix, giving 72 rows in total. The values of the observations range from 8.86 to 629.82.
We select for our experiments the 195 track points that are observed in at least 6 views, resulting in a $72\times{}195$ trajectory matrix in which 74.29\% of the entries are missing (see Figure~\ref{fig:din}). The tracked dinosaur model is rotating around its center, and thus the true trajectories should form complete circles~\cite{zheng:cvpr:2012}. \begin{table} \caption{Mean square error (MSE) on the Oxford dinosaur sequence. Here, the rank of a matrix is estimated by $\#\{i |\sigma_i\geq10^{-4}\sigma_1\}$, where $\sigma_1\geq\sigma_2\geq\cdots$ are the singular values of the matrix. The regularization parameter in each method is manually tuned such that the rank of the restored matrix meets a certain value. Here, the MSE values are evaluated on the training data (i.e., observed entries).}\label{tb:motion}\vspace{-0.2in} \begin{center} \begin{tabular}{|c|c|c|c|}\hline rank of the & & &\\ restored matrix &convex optimization &LRFD &IsoDP\\\hline 6 &426.1369 &28.4649 &\textbf{0.6140}\\ 7 &217.9963 &21.6968 &\textbf{0.4682}\\ 8 &136.7643 &17.2269 &\textbf{0.1480}\\ 9 &94.4673 &13.954 &\textbf{0.0585}\\ 10 &53.9864 &6.3768 &\textbf{0.0468}\\ 11 &43.2613 &5.9877 &\textbf{0.0374}\\ 12 &29.7542 &4.5136 &\textbf{0.0302}\\\hline \end{tabular}\vspace{-0.2in} \end{center} \end{table} The results in Theorem~\ref{thm:isodp} imply that our IsoDP may possess the ability to attain a solution of strictly low rank. To confirm this, we evaluate convex optimization, LRFD and IsoDP by examining the rank of the restored trajectory matrix as well as the fitting error on the observed entries. Table~\ref{tb:motion} shows the evaluation results. It can be seen that, while the restored matrices have the same rank, the fitting error produced by IsoDP is much smaller than that of the competing methods. The error of convex optimization is quite large, because the method cannot produce a solution of exactly low rank unless a biased regularization parameter is chosen.
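The rank-estimation rule quoted in the caption of Table~\ref{tb:motion} is straightforward to express in code; the sketch below (NumPy assumed; the helper name is ours) counts the singular values that are at least $10^{-4}$ times the largest one:

```python
import numpy as np

def estimated_rank(M, tol=1e-4):
    """Estimate rank as #{i | sigma_i >= tol * sigma_1}."""
    s = np.linalg.svd(M, compute_uv=False)
    return int(np.sum(s >= tol * s[0]))

# A rank-8 matrix perturbed by tiny noise is still estimated as rank 8,
# since the noise singular values fall far below the 1e-4 threshold.
rng = np.random.default_rng(0)
M = rng.standard_normal((72, 8)) @ rng.standard_normal((8, 195))
M = M + 1e-9 * rng.standard_normal((72, 195))
print(estimated_rank(M))  # 8
```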
Figure~\ref{fig:dinosaur:res} shows some examples of the originally incomplete and fully restored trajectories. Our IsoDP method can approximately recover the circle-like trajectories. \subsection{Results on Movie Ratings} We also consider the MovieLens~\cite{Harper:2015} datasets that are widely used in research and industry. The dataset we use consists of 100,000 ratings (integers between 1 and 5) from 943 users on 1682 movies. The distribution of the observed entries is severely imbalanced: The number of movies rated by each user ranges from 20 to 737, and the number of users who have rated each movie ranges from 1 to 583. We remove the users that have fewer than 80 ratings, and likewise for the movies. Thus the final dataset used for experiments consists of 14,675 ratings from 231 users on 206 movies. For the sake of quantitative evaluation, we randomly select 1468 ratings as the testing data, i.e., these ratings are withheld from the matrix completion methods. So, the percentage of the observed entries used as inputs for matrix completion is only 27.75\%. \begin{table} \caption{MSE on the MovieLens dataset. The regularization parameters of the competing methods have been manually tuned to the best. Here, the MSE values are evaluated on the testing data.}\label{tb:movie}\vspace{-0.2in} \begin{center} \begin{tabular}{cc}\hline methods & MSE \\\hline random & 3.7623\\ average & 1.6097 \\ convex optimization & 0.9350\\ LRFD & 0.9213 \\\hline IsoDP ($\lambda=0.0005$) & 0.8412 \\ IsoDP ($\lambda=0.0008$) & 0.8250 \\ IsoDP ($\lambda=0.001$) & \textbf{0.8228}\\ IsoDP ($\lambda=0.002$) & 0.8295\\ IsoDP ($\lambda=0.005$) & 0.8583\\\hline \end{tabular}\vspace{-0.2in} \end{center} \end{table} Besides convex optimization and LRFD, we also consider two ``trivial'' baselines: one estimates the unseen ratings by randomly choosing an integer between 1 and 5, and the other simply fills the unseen entries with the average rating of 3.
The comparison results are shown in Table~\ref{tb:movie}. As we can see, all the considered matrix completion methods distinctly outperform the trivial baselines, illustrating that matrix completion is beneficial on this dataset. In particular, IsoDP with proper parameters performs much better than convex optimization and LRFD, confirming the effectiveness of IsoDP on realistic datasets. \section{Conclusion}\label{sec:con} This work studied the identifiability of real-valued matrices under deterministic sampling. We established two deterministic conditions, isomerism and relative well-conditionedness, for ensuring that an arbitrary matrix is identifiable from a subset of the matrix entries. We first proved that the proposed conditions can hold even if the missing data pattern is irregular. Then we proved a series of theorems for missing data recovery and convex/nonconvex matrix completion. In general, our results could help to understand the completion regimes of arbitrary missing data patterns, providing a basis for investigating other related problems such as data forecasting. \section*{Acknowledgement} This work is supported in part by the New Generation AI Major Project of the Ministry of Science and Technology under Grant SQ2018AAA010277, in part by the National Natural Science Foundation of China (NSFC) under Grants 61622305, 61532009 and 71490725, in part by the Natural Science Foundation of Jiangsu Province of China (NSFJPC) under Grant BK20160040, and in part by the SenseTime Research Fund.
\section{Introduction} Backtracking search combined with constraint solving is the main approach to solving problems in Constraint Programming (CP). The key to effective search is having a good variable search heuristic to select a variable to branch on, as the size of the search tree is strongly dependent on the selected variables. In CP, many general-purpose variable ordering search heuristics have been proposed and implemented in many CP solvers, such as the conflict-driven heuristic \textit{dom/wdeg} \cite{wdeg}, the impact-based search (\textit{IBS}) heuristic \cite{impact}, and the activity-based search (\textit{ABS}) heuristic \cite{activity}. Search heuristics by their nature are not designed to be optimal search strategies but merely good ones. Thus, our goal in this paper is a new search heuristic which can outperform existing heuristics on some instances across a range of problems. We propose a new idea, correlation-based search (\textit{CRBS}), a heuristic that employs correlations between variables. The correlation of a pair of variables ($x_i$, $x_j$) is used to estimate the possibility of a conflict between $x_i$ and $x_j$ during search. We maintain a matrix of these pairwise correlations during search. The correlation matrix is turned into a search strategy by using a function that combines values in the matrix to estimate whether assigning a value to variable $x_i$ can cause a conflict. Domain changes during constraint propagation are used to measure the correlations between variables. We present two new generic correlation-based variable heuristics, \emph{crbs-sum} and \emph{crbs-max}. Our experiments compare the correlation heuristics with the well-known search heuristics \textit{dom/wdeg}, \textit{ABS} and \textit{IBS} on a large set of benchmarks. The results show that the correlation heuristics are competitive with the existing heuristics, and can also be the fastest on many problem instances from different problem series.
In particular, \emph{crbs-sum} is shown to be an effective search heuristic. \section{Related Work} We briefly introduce several well-known general-purpose heuristics. One of the simplest heuristics is \textit{dom} \cite{dom}, which follows the fail-first principle, selecting the variable with the smallest domain size. Many general-purpose heuristics combine domain size with other information. For example, the well-known heuristics \textit{dom/deg} \cite{domOverDeg} and \emph{dom/ddeg} \cite{ddeg} combine domain sizes with variable degrees, which can be better than \textit{dom}. The conflict-driven heuristic \textit{dom/wdeg} \cite{wdeg} associates a weight with each constraint to record conflicts during search. The weight of constraint $c$ is increased when the constraint solver finds $c$ to be inconsistent. The \textit{dom/wdeg} heuristic selects the next variable based on weight degrees and domain sizes, where the weight degree of a variable $x$ is the sum of the weights of the constraints involving $x$ and at least one other uninstantiated variable. Some variants of \textit{dom/wdeg} exploit different information to update the weights of constraints, such as the explanation-based weight \cite{ewdeg} and the constraint tightness weight \cite{wtdeg}. The impact-based search (\textit{IBS}) heuristic \cite{impact} is motivated by the pseudo-costs used in mixed-integer programming. It uses impact to measure the importance of a variable to the rate of search space reduction. A variant of \textit{IBS} incorporates variances in reduction \cite{impactv}. Counting-based search \cite{counting} exploits solution counting information to guide search. The activity-based search (\textit{ABS}) heuristic \cite{activity} combines domain sizes with variable activity, where activity is a measure of how often a variable's domain is reduced during search.
We remark that it is different from the SAT activity heuristic \textit{VSIDS} \cite{vsids} which also records some conflict information during search. \section{Background} A constraint satisfaction problem (CSP) instance is a triplet $(C,X,D)$, where $C=\{c_1, c_2, ... c_e\}$ is a set of $e$ constraints, $X=\{x_1, x_2, ...x_n\}$ is a set of $n$ variables and $D=\{D(x_1), D(x_2), ... D(x_n)\}$ is the corresponding domains for the variables. $D(x_i)$ is the initial domain of variable $x_i$, and $dom(x_i)\subseteq D(x_i)$ is the current domain of $x_i$ during search. Every constraint $c$ consists of a constraint scope $scp(c)$ and a relation $R(c)$, where $scp(c)\subseteq X$ and $R(c)\subseteq \prod\limits_{x_i\in scp(c)}D(x_i)$. A solution of a CSP instance is the set of assignments $\{(x_1,a_1), ... (x_n,a_n)\}$ which satisfies all constraints in $C$, where $a_i\in D(x_i)$. During backtrack search, the search heuristic selects a variable to instantiate at each search node. The variables which have been instantiated during a path in the search tree are defined as \emph{past} variables while the variables which have not been instantiated are \emph{future} variables. \section{Correlation-based search} Typically the goal of a variable heuristic is to choose variables which can cause backtracking to occur earlier in the search. This suggests to choose variables which can lead to conflicts earlier in the search. In this paper, we propose correlation-based heuristics to achieve this objective. For each pair of variables $(x_i,x_j)$, we define a value $a_{i,j}$, called the \emph{correlation of $(x_i,x_j)$}, as a measure of the possibility of having a conflict between $x_i$ and $x_j$. During search, a \emph{correlation matrix} representing all variable pairs $a_{i,j}$ is maintained, where each value in the matrix represents the correlation of a pair of variables. A special case is $a_{i,i}$ which estimates the degree of conflict when choosing variable $x_i$. 
We propose two functions which use the correlation matrix to estimate the degree of conflict caused by assigning a variable. The heuristic then chooses the variable which is estimated to cause more conflicts. \subsection{Updating the correlation matrix} We maintain the correlation matrix by using domain changes during constraint propagation. Some search heuristics have used information about domain changes to guide search, such as activity-based search (\textit{ABS}) \cite{activity}. The idea of the \textit{ABS} heuristic is to select the variable whose domain is the most often updated. It maintains an array $A$ during search to record the activities of variables. After constraint propagation, if the domain of variable $x_i$ is updated, then $A(x_i)$ is increased by 1; otherwise it is decayed by multiplying it with $\gamma$, where $0 \le \gamma \le 1$. Then the heuristic selects the variable with maximal $A(x_i)/dom(x_i)$. We use a similar approach. We assume that the more frequently $dom(x_j)$ is updated after assigning $x_i$, the more likely it is that a conflict between $x_i$ and $x_j$ will happen. As such, the correlations between variables are updated based on domain changes. After constraint propagation due to variable $x_i$ being assigned, the remaining variables can be split into two subsets, $U$ and $N$: \begin{displaymath} \begin{aligned} U=\{x_j\in X'~|~dom'(x_j)\neq dom(x_j)\}\\ N=\{x_j\in X'~|~dom'(x_j)= dom(x_j)\}\\ \end{aligned} \end{displaymath} where $X'=X\setminus\{x_i\}$ and $dom'(x_j)$ is the new domain of $x_j$ after constraint propagation. The $U$ variables are those whose domains are updated, while the $N$ variables are those whose domains are unchanged.
If no conflict occurs, then the correlations are updated as follows: \begin{equation} \left\{ \begin{aligned} a_{i,j}=\check{a}_{i,j}+1,a_{j,i}=\check{a}_{j,i}+1\hspace{0.68cm}\forall x_j\in U\\ a_{i,j}=\check{a}_{i,j}-1,a_{j,i}=\check{a}_{j,i}-1~~~~~\forall x_j\in N\\ a_{i,i}=\check{a}_{i,i}-1\hspace{4.16cm}\\ \end{aligned} \right. \end{equation} where $\check{a}_{i,j}$ is the old correlation value before the update. If $dom'(x_j)$ is changed after assigning $x_i$, then the correlations $a_{i,j}$ and $a_{j,i}$ are increased by one. Otherwise, $a_{i,j}$ and $a_{j,i}$ are decreased by one. In addition, we decrease the correlation $a_{i,i}$; this keeps $a_{i,i}$ small when repeated assignments of $x_{i}$ cause no conflicts. Otherwise, if a conflict appears in the constraint propagation after assigning $x_i$, the correlations of all variables are increased as follows: \begin{equation} \left\{ \begin{aligned} a_{i,j}=\check{a}_{i,j}+1,a_{j,i}=\check{a}_{j,i}+1~~~~~\forall x_j\in X'\\ a_{i,i}=\check{a}_{i,i}+2\hspace{4.26cm}\\ \end{aligned} \right. \end{equation} We increase the correlation $a_{i,i}$ by 2 because the assignment of $x_{i}$ causes a conflict. In addition, $a_{i,j}$ and $a_{j,i}$ are again updated symmetrically. We see that this definition leads to the correlation matrix being symmetric. \subsection{Selecting variables using the correlation matrix} We propose two ways of using the correlation matrix, via combining functions based on the matrix and the problem variables: the \textit{crbs-sum} and \textit{crbs-max} functions, which estimate the potential for conflict after assigning variable $x_i$. The \textit{crbs-sum} function is a linear function of the relevant entries in the correlation matrix.
First, we define two auxiliary functions, $P_c(x_i)$ and $F_c(x_i)$, on variable $x_i$: \begin{displaymath} \begin{aligned} P_c(x_i)=\sum\limits_{x_j\in P}a_{i,j}& ~~~~~~~ &F_c(x_i)=\sum\limits_{x_j\in F}a_{i,j} \end{aligned} \end{displaymath} where $P$ is the set of past variables and $F$ is the set of future variables. The variable under consideration, $x_i$, itself belongs to the set $F$ of future variables. The idea is that $P_c(x_i)$ (past correlation) is the sum of the correlations of the past variables with respect to $x_i$, and $F_c(x_i)$ (future correlation) is the analogue for the future variables. The \textit{crbs-sum} function for variable $x_i$ is defined as: \begin{equation} \mbox{\it crbs-sum}(x_i)=P_c(x_i)~+~\theta\times F_c(x_i) \label{cor} \end{equation} A parameter $0\le\theta\le1$ is used to control the combination of the past and future variable correlations. In particular, future variables are taken into account when $\theta>0$, while only past variables are considered when $\theta=0$. We propose another simple combining function, the \textit{crbs-max} function, defined as follows: \begin{equation} \mbox{\it crbs-max}(x_i)=\max\limits_{x_j\in P}(a_{i,j}) \label{eq:max} \end{equation} The idea of \textit{crbs-max} is to choose a future variable which has the largest estimated correlation with the past variables. We also experimented with a variant of the max function over all variables (past and future), i.e. $\max\limits_{x_j\in X}(a_{i,j})$. Initial experiments found Equation \ref{eq:max} to give better results. In the rest of the paper, we use the max function as defined in Equation \ref{eq:max}. The variable chosen by either the \textit{crbs-sum} or \textit{crbs-max} heuristic is the variable $x_i$ which maximizes $f(x_i)/dom(x_i)$, where $f$ is either the \textit{crbs-sum} or the \textit{crbs-max} function. 
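As a minimal sketch of the scoring and selection step (the identifiers below are ours; `dom_size` stands for $dom(x_i)$ and `a` for the correlation matrix):

```python
# Illustrative scoring and selection for the crbs-sum / crbs-max
# heuristics. `past` is the index set P and `future` the index set F;
# all names are assumptions of this sketch, not the authors' code.
def crbs_sum(a, i, past, future, theta=0.1):
    past_c = sum(a[i][j] for j in past)      # P_c(x_i)
    future_c = sum(a[i][j] for j in future)  # F_c(x_i)
    return past_c + theta * future_c

def crbs_max(a, i, past):
    # Largest estimated correlation of x_i with any past variable.
    return max(a[i][j] for j in past)

def select_variable(future, dom_size, score):
    # Choose the future variable maximizing score(x_i) / dom(x_i).
    return max(future, key=lambda i: score(i) / dom_size[i])
```

Note that dividing by the domain size, as in \textit{dom/wdeg} and \textit{ABS}, biases the choice towards variables with small remaining domains.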
\begin{table*} \renewcommand{\arraystretch}{0.9} \small \begin{center} \tabcolsep=0.7mm \vspace{0.4in} \centering \begin{tabular}{l|l|rrrrr|rrrrr} \toprule [1 pt] \multirow{2}{*}{series}&\multirow{2}{*}{}&\multicolumn{5}{c|}{mean time (s)}&\multicolumn{5}{c}{\textit{nodes}}\\ \cline{3-12} &&\textit{dom/wdeg}&\textit{ABS}&\textit{IBS}&\textit{crbs-sum}&\textit{crbs-max}&\textit{dom/wdeg}&\textit{ABS}&\textit{IBS}&\textit{crbs-sum}&\textit{crbs-max}\\ \toprule [1 pt] \toprule [1 pt] Ortholatin&total (4)&107.84 &1 TO&1 TO&1 TO&\textbf{31.75} &515K&-&-&-&\textbf{87K}\\ &solved by all (3)&2.09 &0.49 &9.42 &1.41 &0.70 &11K&868 &52K&6K&1K\\ \toprule [1 pt] \toprule [1 pt] TSP&total (30)&5.48 &13.12 &12.55 &\textbf{3.96} &7.93 &44K&228K&253K&\textbf{32K}&74K\\ &solved by all (30)&5.48 &13.12 &12.55 &3.96 &7.93 &44K&228K&253K&32K&74K\\ \hline Latin Square&total (6)&1 TO&1 TO&26.73 &\textbf{0.77} &1.43 &-&-&334K&\textbf{5K}&10K\\ &solved by all (5)&0.27 &0.27 &0.27 &0.28 &0.30 &31 &22 &31 &44 &117 \\ \hline Dubois&total (11)&4 TO&4 TO&3 TO&\textbf{17.58} &71.14 &-&-&-&\textbf{2M}&3M\\ &solved by all (7)&273.84 &222.59 &128.63 &5.11 &30.72 &57M&30M&38M&\textbf{979K}&1M\\ \hline Magic Square&total (11)&4 TO&5 TO&4 TO&\textbf{172.16} &1 TO&-&-&-&\textbf{566K}&-\\ &solved by all (6)&36.48 &1.06 &26.94 &0.47 &1.65 &254K&4K&225K&1K&8K\\ \hline Costas Array&total (9)&1 TO&2 TO&1 TO&\textbf{111.05} &1 TO&-&-&-&\textbf{158K}&-\\ &solved by all (7)&3.30 &3.44 &8.62 &1.54 &5.65 &5K&5K&18K&2K&8K\\ \hline Social Golfer&total (4)&1 TO&2 TO&1 TO&\textbf{52.16} &3 TO&-&-&-&\textbf{101K}&-\\ &solved by all (0)&- &- &- &- &- &-&-&-&-&-\\ \hline ii&total (41)&1 TO&3.63 &2 TO&\textbf{3.11} &2 TO&-&2K&-&\textbf{1K}&-\\ &solved by all (38)&10.34 &3.54 &30.49 &1.93 &111.14 &8K&2K&51K&526 &699 \\ \hline Register&total (8)&2 TO&2 TO&1 TO&\textbf{431.66} &4 TO&-&-&-&\textbf{1M}&-\\ &solved by all (4)&0.77 &0.72 &0.94 &0.68 &0.82 &158 &125 &467 &95 &170 \\ \hline Quasi Group&total (25)&15.43 &15.51 
&48.20 &\textbf{9.36} &17.91 &84K&114K&471K&\textbf{52K}&62K\\ &solved by all (25)&15.43 &15.51 &48.20 &9.36 &17.91 &84K&114K&471K&52K&62K\\ \hline Super-jobShop&total (22)&2 TO&3 TO&2 TO& \textbf{2 TO} &5 TO&-&-&-&\textbf{-}&-\\ &solved by all (16)&1.36 &1.38 &1.73 &1.40 &1.42 &389 &140 &6K&\textbf{94} &139 \\ \toprule [1 pt] \toprule [1 pt] Nonogram&total (176)&4 TO&1.63 &\textbf{1.59} &1.61&4.89 &-&564 &181 &\textbf{102} &177 \\ &solved by all (172)&5.96 &1.59 &1.57 &1.59 &4.46 &62K&342 &172 &97 &158 \\ \hline Cril&total (8)&2 TO&2 TO&\textbf{2.53} &3.36 &23.62 &-&-&\textbf{82K}&120K&1M\\ &solved by all (6)&2.56 &1.72 &3.18 &4.33 &15.46 &310K&94K&110K&160K&2M\\ \hline Black hole&total (39)&19 TO&27.72 &\textbf{15.92} &34.04 &1 TO&-&3M&\textbf{1M}&4M&-\\ &solved by all (20)&161.23 &0.34 &0.39 &0.34 &0.34 &25M &40 &1613 &30 &29 \\ \hline Myciel&total (12)&12.36 &8.71 &\textbf{5.53} &6.65 &1 TO&429K&234K&\textbf{134K}&194K&-\\ &solved by all (11)&6.78 &3.11 &2.06 &3.73 &68.66 &392K&150K&87K&170K&2M\\ \hline Queen Knights&total (11)&2 TO&4 TO&\textbf{78.81} &3 TO&3 TO&-&-&\textbf{3K}&-&-\\ &solved by all (8)&37.77 &181.06 &5.77 &174.54 &234.65 &5K&29K&769 &24K&34K\\ \hline AllInterval&total (9)&1 TO&1 TO&\textbf{17.86} &71.34 &48.28 &-&-&\textbf{470K}&1M&1M\\ &solved by all (8)&4.00 &30.06 &0.91 &4.61 &1.21 &79K&1M&19K&110K&22K\\ \hline cc&total (13)&1 TO&1 TO&\textbf{24.77} &1 TO&2 TO&-&-&\textbf{118K}&-&-\\ &solved by all (11)&3.43 &2.70 &6.22 &2.80 &3.07 &2K&1K&3K&1K&2K\\ \hline Open Shop&total (49)&60.95 &1 TO&\textbf{51.00} &2 TO &5 TO&\textbf{55K}&-&80K&-&-\\ &solved by all (43)&39.83 &79.36 &47.33 &52.54 &79.11 &17K&16K&38K&19K&159K\\ \hline coloring&total (22)&1.87 &0.65 &\textbf{0.62} &0.99 &7.88 &144K&34K&\textbf{20K}&55K&493K\\ &solved by all (22)&1.87 &0.65 &0.62 &0.99 &7.88 &144K&34K&20K&55K&493K\\ \toprule [1 pt] \toprule [1 pt] Mug&total (8)&4 TO&\textbf{31.57} &3 TO&173.65 &3 TO&-&\textbf{7M}&-&28M&-\\ &solved by all (4)&0.33 &0.33 &0.33 &0.33 &0.34 
&0 &0 &0 &0 &0 \\ \hline Knights&total (8)&1 TO&\textbf{158.37} &171.04 &172.61 &173.73 &-&\textbf{1K}&\textbf{1K}&\textbf{1K}&\textbf{1K}\\ &solved by all (7)&41.85 &29.40 &28.62 &28.89 &33.52 &1K&934 &934 &934 &934 \\ \hline Covering Array&total (9)&4 TO&\textbf{2.71} &1 TO&3.77 &3 TO&-&\textbf{3K}&-&4K&-\\ &solved by all (5)&79.40 &0.46 &0.48 &0.45 &9.98 &778K&219 &646 &128 &7K\\ \hline Insertion&total (21)&1 TO&\textbf{4.12} &5 TO&8.37 &3 TO&-&\textbf{187K}&-&523K&-\\ &solved by all (14)&1.31 &0.86 &8.64 &1.48 &1.58 &3K&832 &61K&823 &5K\\ \toprule [1 pt] \toprule [1 pt] Radar&total (62)&\textbf{8.77} &23.96 &4 TO&43.09 &38.63 &\textbf{107} &631 &3K&1K&583 \\ &solved by all (58)&4.54 &10.90 &45.95 &24.38 &24.66 &60 &523 &1K&618 &397 \\ \hline Queen Attack&total (5)&\textbf{2.33} &2 TO&1 TO&23.64 &1 TO&\textbf{41K}&-&-&579K&-\\ &solved by all (3)&0.37 &0.37 &0.48 &0.34 &1.17 &483 &914 &5K&219 &40K\\ \hline scen11&total (10)&\textbf{45.24} &4 TO&1 TO&60.65 &3 TO&1M&-&-&\textbf{963K}&-\\ &solved by all (6)&0.98 &0.96 &1.42 &1.08 &11.68 &3K&3K&15K&2K&114K\\ \hline Crossword&total (140)&\textbf{2.08} &7.63 &12 TO &3.77 &2.88 &\textbf{12K}&73K&-&26K&16K\\ &solved by all (128)&1.17 &6.09 &59.54 &2.25 &1.70 &10K&71K&643K&22K&13K\\ \hline Golomb Ruler&total (25)&\textbf{1 TO}&7 TO&5 TO&6 TO&5 TO&\textbf{-}&-&-&-&-\\ &solved by all (18)&25.89 &70.15 &38.17 &48.51 &23.59 &\textbf{57K}&150K&90K&99K&31K\\ \hline Schurr Lemma&total (9)&\textbf{52.85} &1 TO&2 TO&81.77 &1 TO&\textbf{144K}&-&-&416K&-\\ &solved by all (7)&18.14 &29.25 &33.54 &16.55 &27.93 &156K&211K&178K&143K&176K\\ \toprule [1 pt] \toprule [1 pt] \multirow{2}{*}{Total}&total (807)&56 TO& 43 TO&48 TO&\textbf{15 TO}&47 TO&-&-&-&-&-\\ &solved by all (692)&16.08&14.34&25.96&\textbf{9.53}&20.35&1M&364K&549K&\textbf{30K}&108K\\ \toprule [1 pt] \end{tabular} \caption{Mean results of 5 heuristics. 
For the Super-jobShop series, \textit{crbs-sum} is highlighted as the best because it has the smallest total runtime on the 20 non-timeout instances compared with \textit{dom/wdeg} and \textit{IBS}.} \label{exp:table1} \end{center} \end{table*} \section{Experiments} We evaluate the correlation-based heuristics, \textit{crbs-sum} and \textit{crbs-max}, against well-known, successful and commonly used heuristics: weighted degree (\textit{dom/wdeg}), activity (\textit{ABS}) and impact (\textit{IBS}). Experiments are run on a 3.40 GHz Intel Core i7 CPU on Linux. The existing heuristics are the AbsCon\footnote{ We used the AbsCon solver (\url{https://www.cril.univ-artois.fr/~lecoutre/software.html}). } solver implementations of \textit{dom/wdeg}, \textit{ABS} and \textit{IBS}. For the \textit{ABS} and \textit{IBS} heuristics, we use the default parameter settings in AbsCon. For the \textit{crbs-sum} heuristic, we use $\theta=0.1$, chosen as a value of $\theta$ which we found to work well on many instances (see Section \ref{sec:parameter}). The initial values in the correlation matrix of CRBS are set to 0. All heuristics break ties lexicographically and use the lexical value order heuristic. In all cases, a geometric restart policy (initial \textit{cutoff}=10 and $\rho=1.1$) was used, where \textit{cutoff} is the maximum number of failures before a restart and $\rho$ controls the growth of the value of \textit{cutoff} after each restart.\footnote{The value of \textit{cutoff} is updated using $\textit{cutoff}=\textit{cutoff'}+init\_\textit{cutoff} * {\rho}^k$, where \textit{cutoff'} is the cutoff of the last restart, $init\_\textit{cutoff}$ is the initial cutoff with value 10, and $k$ is the number of restarts encountered so far.} We apply the binary branching strategy. The time-out is set to 1200s for all instances. 
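For illustration, the footnote's cutoff formula yields the following sequence of restart cutoffs (a small sketch under the stated parameters; the helper name is ours, not part of AbsCon):

```python
# Geometric restart policy from the footnote:
# cutoff = cutoff' + init_cutoff * rho**k after the k-th restart.
def restart_cutoffs(init_cutoff=10, rho=1.1, restarts=4):
    cutoffs = [float(init_cutoff)]          # cutoff before any restart
    for k in range(1, restarts + 1):
        cutoffs.append(cutoffs[-1] + init_cutoff * rho ** k)
    return cutoffs

# With init_cutoff=10 and rho=1.1 the sequence starts
# 10, 21.0, 33.1, then approximately 46.41, growing geometrically.
```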
We have used a large and varied set of well-known CSP benchmarks.\footnote{Benchmarks are from the 2009 CSP competition website: \url{http://www.cril.univ-artois.fr/CSC09/} and the XCSP3.0 website \url{http://www.xcsp.org/}} In total, there are 807 problem instances which come from the following 30 series: \begin{quote} All Interval Series (AllInterval), Black Hole, Chessboard Coloration (cc), Coloring, Costas Array, Covering Array, Nonogram, Cril, Crossword, Dubois, Golomb Ruler, ii, Insertion, Open Shop (os-taillard), Knights, Latin Square, Schurr's Lemma, Magic Square, Mug, Myciel, Orthogonal Latin Squares, Quasi Group, Queen Attacking, Queen Knights, Radar Surveillance (Radar), Register, RLFAP-scen11 (scen11), Social Golfers, Super-Jobshop, Travelling Salesman Problem (TSP). \end{quote} We include all instances from each series except those which none of the heuristics solves within the time-out. \begin{figure} \centering \includegraphics[scale=0.55]{num1.pdf} \caption{Runtime distribution of all heuristics. \label{Fig_1}} \end{figure} \subsection{Comparing heuristics} Figure \ref{Fig_1} shows the runtime distribution of the benchmark instances solved using the different heuristics. The y-axis is the CPU time (in seconds) and the x-axis is the number of instances solved within the time limit. Instances which are solved very quickly are omitted from the graph: 397 instances for which the average time needed by all heuristics is less than 1 second are not plotted, leaving 410 instances plotted in Figure \ref{Fig_1}. Note that in this graph, the best performance is towards the lower right corner. The best runtime distribution is given by the \textit{crbs-sum} heuristic, which also solves the most instances. 
In particular, with the time limit of 1200s, \textit{crbs-sum} solves 395/410 instances, which is better than \textit{dom/wdeg}, \textit{ABS}, \textit{IBS} and \textit{crbs-max} with 354/410, 367/410, 362/410 and 363/410 instances respectively. Table \ref{exp:table1} gives the mean results of all five heuristics on each series. The row ``total ($n$)'' gives the average CPU times and numbers of search nodes over all instances in a series, where $n$ is the number of instances. The row ``solved by all ($n$)'' gives the mean results on the instances solved by all heuristics. ``$n$ TO'' denotes that the heuristic times out on $n$ instances. The bold numbers in Table \ref{exp:table1} highlight the best result for each series. Furthermore, for the Super-jobShop series, \textit{crbs-sum} has both the smallest total time and the smallest number of time-outs. The last two rows, labelled ``Total'', give the average results over all series for each heuristic, with \textit{crbs-sum} giving the best overall results. Table \ref{exp:table2} highlights how many series are solved faster by a particular heuristic, based on the overall results in Table \ref{exp:table1}. The row ``Faster than \textit{dom/wdeg}'' (\textit{ABS} or \textit{IBS}) gives the number of series on which a heuristic is better than \textit{dom/wdeg} (\textit{ABS} or \textit{IBS}) respectively. The row ``Fastest (Second fastest)'' gives the number of series on which a heuristic is the best (second best). Overall, \textit{crbs-sum} solves more series. The exact performance of the heuristics varies across series. \textit{crbs-sum} is the fastest on many series. For example, on the Dubois series, \textit{crbs-sum} solves the 11 instances in 193 seconds in total, while \textit{dom/wdeg}, \textit{IBS} and \textit{ABS} time out on some instances. In total, \textit{crbs-sum}, \textit{crbs-max}, \textit{dom/wdeg}, \textit{IBS} and \textit{ABS} are the fastest on 10/30, 1/30, 6/30, 9/30 and 4/30 series respectively. 
\textit{crbs-sum} is also competitive with, or better than, the other general-purpose variable heuristics. On 19 series, \textit{crbs-sum} is either the fastest or the second fastest heuristic. Overall, \textit{crbs-sum} is faster than \textit{dom/wdeg}, \textit{ABS} and \textit{IBS} on 21, 20 and 19 series respectively. For the Super-jobshop series, the mean times of \textit{crbs-sum} and \textit{dom/wdeg} are respectively 5.61s and 94.99s on the instances solved by both heuristics. The mean CPU time of \textit{crbs-sum} over all series is also lower than that of the other heuristics. Between \textit{crbs-sum} and \textit{crbs-max}, our experiments show that the sum of correlations is more useful than the maximal correlation---\textit{crbs-sum} is faster than \textit{crbs-max} on many series. On most series, the trend in mean times correlates with the trend in the number of nodes. We observe that for the RLFAP-\textit{scen11} series, the number of search nodes of \textit{crbs-sum} is smaller than that of \textit{dom/wdeg}, yet \textit{crbs-sum} is slower; thus, maintaining the correlation matrix may be more expensive than maintaining the \textit{dom/wdeg} weights. Possibly, our implementation could be optimized further. 
\begin{table} \small \begin{center} \tabcolsep=0.7mm \vspace{0.4in} \centering \begin{tabular}{l|c|c|c|c|c} \toprule [1 pt] &\textit{dom/wdeg}&\textit{ABS}&\textit{IBS}&\textit{crbs-sum}&\textit{crbs-max}\\ \hline Faster than \textit{dom/wdeg}&-&13&17&\textbf{21}&11\\ \hline Faster than \textit{ABS}&16&-&\textbf{20}&\textbf{20}&14\\ \hline Faster than \textit{IBS}&13&10&-&\textbf{19}&12\\ \hline Faster than \textit{crbs-max}&19&16&18&\textbf{25}&-\\ \hline Fastest&6&4&9&\textbf{10}&1\\ \hline Second fastest&7&5&3&\textbf{9}&6\\ \toprule [1 pt] \end{tabular} \caption{Comparing heuristics.} \label{exp:table2} \end{center} \end{table} \subsection{Choosing the crbs-sum parameter} \label{sec:parameter} The parameter $\theta$ used in Equation \ref{cor} affects the performance of the \textit{crbs-sum} heuristic. In this section, we explore the effect of different choices of $\theta$ on two problem series. Figure \ref{Fig_2a} gives the results on the TSP series, where the y-axis is the mean solving time and the x-axis is the value of $\theta$. Correspondingly, Figure \ref{Fig_2b} gives the results on the Quasi Group series. Overall, we found that low values of the $\theta$ parameter generally give better results than higher values. For example, the mean time for solving the TSP series is only 3.89s when $\theta=0.1$, which is 2 times faster than with $\theta=0.9$. At the extreme values of $\theta$: when $\theta=0$, the mean time on the Quasi Group series is 9 times faster than with $\theta=1$. This suggests that the correlations between $x_i$ and the past variables are more important for the \textit{crbs-sum} heuristic than the correlations with the future variables. However, the future variables should not be ignored entirely: for example, when $\theta=0.1$, the mean times on TSP and Quasi Group are faster than with $\theta=0$. 
\begin{figure} \centering \subfloat[TSP series] { \label{Fig_2a} \includegraphics[scale=0.29]{tsppara.pdf} } \hspace{0in} \subfloat[Quasi Group series] { \label{Fig_2b} \includegraphics[scale=0.29]{quasipara.pdf} } \caption{Effect of $\theta$ on \textit{crbs-sum}.\label{Fig_2}} \end{figure} \section{Conclusion} In this paper, we propose a new idea, measuring correlations between variables, which leads to various correlation-based heuristics. We measure and update the correlation matrix by using domain changes during constraint propagation. We propose two correlation heuristics, \textit{crbs-sum} and \textit{crbs-max}, which employ different strategies to estimate the potential of conflict for a variable based on the correlation matrix. The experiments show that correlation heuristics are promising. They are competitive with the state-of-the-art heuristics \textit{dom/wdeg}, \textit{ABS} and \textit{IBS} on a large set of benchmarks. Furthermore, the correlation heuristics can also be the fastest on many problem series. In the future, we will explore more accurate or efficient methods to update the correlations between variables, and design improved correlation heuristics. \section*{Acknowledgment} This work has been supported by grant MOE2015-T2-1-117. \bibliographystyle{IEEEtran}
\section{Introduction} In this paper we consider various forms of tempered fractional derivatives. For a function $ f $ continuous and compactly supported on the positive real line, let us consider the Marchaud type operator defined by \begin{equation} \label{eq:march-def} \bigl(\mathscr{D}^{\alpha, \eta}f\bigr) (x) = \int _{0}^{\infty} \bigl(f(x) - f(x-y) \bigr) \, \varPi( \mathrm{d} y) \end{equation} where \begin{equation} \varPi(\mathrm{d} y) = \frac{\alpha}{\varGamma(1-\alpha)} \frac {e^{-\eta y }}{y^{\alpha+1}} dy ,\quad y>0, \label{LevyMH} \end{equation} with $ \eta>0, 0 < \alpha<1 $. The operator \eqref{eq:march-def} coincides with the classical Marchaud derivative for $ \eta= 0 $. The Laplace transform of the fractional operator \eqref{eq:march-def} reads \begin{align} \int_{0}^{\infty} e^{ - \lambda x} \, \bigl( \mathscr{D}^{\alpha, \eta} f \bigr) (x) \, \mathrm{d} x &= \Biggl( \int _{0}^{\infty} \bigl( 1 - e^{ - \lambda y} \bigr) \varPi( \mathrm{d} y) \Biggr) \tilde{f} (\lambda) \nonumber \\ &= \bigl( (\eta+ \lambda)^{\alpha}- \eta^{\alpha} \bigr) \tilde{f}( \lambda) . \label{symbolH} \end{align} Throughout the work we denote by $\tilde{f}$ the Laplace transform of $f$. In Fourier analysis, the factor $ (\eta+ i \lambda)^{\alpha }- \eta^{\alpha}$ is the multiplier of the Fourier transform of $ f $~\cite{meersc15}. Tempered fractional derivatives emerge in the study of equations driving the tempered subordinators \cite{beghin,meersc15}. In particular, the operator \eqref{eq:march-def} is the generator of the subordinator $ H_{t}, t>0 $, with L\'{e}vy measure \eqref{LevyMH} and density law whose Laplace transform is given by \eqref{symbolH}, that is, \begin{align} \mathbb{E} e^{- \lambda H_{t}} &= e^{-t ( (\eta+ \lambda)^{\alpha}- \eta^{\alpha} ) } = e^{ - t \int_{0}^{\infty} ( 1 - e^{-\lambda y}) \varPi(\mathrm{d} y)}, \quad\lambda >0. 
\nonumber \end{align} The process $ H_{t} $ is called a relativistic subordinator and coincides, for $ \eta= 0 $, with a positively skewed L\'{e}vy process, that is, a stable subordinator. Tempered stable subordinators can be viewed as the limits of Poisson random sums with tempered power law jumps \cite{meersc15}. The fractional operator $ \mathscr{D}^{\alpha, \eta} f$ defined in \eqref{eq:march-def} is related to the tempered upper Weyl derivatives defined by \begin{equation} \label{eq:up-weyl-intro} \bigl(\hat{\mathscr{D}}^{\alpha, \eta }_{+}f\bigr) (x) = \frac{1}{\varGamma(1-\alpha)} \frac{\mathrm{d}}{\mathrm{d} x} \int _{-\infty}^{x} \frac{f(t)}{(x-t)^{\alpha}} e^{- \eta(x-t)} \, \mathrm{d} t. \end{equation} By combining \eqref{eq:up-weyl-intro} with the lower Weyl tempered derivatives we obtain the Riesz tempered fractional derivatives $ \frac{\partial^{\alpha, \eta} f}{\partial|x|^{\alpha}} $ from which we obtain the explicit Fourier transform in \eqref{eq:temp-riesz-fou}. We consider the Dzherbashyan--Caputo derivative of order $ \frac{1}{2} $, that is, \begin{equation} \bigl( D^{\frac{1}{2}} f \bigr) (t) = \frac{1}{\sqrt{\pi}} \int _{0}^{t} f'(s) (t-s)^{-\frac{1}{2}} \, \mathrm{d} s \end{equation} with the Laplace transform \begin{align*} \int_{0}^{\infty}e^{-\lambda t} \bigl( D^{\frac{1}{2}} f \bigr) (t)\, \mathrm{d} t = \lambda^{\frac{1}{2}} \tilde{f}( \lambda) - \lambda^{\frac{1}{2}-1} f(0), \quad\lambda>0. \end{align*} The relationship between the Riemann--Liouville and the Dzherbashyan--Caputo derivative can be given as follows: \begin{equation} \bigl( \mathscr{D} ^{\frac{1}{2}} f \bigr) (t) = \bigl( D^{\frac{1}{2}} f \bigr) (t) + \frac{t^{\frac{1}{2} -1}}{\varGamma(\frac{1}{2})} f(0), \end{equation} from which we observe that \begin{align} \int_{0}^{\infty}e^{-\lambda t} \bigl( \mathscr{D} ^{\frac{1}{2}} f \bigr) (t) \, \mathrm{d} t = \lambda^{\frac{1}{2}} \tilde{f}( \lambda). 
\label{Ltransf-RL} \end{align} We remark that the problems \begin{align*} \lleft\lbrace \begin{array}{@{}ll@{}} \displaystyle\bigl(D^{\frac{1}{2}} u\bigr)(t) = - \frac{\partial u}{\partial y}, \quad t>0, y>0\\[3pt] \displaystyle u(0,y)=\delta(y) \end{array} \rright. \quad\text{\textrm{and}} \quad\lleft\lbrace \begin{array}{@{}ll@{}} \displaystyle\bigl(\mathscr{D}^{\frac{1}{2}} u\bigr)(t) = - \frac{\partial u}{\partial y}, & t>0, y>0\\[3pt] \displaystyle u(0,y)=\delta(y)\\[3pt] \displaystyle u(t,0)= \frac{1}{\sqrt{\pi t}}, & t>0 \end{array} \rright. \end{align*} have a unique solution given by the density law of an inverse to a stable subordinator, say $L_{t}$ (see for example \cite[formulas 3.4 and 3.5]{dovidio}). It is well known that $L_{t}$ (with $L_{0}=0$) is identical in law to a folded Brownian motion $|B_{t}|$ (with $B_{0}=0$), that is, $u$ is the unique solution to the problem \begin{align*} \lleft\lbrace \begin{array}{@{}ll@{}} \displaystyle\frac{\partial u}{\partial t} = \frac{\partial^{2} u}{\partial y^{2}}, \quad t>0,\, y>0,\\[3pt] \displaystyle u(0, y) = \delta(y),\\[3pt] \displaystyle\frac{\partial u}{\partial y}(t, 0) =0. \end{array} \rright. \end{align*} Thus, by considering the theory of time changes, there exist interesting connections between fractional Cauchy problems and the domains of the generators of the base processes. In our view, concerning the drifted Brownian motion, the present paper also gives new results in this direction. We denote by \begin{equation} \label{eq:temp-rl-def} \mathscr{D} _{t}^{\frac{1}{2}, \eta}f \coloneqq e^{-\eta t} \mathscr{D} _{t}^{\frac{1}{2}} \bigl( e^{\eta t} f \bigr) - \sqrt{\eta}f \end{equation} the tempered Riemann--Liouville type derivative. The equality between definitions \eqref{eq:temp-rl-def} and \eqref{eq:march-def} can be verified by comparing the corresponding Laplace transforms. 
Indeed, from \eqref{Ltransf-RL}, \begin{align*} \int_{0}^{\infty}e^{ - \lambda t}\, \mathscr{D} _{t}^{\frac{1}{2}, \eta}f \, \mathrm{d} t ={}& \int_{0}^{\infty}e^{ -(\lambda+ \eta) t} \mathscr{D} _{t}^{\frac{1}{2}} ( g ) \, \mathrm{d} t - \sqrt{\eta} \tilde{f} (\lambda) \\ ={}& \sqrt{\lambda+\eta} \int_{0}^{\infty}e^{-(\lambda+\eta) t} g(t)\, \mathrm{d} t - \sqrt{\eta}\tilde{f} (\lambda) \end{align*} where $g(t) = e^{\eta t} f(t)$, so that the Laplace transform equals $( \sqrt{\lambda+ \eta} - \sqrt{\eta}\, ) \tilde{f}(\lambda)$, in agreement with \eqref{symbolH} for $\alpha= \frac{1}{2}$. Let $B $ represent a Brownian motion starting at the origin with generator $\Delta$. In the paper we show that the transition density $ u = u(x,y,t) $ of the 1-dimensional process \begin{equation*} B^{\mu}(t) = B(t) + \mu t + x, \quad\mu>0, \, x \in\mathbb{R}, \end{equation*} satisfies the fractional equation on $(0, \infty) \times\mathbb{R}^{2}$ \begin{equation} \label{eq:frac-eq-u} \lleft\{ \begin{aligned} &\mathscr{D} _{t}^{\frac{1}{2}, \eta} u + \sqrt{\eta} \,u= a(x,y) \biggl( \frac{\partial u}{\partial x} + \sqrt {\eta} u \biggr) = - a(x,y) \biggl( \frac{\partial u}{\partial y} - \sqrt{\eta} u \biggr), \\ &u(x,y,0) = \delta(x-y) \end{aligned} \rright. \end{equation} where \begin{equation*} a(x,y) = \mathbh{1}_{(-\infty,y] } (x) - \mathbh{1} _{(y, \infty) } (x) \end{equation*} and \begin{align*} \eta= \frac{\mu^{2}}{4}. \end{align*} A different result concerns the reflected process \begin{equation*} |B^{\mu}(t) + \mu t| + x = |\hat{B}|^{\mu}(t) \end{equation*} whose transition density $v = v(x,y,t) $ satisfies the equation \begin{equation} \label{eq:frac-eq-v} \mathscr{D} _{t}^{\frac{1}{2}, \eta} v + \sqrt {\eta} \, v= \frac{\partial v}{\partial x} + \sqrt{\eta} \tanh\bigl(\sqrt{\eta} (y-x) \bigr) v, \quad t >0,\; y > x >0, \end{equation} with initial and boundary conditions \begin{align*} v(x,y,0) ={}& \delta( y - x), \\ v(x,x,t) ={}& \frac{e^{-\eta t}}{\sqrt{\pi t}}, \quad t>0, \end{align*} and \begin{align*} \eta= \frac{\mu^{2}}{4}. 
\end{align*} The fractional equation governing the iterated Brownian motion $ B^{\mu _{2}} ( | B^{\mu_{1}} (t) | )$\break ($ B^{\mu_{j}} $ being independent) has been studied in \cite{iafrate18} and, in the special case $ B^{\mu }( | B (t) | ) $, explicitly derived. For the iterated Bessel process a similar analysis is performed in~\cite{DOVIDIO2011441}. A~general presentation of tempered fractional calculus can be found in the paper \cite{meersc15}. Many processes, such as Brownian motion, iterated Brownian motion and the Cauchy process, have transition functions satisfying different partial differential equations and are also solutions of fractional equations of different forms involving various fractional derivatives. We show here that a similar situation arises when drifted reflecting Brownian motion is considered, but in this case the corresponding fractional equations involve tempered Riemann--Liouville type derivatives. \section{A generalization of the tempered Marchaud derivative} In this section we study the tempered Weyl derivatives (upper and lower ones) and construct the Riesz tempered derivative. We are able to obtain the Fourier transform of the Riesz tempered derivatives and thus to solve some generalized fractional diffusion equation. 
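As a preliminary observation, the exponent appearing in \eqref{symbolH} follows directly from the elementary identity $\frac{\alpha}{\varGamma(1-\alpha)}\int_{0}^{\infty} \bigl( 1 - e^{-s y} \bigr) \, y^{-\alpha-1} \, \mathrm{d} y = s^{\alpha}$, valid for $s>0$ and $0<\alpha<1$. Indeed, writing $e^{-\eta y}-e^{-(\eta+\lambda)y} = (1-e^{-(\eta+\lambda)y})-(1-e^{-\eta y})$, we get \begin{align*} \int_{0}^{\infty} \bigl( 1 - e^{-\lambda y} \bigr) \varPi(\mathrm{d} y) &= \frac{\alpha}{\varGamma(1-\alpha)} \int_{0}^{\infty} \bigl[ \bigl( 1 - e^{-(\eta+\lambda) y} \bigr) - \bigl( 1 - e^{-\eta y} \bigr) \bigr] \, \frac{\mathrm{d} y}{y^{\alpha+1}} = (\eta+ \lambda)^{\alpha}- \eta^{\alpha}, \end{align*} which is the L\'{e}vy symbol of the tempered subordinator $H_{t}$.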
We start by giving the explicit forms of the tempered Weyl derivative \begin{align} & \bigl(\hat{\mathscr{D}}^{\alpha, \eta}_{+}f\bigr) (x) \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \frac{\mathrm{d}}{\mathrm{d} x} \int _{-\infty}^{x} \frac{f(t)}{(x-t)^{\alpha}} e^{- \eta(x-t)} \, \mathrm{d} t \\ &= \frac{1}{\varGamma(1-\alpha)} \frac{\mathrm{d}}{\mathrm{d} x} \int _{0}^{\infty} \frac{f(x-t)}{t^{\alpha}} e^{-\eta t} \, \mathrm{d} t \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \frac{\mathrm{d}}{\mathrm{d} x} \int _{0}^{\infty} f(x-t) e^{-\eta t } \int_{t}^{\infty} \alpha w^{-\alpha-1} \, \mathrm{d} w \, \mathrm{d} t \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \, \mathrm{d} w \int_{0}^{w} f'(x-t) e^{-\eta t} \, \mathrm{d} t \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \, \mathrm{d} w \int_{x-w}^{x} f'(t) e^{-\eta(x-t)} \, \mathrm{d} t \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \, e^{-\eta x} \Biggl\{ f(t) e^{\eta t} |_{x-w}^{x} - \eta\int_{x-w}^{x} f(t) e^{\eta t} \, \mathrm{d} t \Biggr\} \, \mathrm{d} w \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \, e^{-\eta x} \bigl[ f(x) e^{\eta x} - f(x-w)e^{\eta(x-w)} \bigr] \, \mathrm{d} w \nonumber \\ & \quad- \frac{\eta}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \int_{x-w}^{x} f(t) e^{-\eta(x-t)}\, \mathrm{d} t\, \mathrm{d} w \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \, \bigl[ f(x) - f(x-w)e^{-\eta w} \bigr] \, \mathrm{d} w \nonumber \\ & \quad- \frac{\eta}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \int_{0}^{w} f(x-t) e^{-\eta t}\, \mathrm{d} t \, \mathrm{d} w \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \, \bigl[ f(x) + f(x) e^{-\eta w} - f(x) e^{-\eta w} - f(x-w)e^{-\eta w} \bigr] \, \mathrm{d} w \nonumber \\ & \quad- 
\frac{\eta}{\varGamma(1-\alpha)} \int_{0}^{\infty} f(x-t) e^{-\eta t} \int_{t}^{\infty}\alpha w^{-\alpha-1} \, \mathrm{d} w \, \mathrm{d} t \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \bigl(f(x) - f(x-w) \bigr) \, \alpha\frac{e^{-\eta w}}{w^{\alpha+1}} \,\mathrm{d} w \nonumber \\ & \quad+ \frac{f(x)}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \bigl(1-e^{-\eta w} \bigr)\, \mathrm{d} w - \frac{\eta}{\varGamma(1-\alpha)} \int_{0}^{\infty} f(x-t) \frac{e^{-\eta t}}{t^{\alpha}} \, \mathrm{d} t \nonumber \\ &= \int_{0}^{\infty} \bigl(f(x) - f(x-w) \bigr) \varPi (\mathrm{d} w) + \eta\int_{0}^{\infty} \bigl(f(x) - f(x-w) \bigr) \frac{e^{-\eta w }}{w^{\alpha}\varGamma(1-\alpha)} \, \mathrm{d} w . \nonumber \end{align} The derivative $ \hat{\mathscr{D}}_{+} ^{\alpha, \eta}$ can be expressed in terms of $ \mathscr{D}^{\alpha, \eta} $ as follows: \begin{equation*} \hat{\mathscr{D}}^{\alpha, \eta}_{+}f = \mathscr{D}^{\alpha, \eta} f - \eta\, \mathscr{D}^{\alpha-1, \eta} f. 
\end{equation*} In the same way we can obtain the lower Weyl derivative in the Marchaud form as \begin{align} & \bigl(\hat{\mathscr{D}}^{\alpha, \eta}_{-}f\bigr) (x) \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \frac{\mathrm{d}}{\mathrm{d} x} \int _{x}^{\infty} \frac{f(t)}{(t-x)^{\alpha}} e^{- \eta(t-x)} \, \mathrm{d} t \\ &= \int_{0}^{\infty} \frac{\alpha w ^{-\alpha-1}}{\varGamma(1-\alpha )} \bigl\{ e^{-\eta w} f(x+w) - f(x)\bigr\} \,\mathrm{d} w + \eta\int _{0}^{\infty} \frac{e^{-\eta w} f(x+w)}{\varGamma(1-\alpha) w^{\alpha } } \,\mathrm{d} w \nonumber \\ &= \frac{1}{\varGamma(1-\alpha)} \int_{0}^{\infty} \bigl[ f(x+w) - f(x)\bigr] \frac{\alpha e^{-\eta w}}{w^{\alpha+1} } \, \mathrm{d} w \nonumber \\ & \quad+ \frac{f(x)}{\varGamma(1-\alpha)} \int_{0}^{\infty} \alpha w^{-\alpha-1} \bigl(e^{-\eta w } -1\bigr) \, \mathrm{d} w + \eta\int _{0}^{\infty} f(x+w) \frac{e^{-\eta w}}{\varGamma(1-\alpha) w^{\alpha}} \, \mathrm{d} w \nonumber \\ &= \int_{0}^{\infty} \bigl[ f(x+w) - f(x)\bigr] \varPi (\mathrm{d} w) + \eta\int_{0}^{\infty} \bigl[ f(x+w) - f(x)\bigr] \frac{e^{-\eta w}}{\varGamma(1-\alpha) w^{\alpha}} \, \mathrm{d} w . \nonumber \end{align} For $ 0 < \alpha<1 $ the Riesz fractional derivative can be written as \begin{align} \label{eq:riesz-der} \frac{\partial^{\alpha}f}{\partial|x|^{\alpha }} &= - \frac{1}{2 \cos\frac{\alpha\pi}{2} \, \varGamma(1-\alpha)} \frac{\mathrm{d}}{\mathrm{d} x} \int _{-\infty}^{+\infty} \frac{\mathrm{sgn}(x-t)\, f(t)}{|x-t|^{\alpha}} \, \mathrm{d} t \\ &= - \frac{1}{2 \cos\frac{\alpha\pi}{2} \, \varGamma(1-\alpha)} \Biggl[ \frac{\mathrm{d}}{\mathrm{d} x} \int_{-\infty}^{x} \frac{f(t)}{(x-t)^{\alpha}} \, \mathrm{d} t - \frac{\mathrm{d}}{\mathrm {d} x} \int_{x }^{\infty} \frac{f(t)}{(t-x)^{\alpha}} \, \mathrm{d} t \Biggr]. 
\nonumber \end{align} In the same way we define the tempered Riesz derivative as \begin{align*} \frac{\partial^{\alpha, \eta} f}{\partial|x|^{\alpha}} = C_{\alpha , \eta} \Biggl[ \frac{\mathrm{d}}{\mathrm{d} x} \int _{-\infty}^{x} \frac{f(t)}{(x-t)^{\alpha}} \frac{e^{- \eta(x-t)} }{ \varGamma(1-\alpha)}\, \mathrm{d} t - \frac{\mathrm{d}}{\mathrm{d} x} \int_{x }^{\infty} \frac{f(t)}{(t-x)^{\alpha}} \frac{e^{- \eta(t-x)} }{ \varGamma (1-\alpha)} \, \mathrm{d} t \Biggr] \end{align*} where $ C_{\alpha,\eta} $ is a suitable constant which will be defined below. In view of the previous calculations we have that \begin{align} \frac{\partial^{\alpha, \eta} f}{\partial|x|^{\alpha}} &= C_{\alpha, \eta} \Biggl[ \int_{0}^{\infty} \bigl(f(x) - f(x-w) \bigr) \frac{\alpha e^{-\eta w} \mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha+ 1} } \nonumber \\ &\quad + \eta\int_{0}^{\infty} \bigl(f(x) - f(x-w) \bigr) \frac{e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha}} \\ &\quad - \int_{0}^{\infty} \bigl(f(x+w) - f(x) \bigr) \frac{\alpha e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha+ 1} } \nonumber \\ &\quad - \eta\int_{0}^{\infty} \bigl(f(x+w) - f(x) \bigr) \frac{e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha}} \Biggr] \nonumber \\ & = C_{\alpha, \eta} \Biggl[ \int_{0}^{\infty} \bigl( 2 f(x) - f(x-w) - f(x+w) \bigr) \frac{\alpha e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha+ 1} } \nonumber \\ &\quad+ \eta\int_{0}^{\infty} \bigl( 2 f(x) - f(x-w) - f(x+w) \bigr) \frac{ e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha} } \Biggr]. 
\nonumber \end{align} We now evaluate the Fourier transform of the tempered Riesz derivative \begin{align} \label{eq:temp-riesz-fou} \int_{-\infty}^{+\infty} e^{i\gamma x} \frac{\partial^{\alpha, \eta} f}{\partial|x|^{\alpha}} \, \mathrm {d} x ={}& C_{\alpha, \eta} \Biggl\{ \hat{F}(\gamma ) \int_{0}^{\infty} \bigl( 1 - e^{i\gamma w} \bigr) \frac{\alpha e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha+ 1} } \\ & + \eta\hat{F}(\gamma) \int_{0}^{\infty} \bigl( 1 - e^{i\gamma w} \bigr) \frac{ e^{-\eta w}\mathrm{d} w}{\varGamma (1-\alpha) w^{\alpha} } \nonumber \\ & - \hat{F}(\gamma) \int_{0}^{\infty} \bigl( e^{-i\gamma w} -1 \bigr) \frac{\alpha e^{-\eta w}\mathrm{d} w}{\varGamma (1-\alpha) w^{\alpha+ 1} } \nonumber \\ & - \eta\hat{F}(\gamma) \int_{0}^{\infty} \bigl( e^{-i\gamma w} -1 \bigr) \frac{ e^{-\eta w}\mathrm{d} w}{\varGamma (1-\alpha) w^{\alpha} } \Biggr\} \nonumber \\ ={}& C_{\alpha, \eta} \hat{F} ( \gamma) \Biggl\{ 2 \int _{0}^{\infty} (1 - \cos\gamma w) \frac{\alpha e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha+ 1} } \nonumber \\ & + 2 \eta\int_{0}^{\infty} (1 - \cos\gamma w) \frac{ e^{-\eta w}\mathrm{d} w}{\varGamma(1-\alpha) w^{\alpha} } \Biggr\} \nonumber \\ ={}& C_{\alpha, \eta} \hat{F} ( \gamma) \Biggl\{ -2 w^{-\alpha} (1-\cos \gamma w) \frac{e^{-\eta w}}{\varGamma(1-\alpha)} \Big|_{0}^{\infty} \nonumber \\ & - 2 \eta\int_{0}^{\infty} (1 - \cos\gamma w) \frac{e^{-\eta w} \mathrm{d} w}{w^{\alpha}\varGamma(1-\alpha)} \nonumber \\ & + 2 \gamma\int_{0}^{\infty} \frac{e^{-\eta w} \sin\gamma w}{w^{\alpha}\varGamma(1-\alpha)} \, \mathrm{d} w \nonumber \\ & + 2 \eta\int_{0}^{\infty} (1 - \cos\gamma w) \frac{e^{-\eta w}}{w^{\alpha}\varGamma(1-\alpha)} \mathrm{d} w \Biggr\} \nonumber \\ ={}& C_{\alpha, \eta} \hat{F} ( \gamma) 2 |\gamma| \int _{0}^{\infty} \frac{e^{-\eta w} \sin|\gamma| w}{w^{\alpha}\varGamma(1-\alpha)} \, \mathrm{d} w \nonumber \\ ={}& C_{\alpha, \eta} \hat{F} ( \gamma) \frac{ 2| \gamma| }{ ( \eta^{2} + \gamma^{2})^{ \frac{1 -
\alpha}{2}}} \sin\biggl((1- \alpha) \arctan\frac{|\gamma|}{\eta} \biggr). \nonumber \end{align} In the last step we used the following formula (\cite{gradtable}, p.~490, formula 5) \begin{equation*} \int_{0}^{\infty} x^{\mu- 1 } e^{ - \beta x} \sin\delta x \, \mathrm{d} x = \frac{\varGamma(\mu)}{(\beta^{2} + \delta^{2}) ^{ \frac{\mu}{2}}} \, \sin\biggl( \mu\arctan \frac{\delta}{\beta} \biggr) \end{equation*} with $\mathrm{Re } \, \mu> -1$, $\mathrm{Re } \, \beta > |\mathrm{Im } \, \delta|$. \begin{remark} For $ \eta\to0 $ we have that \begin{align*} \lim_{\eta\to0} \sin\biggl((1-\alpha) \arctan\frac{|\gamma|}{\eta} \biggr) = \cos\biggl( \frac{\pi\alpha}{2} \biggr). \end{align*} Therefore \begin{align*} \lim_{\eta\to0} \int_{- \infty}^{+\infty} e^{ i \gamma x} \, \frac{\partial^{\alpha, \eta} f}{\partial |x|^{\alpha}} \, \mathrm{d} x= 2 C_{\alpha, 0} \, | \gamma|^{\alpha}\cos\biggl( \frac{\pi\alpha}{2} \biggr)\, \hat {F}(\gamma) \end{align*} and thus the normalizing constant must be $ C_{\alpha, 0} = - ( 2 \cos \frac{\pi\alpha}{2} ) ^{-1} $. This means that for $ \eta\to0 $ we obtain from \eqref {eq:temp-riesz-fou} the Fourier transform of the Riesz fractional derivative \eqref{eq:riesz-der}. This result shows that symmetric stable processes are governed by the equation \begin{equation*} \frac{\partial u}{\partial t } = \frac{\partial^{\alpha}u}{\partial |x|^{\alpha}} \end{equation*} see, for example, \cite{toaldo14}, where the interplay between stable laws, including subordinators and inverse subordinators, and fractional equations is considered. \end{remark} \begin{remark} For fractional equations of the form \begin{equation} \lleft\{ \begin{aligned} &\frac{\partial u}{\partial t } = \frac{\partial^{\alpha, \eta} u}{\partial|x|^{\alpha}}, & & t> 0, \; x \in\mathbb{R}, \\ & u(x,0) = \delta(x), & & x \in\mathbb{R}, \end{aligned} \rright.
\end{equation} the Fourier transform of the solution reads \begin{align*} & \int_{-\infty}^{+\infty} e^{i \gamma x} u(x,t) \, \mathrm{d} x \\ &= \exp\biggl\{ t \, C_{\alpha, \eta} \frac{2 |\gamma| }{ ( \eta ^{2} + \gamma^{2})^{ \frac{1 - \alpha}{2} } } \sin\biggl( (1-\alpha ) \arctan\frac{|\gamma|}{\eta} \biggr) \biggr\} \\ &= \exp\biggl\{ t \, C_{\alpha, \eta} \frac{2 |\gamma| }{ ( \eta ^{2} + \gamma^{2})^{1 - \frac{ \alpha}{2} } } \biggl[ |\gamma| \cos \biggl( \alpha\arctan\frac{|\gamma|}{\eta} \biggr) - \eta\sin \biggl( \alpha \arctan\frac{|\gamma|}{\eta} \biggr) \biggr] \biggr\} . \end{align*} \end{remark} \section{Fractional equations governing the drifted Brownian motion} The law of the drifted Brownian motion started at $x$ satisfies the equations \begin{equation*} \frac{\partial u}{\partial t } = \frac{\partial^{2} u }{\partial y^{2}} - \mu\frac{\partial u}{\partial y}, \quad t>0, y \in\mathbb{R}, \end{equation*} and \begin{equation*} \frac{\partial u}{\partial t } = \frac{\partial^{2} u }{\partial x^{2}} + \mu\frac{\partial u}{\partial x}, \quad t>0,\; x \in\mathbb{R}. \end{equation*} We show here that the drifted Brownian motion is related to time fractional equations with tempered derivatives. Let us consider the process \begin{equation} B^{\mu}(t) = B(t) + \mu t + x, \quad\mu\in\mathbb{R},\; x \in \mathbb{R}. \end{equation} The law $ u = u(x,y,t) $ of the process $ B^{\mu}$ is given by \begin{equation} \label{key} u ( x, y, t ) = \frac{ e^{ - \frac{(y-x-\mu t)^{2}}{4t} } }{\sqrt{4 \pi t}} = \frac{ e^{ - \frac{ (y-x)^{2}}{ 4t} } }{\sqrt{4 \pi t}} e^{ - \mu^{2} \frac{t}{4} + \frac{\mu}{2} (y-x)}, \quad t>0,\; x,y \in\mathbb{R}. 
\end{equation} \begin{thm} The law of $B^{\mu}$ solves the Cauchy problem \begin{equation} \label{eq:frac-eq-u-thm} \lleft\{ \begin{aligned} &\mathscr{D} _{t}^{\frac{1}{2}, \eta} u + \sqrt{\eta} \, u = a(x,y) \biggl( \frac{\partial u}{\partial x} + \sqrt {\eta} \, u \biggr), \quad t>0,\; x,y \in\mathbb{R}, \\ & \textcolor{white} {\mathscr{D} _{t}^{\frac{1}{2}, \eta} u + \sqrt {\eta} \, u} = - a(x,y) \biggl( \frac{\partial u}{\partial y} - \sqrt{\eta} \, u \biggr), \quad t>0,\; x,y \in\mathbb{R}, \\ &u(x,y,0) = \delta(x-y) \end{aligned} \rright. \end{equation} with \begin{align*} \eta= \frac{\mu^{2} }{4}. \end{align*} \end{thm} \begin{proof} We start by computing the Laplace--Fourier transform of the function \begin{equation*} g(x,y,t) = \frac{e^{ - \frac{ (y-x)^{2}}{4t}}}{\sqrt{4 \pi t }}, \end{equation*} that is, \begin{align*} \hat{\tilde{ g}} (y,\xi, \lambda) &= \int_{0}^{\infty} e^{ - \lambda t} \int_{ -\infty}^{+\infty} e^{i \xi x} g(x,y,t) \, \mathrm{d} x \,\mathrm{d}t \\ &= \int_{0}^{\infty}e^{- \lambda t} e^{i \xi y \,- \,\xi^{2} t} \,\mathrm{d} t \\ &= \frac{e^{i \xi y}}{\lambda+ \xi^{2}} . \end{align*} By using the fact that \begin{align} \tilde{g}(x,y,\lambda) = \frac{e^{-|y-x|\sqrt{\lambda}}}{2 \sqrt {\lambda}} = \lleft\lbrace \begin{array}{@{}ll} \displaystyle\frac{e^{-(y-x)\sqrt{\lambda}}}{2 \sqrt{\lambda}}, \quad y>x,\\[9pt] \displaystyle\frac{e^{-(x-y)\sqrt{\lambda}}}{2 \sqrt{\lambda}}, \quad y \leq x, \end{array} \rright. \label{lapBYUSING} \end{align} we now compute the double transform of $ a(x,y) \frac{ \partial g }{\partial x} $.
\begin{align} \label{eq:fou-lap-der} &\int_{-\infty}^{\infty} e^{ i \xi x} \bigl[ \mathbh{1}_{(-\infty,y]}(x)- \mathbh{1} _{ (y, \infty)}(x) \bigr] \frac{\partial\tilde{g}}{\partial x} (x,y,\lambda) \, \mathrm{d} x \\ &= \frac{1}{2} \Biggl( \int_{ -\infty}^{y} e^{i \xi x} e^{-(y-x)\sqrt{\lambda}} \,\mathrm{d} x + \int_{y}^{\infty }e^{i \xi x} e^{-(x-y)\sqrt{\lambda}} \,\mathrm{d} x \Biggr) \nonumber \\ &= \frac{e^{i\xi y}}{2} \Biggl( \int_{0}^{\infty}e^{- i \xi x} e^{-x \sqrt{\lambda}} \,\mathrm{d} x + \int_{0}^{\infty}e^{ i \xi x} e^{-x \sqrt{\lambda}} \,\mathrm{d} x \Biggr) \nonumber \\ &= \frac{e^{i\xi y}}{2} \biggl( \frac{1}{i \xi+ \sqrt{\lambda}} + \frac{1}{ - i \xi+ \sqrt{\lambda}} \biggr) \nonumber \\ &= \sqrt{\lambda}\frac{e^{i \xi y}}{\lambda+ \xi^{2}} = \sqrt {\lambda}\hat{\tilde{g}} . \nonumber \end{align} This implies, by inverting the Fourier transform, that \begin{equation} \label{eq:inv-fou-g} a(x,y) \frac{\partial\tilde{ g}}{\partial x}= \sqrt{\lambda}\tilde{g}. \end{equation} We recall that \begin{equation} \int_{0}^{\infty} e^{-\lambda t} \mathscr{D} ^{\frac{1}{2}}_{t} g \, \mathrm{d} t = \sqrt{\lambda}\tilde{g}, \end{equation} thus by inverting the Laplace transform in \eqref{eq:inv-fou-g} we obtain \begin{equation} \label{eq:g-eq} \mathscr{D} ^{\frac{1}{2}}_{t} g = a(x,y) \frac{\partial g}{\partial x} \end{equation} and, by considering the same arguments (see \eqref{lapBYUSING}), \begin{align*} \mathscr{D} ^{\frac{1}{2}}_{t} g = -a(x,y) \frac{\partial g}{\partial y}.
\end{align*} Returning to our initial problem, by using \eqref{eq:g-eq} and \eqref {eq:temp-rl-def} we have that \begin{align*} \mathscr{D} _{t}^{\frac{1}{2}, \frac{\mu^{2} }{4}} u &= e^{- \frac{\mu ^{2} }{4} t } \mathscr{D} ^{\frac{1}{2}}_{t} \bigl( e^{ \frac{\mu^{2}t }{4}} u \bigr) - \frac{\mu}{2} u \\ & = e^{+\frac{\mu}{2} (y-x) \,- \,\frac{\mu^{2}}{4} t } \, \mathscr {D} ^{\frac{1}{2}}_{t} g - \frac{\mu}{2} u \\ & = e^{- \frac{\mu^{2}}{4} t} \, e^{\frac{\mu}{2} (y-x) } \, \, a(x,y) \frac{\partial g}{\partial x} - \frac{\mu}{2} u \\ &= a(x,y) \biggl( \frac{\partial u}{\partial x} + \frac{\mu}{2} u \biggr) - \frac{\mu}{2} u \end{align*} and \begin{align*} \mathscr{D} _{t}^{\frac{1}{2}, \frac{\mu^{2} }{4}} u &= e^{- \frac{\mu ^{2} }{4} t } \mathscr{D} ^{\frac{1}{2}}_{t} \bigl( e^{ \frac{\mu^{2}t }{4}} u \bigr) - \frac{\mu}{2} u \\ & = e^{+\frac{\mu}{2} (y-x) \,- \,\frac{\mu^{2}}{4} t } \, \mathscr {D} ^{\frac{1}{2}}_{t} g - \frac{\mu}{2} u \\ & = -e^{- \frac{\mu^{2}}{4} t} \, e^{\frac{\mu}{2} (y-x) } \, \, a(x,y) \frac{\partial g}{\partial y} - \frac{\mu}{2} u \\ &= a(x,y) \biggl( -\frac{\partial u}{\partial y} + \frac{\mu}{2} u \biggr) - \frac{\mu}{2} u. \end{align*} This completes the proof. \end{proof} The drifted Brownian motion has therefore a transition function satisfying a time fractional equation where the fractional derivative is a tempered Riemann--Liouville derivative with parameter $ \eta$ which is related to the drift by the relationship $ \sqrt{\eta}= \frac{\mu}{2} $. \section{Fractional equation governing the folded drifted Brownian motion} We here consider the process \begin{equation} |B(t) + \mu t | + x = |B^{\mu}(t) | + x, \quad x>0. 
\end{equation} This process has distribution \begin{align*} P\bigl(| B(t) + \mu t| + x < y\bigr) ={}& P\bigl(x-y-\mu t < B(t) < y-x-\mu t \bigr) \\ ={}& \int_{x-y-\mu t}^{y-x-\mu t} \frac{e^{ - \frac{w^{2}}{4t }}}{\sqrt {4 \pi t }} \, \mathrm{d} w \end{align*} and therefore its transition function is \begin{align} \label{eq:v-tran-foo} P\bigl(|B ^{\mu}(t)| + x \in\mathrm{d} y\bigr) / \mathrm{d} y &= \frac{ e^{ - \frac{(y-x-\mu t)^{2}}{4t} } }{\sqrt{ 4 \pi t}} + \frac{ e^{ - \frac{(y-x+\mu t)^{2}}{4t} } }{\sqrt{ 4 \pi t}} \\ &= \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt{4 \pi t}} e^{ - \mu^{2} \frac{t}{4} } \bigl[ e^{ - \frac{\mu}{2} (y-x) } + e^{ \frac{\mu}{2} (y-x) } \bigr] \nonumber \\ &= v(x,y,t) \nonumber \end{align} for $ y>x $ and $t>0$. We now prove the following theorem. \begin{thm} The law $ v $ of $ | B^{\mu}(t) | +x $ satisfies the fractional equation \begin{equation} \label{eq:frac-eq-v-thm} \mathscr{D} _{t} ^{ \frac{1}{2} , \eta} v = - \frac{\partial v }{\partial y} + v \, \sqrt{\eta}\tanh\bigl(\sqrt {\eta}( y-x)\bigr) - \frac{\mu}{2} v, \quad y > x > 0, \end{equation} with initial and boundary conditions \begin{align*} v(x,y,0) ={}& \delta( y - x), \\ v(x,x,t) ={}& \frac{e^{-\eta t}}{\sqrt{\pi t}}, \quad t>0, \end{align*} and \begin{align*} \eta= \frac{\mu^{2}}{4}. \end{align*} \end{thm} \begin{proof} From \eqref{eq:v-tran-foo} and \eqref{eq:temp-rl-def} we have that \begin{align*} \mathscr{D} _{t}^{\frac{1}{2} } \bigl( e^{ \frac{\mu^{2} t}{4} } v \bigr) &= 2 \cosh\biggl( \frac{\mu}{2} (y-x) \biggr) \,\, \mathscr{D} _{t}^{ \frac{1}{2} } \biggl( \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt {4 \pi t}} \biggr) \end{align*} Let $E_{\frac{1}{2}}$ be the Mittag-Leffler function of order $1/2$ and $g$ be the function \begin{align*} g(x,y,t) = \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt{4 \pi t}}, \quad y>x>0. 
\end{align*} Since \begin{align*} \int_{0}^{\infty}e^{-\lambda t} \int _{x}^{\infty}e^{-\xi y} g(x,y,t) \, \mathrm{d} y\, \mathrm{d} t ={}& \frac{e^{-\xi x}}{2} \int_{0}^{\infty}e^{-\lambda t} E_{\frac{1}{2}}\bigl(- \xi t^{\frac{1}{2}}\bigr) \, \mathrm{d} t \\ ={}& \frac{e^{-\xi x}}{2}\, \frac{\lambda^{\frac{1}{2}-1}}{\xi+ \lambda^{\frac{1}{2}}} \end{align*} we obtain that \begin{align*} \mathscr{D}^{\frac{1}{2}}_{t} g(x,y,t) = - \frac{\partial g}{\partial y} \quad \text{\textrm{with boundary condition }} g(x,x, t) = \frac{1}{\sqrt {4\pi t}}. \end{align*} Then, for $y>x$, \begin{align*} & \mathscr{D} _{t}^{\frac{1}{2} } \bigl( e^{ \frac{\mu^{2} t}{4} } v \bigr) \\ & = 2 \cosh\biggl( \frac{\mu}{2} (y-x) \biggr) \,\, \biggl( - \frac{\partial}{\partial y} \biggl( \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt{4 \pi t}} \biggr) \biggr) \\ &= -\frac{\partial}{\partial y} \biggl( 2 \cosh\biggl( \frac{\mu}{2} (y-x) \biggr) \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt{4 \pi t}} \biggr) + \frac{\mu }{2} \cdot2 \sinh\biggl( \frac{\mu}{2} (y-x) \biggr) \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt {4 \pi t}} \\ &= -\frac{\partial}{\partial y} \bigl( e^{\frac{\mu^{2}}{4}t } v(x,y,t) \bigr) +\mu\, \sinh \biggl( \frac{\mu}{2} (y-x) \biggr) \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt{4 \pi t}} \end{align*} with boundary condition \begin{align*} e^{\frac{\mu^{2}}{4}t} v(x,x,t)= \frac{2}{\sqrt{4\pi t}} = \frac{1}{\sqrt{\pi t}} . \end{align*} In view of \eqref{eq:temp-rl-def} we obtain that \begin{align*} \mathscr{D} ^{\frac{1}{2}, \eta}_{t} v + \frac{\mu}{2} v & = e^{ - \frac{ \mu^{2}}{4}t } \mathscr{D} _{t}^{\frac{1}{2}} \bigl( e^{ \frac{\mu^{2}}{4}t } v \bigr) \\ &= -\frac{\partial v}{\partial y} + \mu e^{ - \frac{\mu^{2}}{4}t } \sinh\biggl( \frac{\mu}{2} (y-x) \biggr) \frac{e^{ - \frac{(y-x)^{2}}{4t} }}{\sqrt{4 \pi t}} \\ &= -\frac{\partial v}{\partial y} + \frac{\mu}{2} \, \tanh\biggl( \frac{\mu}{2} (y-x) \biggr) v.
\qedhere \end{align*} \end{proof} This result shows that the structure of the governing equation of the process $ |B(t) + \mu t | + x $ is substantially different from that of $ B(t) + \mu t + x $. The difference between \eqref{eq:frac-eq-u-thm} and \eqref{eq:frac-eq-v-thm} lies in the non-constant coefficient $ \tanh\bigl(\frac{\mu}{2} (y-x)\bigr) $, which converges to one as $ |y-x| $ tends to infinity. Thus the two equations emerging in this analysis coincide in the limit $ |x-y| \to\infty$.
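The explicit densities used above lend themselves to quick numerical sanity checks. The following Python sketch (our own illustration, not part of the paper) verifies the Laplace transform identity \eqref{lapBYUSING} against its closed form, and checks that the two expressions for the folded transition density $v$ in \eqref{eq:v-tran-foo} agree pointwise and integrate to one over $y>x$.

```python
import math

def g(x, y, t):
    # Heat kernel with generator d^2/dx^2 (variance 2t), as in the proofs.
    return math.exp(-(y - x) ** 2 / (4.0 * t)) / math.sqrt(4.0 * math.pi * t)

def laplace_in_t(x, y, lam, T=60.0, n=200_000):
    # Trapezoid-rule Laplace transform of t -> g(x, y, t); the integrand
    # vanishes at t = 0+ for y != x and the tail beyond T is negligible.
    h = T / n
    total = 0.0
    for i in range(1, n + 1):
        val = math.exp(-lam * i * h) * g(x, y, i * h)
        total += 0.5 * val if i == n else val
    return h * total

# check of eq. (lapBYUSING): transform equals e^{-|y-x| sqrt(lam)}/(2 sqrt(lam))
for x, y, lam in [(0.0, 1.0, 2.0), (0.5, -1.5, 0.7)]:
    closed = math.exp(-abs(y - x) * math.sqrt(lam)) / (2.0 * math.sqrt(lam))
    assert abs(laplace_in_t(x, y, lam) - closed) < 1e-5

def v(x, y, t, mu):
    # Transition density of |B(t) + mu t| + x for y > x, eq. (eq:v-tran-foo).
    return g(x + mu * t, y, t) + g(x - mu * t, y, t)

def v_factored(x, y, t, mu):
    # Factored form: g e^{-mu^2 t/4} [e^{-mu(y-x)/2} + e^{mu(y-x)/2}].
    z = 0.5 * mu * (y - x)
    return g(x, y, t) * math.exp(-mu ** 2 * t / 4.0) * (math.exp(-z) + math.exp(z))

x, t, mu = 1.0, 0.8, 0.6
for k in range(1, 60):           # the two expressions agree pointwise
    y = x + 0.1 * k
    assert abs(v(x, y, t, mu) - v_factored(x, y, t, mu)) < 1e-12
h, n = 1e-3, 40_000              # v integrates to one over y > x
mass = sum(v(x, x + i * h, t, mu) * (0.5 if i in (0, n) else 1.0)
           for i in range(n + 1)) * h
assert abs(mass - 1.0) < 1e-4
```

The normalization check reflects the fact that the two reflected Gaussians together carry the full mass of $|B^{\mu}(t)|+x$ on $(x,\infty)$.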
\section{Introduction} Quantum walks (QWs)~\cite{kempe2003quantum}, a direct result of the quantum interference of different paths, have been extensively studied in both theory and experiment~\cite{PhysRevA.48.1687, venegas2012quantum, du2003experimental, PhysRevA.65.032310}. QWs have been applied in various fields, from universal quantum computing~\cite{childs2009universal}, efficient quantum algorithms~\cite{farhi1998quantum,shenvi2003quantum,ambainis2007quantum,childs2004spatial,JPhysA.41075303} and energy transfer~\cite{ChemicalTransfer} to topological state detection~\cite{kitagawa2012,xiao2017}. Single-particle QWs have already been implemented in various systems, including ultracold atoms~\cite{karski2009quantum}, ultracold ions~\cite{zahringer2010realization}, photonic waveguides~\cite{PhysRevLett.100.170506} and atomic spin impurities~\cite{fukuhara2013quantum}. Moreover, it has also been demonstrated that single-particle QWs can be implemented via classical waves~\cite{PhysRevA.68.020301}. Beyond single-particle QWs, two-particle QWs have attracted extensive interest in recent years. The non-classical correlations between non-interacting particles, i.e., their bunching and anti-bunching behavior, are found to depend strongly on the quantum statistics \cite{peruzzo2010quantum,sansoni2012two,PhysRevLett.102.253904}. On the other hand, interaction between particles in a lattice is believed to be beneficial for universal quantum computation \cite{childs2013universal}. QWs of two interacting particles have been discussed and implemented \cite{preiss2015strongly,Ahlbrecht2012,PhysRevA.86.011603,PhysRevA.96.043629}. The interaction is found to strongly affect the spatial correlations \cite{PRA.90.062301}. In particular, repulsively or attractively interacting (quasi-)particles can form a bound pair \cite{winkler2006repulsively,fukuhara2013microscopic}.
Therefore, besides the independent QWs of the two particles, there is also the co-walking of the bound pair~\cite{folling2007direct,PRA.90.062301,preiss2015strongly}. Although the QWs of interacting particles have been extensively studied, QWs involving atom-molecule coupling remain largely unexplored. According to the two-channel theory~\cite{PhysRevA.77.021601,PhysRevA.78.023617, PhysRevA.83.031607, Grupp2007}, a pair of atoms can be converted into a molecule. For two bosons in optical lattices, due to the atom-molecule coupling, the energy spectrum includes two isolated bands and a continuum band~\cite{PhysRevA.77.021601, PhysRevA.78.023617, PhysRevA.83.031607, Grupp2007}. The states in the isolated bands are superpositions of an atomic bound state and a molecular state, and are called dressed bound states (DBS's) in what follows. Under specific conditions, the DBS's can be tuned to enter the continuum band and thus lead to a so-called scattering resonance~\cite{PhysRevA.78.023617}. Although several equilibrium properties of hybrid atom-molecule systems have been studied, the QWs in these systems have not yet been explored. In particular, it is intriguing to explore the signature of DBS's via QWs. In this article, by considering a one-dimensional (1D) Bose-Hubbard model with atom-molecule coupling, we study the QWs of two interacting Bose atoms initially occupying the same lattice site. We focus on the interplay among the atom-molecule coupling, the atom-atom interaction and the atom-molecule energy detuning. Without atom-atom interaction, there are two kinds of DBS's supported by the pure atom-molecule coupling. Such an atom-molecule coupling may play the role of an atom-atom interaction and then result in correlated QWs. Due to the atom-molecule energy difference, the atom-atom interaction can be balanced under certain resonant conditions, so that the DBS's are broken into scattering states.
Under strong interactions, the QWs show two light-cones corresponding to the two DBS bands. By using many-body degenerate perturbation theory, we derive effective models for the QWs of DBS's, in which the effective tunneling strengths of DBS's can be tuned by the atom-molecule energy difference. Specifically, the interplay between the tunnelings of atoms and molecules can suppress the nearest-neighbor (NN) tunneling of DBS's. The paper is organized as follows. In Sec.~\ref{two_systems}, we introduce our hybrid atom-molecule system and solve its energy bands. In Sec.~\ref{hybridwalks}, we present the QWs from two atoms occupying the same site. In particular, we discuss how the QWs are affected by the pure atom-molecule coupling~(\ref{couplingwalks}) and by the interplay between atom-atom interaction and atom-molecule coupling~(\ref{intercouplewalks}). In Sec.~\ref{dressedmodel}, we derive effective models for the QWs of DBS's and discuss the effective tunneling of DBS's. Finally, we give a brief summary and discussion of our results. \section{HYBRID ATOM-MOLECULE ENERGY BANDS} \label{two_systems} We consider two interacting Bose atoms in a 1D optical lattice, where the two atoms can be converted into a molecular state via atom-molecule coupling. The system obeys the Hamiltonian, \begin{eqnarray} \hat H= && -\sum \limits_{l=-L}^{L} \left( {{J_a}\hat a_l^\dag {{\hat a}_{l + 1}} + {J_m}\hat m_l^\dag {{\hat m}_{l + 1}} + H.c.} \right) \nonumber \\ && + \frac{U}{2}\mathop \sum \limits_{l=-L}^{L} \hat n_l^a\left( {\hat n_l^a - 1} \right)+g\mathop \sum \limits_{l=-L}^{L} \left( {\hat a_l^\dag \hat a_l^\dag {{\hat m}_l} + H.c.} \right) \nonumber \\ && +\mathop \sum \limits_{l=-L}^{L} \left( {{\varepsilon _a}\hat n_l^a + {\varepsilon _m}\hat n_l^m} \right).
\label{Hamiltonian} \end{eqnarray} Here, $g$ is the on-site atom-molecule coupling strength, $U$ is the on-site background atom-atom interaction strength, $J_a$ ($J_m$) is the atomic (molecular) tunneling strength, $\varepsilon_a$ ($\varepsilon_m$) is the atomic (molecular) on-site energy, the lattice site index $l$ ranges from $-L$ to $L$, the total number of lattice sites is $L_t=2L+1$, and the periodic boundary condition (PBC) is imposed. The bosonic operators $\hat a_l^\dag$ ($\hat m_l^\dag$) and $\hat a_l$ ($\hat m_l$) create and annihilate an atom (molecule) on the $l$-th site, respectively. Compared with the atomic tunneling strength $J_a$, the molecular tunneling strength $J_m$ is much smaller, so it can be neglected~\cite{PhysRevA.83.031607, PRA.71.043604, PhysRevLett.114.195302}. Thus we set $J_m=0$ in our numerical calculations, but still keep it in our analytical calculations. The atom-molecule coupling $g$ can be realized by applying the magnetoassociation \cite{RevModPhys.82.1225} or photoassociation \cite{RevModPhys.71.1,PhysRevLett.80.4402} technique. The on-site energies $\varepsilon_{a,m}$ can be tuned by applying an external magnetic field. The hybrid atom-molecule Hilbert space is spanned by the complete set of orthogonal basis states, \begin{equation} {\cal H^{(\text 2)}} = \left\{ {\left| {{l_1}{l_2}} \right\rangle_a \oplus \left| j\right\rangle_m } \right\}. \label{eigeneqn:0} \end{equation} Here, $\left| j\right\rangle_m = \hat m_{j}^\dag \left| {\bf{0}} \right\rangle$ $(-L \le j \le L)$ denotes the state of one molecule on the $j$-th lattice site, while $\left| {{l_1}{l_2}} \right\rangle_a= (1 + {\delta _{{l_1}{l_2}}})^{-1/2} \hat a_{{l_1}}^\dag \hat a_{{l_2}}^\dag \left| {\bf{0}} \right\rangle$ ($-L\le l_1\le l_2 \le L$) denotes the state of one atom on the $l_1$-th site and one atom on the $l_2$-th site, where $\delta_{l_1l_2}$ is the Kronecker delta.
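To make the model concrete, the following Python sketch (our own illustration, not from the paper; the function name and the relabeling of sites $-L,\dots,L$ to $0,\dots,L_t-1$ are our conventions) builds the Hamiltonian matrix in the normalized basis \eqref{eigeneqn:0} and diagonalizes it. Note the $\sqrt{2}$ matrix elements generated by the normalized double-occupancy states.

```python
import math
import numpy as np

def hybrid_hamiltonian(L_t, Ja, Jm, U, g, eps_a, eps_m):
    # Hamiltonian matrix in the basis of normalized atomic pairs |l1 l2>_a
    # (l1 <= l2) followed by molecular states |j>_m, with PBC (L_t >= 3).
    pairs = [(i, j) for i in range(L_t) for j in range(i, L_t)]
    idx = {p: k for k, p in enumerate(pairs)}
    n_a = len(pairs)
    H = np.zeros((n_a + L_t, n_a + L_t))

    def nrm(p):  # norm of a_{l1}^+ a_{l2}^+ |0>: sqrt(2) on the diagonal
        return math.sqrt(2.0) if p[0] == p[1] else 1.0

    for (l1, l2), k in idx.items():
        H[k, k] += 2.0 * eps_a + (U if l1 == l2 else 0.0)
        # single-atom hops; a doubly occupied site gives a bosonic factor 2
        if l1 == l2:
            moves = [((l1 + d) % L_t, l2, 2.0) for d in (1, -1)]
        else:
            moves = [((l1 + d) % L_t, l2, 1.0) for d in (1, -1)] \
                  + [(l1, (l2 + d) % L_t, 1.0) for d in (1, -1)]
        for m1, m2, amp in moves:
            q = (m1, m2) if m1 <= m2 else (m2, m1)
            H[idx[q], k] += -Ja * amp * nrm(q) / nrm((l1, l2))
    for j in range(L_t):
        H[n_a + j, n_a + j] = eps_m
        for d in (1, -1):
            H[n_a + (j + d) % L_t, n_a + j] += -Jm
        # atom-molecule conversion: <j,j|_a H |j>_m = sqrt(2) g
        H[idx[(j, j)], n_a + j] = H[n_a + j, idx[(j, j)]] = math.sqrt(2.0) * g
    return H

H = hybrid_hamiltonian(8, 1.0, 0.0, 0.0, 0.0, 0.0, 0.7)
assert H.shape == (44, 44) and np.allclose(H, H.T)
# with g = 0 and J_m = 0 the molecular sector decouples: L_t levels at eps_m
assert sum(abs(e - 0.7) < 1e-6 for e in np.linalg.eigvalsh(H)) == 8
```

For $g\neq 0$ the same routine yields the hybridized spectrum, and the molecular weight $P_m$ of each eigenstate can be read off from the last $L_t$ components of the corresponding eigenvector.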
Hence, one can expand the eigenstates as $|\Phi \rangle = {\sum _{{l_1'}\le {l_2'}}}{\phi _{{l_1'l_2'}}}|{l_1'}{l_2'}\rangle_a + {\sum _{j'}}{\varphi _{j'}}|j'\rangle_m $. Thus, the eigenvalue problem $\hat H |\Phi\rangle=E|\Phi\rangle$ is described by the coupled equations \begin{eqnarray} &&\sum _{l_1'\le l_2'}\phi _{l_1'l_2'} {_a}\langle {l_1l_2}|\hat H|{l_1'}{l_2'}\rangle_a + \sum\limits_{j'} {{\varphi _{j'}}{}_a{{\langle {l_1}{l_2}|\hat H|j'\rangle }_m}} = E \phi_{l_1l_2},\nonumber \\ &&\sum _{j'}{\varphi _{j'}}{_m}\langle j|\hat H|j'\rangle_m +\sum\limits_{l{'_1} \le l{'_2}} {{\phi _{l_1'l_2'}}{}_m{{\langle j|\hat H|l{'_1}l{'_2}\rangle }_a}} = E\varphi_{j}. \end{eqnarray} For simplicity, we define $\psi_{l_1'l_2'}={( {1 + \delta _{l_1'l_2'}})^{1/2}} \phi_{l_1'l_2'}$ so that the normalization coefficient is eliminated. After some algebraic calculation, using the commutation relations of the bosonic operators, one can obtain \begin{subequations} \begin{eqnarray} E{\psi _{{l_1},{l_2}}} &=& - {J_a}\left( {{\psi _{{l_1},{l_2} + 1}} + {\psi _{{l_1} + 1,{l_2}}} + {\psi _{{l_1} - 1,{l_2}}} + {\psi _{{l_1},{l_2} - 1}}} \right) \nonumber\\ &+& {\delta _{{l_1},{l_2}}}U{\psi _{{l_1},{l_2}}} + 2{\varepsilon _a}{\psi _{{l_1},{l_2}}} + 2g{\delta _{{l_1},{l_2}}}{\varphi _{l_1}}, \label{eigeneqn:1} \end{eqnarray} \begin{equation} E{\varphi _j} = - {J_m}\left( {{\varphi _{j + 1}} + {\varphi _{j - 1}}} \right) + {\varepsilon _m}{\varphi _j} + g{\psi _{j,j}}. \label{eigeneqn:2} \end{equation} \label{eigeneqn:3} \end{subequations} Obviously, Eq.~\eqref{eigeneqn:1} and Eq.~\eqref{eigeneqn:2} show the hybridization of the atomic and molecular states. To solve these equations, we adopt the ansatz \begin{eqnarray} \psi _{{l_1},{l_2}} &=& C_a{e^{iK_aR_a}}\xi (r), \nonumber \\ {\varphi _j} &=& C_m{e^{iK_mR_m}}.
\label{ansatz} \end{eqnarray} Here, $K_a$, $R_a=(l_1+l_2)/2$ and $r=l_2-l_1$ are respectively the center-of-mass (c.o.m.) quasi-momentum, the c.o.m. position and the relative position of the atoms. Correspondingly, $K_m$ and $R_m=j$ are the molecular quasi-momentum and position, respectively. The coefficients $C_{a}$ and $C_{m}$ are normalization constants. The function $\xi(r)$ is independent of $K_a$ and $R_a$, \begin{equation} \xi (r) = {{C_ + }{e^{ik|r|}} + {C_ - }{e^{ - ik|r|}}}, \label{ansatz:1} \end{equation} where $k$ can be real or complex and $C_{\pm}$ are unknown coefficients. From the physical point of view, the atomic wavefunction $\psi _{{l_1},{l_2}}$ can be expressed as a Bloch-like function with independent c.o.m. and relative-motion parts. Before we go further, let us prove that $K_a=K_m=K$ for the eigenstates. When $l_1 = l_2 = j$ ($R_m=R_a=R$), combining Eq.~\eqref{eigeneqn:2} and Eqs.~\eqref{ansatz}, we have \begin{equation} \frac{E + 2{J_m}\cos \left( {{K_m}} \right) - \varepsilon_m}{g}C_m {e^{i{(K_m-K_a)}R}} = \xi \left( 0 \right){C_a}. \label{KmKa} \end{equation} Because Eq.~\eqref{KmKa} holds for all $R \in [-L,L]$, we have $K_m = K_a$. For simplicity, we denote $K_m = K_a=K$ and restrict it to the first Brillouin zone from now on. Since the PBC requires $\psi_{l_1,l_2+L_t}=\psi _{l_1+L_t,l_2}=\psi_{l_1,l_2}$ and $\varphi _{j+L_t}=\varphi _j$, the c.o.m. quasi-momentum obeys $K=2\pi n/L_t$ with $n=-L,-L+1, \ldots,L$. From Eqs.~\eqref{eigeneqn:3} and \eqref{ansatz}, introducing $\tilde E = E - 2{\varepsilon _a}$ and $\Delta = {\varepsilon _m} - 2{\varepsilon _a}$, one can obtain \begin{equation} \tilde{E}\xi (r) = J_a^K\left[ {\xi (r + 1) + \xi (r - 1)} \right]+ {\delta _{r,0}}U_\mathrm{eff}\,\xi (r), \label{EnergyEqn:1} \end{equation} where $U_\mathrm{eff}=U + 2{g^2}/({\tilde{E} - \Delta-J_m^K})$, $J_a^K = - 2J_a\cos (K/2)$ and $J_m^K=-2J_m\cos (K)$.
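For clarity, we spell out the elimination that produces the energy-dependent term in $U_\mathrm{eff}$. Substituting the ansatz \eqref{ansatz} into Eq.~\eqref{eigeneqn:2} at $l_1=l_2=j$ gives \begin{equation*} \varphi_j = \frac{g\,\psi_{j,j}}{\tilde{E} - \Delta - J_m^K}, \end{equation*} so that the atom-molecule coupling term in Eq.~\eqref{eigeneqn:1} becomes $\bigl[2g^2/(\tilde{E} - \Delta - J_m^K)\bigr]\psi_{j,j}$, which is precisely the energy-dependent part of $U_\mathrm{eff}$.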
Obviously, the atom-molecule coupling contributes an additional energy-dependent term to the effective interaction $U_\mathrm{eff}$. This indicates that the atom-molecule coupling $g$ may play the role of the atom-atom interaction $U$ and therefore DBS's may appear even when the atom-atom interaction is absent. In the case of $\Delta \to \infty $ or $U \to \infty$, Eq.~\eqref{EnergyEqn:1} can be approximated as \begin{equation} \tilde{E}\xi (r) = J_a^K\left[ {\xi (r + 1) + \xi (r - 1)} \right] + {\delta _{r,0}}U\xi (r), \label{EnergyEqn:U0} \end{equation} which reduces to the case of no atom-molecule coupling~\cite{jopB.41.16.161002}. \begin{figure*} \includegraphics[width = \textwidth ]{fig1} \caption{\label{fig:Spectrum} Energy spectrum under the influence of the atom-molecule coupling, the atom-atom interaction and the atom-molecule energy difference. % The circular and triangular dots denote the scattering states and the DBS's, respectively. % The color of each dot represents the proportion of molecular states, which is given by ${P_m} = \sum\nolimits_j {|{\varphi _j}{|^2}} $. % (A1)-(A4): Energy spectrum for different atom-molecule couplings $g=0, 1, 2, 4$ with $U=\Delta=0$. % (B1)-(B4): Energy spectrum for different atom-atom interactions $U/g=0, 0.25, 1, 4$ with $\Delta=0$ and $g=4$. % (C1)-(C4): Energy spectrum for different atom-molecule energy differences $\Delta/g=-4, 1, 2, 4$ with $g=4$, $U=8$. % The other parameters are set as $J_a=1$, $J_m=0$ and $L_t=21$ by default. } \end{figure*} \subsection{Continuum band} The continuum band corresponds to scattering states, for which $k$ is a real number. For a real $k$, substituting Eq.~\eqref{ansatz:1} into Eq.~\eqref{EnergyEqn:1}, we have the eigenenergies \begin{equation} \tilde E = 2J_a^K\cos (k). \label{eigenergy:1} \end{equation} Here, the value of $k$ can be determined by the following procedure.
Substituting Eqs.~\eqref{ansatz:1} and \eqref{eigenergy:1} into Eq.~\eqref{EnergyEqn:1}, one can find that the coefficients $C_{\pm}$ obey \begin{equation} \frac{{{C_ + }}}{{{C_ - }}} = - \frac{{ - {J_a^K}2i\sin k + \left( {U + \frac{{2{g^2}}}{{2{J_a^K}\cos k - \Delta - J_m^K}}} \right)}}{{{J_a^K}2i\sin k + \left( {U + \frac{{2{g^2}}}{{2{J_a^K}\cos k - \Delta - J_m^K}}} \right)}}. \label{Cpm1} \end{equation} Furthermore, according to the PBC, $\xi(r)$ obeys $\xi(r+L_t)=e^{iKL_t/2}\xi(r)$ and therefore the coefficients $C_{\pm}$ also satisfy \begin{equation} \frac{{{C_ + }}}{{{C_ - }}} = - \frac{{e^{iK{L_t}/2} - {e^{ - ik{L_t}}}}}{{e^{iK{L_t}/2} - {e^{ik{L_t}}}}}. \label{Cpm2} \end{equation} Combining Eqs.~\eqref{Cpm1} and \eqref{Cpm2}, one can determine $k$ by solving the equation \begin{equation} \frac{{ {J_a^K}2i\sin k - \left( {U + \frac{{2{g^2}}}{{2{J_a^K}\cos k - \Delta - J_m^K}}} \right)}}{{{J_a^K}2i\sin k + \left( {U + \frac{{2{g^2}}}{{2{J_a^K}\cos k - \Delta - J_m^K}}} \right)}} =e^{iK{L_t}/2} {e^{-ik{L_t}}}. \label{k:analytical} \end{equation} Obviously, the above equation is invariant under the transformation $k \to - k$ and thus $k$ can be restricted to the region $[0,\pi]$. Substituting the values of $k$ into Eq.~\eqref{eigenergy:1}, we obtain the eigenenergies of the scattering states, which are denoted by the circular dots in Fig.~\ref{fig:Spectrum}. Correspondingly, the explicit expression of $\xi(r)$ is given as \begin{equation} \xi (r) \sim e^{iK{L_t}/2}{e^{ - ik{L_t}}}{e^{ik|r|}} + {e^{ - ik|r|}}, \end{equation} which has the same form as in the case of no atom-molecule coupling~\citep{PRA.90.062301,jopB.41.16.161002}. Besides, we calculate the proportion of the molecular state for each eigenstate, \begin{equation} {P_m} = \sum\nolimits_j {|{\varphi _j}{|^2}}, \end{equation} which is denoted by the color in Fig.~\ref{fig:Spectrum}.
Due to the atom-molecule coupling, the scattering states are hybridizations of molecular and atomic states. \subsection{Isolated bands} The isolated bands correspond to states with complex values of $k$. If the atom-molecule coupling is absent, i.e. $g=0$, the atomic and molecular states are decoupled and there appears an isolated band corresponding to the molecular states, see Fig.~\ref{fig:Spectrum} (A1). When $J_m=0$, the isolated molecular band is exactly given by $\tilde{E}=\Delta$. For non-zero atom-molecule coupling $g$, the isolated bands correspond to DBS's, for which $k$ can be written as $k = \beta + i\eta$ (where $\beta$ and $\eta$ are both real numbers). Noting that the wavefunction must remain finite when $r \to \infty $, Eq.~\eqref{ansatz:1} can be rewritten as \begin{equation} \xi (r) = {e^{(i\beta - \eta )|r|}}. \end{equation} For simplicity, we introduce ${e^{i\beta - \eta }} \equiv {\alpha}$, which satisfies ${\alpha} \in \mathbb{C}$ and $0<|\alpha|<1$. Thus $\xi(r)$ can be rewritten as \begin{equation} \xi (r) ={\alpha ^{|r|}}. \label{ansatz:2} \end{equation} This expression indicates that the wavefunctions of the atomic states decay exponentially as the relative distance increases~\citep{jopB.41.16.161002}. Combining Eqs.~\eqref{EnergyEqn:1} and \eqref{ansatz:2}, one can obtain \begin{equation} \tilde{E} = 2J_a^K\alpha + \left( U + \frac{{2{g^2}}}{{\tilde E - \Delta -J_m^K}} \right) \label{BoundBand1} \end{equation} for $r=0$, and \begin{equation} \tilde E = J_a^K({\alpha ^{ - 1}} + \alpha ) \label{BoundBand2} \end{equation} for $r>0$. Here, $\tilde{E}$ and $\alpha$ are unknown parameters. To ensure real eigenenergies $\tilde{E}$, the parameter $\alpha$ must be real as well, so that $\beta = m\pi$ with $m \in \mathbb{N}$. By numerically solving Eqs.~\eqref{BoundBand1} and \eqref{BoundBand2}, we obtain the two isolated bands of DBS's, see the triangular dots in Fig.~\ref{fig:Spectrum}.
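As an illustration (our own sketch, not from the paper), Eqs.~\eqref{BoundBand1} and \eqref{BoundBand2} can be solved numerically by scanning the real parameter $\alpha\in(-1,0)\cup(0,1)$ and bisecting sign changes of the residual. For $K=0$, $J_a=1$, $J_m=U=\Delta=0$ and $g=4$, this reproduces a symmetric pair of DBS levels lying outside the continuum $[-2|J_a^K|,2|J_a^K|]=[-4,4]$.

```python
import math

def dbs_energies(K, Ja, Jm, U, g, Delta):
    # Dressed-bound-state energies at fixed c.o.m. quasi-momentum K:
    # scan alpha in (-1, 0) u (0, 1), locate sign changes of the residual of
    # Eq. (BoundBand1) along the constraint Eq. (BoundBand2), and bisect.
    JaK = -2.0 * Ja * math.cos(K / 2.0)
    JmK = -2.0 * Jm * math.cos(K)

    def energy(a):        # Eq. (BoundBand2): E = J_a^K (alpha + 1/alpha)
        return JaK * (a + 1.0 / a)

    def residual(a):      # Eq. (BoundBand1) rewritten as residual(alpha) = 0
        E = energy(a)
        return E - 2.0 * JaK * a - U - 2.0 * g ** 2 / (E - Delta - JmK)

    roots = []
    for side in (-1.0, 1.0):
        xs = [side * (1e-3 + (1.0 - 2e-3) * i / 2000.0) for i in range(2001)]
        for a0, a1 in zip(xs, xs[1:]):
            f0 = residual(a0)
            if f0 * residual(a1) < 0.0:
                for _ in range(100):
                    am = 0.5 * (a0 + a1)
                    if f0 * residual(am) <= 0.0:
                        a1 = am
                    else:
                        a0, f0 = am, residual(am)
                am = 0.5 * (a0 + a1)
                if abs(residual(am)) < 1e-6:   # discard pole crossings
                    roots.append(am)
    return sorted(energy(a) for a in roots)

Es = dbs_energies(0.0, 1.0, 0.0, 0.0, 4.0, 0.0)
assert len(Es) == 2
assert abs(Es[0] + Es[1]) < 1e-6        # symmetric pair at U = Delta = 0
assert all(abs(E) > 4.0 for E in Es)    # outside the continuum [-4, 4]
```

For these parameters the closed-form solution of the two coupled equations gives $\tilde E \approx \pm 6.402$, which the scan recovers; the symmetric pair reflects the $(\tilde E,\alpha)\to(-\tilde E,-\alpha)$ symmetry discussed below.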
The emergence of two isolated bands is consistent with previous results obtained by other methods~\cite{PhysRevA.83.031607, PhysRevA.77.021601, PhysRevA.78.023617}. From Eqs.~\eqref{BoundBand1} and \eqref{BoundBand2}, when $J_m=U=\Delta=0$, we find that if $(\tilde{E},\alpha)$ is a solution, then $(-\tilde{E}, -\alpha)$ is also a solution. Furthermore, when the atom-molecule coupling strength $g$ increases, the two symmetric isolated bands gradually separate from the continuum band, see Fig.~\ref{fig:Spectrum} (A1)-(A4). \subsection{Interplay among the atom-molecule coupling, the atom-atom interaction and the atom-molecule energy difference} \label{interaction_energy} Below, given $g=4J_a=4$ and $J_m=0$, we show how the atom-atom interaction ($U$) and the atom-molecule energy difference ($\Delta$) affect the energy spectrum. To explore the interplay of $g$ and $U$, we choose $\Delta=0$. For simplicity, we concentrate our discussion on the case of $U>0$; the following discussion can be easily adapted to the case of $U<0$. We present the energy bands for different values of $U/g$ in Fig.~\ref{fig:Spectrum} (B1)-(B4). Clearly, the repulsive interaction gradually lifts the energies of the isolated bands. Under strongly repulsive interaction, the lower isolated band enters the continuum band and results in the resonance between scattering and bound states~\cite{PhysRevA.77.021601,PhysRevA.78.023617}, see Fig.~\ref{fig:Spectrum}~(B4). Around resonance, the states display stronger hybridization than the other states in the continuum band. When $U$ approaches infinity, from Eq.~\eqref{EnergyEqn:1}, the eigenenergies of the lower and upper isolated bands are given by $\tilde{E}=\Delta$ and $\tilde{E}=U$, respectively. In this instance, the lower isolated band purely corresponds to the bare molecule, while the upper isolated band corresponds to the bound atomic pair.
Given finite atom-atom interaction $U/g=2$, we then explore the interplay between $\Delta$ and $g$, as shown in Fig.~\ref{fig:Spectrum} (C1)-(C4). When $\Delta/g \ll -1$, the upper and lower isolated bands are respectively dominated by the bound atomic pairs and the molecular states, see Fig.~\ref{fig:Spectrum} (C1). With increasing $\Delta$, the lower isolated band gradually shifts from below the continuum band to above it, see Fig.~\ref{fig:Spectrum} (C2) and (C3). In particular, for certain values of $\Delta$, the lower isolated band may completely merge into the continuum band, as shown in Fig.~\ref{fig:Spectrum} (C2). When $\Delta/g \gg 1$, the lower isolated band becomes dominated by the bound atomic pair and the upper isolated band tends to be dominated by the molecular states, see Fig.~\ref{fig:Spectrum} (C4). However, if the atom-atom interaction is zero, the lower isolated band never merges into the continuum band. To show this, we plot the eigenenergies for given c.o.m. quasi-momentum $K=0$ as a function of $\Delta$, see Fig.~\ref{fig:FixK0}. In the absence of atom-atom interaction ($U=0$), the lower (upper) isolated band gradually approaches the bottom (top) boundary of the continuum band when $\Delta \rightarrow +\infty$ ($\Delta \rightarrow -\infty$), see Fig.~\ref{fig:FixK0} (a). The two isolated bands of DBS's always sandwich the continuum band. For non-zero atom-atom interaction ($U\ne0$), the energy of the DBS's can merge into the continuum band, causing resonance between the DBS's and the continuum band, see Fig.~\ref{fig:FixK0} (b). In fact, one can prove that for a given $K$, there are two DBS solutions if $U=0$ and $g \ne 0$, while there may be only one solution if $U\ne0$, see Appendix~\ref{apendix1} for more details. To summarize, the atom-atom interaction is essential for the occurrence of the resonance.
\begin{figure} \includegraphics[width=\columnwidth]{fig2} \caption{\label{fig:FixK0} Eigenenergies of the zero quasi-momentum states ($K=0$) versus the energy difference $\Delta$ for different ratios: (a) $U/g=0$ and (b) $U/g=2$. % The color represents the proportion of molecular states, which is given by $P_m=\sum_j {|{\varphi _j}{|^2}}$. % The other parameters are chosen as $J_a=1$, $\ J_m=0$, $g=4$ and $L_t=21$. } \end{figure} \subsection{Resonance between scattering states and DBS's} \label{resonance_condition} In this subsection, we discuss the resonance between scattering states and DBS's and give the resonance conditions. From Eqs.~\eqref{BoundBand1} and \eqref{BoundBand2}, one can obtain the energies of the two isolated bands of DBS's. However, as mentioned in the previous subsection, for non-zero atom-atom interaction $U$ we have proved that there may be only one solution under some specific conditions. For a given $K$, the condition for only one DBS solution is given as \begin{equation} \frac{{2{g^2}}}{U} - 2|J_a^K| - J_m^K < \Delta < \frac{{2{g^2}}}{U} + 2|J_a^K| - J_m^K. \label{ineqn} \end{equation} This indicates that there exists resonance between scattering states and DBS's. If $J_m=0$, from Eq.~\eqref{ineqn}, one finds that there is only one DBS solution for all $K$ when $\Delta = 2g^2/U$, exactly corresponding to the result mentioned above in Fig.~\ref{fig:Spectrum} (C2). This can be understood via the atom-molecule conversion in the limit of $J_a=J_m=0$, see Appendix~\ref{atomic}. By solving the eigen-equation, one obtains three different kinds of eigenstates. One kind corresponds to separated atomic states $|a_{l_1l_2}\rangle=|l_1l_2\rangle_a$ with $l_1 < l_2 $. The other two kinds correspond to the \emph{dressed-molecule states}, which are superpositions of an atomic state and a molecular state, ${|{d_l}\rangle = A_{\sigma}|l{\rangle _m} + B_{\sigma}|l,l{\rangle _a}}$.
Here $A_{\sigma}$ and $B_{\sigma}$ are the coefficients of the lower ($\sigma=1$) and upper ($\sigma=2$) dressed-molecule states. The lower dressed-molecule states and the separated atomic states are degenerate when $\Delta = 2g^2/U$ ($U>0$). Under this condition, a tiny atomic tunneling will immediately turn the separated atomic states into atomic scattering states, which then couple with the dressed-molecule states. That is why the degeneracy condition is identical to the condition under which the lower isolated band merges into the continuum band. \section{Hybrid atom-molecule quantum walks}\label{hybridwalks} In this section, we analyze the QWs in our atom-molecule Hubbard system~\eqref{Hamiltonian}. The initial state is chosen as $|\Psi(0)\rangle=|0,0\rangle_a$, in which both atoms occupy the $0$-th lattice site. The time evolution is governed by the Schr{\"o}dinger equation, \begin{equation} |\Psi(t)\rangle=e^{-i\hat H t}|\Psi(0)\rangle. \end{equation} The atomic and molecular density distributions are respectively defined as \begin{eqnarray} n_{a,l}(t)&=&\langle \Psi(t)|a_l^{\dag}a_l|\Psi(t)\rangle, \nonumber \\ n_{m,l}(t)&=&\langle \Psi(t)|m_l^{\dag}m_l|\Psi(t)\rangle. \end{eqnarray} The spatial correlation of atoms is characterized by a second-order correlation function, \begin{equation} {\Gamma _{{l_1}{l_2}}}(t) = \langle \Psi (t)|{a^\dag _{l_1}}{a^\dag _{l_2}}{a_{{l_2}}}{a_{{l_1}}}|\Psi (t)\rangle, \label{cor_fun} \end{equation} which relates to the probability $P_{l_1,l_2}(t)=|\langle l_1,l_2|\Psi(t)\rangle|^2$ via ${\Gamma _{{l_1}{l_2}}}(t)=(1+\delta_{l_1,l_2})P_{l_1,l_2}(t)$. Thus ${\Gamma _{{l_1}{l_2}}}(t)$ gives the probability of simultaneously detecting one particle at the $l_1$-th site and the other particle at the $l_2$-th site. The diagonal terms $\Gamma_{l_1=l_2}(t)$ describe the correlated QWs of two atoms, in which the two atoms walk as a whole.
The non-diagonal terms $\Gamma_{l_1\ne l_2}(t)$ describe the independent QWs of two atoms. If there is no atom-molecule coupling, the time evolution from the initial state $|0,0\rangle_a$ remains in the subspace of atomic states. Since the molecular subspace is not involved, the QWs of atoms are expected to depend only on $J_a/U$. When the atom-atom interaction is weak, the initial state has large overlaps with the atomic scattering states, so the time evolution is dominated by independent QWs~\citep{PRA.90.062301}. When the atom-atom interaction is strong, the two atoms on the same site form a stable bound state, so the time evolution is dominated by correlated QWs~\citep{jopB.41.16.161002, PhysRevA.83.031607, fukuhara2013microscopic, PRA.90.062301}. Indeed, under strong interaction, two atoms do perform correlated QWs, that is, the correlation function is dominated by the diagonal terms, which recovers the results in Ref.~\citep{PRA.90.062301}. \subsection{QWs with atom-molecule coupling}\label{couplingwalks} Since the atom-molecule coupling may play the role of an effective interaction, to show how the atom-molecule coupling affects the QWs, we turn off the atom-atom interaction ($U=0$) and the atom-molecule energy difference ($\Delta=0$). For comparison, we simulate the QWs with $g=0$ and $g=10$. The tunneling strengths of atoms and molecules are chosen as $J_a=1,J_m=0$. Without atom-molecule coupling, the time evolution of the atomic density distribution and the final correlation function are shown in Fig.~\ref{fig:3rows} (a) and (b). The correlation function is dominated by the off-diagonal terms, which indicates that the two atoms walk independently. In the presence of atom-molecule coupling, there will be atom-molecule Rabi oscillations~\citep{PhysRevLett.99.033201,donley2002atom}.
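In the decoupled limit $g=0$ only the atomic sector matters, and these definitions can be illustrated by evolving two interacting atoms exactly on a small lattice and reading off the pair distribution $P_{l_1,l_2}(t)$. The sketch below uses a first-quantized two-particle grid, eigendecomposition for the propagator, and illustrative parameter values; all of these are assumptions for demonstration only.

```python
import numpy as np

L = 11                     # lattice sites, labeled l = -5..5 (array index 0..L-1)
J, U, t = 1.0, 20.0, 2.0   # atomic hopping, on-site interaction, evolution time

# single-particle hopping matrix (open boundaries)
h1 = np.zeros((L, L))
for i in range(L - 1):
    h1[i, i + 1] = h1[i + 1, i] = -J

# two-particle Hamiltonian: H = h1 (x) 1 + 1 (x) h1 + U * sum_l |l,l><l,l|
I = np.eye(L)
H = np.kron(h1, I) + np.kron(I, h1)
for l in range(L):
    H[l * L + l, l * L + l] += U

# initial state |0,0>_a: both atoms on the central site (index L//2)
psi0 = np.zeros(L * L)
c = L // 2
psi0[c * L + c] = 1.0

# exact evolution via eigendecomposition: |psi(t)> = V e^{-iEt} V^dag |psi(0)>
E, V = np.linalg.eigh(H)
psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))

# joint distribution |psi(l1, l2)|^2; its trace is the co-walking weight
P = np.abs(psi_t.reshape(L, L)) ** 2
diag_weight = np.trace(P)   # probability of finding both atoms on the same site
```

For strong interaction ($U/J_a=20$) the diagonal weight stays close to one, i.e. the two atoms co-walk, in line with the correlated QWs described above; for $U=0$ the off-diagonal weight dominates instead.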
If the atom-molecule coupling is strong enough, the atoms will undergo many conversion cycles before they walk to nearby lattice sites and thus experience a larger effective interaction. In Fig.~\ref{fig:3rows} (c) and (d), we show the atomic density distribution and the final correlation function for $g=10$ and $\Delta=0$. There appear notable stripes in the time propagation of the atomic density distribution, which can be explained by the fast atom-molecule conversion induced by the strong atom-molecule coupling, see Fig.~\ref{fig:3rows} (c). The strongly correlated QWs are also identified by the final correlation functions, which are dominated by the diagonal terms, see Fig.~\ref{fig:3rows} (d). This is because the effective interaction is much larger than the tunneling strength, ${U_\mathrm{eff}} = 2{g^2}/( {\tilde E - \Delta - J_m^K})\gg J_a$. \begin{figure} \includegraphics[width=\columnwidth]{fig3} \caption{\label{fig:3rows} The QWs with: (a-b) zero atom-molecule coupling $g/J_a = 0$, and (c-d) strong atom-molecule coupling $g/J_a=10$. % The left column shows the time evolution of the atomic density distribution and the right column shows the correlation functions of atoms for the final state. % The other parameters are chosen as $J_a=1,J_m=0$, $U=0$, $\Delta=0$ and $L_t=21$.} \end{figure} However, even for strong atom-molecule coupling, correlated QWs disappear when the atom-molecule energy difference $\Delta$ is much larger than the atom-molecule coupling $g$. In such a situation, the large atom-molecule energy difference makes the atom-molecule conversion negligible. Therefore, the atomic and molecular states are nearly decoupled and the two atoms walk independently, since there is negligible effective atom-atom interaction from the atom-molecule conversion. \subsection{QWs near the resonance between scattering states and DBS's} \label{intercouplewalks} Above, we showed that the time evolution is dominated either by independent QWs or by correlated ones.
We wonder whether independent and correlated QWs may coexist. As mentioned in Sec.~\ref{resonance_condition}, under the conditions $g \gg J_{a,m}$ and $U \gg J_{a,m}$, the resonance between scattering states and DBS's takes place around $\Delta \simeq 2g^2/U$. Below we show the coexistence of independent and correlated QWs near the resonance between scattering states and DBS's. Given $J_a=1$, $J_m=0$, $g=10$, and $U=5$, we present the QWs under non-resonant ($\Delta = -40 \ll 2{g^2}/U$) and resonant ($\Delta= 40 = 2g^2/U$) conditions, see Fig.~\ref{fig:resonance}. Compared with Fig.~\ref{fig:3rows} (c), there are no clear stripes in the time propagation of the atomic density distribution for large $\Delta$, see Fig.~\ref{fig:resonance} (a) and (c). This is because the large atom-molecule energy difference suppresses the atom-molecule conversion. Under the non-resonant condition, the diagonal elements of the correlation function dominate after the time evolution, indicating strong co-walking behavior, see Fig.~\ref{fig:resonance} (b). Under the resonant condition, however, in addition to significant off-diagonal elements near the boundaries, there are significant elements on the diagonal line of the final correlation function, see Fig.~\ref{fig:resonance} (d). This indicates the coexistence of independent and correlated QWs, although the propagation speed of the correlated QWs is smaller than that of the independent QWs. Such a process can be explained by our argument in Sec.~\ref{resonance_condition}. \begin{figure} \includegraphics[width=\columnwidth]{fig4} \caption{\label{fig:resonance} The hybrid atom-molecule QWs under: (a-b) the non-resonant condition $\Delta = -40 \ll 2{g^2}/U$, and (c-d) the resonant condition $\Delta= 40 = 2g^2/U$. % The left column shows the time evolution of the atomic density distribution and the right column shows the correlation functions of atoms for the final state. % The other parameters are chosen as $J_a=1$, $J_m=0$, $U=5$, $g=10$ and $L_t=21$.
} \end{figure} \section{Effective single-particle model for strongly correlated quantum walks} \label{dressedmodel} The strongly correlated QWs can be described by a single-particle model. By employing many-body quantum degenerate perturbation theory~\cite{JPC1977Takahashi}, we derive an effective single-particle Hamiltonian for the strongly correlated QWs. To avoid the breakdown of DBS's near the resonance between scattering states and DBS's, we suppose the system is far from resonance, $|\Delta - 2{g^2}/U| \gg 0$. When $J_{a,m} \ll g$ or $J_{a,m} \ll U$, the tunneling term $\hat T=-\sum \left( {{J_a}\hat a_l^\dag {{\hat a}_{l + 1}} + {J_m}\hat m_l^\dag {{\hat m}_{l + 1}} + H.c.} \right)$ in Hamiltonian~\eqref{Hamiltonian} can be treated as a perturbation. Defining the subspace $\mathcal{H}^d_{\sigma} = \left\{ {|d_{\sigma,l}\rangle}, -L\le l \le {L}\right\}$ for DBS's (see Appendix~\ref{atomic}), the projection operator is given by projecting the full Hilbert space $\mathcal{H}^{(2)}$ onto the unperturbed subspace $\mathcal H^d_{\sigma}$, \begin{equation} {{\hat P}_{\sigma}} = \sum\limits_l {|{d_{\sigma,l}}\rangle \langle {d_{\sigma,l}}|} , \end{equation} where $\sigma=\{1,2\}$ denotes the index of the two different kinds of DBS's. Besides, the reduced resolvent, supported on the orthogonal complement of $\mathcal H^d_\sigma$, reads \begin{eqnarray} {\hat S_\sigma } &&= \sum\limits_{E_{{l_1}{l_2}}^{(0)} \ne E_\sigma ^{(0)}} {\frac{1}{{E_\sigma ^{(0)} - E_{{l_1}{l_2}}^{(0)}}}|{l_1}{l_2}\rangle \langle {l_1}{l_2}|} \nonumber \\ && + \sum\limits_{l,\sigma ' \ne \sigma } {{\frac{1}{E_\sigma ^{(0)} - E_{\sigma '}^{(0)}}}|{d_{\sigma ',l}}\rangle \langle {d_{\sigma ',l}}|} .
\end{eqnarray} Therefore, according to perturbation theory~\cite{JPC1977Takahashi} up to second order, we have \begin{eqnarray} {{\hat H}^{{\rm{eff}}}_\sigma} &&= {{\hat h}_{\sigma,0}} + {{\hat h}_{\sigma,1}} + {{\hat h}_{\sigma,2}} \nonumber \\ &&= {E_\sigma}{{\hat P}_\sigma} + {{\hat P}_\sigma}{{\hat T}}{{\hat P}_\sigma} + {{\hat P}_\sigma}{{\hat T}}\hat S_\sigma{{\hat T}}{{\hat P}_\sigma}. \end{eqnarray} Substituting the projection operators and the perturbation term into the above equation, we obtain \begin{eqnarray} {{\hat h}_{\sigma,0}} = &&{E_\sigma}\sum\limits_l {|{d_{\sigma,l}}\rangle \langle {d_{\sigma,l}}|}, \\ {{\hat h}_{\sigma,1}} = &&- {J_m}{A_{\sigma}^2}\sum\limits_l {(|{d_{\sigma,l}}\rangle \langle {d_{\sigma,l + 1}}| + |{d_{\sigma,l + 1}}\rangle \langle {d_{\sigma,l}}|)}, \\ {{\hat h}_{\sigma ,2}} = &&\frac{{2{J_a}^2B_\sigma ^2}}{{E_\sigma ^{(0)} - E_{{l_1},{l_2}}^{(0)}}}\sum\limits_l {\left( {\begin{array}{*{20}{l}} {2|{d_{\sigma ,l}}\rangle \langle {d_{\sigma ,l}}|+} \\ { |{d_{\sigma ,l + 1}}\rangle \langle {d_{\sigma ,l }}| + h.c.} \end{array}} \right)} \nonumber \\ &&+ \frac{{{J_m}^2{A_1^2}{A_2^2}}}{{E_\sigma ^{(0)} - E_{\sigma '}^{(0)}}}\sum\limits_l {\left( {\begin{array}{*{20}{l}} {2|{d_{\sigma ,l}}\rangle \langle {d_{\sigma ,l}}|+}\\ { |{d_{\sigma ,l + 2}}\rangle \langle {d_{\sigma ,l}}| + h.c.} \end{array}} \right)}. \nonumber \\ \end{eqnarray} Here, the coefficients $A_{\sigma}$ and $B_{\sigma}$ are obtained by solving the unperturbed time-independent Schr{\"o}dinger equation (see Appendix~\ref{atomic}).
By introducing the mapping: $|{d_l}\rangle \langle {d_l}| \Leftrightarrow {d_l}^\dag {d_l},|{d_l}\rangle \langle {d_{l + 1}}| \Leftrightarrow {d_l}^\dag {d_{l + 1}},|{d_{l + 1}}\rangle \langle {d_l}| \Leftrightarrow {d_{l + 1}}^\dag {d_l}$, the effective single-particle Hamiltonian can be written as \begin{eqnarray} {{\hat H}^{{\rm{eff}}}_{\sigma}} &&=\sum\limits_l {\left({E_\sigma}+ { \frac{{4{J_a}^2{B_{\sigma}^2}}}{{{E^{(0)}_\sigma} - {E^{(0)}_{l_1,l_2}}}}} +2\frac{{{J_m}^2A_1^2A_2^2}}{{E_\sigma ^{(0)} - E_{\sigma '}^{(0)}}}\right){d_{\sigma,l}}^\dag {d_{\sigma,l}}} \nonumber \\ && + \left( {\frac{{2{J_a}^2{B_{\sigma}^2}}}{E^{(0)}_\sigma-E^{(0)}_{l_1,l_2}} - {J_m}{A_{\sigma}^2}} \right)\sum\limits_l {\left( {{d_{\sigma,l}}^\dag {d_{\sigma,l + 1}} +H.c.} \right)} \nonumber \\ && + \left( {\frac{{{J_m}^2A_1^2A_2^2}}{{E_\sigma ^{(0)} - E_{\sigma '}^{(0)}}}} \right)\sum\limits_l {\left( {{d_{\sigma ,l}}^\dag {d_{\sigma ,l + 2}} + H.c.} \right)}. \label{ptb:effH} \end{eqnarray} In addition to the nearest-neighbor (NN) tunneling, there appears next-nearest-neighbor (NNN) tunneling, which originates from the molecular tunneling. The NNN tunneling contributed by the atomic tunneling arises at third order in perturbation theory; we have neglected it, since it is extremely small compared with the lower-order terms. Since $|E_1^{(0)}-E_2^{(0)}| \gg |E_1^{(0)}-E_0^{(0)}|$ or $|E_2^{(0)}-E_0^{(0)}|$, the NNN tunneling term is generally negligible compared with the other terms.
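The structure of such an effective model, a chain with NN and NNN tunneling, can be cross-checked numerically: a generic single-particle ring with both hopping ranges is diagonalized exactly by plane waves, $E(K)=\epsilon-2J_1\cos K-2J_2\cos 2K$. The sign convention, periodic boundaries, and parameter values below are illustrative assumptions, not the paper's coefficients.

```python
import numpy as np

L = 21
eps, J1, J2 = 0.5, 0.8, 0.1   # on-site energy, NN and NNN tunneling (illustrative)

# single-particle Hamiltonian with NN and NNN hopping, periodic boundary conditions
H = np.zeros((L, L))
for l in range(L):
    H[l, l] = eps
    H[l, (l + 1) % L] = H[(l + 1) % L, l] = -J1
    H[l, (l + 2) % L] = H[(l + 2) % L, l] = -J2

num = np.sort(np.linalg.eigvalsh(H))

# Fourier (plane-wave) result: E(K) = eps - 2 J1 cos K - 2 J2 cos 2K, K = 2 pi n / L
K = 2 * np.pi * np.arange(L) / L
ana = np.sort(eps - 2 * J1 * np.cos(K) - 2 * J2 * np.cos(2 * K))
```

Up to a constant shift, $\cos^2$-type dispersions as in the text are equivalent to this form via $2\cos^2(K/2)=1+\cos K$.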
By implementing a Fourier transformation, the above single-particle Hamiltonian can be easily diagonalized and the eigenenergies are given as \begin{eqnarray} {{E_{\sigma}^{{\rm{eff}}}}} = && \left( {\frac{{8{J_a}^2{B_{\sigma}^2}}}{{{E^{(0)}_\sigma -E^{(0)}_{l_1,l_2}}}} - 4{J_m}{A_{\sigma}^2}} \right){\cos ^2}\left( {\frac{K}{2}} \right) \nonumber \\ && + 4{\frac{{{J_m}^2A_1^2A_2^2}}{{E_\sigma ^{(0)} - E_{\sigma '}^{(0)}}}}\cos^2K+{E_{\sigma}^{(0)}} + {J_m}{A_{\sigma}^2}, \label{ptb:energy} \end{eqnarray} which agree well with those obtained from numerical diagonalization of the original Hamiltonian. In the effective single-particle Hamiltonian~\eqref{ptb:effH}, the effective NN tunneling strength is given as $J^{NN}_{{\rm{eff}},\sigma}= {{{2{J_a}^2{B_{\sigma}^2}}}/{(E^{(0)}_\sigma-E^{(0)}_{l_1,l_2})} - {J_m}{A_{\sigma}^2}} $. Obviously, $J^{NN}_{{\rm{eff}},\sigma}$ also depends on the atom-molecule energy difference $\Delta$. In Fig.~\ref{fig:Jeff}~(a), we plot $J^{NN}_{{\rm{eff}},\sigma}$ as a function of $\Delta$, in which the solid and dashed lines respectively correspond to the upper and lower DBS bands. The parameters are chosen as $J_a=J_m=1$, $g=10$ and $U=0$. The effective tunneling strengths for the upper and lower DBS bands are always different except at the crossing point. The different effective tunneling strengths result in different propagation speeds of the QWs. In Fig.~\ref{fig:Jeff} (b), we show the atomic density distribution for $\Delta=-10$ with the other parameters the same as in Fig.~\ref{fig:Jeff} (a). Since the initial state mostly occupies the two DBS bands, there appear two light cones: the inner and outer light cones correspond to the QWs of DBS's in the upper and lower bands, respectively. \begin{figure} \includegraphics[width=\columnwidth]{fig5} \caption{\label{fig:Jeff} (a) The effective nearest-neighbor tunneling strength $J^{NN}_{\rm{eff}}$ versus the atom-molecule energy difference $\Delta$.
The parameters are chosen as $J_a=J_m=1$, $g=10$ and $U=0$. (b) Time evolution of the molecular density distribution with $\Delta=-10$ and the same parameters as in (a). (c) The energy bands with $\Delta=-19.125$, $L_t=21$ and the other parameters given in (a). The blue-dotted lines and the red dots correspond to the bands of DBS's and atomic scattering states, respectively. (d) Long time evolution of the atomic density distribution with $\Delta=-19.125$ and the other parameters given in (a). } \end{figure} From Fig.~\ref{fig:Jeff} (a), near $\Delta = -19.125$, the effective tunneling strength of the DBS's in the upper band is almost zero, i.e. $J^{NN}_{\rm{eff}} \approx 0$. Given $\Delta = -19.125$, we plot the energy bands in Fig.~\ref{fig:Jeff} (c). The upper DBS band is very flat, which indicates a very small tunneling strength, while the lower DBS band is not. This is consistent with the results of the effective model in Fig.~\ref{fig:Jeff}(a). Since $J^{NN}_{\rm{eff}} \approx 0$, only the NNN tunneling term ($J^{NNN}_{\rm{eff}} \simeq 0.005$) remains in the effective Hamiltonian~\eqref{ptb:effH}. Therefore, the odd sites are never occupied in the QWs starting from the $0$-th site, which is a clear signature of the NNN tunneling, see Fig.~\ref{fig:Jeff} (d). This novel phenomenon can be understood as coherent interference between the atomic and molecular tunneling. As shown in the perturbative calculation, the effective NN tunneling of DBS's proceeds via two paths: one is the second-order atomic tunneling and the other is the first-order molecular tunneling. These two paths give rise to different values of the effective tunneling energy. When the two contributions have opposite signs and equal magnitude, the total effective tunneling is cancelled out.
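The parity argument above, that a walker with pure NNN tunneling launched from the (even) central site never reaches odd sites, can be checked with a minimal single-particle simulation. The lattice size, NNN strength and evolution time below are illustrative assumptions.

```python
import numpy as np

L = 21
J_nnn, t = 0.1, 30.0   # NNN tunneling only (J_nn = 0), long evolution time

# single-particle Hamiltonian that couples only sites of the same parity
H = np.zeros((L, L))
for l in range(L - 2):
    H[l, l + 2] = H[l + 2, l] = -J_nnn

# start on the central site (index L//2 = 10, an even index)
psi0 = np.zeros(L)
psi0[L // 2] = 1.0

# exact evolution via eigendecomposition
E, V = np.linalg.eigh(H)
psi_t = V @ (np.exp(-1j * E * t) * (V.conj().T @ psi0))
density = np.abs(psi_t) ** 2

# odd-indexed sites are never populated: site parity is conserved
odd_weight = density[1::2].sum()
```

Since the Hamiltonian only connects sites two lattice constants apart, the even and odd sublattices decouple exactly, mirroring the checkerboard pattern seen in the long time evolution.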
On the other hand, in the effective Hamiltonian~\eqref{ptb:effH}, the effective tunneling induced by the molecular tunneling is of first order, while that induced by the atomic tunneling is of second order. This means that, as the molecular tunneling may have considerable effects, it should be treated carefully in realistic systems. \section{summary and discussions} In summary, we have studied the energy bands and hybrid atom-molecule QWs of a 1D coupled atom-molecule Hubbard system. We find that the atom-molecule coupling can play the role of an effective atom-atom interaction. Unlike the conventional bound atomic pair, the cooperation of the atom-atom interaction and the atom-molecule coupling induces two kinds of DBS's, which are dressed-molecule states, i.e., superpositions of a bound atomic pair and a bare molecule. Even if the atom-atom interaction is absent, one can observe correlated QWs induced by the atom-molecule coupling. Tuning the parameters (the atom-molecule energy difference $\Delta$, the atom-atom interaction $U$ and the atom-molecule coupling $g$) to satisfy the resonance condition, one of the DBS's enters the continuum band and breaks into atomic scattering states. Thus, one can observe the coexistence of independent and correlated QWs near the resonance between scattering states and DBS's. Away from the resonance condition, we employ many-body quantum degenerate perturbation theory to derive the effective single-particle Hamiltonian for the two DBS bands. The nearest-neighbor tunneling strength in the effective single-particle model can be turned off by tuning the atom-molecule energy difference $\Delta$. Because the two DBS's have different effective tunneling strengths, the QWs show two light cones with different propagation speeds. Moreover, we find that the NN tunneling of one of the DBS's can be suppressed to zero due to the interference between atomic tunneling and molecular tunneling.
In this condition, the NNN tunneling becomes dominant and can be observed in the distribution of the atomic density during the time evolution. Our study not only provides a full description of hybrid atom-molecule QWs with atom-molecule coupling, but also sheds light on two-photon QWs with spontaneous parametric down-conversion (SPDC)~\citep{PhysRevLett.108.023601, ANTONOSYAN201422, PhysRevX.4.031007}. In such a waveguide array, the near-degenerate signal and idler photons correspond to the two identical atoms, the pump photon acts as the molecule, and the SPDC plays the role of the atom-molecule coupling. The difference is that, in the waveguide array, the total energy of the signal and idler photons always equals that of the pump photon, and there is no interaction between photons if Kerr effects are absent. According to our study, the idler and signal photons may have an effective on-site interaction induced by the SPDC even if there are no Kerr effects~\cite{PhysRevLett.113.173601}. Furthermore, the idler and signal photons can form dressed bound states with the pump photon when the SPDC is sufficiently strong. Therefore, there will appear two different kinds of dressed photonic bound states with different effective hopping strengths between waveguides. \begin{acknowledgments} This work was supported by the National Natural Science Foundation of China (NNSFC) under Grant No. 11574405. \end{acknowledgments}
\section{Introduction} In recent years, CNNs have achieved great success on various computer vision tasks. However, due to their huge model size and computational complexity, many CNN models cannot be applied on real-world devices directly. Many previous works focus on how to accelerate CNNs. They can be roughly divided into four categories: quantization (\emph{e.g. } BinaryNet \cite{courbariaux2016binarized}), group convolution based methods (\emph{e.g. } MobileNet \cite{howard2017mobilenets}), pruning (\emph{e.g. } channel pruning \cite{he2017channel}) and mimic (\emph{e.g. } Li \emph{et al.} \cite{li2017mimicking}). Although most of these works can accelerate models without degradation of performance, their speed-up ratios are limited (\emph{e.g. } compressing VGG to VGG-1-4\footnote {In this paper an \emph{-1-n} network means a network whose channel number in every layer is reduced to $\frac{1}{n}$ of the original network's.}). Few methods have been tested on very tiny models (\emph{e.g. } compressing VGG to VGG-1-16). ``Very tiny'' is a relative concept and we define it as a model whose channel number in every layer is less than or equal to $\frac{1}{16}$ of the original model's. Our experiments show that our method outperforms other approaches for very tiny models. \begin{figure}[tb] \centering \includegraphics[ width=0.7\linewidth]{pipeline.png} \caption{The pipeline of our method. First we train a full-precision teacher network. Then we apply quantization to the feature map of the full-precision teacher network, obtaining a quantized network. Finally we use this quantized network as the teacher model to teach a quantized student network. We emphasize that quantization is applied to the feature maps of both the student and the teacher networks during training. } \label{fig:pipeline} \end{figure} As two kinds of model acceleration methods, quantization and mimic are widely used to compress models.
Quantization methods can transfer a full-precision model to a quantized model\footnote{A quantized network in this paper means a network whose output feature map is quantized, not one whose parameters are quantized.} while maintaining similar accuracy. However, using quantization methods to directly speed up models usually requires extra specific implementations (\emph{e.g. } FPGA) and specific instruction sets. Mimic methods can be used on different frameworks and are easy to implement. The essence of these methods is knowledge transfer, in which student networks learn high-level representations from teacher networks. However, when applied on very tiny networks, mimic methods do not work well either. This is also caused by the very limited representation capacity. It is a natural hypothesis that if we use a quantization method to discretize the feature map of the teacher model, the search scope of the student network will shrink and it will be easier to transfer knowledge. Moreover, quantization of the student network can increase the matching ratio on the discrete feature map of the teacher network. In this paper, we propose a new approach utilizing the advantages of quantization and mimic methods to train very tiny networks. Figure \ref{fig:pipeline} illustrates the pipeline. Quantization is applied to the feature maps of the teacher model and the student model. The quantized feature map of the teacher model is used as supervision for the student model. We propose that this quantization operation can facilitate feature map matching between the two networks and make knowledge transfer easier. To summarize, the contributions of this paper are as follows: \begin{itemize} \item We propose an effective algorithm to train very tiny networks. To the best of our knowledge, this is the first work focusing on very tiny networks. \item We utilize quantized feature maps to facilitate knowledge distilling, \emph{i.e. } quantization and mimic.
\item We use a complicated task, object detection, instead of image classification to verify our method. Extensive experiments on various CNNs, frameworks and datasets validate the effectiveness of our approach. \item The method is easy to implement and has no special limitations during training and inference. \end{itemize} \section{Related Work} \subsection{Object Detection} The target of object detection \cite{zeng2017crafting,liu2017recurrent,yan2015object,yan2014fastest,ouyang2015deepid,ouyang2017chained} is to locate and classify the objects in images. Before the success of convolutional neural networks, traditional pattern recognition algorithms (HOG \cite{wang2009hog}, DPM \cite{lowe2004distinctive}, \emph{et al.}) were used on this task. Recently, R-CNN \cite{girshick2014rich} and its variants have become the popular methods for the object detection task. SPP-Net \cite{he2014spatial} and Fast R-CNN \cite{girshick2015fast} reuse feature maps to speed up the R-CNN framework. Beyond the pipeline of Fast R-CNN, Faster R-CNN adds a region proposal network and uses a joint training method. R-FCN utilizes position-sensitive score maps to further reduce computation. YOLO \cite{redmon2016you} and SSD \cite{liu2016ssd} are the typical algorithms among region-free methods. Although the frameworks used in this paper are from the region proposal family, Quantization Mimic can easily be transferred to the YOLO and SSD methods. \subsection{Model Compression and Acceleration} \subsubsection{Group Convolution Based Methods:} The main point of this kind of method is to use group convolution for acceleration. MobileNet \cite{howard2017mobilenets} and GoogleNet Xception \cite{chollet2016xception} utilize depthwise convolutions to extract features and pointwise convolutions to merge features. Beyond these works, Zhang \emph{et al.} \cite{zhang2017interleaved} propose a general group convolution algorithm and show that Xception is a special case of their method.
Group operations block the information flow between different group convolutions; most recently, ShuffleNet \cite{zhang2017shufflenet} introduced a channel shuffle approach to solve this problem. \subsubsection{Quantization:} Quantization methods \cite{rastegari2016xnor,zhou2016dorefa} can reduce the size of models efficiently and provide speed-ups for special implementations. BinaryConnect \cite{courbariaux2015binaryconnect}, binarized neural networks (BNN) \cite{courbariaux2016binarized} and LBCNN \cite{juefei2016local} replace floating-point convolutional filters with binary filters. Furthermore, INQ \cite{zhou2017incremental} introduces a training method to quantize models whose weights are constrained to be either powers of two or zero, without a decrease in performance. Despite these advantages, quantized models can only be sped up on special devices. \subsubsection{Pruning and Sparse Connection:} \cite{alvarez2016learning,wen2016learning} set sparse constraints during training for pruning. \cite{anwar2016compact,li2016pruning} focus on the importance of different filter weights and prune according to the weights' importance. These methods are training-based, which is more costly. Recently, He \emph{et al.} \cite{he2017channel} proposed an inference-time pruning method, using LASSO regression and least-squares reconstruction to select channels in classification and detection tasks. Furthermore, Molchanov \emph{et al.} \cite{molchanov2016pruning} combine transfer learning and greedy criteria-based pruning. We use He \emph{et al.} \cite{he2017channel} and Molchanov \emph{et al.} \cite{molchanov2016pruning} for comparison with our algorithm, and we will show that it is difficult for them to prune a large network (such as VGG) to a very tiny network (such as VGG-1-32). Sparse connection methods \cite{guo2016dynamic,han2016eie,han2015learning,yang2016designing} can be considered parameter-wise pruning, eliminating connections between neurons.
\subsubsection{Mimic:} The principle of mimic is knowledge transfer. As a pioneering work, Knowledge Distillation (KD) \cite{hinton2015distilling} defines soft targets as the outputs of the teacher network. Compared with labels, soft targets provide extra information about inter-class similarities. FitNet \cite{romero2014fitnets} develops knowledge transfer as whole-feature-map mimic learning to compress wide and shallow networks into thin and deep networks. Li \emph{et al.} \cite{li2017mimicking} extend mimic techniques to the object detection task. We use their joint-train version as our baseline. \section{Our Approach} In this section, we first introduce the quantization method and the mimic method we use separately, then combine them and propose the pipeline of the Quantization Mimic algorithm. In \S\ref{sec:analysis} we give a theoretical analysis of our approach. \subsection{Quantization} \cite{courbariaux2015binaryconnect,rastegari2016xnor,zhou2016dorefa} use quantization methods to compress models directly. Unlike them, we use quantization to limit the range of outputs and help mimic learning. In detail, quantizing the teacher network discretizes its output while its accuracy is maintained, and quantizing the output of the student network helps it match the discrete output of the teacher network, which is the goal of mimic learning. In our work, we apply the quantization operation to the last activation layer of the teacher network. INQ \cite{zhou2017incremental} constrains the output to be either zero or a power of two. Different from them, we use uniform quantization for the following reason. R-FCN \cite{dai2016r} and Faster R-CNN \cite{ren2015faster} use the RoI pooling operation, which is a kind of max pooling. The output of the RoI pooling layer is determined by the max response of every block in the RoIs. So it is important to describe the strong responses of feature maps more accurately.
Uniform quantization describes large values better than power-of-two quantization. We define the element-wise quantization function $Q$ as: \begin{equation} Q\left(f\right)= \beta \quad \text{if} \ {\frac{\alpha+\beta}{2}}<f\leq {\frac{\beta+\gamma}{2}} \label{eq:Q_function} \end{equation} where $\alpha$, $\beta$ and $\gamma$ are adjacent entries in the code dictionary $D$: \begin{equation} D=\left\{0,s,2s,3s,\dots \right\} \end{equation} where $s$ is the stride of the uniform quantization. We use the function $Q$ to convert full-precision feature maps to quantized feature maps: \begin{equation} \widetilde{f}=Q\left(f\right) \end{equation} where $f$ is the feature map. Figure \ref{fig:relu} illustrates the quantized ReLU function. \begin{figure}[tb] \centering \setlength{\belowcaptionskip}{-5pt} \includegraphics[ width=0.5\linewidth]{relu.png} \caption{Quantized ReLU function. The new activation function is defined as $\widetilde{f}=Q\left(f\right)$, where $f$ is the original activation function.} \label{fig:relu} \end{figure} For backward propagation, inspired by BNN \cite{courbariaux2016binarized}, we use the full-precision gradient: we find that quantized gradients make it difficult for the student network to converge. \subsection{Mimic} \label{sec:mimic} In popular CNN detectors, the feature map produced by the feature extractor (\emph{e.g. } VGG, Resnet) affects both localization and classification accuracy. We use L2 regression to make the student network learn the feature map of the teacher network, and adopt the joint-train version of Li \emph{et al.} \cite{li2017mimicking} as our backbone. Unlike soft targets \cite{hinton2015distilling}, whose dimension equals the number of categories, the dimension of a feature map depends on the input size and the network architecture, and can reach millions. Simply mimicking the whole feature map makes it difficult for the student network to converge.
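As a concrete reference, the uniform quantization $Q$ of Equation \ref{eq:Q_function} amounts to snapping each activation to the nearest code in $D=\{0,s,2s,\dots\}$. The following is a minimal NumPy sketch (function names are illustrative, not from any released implementation):

```python
import numpy as np

def quantize_uniform(f, stride=1.0):
    """Sketch of the uniform quantization Q of Eq. (1): snap every
    element to the nearest code in D = {0, s, 2s, 3s, ...}."""
    return stride * np.round(np.asarray(f, dtype=float) / stride)

def quantized_relu(x, stride=1.0):
    """Quantized ReLU: clip negatives to 0, then snap to the dictionary.
    In backward propagation the paper keeps the full-precision gradient
    (a straight-through-style estimator), so only the forward pass is
    quantized."""
    return quantize_uniform(np.maximum(np.asarray(x, dtype=float), 0.0), stride)

print(quantized_relu([-0.4, 0.3, 1.6, 2.2], stride=1.0))  # -> [0. 0. 2. 2.]
```

With stride 1 this reproduces the staircase activation of Figure \ref{fig:relu}.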
Faster R-CNN \cite{ren2015faster} and R-FCN \cite{dai2016r} are region-based detectors and both use the RoI pooling operation, so the regions of interest play a more important role than other regions. We therefore apply mimic learning to the regions of interest on the student's and teacher's feature maps. The whole loss function of mimic learning is as follows: \begin{equation} L=L_{cls}^{r}+L_{reg}^{r}+L_{cls}^{d}+L_{reg}^{d}+\lambda L_{m} \end{equation} \begin{equation} L_{m}={\frac{1}{2N}}\sum_{i}\left\| f_{t}^{i}-r\left(f_{s}^{i}\right) \right\|_{2}^{2} \end{equation} where $L_{cls}^{r}$, $L_{reg}^{r}$ are the loss functions of the region proposal network \cite{girshick2015fast} and $L_{cls}^{d}$, $L_{reg}^{d}$ are those of the R-FCN or Faster R-CNN detector. We define $L_{m}$ as the mimic loss, $\lambda$ is its loss weight and $N$ is the number of region proposals. $f_{t}^{i}$ and $f_{s}^{i}$ denote the $i$th region proposal on the teacher's and student's feature maps, and the function $r$ resizes the student's feature map to the size of the teacher's. Mimic learning is applied to the last layer of the feature extractor network. Although RoI mimic learning reduces the dimension of the feature maps and helps the student network converge, a very tiny network is sensitive to the mimic loss weight $\lambda$. If $\lambda$ is small, the effectiveness of mimic learning is weakened. In contrast, a large $\lambda$ also brings bad results: due to the poor learning capacity of a very tiny network, a large $\lambda$ causes it to focus on learning the teacher network's feature map at the beginning of training and to ignore the other losses. We name this phenomenon `gradient focus' and experiment with $\lambda$ set to 0.1, 1 and 10. \subsection{Quantization Mimic} The pipeline of our algorithm is as follows: first we train a full-precision teacher network. Then we use the function $Q$ to compress the full-precision teacher network into a quantized network.
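The RoI mimic loss $L_m$ defined above can be sketched in NumPy as follows (a minimal sketch: `r` stands for the transfer function $r(\cdot)$, taken as the identity here, whereas in practice it is a learned resizing layer; all names are illustrative):

```python
import numpy as np

def mimic_loss(teacher_rois, student_rois, r=None):
    """Sketch of L_m = (1/2N) * sum_i || f_t^i - r(f_s^i) ||_2^2.
    teacher_rois / student_rois: lists of N arrays, one per region
    proposal, holding the RoI features sampled from each feature map.
    r: function mapping student features to the teacher's size
    (identity by default)."""
    r = r or (lambda x: x)
    n = len(teacher_rois)
    total = sum(np.sum((ft - r(fs)) ** 2)
                for ft, fs in zip(teacher_rois, student_rois))
    return total / (2.0 * n)

ft = [np.array([1.0, 2.0])]
fs = [np.array([0.0, 0.0])]
print(mimic_loss(ft, fs))  # -> 2.5
```

In training this term is added to the detection losses with weight $\lambda$.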
To obtain a high-performance compressed model, we finetune the quantized network from the full-precision weights. Finally, we use the quantized teacher network to teach the student network with the mimic loss as supervision; during training, we quantize the feature maps of both the teacher and the student network. Figure \ref{fig:introduction} illustrates our method. \begin{figure}[tb] \centering \includegraphics[width=0.6\linewidth ]{introduction.png} \caption{The effect of the quantization operation. We use a quantized teacher network to guide a quantized student network. Quantizing the teacher network discretizes its feature maps, converting a continuous high-dimension space into a discrete one; for the student network, quantization helps its low-dimension manifold match the discrete high-dimension feature map. In this way, mimic learning becomes easier.} \label{fig:introduction} \end{figure} Because of the quantization operation, the mimic loss $L_{m}$ is redefined as: \begin{small} \begin{equation} L_{m}={\frac{1}{2N}}\sum_{i}\left\| Q\left(f_{t}^{i}\right)-Q\left(r\left(f_{s}^{i}\right)\right) \right\|_{2}^{2} \end{equation} \end{small} where the quantization function $Q$ is defined in Equation \ref{eq:Q_function}. \subsection{Analysis} \label{sec:analysis} We now show that quantizing both the teacher and student networks facilitates feature-map matching between them and helps the student network learn better; Figure \ref{fig:introduction} shows the effect of the quantization operation. Let $f_{t}^{n}$ be the feature map of the full-precision teacher network for input $I_{n}$, with width, height and channel numbers $W_{t}^{n}$, $H_{t}^{n}$ and $C_{t}^{n}$. We flatten $f_{t}^{n}$ into a column vector $y_{n}$ of dimension $W_{t}^{n}H_{t}^{n}C_{t}^{n}$.
The target of mimic learning is to obtain an approximate solution of the equation: \begin{equation} Y=w_{s}I \end{equation} \begin{equation} Y=\left[y_{1},y_{2},...,y_{n}\right] \end{equation} \begin{equation} I=\left[I_{1},I_{2},...,I_{n}\right] \end{equation} where $w_{s}$ denotes the weights of the student network. However, due to the high dimensionality of $y_{n}$ and the large number of images, the rank of $Y$ can be very high, whereas very tiny networks have few parameters and the rank of $w_{s}$ is low. It is therefore difficult for a very tiny student network to mimic high-dimension feature maps directly. With Quantization Mimic the target becomes: \begin{equation} Q\left(Y\right)=Q\left(w_{s}I\right) \end{equation} where $Q$ is the quantization function. The quantization operation on the output of the teacher network discretizes its feature maps. Furthermore, because the range of the elements in the feature maps is bounded, the value of every entry in the matrix $Q\left(Y\right)$ is discrete and finite. For example, if the range of the elements in $f_{t}^{n}$ is $\left[0,40\right]$ and the stride of the uniform quantization is 8, the possible values of the entries in $Q\left(Y\right)$ are drawn from $\left\{0,8,16,24,32,40\right\}$. In this way, we convert a continuous high-dimension space into a discrete one. The quantization on the student network makes it easier to match $Q\left(f_{t}^{n}\right)$: every axis of the student's target space is partitioned by the entries of the code dictionary, so the whole space is partitioned into high-dimension cubes. For simplicity, assume the dimension of the target space $\phi$ is 3, \emph{i.e. }, the dimension of $y_{n}$ is 3, and the code dictionary is $\left\{1,3\right\}$. Because of the quantization operation, this 3-dimension space is partitioned into 8 cubes (see Figure~\ref{fig:cube}). If a vector $v$ lies in cube $c$, then after the quantization operation it becomes the center of cube $c$.
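This cube-center property can be checked numerically with nearest-code quantization over an explicit dictionary (a sketch under the paper's toy setting; the function name is illustrative):

```python
import numpy as np

def quantize_to_codes(v, codes):
    """Nearest-code quantization over an explicit code dictionary.
    With codes {1, 3} in 3-D, the space splits into 8 cubes and every
    vector inside a cube is snapped to that cube's center."""
    codes = np.asarray(codes, dtype=float)
    v = np.asarray(v, dtype=float)
    nearest = np.argmin(np.abs(v[:, None] - codes[None, :]), axis=1)
    return codes[nearest]

v = np.array([1.2, 2.2, 1.8])
print(quantize_to_codes(v, [1, 3]))  # -> [1. 3. 1.]
```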
For example, for $v=\left[1.2,2.2,1.8\right]^\mathrm{T}$ we have $Q\left(v\right)=\left[1,3,1\right]^ \mathrm{T}$, and $\left[1,3,1\right]^ \mathrm{T}$ is the center of a cube. We suppose that the feature maps of the student network constitute a low-dimension manifold. The goal of mimic learning is to use this manifold to fit all 8 cube centers, \emph{i.e. }, we want these 8 centers to lie on the manifold. However, after introducing quantization on the student network, if the manifold intersects a cube it can reach the center of that cube. Thus, instead of matching all the centers, we only need the manifold to intersect the 8 cubes, which weakens the matching conditions. In this way there are more suitable manifolds, which promotes feature-map matching between the two networks. Experiments in \S\ref{ablation_quantization} show that our approach remains effective in the high-dimension case. Figure \ref{fig:cube} illustrates a manifold in 3-dimension space which intersects all the cubes. \begin{figure}[tb] \setlength{\belowcaptionskip}{-5pt} \centering \includegraphics[ width=0.46\linewidth]{cube.png} \caption{A manifold in 3-dimension space. The manifold intersects all 8 cubes. The points '*' represent the centers of the cubes, which are the vectors after the quantization operation.} \label{fig:cube} \end{figure} \subsection{Implementation Details} We train the networks with Caffe \cite{jia2014caffe} using C++ on 8 Nvidia Titan X Pascal GPUs, using the stochastic gradient descent (SGD) algorithm with weight decay 0.0005 and momentum 0.9. We set the uniform quantization stride to 1 for all experiments. \paragraph{VGG with R-FCN:} In this experiment we rescale the images such that their shorter side is 600 and use the original images for testing. We use gray images as input. The learning rate is 0.001 for the first 50K iterations and 0.0001 for the next 30K iterations. The teacher network is VGG-1-4 with R-FCN and we set the mimic loss weight $\lambda$ to 1.
For the RPN anchors, we use one aspect ratio and 4 scales with box areas of $4^{2}$, $8^{2}$, $16^{2}$ and $32^{2}$; 2000 RoIs are used to sample the features on the feature maps of the teacher and student networks. The RoI output size of the R-FCN detector is set to $3\times3$. We utilize the OHEM \cite{shrivastava2016training} algorithm to help training. \paragraph{Resnet with Faster R-CNN:} We rescale all the images such that the shorter side is 600 for both training and testing, and train for 40K iterations in total. The learning rate is 0.001 for the first 30K iterations and 0.001 for the last 10K iterations. We set $\lambda$ to 0.1, 1 and 10 for the Resnet experiments respectively. For the RPN anchors, we use 2 aspect ratios (2:1, 3:1) and 3 scales with box areas of $4^{2}$, $8^{2}$ and $16^{2}$; 128 RoIs are used to sample the features on the feature maps of the teacher and student networks. The RoI output size of the Faster R-CNN detector is set to $7\times7$. \section{Experiments} To demonstrate the generalization ability of our method, we evaluate our approach with different frameworks on different datasets. In detail, we use VGG with R-FCN and Resnet with Faster R-CNN as our backbones, and report results on WIDER FACE \cite{yang2016wider} and Pascal VOC \cite{everingham2010pascal}. \subsection{Experiments on WIDER FACE Dataset} The WIDER FACE dataset \cite{yang2016wider} contains about 32K images with 394K annotated faces whose sizes vary greatly. The validation and test sets are divided into \emph{easy}, \emph{medium} and \emph{hard} subsets. We find that VGG and VGG-1-4 have similar performance on the WIDER FACE dataset (see Table \ref{tab:vgg_self}), so we use VGG-1-4 with the R-FCN detector as our teacher network (large model). To show the superiority of our algorithm, VGG-1-32 with the R-FCN detector is selected as the student network (small model). Table \ref{tab:speed_vgg} illustrates the speed and size of our very tiny student model compared with the large models.
It has an extremely small size and fast speed. \begin{table}[h] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \caption{ Comparison between VGG, VGG-1-4 and VGG-1-32 with the R-FCN detector in terms of speed and size. The size is calculated theoretically. VGG-1-32 with R-FCN has a tiny size and amazing speed, so it can be applied on embedded devices. Tested on a Titan X GPU with a single image whose longer side is resized to 1024.} \begin{tabular}{c|c|c} \hline Method & Speed & Size \\ \hline \hline \tabincell{c}{VGG \\ with R-FCN} & 103.6ms & 79.8M \\ \hline \tabincell{c}{VGG-1-4\\ with R-FCN} & 30.2ms & 5.04M \\ \hline \tabincell{c}{VGG-1-32 \\with R-FCN} & \bd{9.6ms} & \bd{0.132M} \\ \hline \end{tabular} \label{tab:speed_vgg} \end{table} \subsubsection{Main Results} We implement our algorithm on VGG-1-32 with the R-FCN detector. We compare our method with Li \emph{et al.} \cite{li2017mimicking}, He \emph{et al.} \cite{he2017channel} and group-convolution-based accelerating methods, including Depthwise Convolution and Group Convolution. The results are shown in Table \ref{tab:vgg_other_method} (we set the input to $1000\times600$ and compute the complexity). For a fair comparison, we use the same implementation details for all experiments. We incorporate Depthwise Convolution and Group Convolution into the VGG-1-32 structure, guaranteeing a complexity similar to the original network; for example, we extend the channel number $c$ of every convolution layer to $\lceil\sqrt{3}c\rceil$ when setting the group number to 3. We also compare with the pruning methods \cite{he2017channel} and \cite{molchanov2016pruning}. The pruning ratio is set to 8, which means the model obtained after pruning has the same size as VGG-1-32. \begin{table}[tb] \tablestyle{4.0pt}{1.2} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \caption{ Comparison with other methods. The results show that our method outperforms the others (higher is better).
Group-convolution-based approaches (Depthwise Convolution and Group Convolution) do not work well on the very tiny model. Quantization Mimic also outperforms Li \emph{et al.}, who only use mimic learning.} \begin{tabular}{c|c|p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \hline solution & \tabincell{c}{complexity\\(MFLOPS)}&easy & medium & hard \\ \hline \hline scratch & 227 &71.3 & 55.4 & 23.8\\ \hline Depthwise Convolution & 232 & 69.1 & 51.1 & 21.6 \\ \hline Group Convolution(group 2) & 286 & 67.8 & 51.9 & 22.4 \\ \hline Group Convolution(group 3) & 273 & 65.8 & 50.8 & 22.1 \\ \hline He \emph{et al.} \cite{he2017channel} & 227 & 68.0 & 50.7 & 22.1 \\ \hline Molchanov \emph{et al.} \cite{molchanov2016pruning} & 227 & 73.2 &58.2 & 25.2\\ \hline Li \emph{et al.} \cite{li2017mimicking}(only mimic) & 227 & 71.9 &58.2 & 25.6\\ \hline Quantization Mimic & 227 & \bd{73.9} & \bd{62.1} & \bd{27.6} \\ \hline \end{tabular} \label{tab:vgg_other_method} \end{table} The results demonstrate that our algorithm outperforms the other methods. We find that group-convolution-based methods are not suitable for very tiny networks, mainly because very tiny networks usually have small channel numbers and using group convolutions blocks the information flow. Compared with the pruning methods \cite{molchanov2016pruning,he2017channel}, Quantization Mimic also works better. We note that pruning methods can obtain good results on large models (\emph{e.g. }, VGG and Resnet); however, none of these works try to prune a network down to $\frac{1}{16}$ of its original size. Compared with the mimic method \cite{li2017mimicking}, Quantization Mimic outperforms it by 2.0 points, 3.9 points and 1.9 points on the \emph{easy}, \emph{medium} and \emph{hard} subsets. We also find that the quantized teacher network performs better than the full-precision teacher network. Ablation experiments are conducted to diagnose how Quantization Mimic brings this improvement.
Table \ref{tab:vgg_self} further shows the effectiveness of our approach. Our method increases the AP of the very tiny model by 2.6 points, 6.7 points and 3.7 points on the \emph{easy}, \emph{medium} and \emph{hard} subsets respectively. On the \emph{medium} and \emph{hard} subsets, the small model even achieves results comparable with the large model. \begin{table}[tb] \tablestyle{4.0pt}{1.2} \centering \caption{ Comparison between VGG and VGG-1-4 on the WIDER FACE dataset. VGG has abundant structure and performs similarly to VGG-1-4, so we choose VGG-1-4 as the teacher model. } \begin{tabular}{c|c|p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \hline Model & solution & easy & medium & hard \\ \hline \hline VGG & full-precision & 83.9 & 61.0 & 26.8\\ \hline \multirow{2}*{VGG-1-4} & full-precision & 82.4 & 62.5 & 26.3 \\ \cline{2-5} ~ & quantized & 83.7 & 65.0 & 27.4 \\ \hline \multirow{2}*{VGG-1-32} & scratch & 71.3 & 55.4 & 23.8 \\ \cline{2-5} ~ & Quantization Mimic & 73.9 & 62.1 & 27.6 \\ \hline \end{tabular} \label{tab:vgg_self} \end{table} \subsubsection{Ablation Study on Quantization Operation} \label{ablation_quantization} To verify the effectiveness of the quantization operation, we conduct several experiments. As demonstrated in Table \ref{tab:vgg_ablation_quantization}, the performance of the teacher network directly impacts the performance of the student network. Moreover, the quantization operation helps mimic learning and improves the performance of the student network: for the same quantized teacher network, applying the quantization operation to the student network increases the AP by 0.9 points, 2.8 points and 2.0 points on the three subsets. \begin{table}[tb] \tablestyle{4.0pt}{1.2} \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \caption{Quantization \emph{vs. } Nonquantization: The ablation study shows that the performance of the student network depends on the performance of the teacher network.
The results also suggest that the quantization method does help mimic learning.} \begin{tabular}{c|c|p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \hline \tabincell{c}{ teacher \\ quantization?}& \tabincell{c}{ student \\ quantization?} & easy & medium & hard \\ \hline \hline & &71.9 & 58.2 & 25.6 \\ \hline \checkmark & & 73.0 & 59.3 &25.6 \\ \hline \checkmark &\checkmark & \bd{73.9} & \bd{62.1} & \bd{27.6} \\ \hline \end{tabular} \label{tab:vgg_ablation_quantization} \end{table} We notice that the quantization operation has a regularization effect on the network. To exclude the possibility that it is this regularization that brings the improvement in performance, we also run experiments with and without quantization on the student network alone. In Table \ref{tab:vgg_ablation_qua}, we find that quantization alone has no influence on the performance, \emph{i.e. }, the improvement comes from Quantization Mimic. \begin{table}[h] \tablestyle{4.0pt}{1.2} \centering \caption{ The influence of quantization alone on small networks. The results suggest that quantization by itself does not bring improvement.} \begin{tabular}{c|c|p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \hline Model & quantization? & easy & medium & hard \\ \hline \hline \multirow{2}*{VGG-1-32}& \checkmark & 71.9 & 55.2 & 23.7 \\ \cline{2-5} ~ & & 71.3 & 55.4 & 23.8 \\ \hline \end{tabular} \label{tab:vgg_ablation_qua} \end{table} To further show that the quantization operation helps the student network learn better, we examine the matching ratio of each RoI. In \S\ref{sec:analysis} we showed that the quantization operation promotes feature-map matching between the two networks, and in \S\ref{sec:mimic} we introduced our RoI-based mimic learning. We therefore consider the matching ratio of each RoI, \emph{i.e. }, the percentage of elements in a RoI whose distance between the two feature maps is smaller than a threshold.
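This matching ratio can be sketched as follows (a minimal NumPy sketch; the function name and the threshold default are illustrative):

```python
import numpy as np

def matching_ratio(f_t, f_s, threshold=0.3):
    """Fraction of RoI elements whose teacher-student distance
    |f_t^i - f_s^i| falls below the threshold (0.3 in this paper)."""
    f_t = np.asarray(f_t, dtype=float)
    f_s = np.asarray(f_s, dtype=float)
    return float(np.mean(np.abs(f_t - f_s) < threshold))

ft = np.array([1.0, 2.0, 3.0, 4.0])
fs = np.array([1.1, 2.5, 3.05, 4.4])
print(matching_ratio(ft, fs))  # -> 0.5
```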
We define the distance between the $i$th entries of the two feature maps as $|f_t^{i}-f_s^{i}|$, where $f_t^{i}$ and $f_s^{i}$ are the $i$th elements of the teacher and student feature maps. If this distance is smaller than a threshold (set to 0.3 in this paper), the two entries match. We evaluate on the validation set of WIDER FACE and compare the results between the full-precision and quantized networks. Figure \ref{fig:matching} demonstrates the results: the quantization operation increases the matching ratio of the RoIs and promotes the feature-map matching process, and thus helps mimic learning. \begin{figure}[tb] \centering \setlength{\belowcaptionskip}{-10pt} \includegraphics[ width=0.48\linewidth]{scores_bar.png} \caption{Histogram of matching ratios. The plot suggests that using the quantization operation on both the teacher and student networks helps the student network's feature maps better match the teacher network's. The horizontal axis represents bins of matching ratio, \emph{i.e. }, the percentage of matched entries in a RoI; the vertical axis represents the frequency of RoIs within each bin. } \label{fig:matching} \end{figure} \subsubsection{Ablation Study on Quantization Method} Different quantization methods bring different effects. The quantization method we use in our work is uniform quantization; another popular method is power-of-2 quantization, which constrains the output to be either zero or a power of 2. Table \ref{tab:vgg_ablation_uniform} compares uniform quantization and power-of-2 quantization. Teacher networks using different quantization methods have similar performance; however, the student network using uniform quantization is much better than the one using power-of-2 quantization.
We think this is probably because our mimic learning is based on RoIs, where strong responses are more important, so large values should be described more accurately. The power-of-2 quantization method describes small numbers (\emph{e.g. } numbers less than 1) accurately but large numbers only roughly. Thus, uniform quantization is more reasonable and brings better results. \begin{table}[tb] \tablestyle{4.0pt}{1.2} \centering \caption{Uniform Quantization \emph{vs. } Power of 2 Quantization: Using uniform quantization as the quantization method yields better results than using power of 2 quantization. } \begin{tabular}{c|c|p{0.8cm}<{\centering}p{0.8cm}<{\centering}p{0.8cm}<{\centering}} \hline Model & Quantization method & easy & medium & hard \\ \hline \hline VGG-1-4 & power of 2 & 83.9 & 64.8 & 27.8 \\ \cline{2-5} (teacher) & uniform(stride:1) & 83.7 & 65.0 & 27.4 \\ \hline VGG-1-32 & power of 2 & 73.0 & 59.5 & 26.6 \\ \cline{2-5} (student) & uniform(stride:1) & 73.9 & 62.1 & 27.6 \\ \hline \end{tabular} \label{tab:vgg_ablation_uniform} \end{table} \subsection{Experiments on Pascal VOC Dataset} We also carry out experiments on the more complicated common object detection task. In this section we implement our approach on Resnet18 with the Faster R-CNN detector for the Pascal VOC object detection benchmark \cite{everingham2010pascal}; the experiments show that Quantization Mimic extends to more complicated tasks. Following \cite{ren2015faster}, we use the Pascal VOC 2007 test set for testing and the trainval images of VOC 2007 and VOC 2012 for training (07+12). The hyperparameters of Faster R-CNN are the same as in \cite{ren2015faster}. Mean Average Precision (mAP) is used as the criterion to evaluate the performance of the models. We use Resnet18 with the Faster R-CNN framework as the teacher network, and Resnet18-1-16 with the Faster R-CNN framework is selected as the student network accordingly.
We aim at improving the performance of the student network using the Quantization Mimic method. \subsubsection{Main Results} First we compare the model trained with the Quantization Mimic method against the model trained from scratch. Because of the poor learning ability of very tiny models, it is difficult to train them on complicated tasks such as classification on Imagenet \cite{deng2009imagenet} or common object detection on Pascal VOC. Our method improves the performance of very tiny networks on common object detection by a large margin. Table \ref{tab:resnet_self} illustrates the results: our method increases the mAP of Resnet18-1-16 with the Faster R-CNN framework by 6.5 points, a relative improvement of $16.0\%$. The experiments also show that Quantization Mimic is easy to implement and can be extended to different frameworks. \begin{table}[tb] \centering \caption{ Comparison between Resnet18-1-16 with the Faster R-CNN detector trained from scratch and trained with the Quantization Mimic method. Our method also brings a huge improvement for very tiny networks on the complicated common object detection task. } \begin{tabular}{c|c|c} \hline Model & solution & mAP \\ \hline \hline \multirow{2}*{Resnet18} & full-precision & 72.9 \\ \cline{2-3} ~ & quantized & 73.3 \\ \hline \multirow{2}*{Resnet18-1-16} & scratch & 40.5 \\ \cline{2-3} ~ & Quantization Mimic & 47.0 \\ \hline \end{tabular} \label{tab:resnet_self} \end{table} We also compare with other accelerating and compressing methods. As in the experiments on the WIDER FACE dataset, we compare our method with Li \emph{et al.} \cite{li2017mimicking}, who only use mimic learning. In Table \ref{tab:resnet_other_method}, our method outperforms Li \emph{et al.} \cite{li2017mimicking}: our results are 2.4 points higher than those of our backbone, Li \emph{et al.} \cite{li2017mimicking}, on Resnet-1-16, which is a large margin.
\begin{table}[tb] \newcommand{\tabincell}[2]{\begin{tabular}{@{}#1@{}}#2\end{tabular}} \centering \caption{ Comparison with the backbone on Resnet18-1-16 with the Faster R-CNN framework. Our method outperforms our backbone, the method of Li \emph{et al.} \cite{li2017mimicking}, for Resnet18 (higher is better). } \begin{tabular}{c|c|c} \hline Model & solution & mAP \\ \hline \hline \multirow{2}*{Resnet-1-16} & Li \emph{et al.} \cite{li2017mimicking}(only mimic) & 44.6 \\ \cline{2-3} ~ & Quantization Mimic & \bd{47.0} \\ \hline \end{tabular} \label{tab:resnet_other_method} \end{table} \subsubsection{Ablation Study on Mimic Loss Weight} We propose that very tiny networks can be sensitive to the loss weights in a multi-loss task, and run this experiment on Resnet18-1-16 to find a suitable mimic loss weight. In Table \ref{tab:resnet_ablation_loss}, the result for $\lambda=1$ is much better than those for $\lambda=0.1$ and $\lambda=10$. If the mimic loss weight is too small (\emph{e.g. } $\lambda=0.1$), the effectiveness of mimic learning declines; however, if it is too large (\emph{e.g. } $\lambda=10$), the very tiny network mainly focuses on the gradient produced by the mimic loss and ignores the other gradients. We call this the `gradient focus' phenomenon. \begin{table}[h] \setlength{\abovecaptionskip}{-10pt} \centering \caption{Mimic Loss Weight $\lambda$: The results show that very tiny networks are sensitive to the mimic loss weight. Either a too large or a too small loss weight decreases the effectiveness of mimic learning. } \begin{tabular}{c|c|c} \hline Model & mimic loss weight & mAP \\ \hline \hline \multirow{3}*{Resnet18-1-16}&10 & 44.1 \\ \cline{2-3} ~ & 1 & \bd{47.0} \\ \cline{2-3} ~ & 0.1 & 43.0 \\ \hline \end{tabular} \label{tab:resnet_ablation_loss} \end{table} \section{Conclusion} In this paper, we propose Quantization Mimic to improve the performance of very tiny CNNs.
We show that applying the quantization operation to both the teacher and student networks promotes feature-map matching, making it easier for the student network to learn. Experiments on the WIDER FACE and Pascal VOC datasets demonstrate that Quantization Mimic outperforms state-of-the-art methods. We hope our approach can facilitate future research on training very tiny CNNs for cutting-edge applications. \clearpage \bibliographystyle{splncs}
\section{Introduction} In the past few years high contrast and high resolution observations across a large wavelength range have revealed a variety of distinct features in planet forming disks. Multiple ringed systems were uncovered, such as HL\,Tau (\citealt{2015ApJ...808L...3A}), HD\,97048 (\citealt{2016ApJ...831..200W}, \citealt{2016A&A...595A.112G}, \citealt{2017A&A...597A..32V}) or TW\,Hya (\citealt{2016ApJ...820L..40A}, \citealt{2017ApJ...837..132V}). Other systems like MWC\,758 (\citealt{2013ApJ...762...48G}, \citealt{2015A&A...578L...6B}), HD\,100453 (\citealt{2015ApJ...813L...2W}, \citealt{2017A&A...597A..42B}) or Elias\,2-27 (\citealt{2016Sci...353.1519P}) show huge spiral arms or variable shadows (HD\,135344\,B, \citealt{2017ApJ...849..143S}). It is still unclear whether these features in general or in part are linked to ongoing planet formation, or rather to other processes within the disks. In addition to ever more detailed images of circumstellar disks, a growing number of giant planets at wide orbital separations (typically $>$100\,au) have been discovered (e.g. HD\,106906\,b, \citealt{2014ApJ...780L...4B}; HD\,203030\,b, \citealt{2006ApJ...651.1166M}; CVSO\,30\,c, \citealt{2016A&A...593A..75S}). These objects are of particular interest for understanding planet formation mechanisms, since they are the youngest planets that we have discovered and we can study their atmospheres in great detail via resolved spectroscopy. Yet these objects are also particularly puzzling, because typical planet formation mechanisms such as core accretion should take much longer than 100\,Myr at these distances (\citealt{1996Icar..124...62P}), while the typical dissipation timescale of gas rich disks is at least an order of magnitude shorter (\citealt{2001ApJ...553L.153H}). Clearly, detailed characterization of other, younger, systems is required to refine the current paradigm and to understand whether the observed disk structures are linked to planet formation.
In this work we concentrate on a previously unresolved disk around a nearby T Tau object.\\ CS\,Cha is a young (2$\pm$2\,Myr, \citealt{2008ApJ...675.1375L}) classical T Tauri object of spectral type K2Ve (\citealt{1977A&A....61...21A}, \citealt{2014A&A...568A..18M}), located in the Chamaeleon I association at a distance of 165$\pm$30\,pc (combined estimate from \citealt{1997A&A...327.1194W}, \citealt{1999A&A...352..574B} following \citealt{2008A&A...491..311S}).\footnote{We note that in a recent study by \cite{2017arXiv171004528V}, the distance to the Cha\,I cloud was estimated to be slightly larger at 179\,pc. This is well covered by our uncertainties and we prefer to use the smaller distance for better comparability with previous studies until a direct distance measurement for CS\,Cha by Gaia becomes available.} \cite{2007A&A...467.1147G} found that CS Cha is likely a single lined spectroscopic binary with a minimum mass of the secondary component of 0.1\,M$_\odot$ and a minimum orbital period of 2482\,d ($\sim$4\,au semi-major axis, assuming a system mass of 1\,M$_\odot$). In a later study by \cite{2012ApJ...745..119N} the binary nature of CS\,Cha was confirmed. They could fit the broadened spectral lines with two Gaussian profiles, making the system potentially a double lined spectroscopic binary. They found a flux ratio of the two components of 1.0$\pm$0.4.\\ CS\,Cha is well known to feature a large infrared excess in its spectral energy distribution (SED) with a pronounced dip at 10\,$\mu$m (see e.g. \citealt{1992ApJ...385..217G}). The lack of emission at this wavelength regime was attributed to a large cavity by several studies (\citealt{1992ApJ...385..217G}, \citealt{2007ApJ...664L.111E}, \citealt{2009ApJ...700.1017K}, \citealt{2011ApJ...728...49E}, \citealt{2016MNRAS.458.1029R}), indicating that the system might be in a transition stage from a young gas-rich disk to a debris disk. 
The radius of the cavity has been a subject of intense modeling using unresolved photometric measurements. \cite{2007ApJ...664L.111E, 2011ApJ...728...49E} find rather large cavity radii between 38\,au and 43\,au, while a more recent study by \cite{2016MNRAS.458.1029R} based on Herschel data estimates a smaller radius of 18$^{+6}_{-5}$au. The most likely explanation is that the disk cavity is caused entirely by the stellar binary companion, since the cavity size is within a factor of a few of the binary semi-major axis. ALMA band 3 observations by \cite{2016ApJ...823..160D} did not resolve the disk with a beam size of 2.7$\times$1.9\,arcsec, limiting the outer extent of the disk to radii smaller than 169\,au for the population of mm-sized dust grains.\\ Radiative transfer modeling of the unresolved photometry by \cite{2007ApJ...664L.111E} suggested that significant dust settling and large dust grains (5\,$\mu$m) are needed to fit the SED in the far infrared and mm wavelength ranges. This hints at an advanced stage of dust evolution. \cite{2014ApJ...795....1P} note that they resolve circumstellar structure around CS\,Cha with 3.3\,cm ATCA observations outside of 30\,arcsec. Since it can be excluded that this emission stems from the disk itself, they conclude that it is likely a jet, which is launched from the disk at a position angle of $\sim162^\circ$.\\ We used the SPHERE (Spectro-Polarimetric High-contrast Exoplanet REsearch, \citealt{2008SPIE.7014E..18B}) extreme adaptive optics imager to study the circumstellar environment of CS\,Cha in polarized near infrared light. Our goals were to resolve the disk cavity for the first time and to study potential features of dust evolution or planet disk interaction such as rings/gaps and spiral arms. In addition to our SPHERE observations we used archival high contrast data to strengthen our conclusions. 
\section{Observations and data reduction} \subsection{The initial SPHERE polarimetric observations} CS Cha was first observed with SPHERE/IRDIS (Infra-Red Dual Imaging and Spectrograph, \citealt{2008SPIE.7014E..3LD}) in Differential Polarization Imaging mode (DPI, \citealt{2014SPIE.9147E..1RL}) in J-band on February 17th 2017 as part of our ongoing program to understand dust evolution in transition disks via the distribution of small dust particles. Conditions during the night were excellent with clear sky and an average seeing in the optical of 0.6\,arcsec and a coherence time of $\sim$5\,ms.\\ The (unresolved) central binary was placed behind a coronagraph with a diameter of 185\,mas (\citealt{2009A&A...495..363M}, \citealt{2011ExA....30...39C}). We used an integration time of 96\,s in individual exposures and one exposure per half wave plate (HWP) position. A total of 11 polarimetric cycles were recorded with a combined integration time of 70.4\,min. In addition to the science data, we recorded star center frames at the beginning and end of the sequence as well as flux calibration frames and sky frames. For the star center frames a symmetrical waffle pattern is induced on the deformable mirror, which produces four satellite spots in the image. These spots can be used to accurately determine the position of the source behind the coronagraph (\citealt{2013aoel.confE..63L}). For the flux frames the central source was offset from the coronagraph and a total of 10 images were taken with an individual exposure time of 2\,s and a neutral density filter in place in order to prevent saturation.\\ The data reduction generally follows the description given in \cite{2016A&A...595A.112G}, with the main difference being the instrumental polarization and cross-talk correction. We give a short summary here. Our reduction approach uses the double difference method (\citealt{2001ApJ...553L.189K}, \citealt{2004IAUS..221..307A}).
For this purpose we first subtract the ordinary and the extraordinary beam to create individual Q$^+$, Q$^-$, U$^+$ and U$^-$ images, corresponding to HWP positions of 0$^\circ$, 45$^\circ$, 22.5$^\circ$ and 67.5$^\circ$. We then subtract Q$^-$ and Q$^+$ (and U$^-$ and U$^+$) to remove the instrumental polarization downstream from the HWP within SPHERE. This is done on a cycle by cycle basis before all resulting images are median combined to obtain the Stokes Q and U images. We also create a total intensity image, i.e. Stokes I, from our data. This is done by adding the ordinary and extraordinary beams in all cases and then median combining the resulting images over all polarimetric cycles. The Stokes Q and U images still contain residual instrumental polarization mainly induced by the VLT/UT3 mirror 3 and SPHERE mirror 4. To most accurately determine the angles and degree of linear polarization, it is necessary to correct for the instrumental polarization and cross-talk. For this purpose we used the detailed Mueller matrix model and correction method of van Holstein et al., in prep. (including telescope mirrors, instrument common path and IRDIS itself). This model was calibrated using an unpolarized standard star as well as the SPHERE/IRDIS internal (polarized) calibration light source and was validated with polarimetric observations of the TW\,Hya disk. The correction was performed on each individual double difference image taking the rotation angles of all optical components from the image headers into account.
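As an illustration of the double-difference step described above, the following minimal sketch (on synthetic arrays; the function name and the constant instrumental offset are assumptions for illustration, not our actual pipeline) shows how a polarization offset introduced downstream of the HWP cancels:

```python
import numpy as np

def double_difference(Q_plus, Q_minus, U_plus, U_minus):
    """Combine single-difference frames from the four HWP positions
    (0, 45, 22.5, 67.5 deg) into Stokes Q and U via the double
    difference, which removes instrumental polarization introduced
    downstream of the HWP."""
    Q = 0.5 * (Q_plus - Q_minus)
    U = 0.5 * (U_plus - U_minus)
    return Q, U

# Synthetic example: a true Q signal plus a common instrumental offset
rng = np.random.default_rng(0)
signal_Q = rng.normal(size=(64, 64))
ip = 0.3  # instrumental polarization common to both single differences
Qp, Qm = signal_Q + ip, -signal_Q + ip
# U frames are reused here purely for the demonstration of Q
Q, _ = double_difference(Qp, Qm, Qp, Qm)
# the common instrumental offset cancels in the double difference
assert np.allclose(Q, signal_Q)
```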
The instrument polarization model was successfully applied in several recent studies of circumstellar disks imaged with SPHERE, such as the cases of T\,Cha (\citealt{2017A&A...605A..34P}), DZ\,Cha (\citealt{2018A&A...610A..13C}) and TWA\,7 (\citealt{2018arXiv180401929O}), as well as for the observation of substellar companion polarization in the case of the HR\,8799 system (\citealt{2017arXiv170907519V}).\\ Finally we used the Stokes Q and U images to compute the radial Stokes parameters Q$_\phi$ and U$_\phi$ (see \citealt{2006A&A...452..657S}). The Q$_\phi$ image contains all azimuthally polarized flux as positive signal and radially polarized flux as negative signal. U$_\phi$ contains all flux with polarization angles 45$^\circ$ offset from radial or azimuthal directions. In the case of single scattering by a central source, we expect all signal to be contained in Q$_\phi$ and thus U$_\phi$ can be used as a convenient noise estimate. This is typically a valid assumption for disks seen under low inclination (\citealt{2015A&A...582L...7C}). We show our final reduced polarimetric images in Fig.~\ref{disk-main}. We show the total intensity image in Fig.~\ref{companion-main}.\\ In our polarimetric images we clearly detected a compact, low inclination circumstellar disk in scattered light around CS\,Cha. Furthermore, we detected in our total intensity images a faint companion candidate approximately 1.3\,arcsec to the West of CS\,Cha. After inspection of the polarized intensity images at the companion position, it became apparent that the companion is also detected in polarized light. We show the final polarized intensity image including circumstellar disk and companion overlaid with the angle of linear polarization in Fig.~\ref{pol-vector}. Since the companion was detected in polarized light as well as in total intensity we can calculate its degree of linear polarization. We discuss this in detail in section \ref{comp: pol-degree}.
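The radial Stokes transformation can be sketched as follows. Note that sign and zero-point conventions for the azimuth $\phi$ differ between implementations; this sketch uses one common convention in which azimuthally polarized flux appears as positive signal in Q$_\phi$, and is an illustration rather than our exact reduction code:

```python
import numpy as np

def radial_stokes(Q, U, cx, cy):
    """Compute the radial Stokes parameters Q_phi and U_phi from
    Stokes Q and U images, with the star at pixel (cx, cy).
    Convention chosen so that azimuthal polarization is positive
    in Q_phi; the zero-point of phi is an assumption here."""
    ny, nx = Q.shape
    y, x = np.indices((ny, nx))
    phi = np.arctan2(y - cy, x - cx)  # azimuth w.r.t. the star
    Q_phi = -Q * np.cos(2 * phi) - U * np.sin(2 * phi)
    U_phi = Q * np.sin(2 * phi) - U * np.cos(2 * phi)
    return Q_phi, U_phi

# Sanity check: a purely azimuthally polarized pattern should land
# entirely in Q_phi (positive), with U_phi consistent with zero.
ny = nx = 65
cx = cy = 32
y, x = np.indices((ny, nx))
phi = np.arctan2(y - cy, x - cx)
p = 1.0
Q = -p * np.cos(2 * phi)   # azimuthal polarization expressed in Q/U
U = -p * np.sin(2 * phi)
Q_phi, U_phi = radial_stokes(Q, U, cx, cy)
assert np.allclose(Q_phi, p) and np.allclose(U_phi, 0.0)
```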
\subsection{Archival NACO imaging data} The CS Cha system was previously observed with VLT/NACO (\citealt{2003SPIE.4841..944L}, \citealt{2003SPIE.4839..140R}) as part of a stellar and sub-stellar multiplicity survey among young Chamaeleon members (see \citealt{2012A&A...546A..63V} for results of that survey). Observations were carried out on February 17th 2006, i.e. exactly 11 years before our new SPHERE observations. The data was taken in standard jitter mode in the Ks-band. Integration time for each individual exposure was 1\,s, and 35 exposures were taken and co-added per jitter position. The total integration time of the data set was 11.7\,min. \\ We used ESO-Eclipse for the standard data reduction of the NACO data. This consisted of flat-fielding and bad pixel masking, as well as sky subtraction. The individual reduced images were then registered with respect to the central source and median combined.\\ In addition to standard data reduction, we removed the radial symmetric part of the stellar PSF by subtracting a 180$^\circ$ rotated version of the image from itself. This was done in order to highlight faint companions at close angular separations and enable an accurate photometric and astrometric measurement without influence of residual stellar flux. The final reduced images are shown in Fig.~\ref{companion-main}. We re-detected the faint companion candidate first seen in our SPHERE observations in the final reduced NACO image. \subsection{Archival HST/WFPC2 observations} In addition to the NACO archival observations, CS\,Cha was also observed with the Hubble Space Telescope's Wide Field and Planetary Camera 2 (HST/WFPC2, \citealt{1994AAS...184.2402T}) on February 18th 1998. CS\,Cha was centered in the Planetary Camera sub-aperture of WFPC2 with an effective pixel scale of 46\,mas/pixel. The observations consisted of two exposures each in the F606W and the F814W filters, i.e. the WFPC2 equivalents of R and I-band. 
Exposure times for the F606W filter were 8\,s for the first exposure and 100\,s for the second exposure with gain settings of 14 and 7 respectively. In the F814W filter the exposure times were 7\,s and 80\,s with the same gain setup. The innermost 2 pixels of the primary PSF as well as additional pixels along the central pixel readout column were saturated in all exposures. The data was reduced using the standard archival HST/WFPC2 pipeline.\\ To increase the detectability of faint point sources around the primary star we subtracted a scaled reference star PSF from the long exposure images. As reference star we used the K5 star HD\,17637, which was imaged for that purpose in the same program as the science data. As noted by \cite{2000ApJ...538..793K}, the two main factors in achieving a good PSF subtraction result are a similar spectral type of the reference star and science target, as well as the placement of the reference star on the detector. Due to the under-sampling of the HST PSF, it is important to use a reference star that was imaged as close in detector position as possible to the science target. From the multiple images that were taken of HD\,17637 we thus chose the one with the smallest angular separation from the position of the science target in both filters. Since the reference star and CS\,Cha both had a saturated PSF core, we could not use the PSF peak for scaling of the reference star PSF. We instead used an annulus along the unsaturated flanks of the PSF to compute the scaling factor.\\ After subtraction of the reference star, we detected a faint point source at the expected companion candidate position in the F814W images with a signal-to-noise ratio of 5.0. The companion candidate was hidden under one of the bright diffraction spikes of the primary PSF. We show the subtracted and non-subtracted image in Fig.~\ref{companion-main}. In the F606W data set we could not find a significant detection at the companion candidate position. 
\subsection{NACO L-band follow-up observations} To image CS Cha in the thermal infrared, we again used VLT/NACO. The observations were acquired on April 28th 2017, using the angular differential imaging (ADI) mode of NACO with the L' filter and the L27 camera following the strategy described by \cite{2012A&A...548A..33C}. The NACO detector cube mode was in addition used for frame selection with an exposure time of 0.2\,s. A classical dithering sequence was used with the repetition of five offset positions to properly remove the sky contribution. In the end, the typical observing sequence represented a total of 57 cubes of 100 frames each, i.e. a total integration time of 19\,min for an observing sequence of 45\,min on target. Two sequences of non-saturated PSFs were acquired using a neutral density filter at the beginning and the end of each observing sequence to monitor the image quality. These data also served for the calibration of the relative photometric and astrometric measurements. The reduction of the ADI saturated dithered datacubes was performed with the dedicated pipeline developed at the Institut de Plan\'{e}tologie et d’Astrophysique de Grenoble (IPAG; \citealt{2012A&A...548A..33C}) providing various flavors of ADI algorithms. At the separation of the candidate, the background noise is the main source of limitation. Spatial filtering and simple derotation or classical ADI are therefore sufficient to process the ADI data.\\ After final data reduction we did not detect the companion candidate in our NACO L$_p$-band data. \subsection{SPHERE polarimetric follow-up observations} Our initial SPHERE observations were followed up on June 18th 2017 with SPHERE/IRDIS DPI observations in H-band with the goal of obtaining H-band photometry of the companion candidate and confirming its detection in polarized light. The conditions during the observations were overall poor.
Even though the seeing in the optical was on average only 0.7\,arcsec, the coherence time was very short, on the order of 2\,ms on average during the observations. This led to a much poorer AO correction compared to the previous J-band observations.\\ The setup of the June observations was similar to the first set of observations in February. The central binary was again placed behind the 185\,mas coronagraph. We used a slightly shorter individual exposure time of 64\,s due to the unstable weather conditions. We recorded a total of 7 polarimetric cycles with a combined integration time of 29.9\,min. \\ Data reduction was performed analogously to the previous J-band data set. We found that the circumstellar disk and the companion candidate were again detected in polarized light. We could also detect the companion in our stacked total intensity images. Final reduced images are shown in Fig.~\ref{disk-main} and Fig.~\ref{companion-main}. \begin{figure*} \centering \includegraphics[width=0.95\textwidth]{disk_revised_2.pdf} \caption[]{\textit{1st row:} Reduced SPHERE DPI J-band Q$_\phi$ and U$_\phi$ as well as intensity image. North is up and East to the left. \textit{2nd Row:} The same for our H-band observations. Color scale (linear) and stretch are the same for all Q$_\phi$ and U$_\phi$ images. We did not correct for the 1/r$^2$ drop-off in stellar irradiation. The grey hatched disk overplotted on the images shows the size of the utilized coronagraph. } \label{disk-main} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{companion_circle_HST.pdf} \caption[]{\textit{1st row:} SPHERE J and H-band intensity images as well as the NACO K$_{\mathrm{s}}$-band image and the WFPC2 F814W image of CS\,Cha. North is up and East to the left. In all images the position of the faint companion candidate is marked by a white dashed circle.
\textit{2nd Row:} The same images as above but subtracted with a 180$^\circ$ rotated version of themselves to remove the bright stellar halo. In the case of WFPC2 we subtracted a reference star scaled to the flux of CS\,Cha to remove the bright stellar PSF and especially the bright diffraction spike on top of the companion position. In the WFPC2 images we removed the central columns containing the PSF peak since they were heavily saturated.} \label{companion-main} \end{figure*} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{pol_vector_2.pdf} \caption[]{Polarized intensity image of CS\,Cha and its companion in J-band, after instrumental polarization correction. The circumbinary disk as well as the companion are well detected in polarized light. We overlaid the angle of linear polarization with light blue bars. The companion deviates by $\sim$20$^\circ$ from an azimuthal polarization w.r.t. CS\,Cha indicating that it is intrinsically polarized and does not just scatter stellar light. } \label{pol-vector} \end{figure} \section{Astrometric confirmation of the companion} Since the companion was detected in our new SPHERE data as well as the archival NACO and HST data, we were able to test if the companion is co-moving with CS\,Cha. To ensure minimal contamination by the central stars' flux in the SPHERE and NACO images, we subtracted a 180$^\circ$ rotated version of the images from the original (see Fig.~\ref{companion-main}). For the HST F814W image we used the reference star subtracted image to determine the companion position. \\ In the case of the SPHERE and NACO images we used IDL \texttt{starfinder} (\citealt{2000A&AS..147..335D}) to fit a reference PSF to the companion and extract its position in detector coordinates. As reference PSF we used the unsaturated stellar primary in the NACO image and the dedicated flux frames taken for the SPHERE images.
Since the separation of the companion is smaller than the average isoplanatic angle at Paranal (see e.g. \citealt{2000A&AS..144...39M}), no significant distortions of the companion PSF compared to the primary star PSF are expected. For the HST PSF under-sampling and residuals from the diffraction spike made PSF fitting problematic. Instead we used ESO-MIDAS (\citealt{1983Msngr..31...26B}) to fit a two dimensional Gaussian to the companion position. Measurements were repeated several times with different input parameters in terms of measuring box size and starting position to ensure that the fit converged well. We used the average value of all measurements as the final extracted companion position. We used the individual fitting uncertainties of the Gaussian fit as the uncertainty of the companion's detector position. We ensured that this uncertainty was significantly larger than the standard deviation of multiple repeated measurements with different initial parameters.\\ To extract the stellar position, we used different approaches for the SPHERE, NACO and HST data. For the NACO data no coronagraph was used, so we used the same approach to extract the stellar position as was used for the companion position. However, for SPHERE no direct measurement was possible since the central source is obscured by the coronagraph. Instead we used the center calibration frames to determine stellar position, as described in \cite{2013aoel.confE..63L}. Since we had multiple center frames taken at the beginning and end of the sequence, we used the deviation between the recovered positions as the uncertainty of the central source position measurement. For the HST image the primary star was heavily saturated with significant column bleeding, making a fit to the remaining stellar PSF difficult. 
Instead we fit linear functions to the positions of the diffraction spikes and used their intersection as the stellar center position.\\ To translate the recovered detector position for the central binary and the companion into on-sky separation and position angle, our observations required an astrometric calibration. For the archival NACO data several binary stars were imaged as astrometric calibrators during the same night as the science data as part of the original program. The results of these astrometric calibrations (including also potential orbital motion of the binary calibrators), are given by \cite{2012A&A...546A..63V}.\\ For the SPHERE data, calibrators are regularly imaged during the ongoing SPHERE GTO survey. Primary calibrators are stellar clusters such as 47\,Tuc, $\Theta$\,Ori\,B and NGC\,6380. The results of these astrometric calibrations are given in \cite{2016SPIE.9908E..34M}. In addition, detailed solutions for the geometric distortions were calculated by these authors. The instrument has proven to be extremely stable within the given uncertainties. We thus utilize their results for the broad band J and H filter to calibrate our data. We also use their distortion solution to correct geometric distortions in our SPHERE image. For the true north of the J-band data we use the more recent measurement published in \cite{2017A&A...605L...9C}, done within a few days of our observations. The uncertainties for the SPHERE data include those of the detector coordinates of central source and companion as well as the calibration uncertainty and the uncertainty of the distortion solution. Lastly, for the HST data we used the astrometric calibration provided in the image header. We list final results in table~\ref{tab: astrometry}.\\ After we extracted the astrometry in all epochs, we measured the proper motion of the companion relative to CS\,Cha. The final results are shown in Fig.~\ref{pm-plots}.
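The conversion from calibrated detector offsets to on-sky separation and position angle can be sketched as follows. This is a simplified illustration only: the detector-axis orientation is an assumption, the pixel offsets in the example are hypothetical values chosen merely to be consistent with the 2017.13 epoch in table~\ref{tab: astrometry}, and the real calibration additionally applies the distortion solution:

```python
import math

def detector_to_sky(dx, dy, pixscale_mas, true_north_deg):
    """Convert a companion offset (dx, dy) in calibrated detector
    pixels (with +y assumed towards North and +x towards East) into
    on-sky separation [arcsec] and position angle [deg, East of
    North], including the true-north correction."""
    sep_arcsec = pixscale_mas * math.hypot(dx, dy) / 1000.0
    pa_deg = (math.degrees(math.atan2(dx, dy)) + true_north_deg) % 360.0
    return sep_arcsec, pa_deg

# Hypothetical pixel offsets consistent with the SPHERE 2017.13 epoch
sep, pa = detector_to_sky(dx=-106.38, dy=-12.84,
                          pixscale_mas=12.263, true_north_deg=-1.71)
```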
We show three different diagrams, since the proper motion of CS\,Cha is given with slightly different values in the NOMAD (\citealt{2004AAS...205.4815Z}) and SPM4 (\citealt{2011AJ....142...15G}) catalogs, as well as by \cite{2014A&A...570A..87S}. In all three cases we can clearly reject the background hypothesis with 7.1 to 8.7\,$\sigma$ in separation and with 4.4 to 8.5\,$\sigma$ in position angle. Within the given uncertainties we observe no significant relative motion in separation over our $\sim$19\,yr baseline. However we observe relative motion in position angle, which is consistent with a circular face-on or low inclination orbit, i.e. with a similar inclination as is observed for the resolved circumbinary disk. Within the given error bars the companion is thus co-moving with the primary stars. This is a very strong indication that the companion is gravitationally bound to CS\,Cha. In particular it is extremely unlikely that the companion is a blended extragalactic source, since such a source would have to move at very high velocity and would need to be chance-aligned with CS\,Cha in both proper motion and position. The probability for a blended galactic source might be slightly higher. To quantify this we used the \texttt{TRILEGAL} (\citealt{2005A&A...436..895G, 2012ASSP...26..165G}) population synthesis model to compute the number of expected galactic sources in close vicinity of CS\,Cha. As input we gave the galactic coordinates of CS\,Cha as well as the J-band magnitude of the companion as limiting magnitude. Following \cite{2012A&A...546A..10L} the number of expected objects can then be translated into the probability of finding a background object at a certain separation. Using this approach we find that the chance of a faint blended galactic source within 1.3\,arcsec of CS\,Cha is 0.4\,\%, i.e. improbable at the 2.9\,$\sigma$ level. Such a source would then still need to be by-chance aligned in proper motion with CS\,Cha making this scenario even less likely.
One last concern might be that the companion could be a blended local source within the Cha\,I cloud but several pc behind CS\,Cha. For example, \cite{1998A&A...338..977C} found a number of very faint and highly embedded YSOs in Cha\,I. To test the likelihood of a chance-aligned local source in Cha\,I, we checked the dispersion of proper motions of known members. As input we used the catalog by \cite{2000A&A...361.1143T}, which contains 29 such members, including CS\,Cha. We find that the dispersion in proper motion is quite high with $\sim$18\,mas/yr in right ascension and $\sim$73\,mas/yr in declination. In contrast we find that the companion shows no significant deviation from the proper motion of CS\,Cha in right ascension and only 3.6$\pm$1.8\,mas/yr in declination, which can be well explained by orbital motion as mentioned earlier. In Fig.~\ref{app: cambresy}, in the appendix, we furthermore show that the recovered colors of the companion do not match the YSO colors in Cha\,I by \cite{1998A&A...338..977C}. We can thus firmly exclude a blended local object as well.\\ We overall conclude that the companion is in all likelihood gravitationally bound to CS\,Cha. We explore the orbital motion of the companion in detail in section \ref{comp:nature-discussion}.
\begin{table*}[t] \small \centering \caption{Astrometric measurements and calibrations of all observation epochs.} \begin{tabular}{@{}llcccc@{}} \hline \hline Epoch & Instrument & Pixel Scale [mas/pixel] & True North [deg] & Separation [arcsec] & Position Angle [deg]\\ \hline 1998.1339 & WFPC2 & 45.52$\pm$0.01 & 31.69$\pm$0.005 & 1.314$\pm$0.039 & 258.26$\pm$1.21 \\ 2006.1311 & NACO & 13.24$\pm$0.18 & 0.18$\pm$1.24 & 1.299$\pm$0.018 & 260.30$\pm$1.24 \\ 2017.1311 & SPHERE/IRDIS & 12.263$\pm$0.009 & -1.71$\pm$0.06 & 1.314$\pm$0.002 & 261.41$\pm$0.12 \\ 2017.4617 & SPHERE/IRDIS & 12.251$\pm$0.009 & -1.75$\pm$0.11 & 1.319$\pm$0.001 & 261.40$\pm$0.23 \\ \hline\end{tabular} \label{tab: astrometry} \end{table*} \begin{figure*} \centering \subfloat[][PM from NOMAD catalogue]{ \includegraphics[scale=0.35]{pm-nomad_HST_tobi.pdf} \label{pm-nomad} } \subfloat[][PM from SPM4 catalogue]{ \includegraphics[scale=0.35]{pm-spm4_HST_tobi.pdf} \label{pm-spm4} } \subfloat[][PM from Smart et al. 2014]{ \includegraphics[scale=0.35]{pm-smart_HST_tobi.pdf} \label{pm-smart} } \caption[]{Proper motion diagrams of the companion relative to CS\,Cha. The wavy gray lines enclose the area in which a non-moving background object would be expected. The ``wobbles'' are due to the parallactic displacement of such an object during the Earth's revolution around the Sun. The dashed lines mark the area in which a co-moving companion would be located. The dashed lines take potential orbital motion into account, assuming circular orbits (face-on for the position angle and edge-on for the separation) and a total system mass of 1\,M$_\odot$ (i.e. assuming that the mass of the companion is small compared to CS\,Cha). In all three diagrams the companion is co-moving with CS\,Cha and thus in all likelihood gravitationally bound.
We note that we see a small differential motion in position angle across our 19\,year observation baseline, which is consistent with a circular face-on (or close to face-on) orbit.} \label{pm-plots} \end{figure*} \section{Photometric measurements and detection limits} \subsection{SPHERE and NACO photometry} \newpage To understand the nature of the faint companion, we performed photometric measurements in all bands in which the companion was detected and derived upper limits for the non-detections. In our SPHERE J and H-band epochs, as well as in the NACO K$_s$-band epoch, the companion was well detected but was still located close enough to CS\,Cha so that the background at the companion position is dominated by the bright stellar halo. We did not image PSF reference stars (neither was a PSF reference available for the archival NACO data), so we assumed that the low frequency structure of the stellar halo is approximately radially symmetric. To remove this radially symmetric halo, we subtracted 180$^\circ$ rotated versions of the images from themselves. The results are shown in Fig.~\ref{companion-main}. While strong signal remains within $\sim$0.5\,arcsec of CS\,Cha, the companion position appears free of strong residuals.\\ After this initial background subtraction we utilized IDL \texttt{starfinder} to perform PSF fitting photometry to measure the relative brightness between the companion and CS\,Cha (the latter in the unsubtracted images). We used the flux calibration frames for the SPHERE observations to obtain an unsaturated reference PSF. For the NACO K$_s$-band image we used CS\,Cha itself as reference PSF since it was not saturated during the science sequence. Once a PSF fitting result was obtained we subtracted the companion from the data to check for strong residuals at the companion position. The results are given in table~\ref{tab: photometry} as differential magnitudes.
To convert the differential J, H and K$_s$ magnitudes to apparent magnitudes of the companion we used the corresponding 2MASS (\citealt{2003yCat.2246....0C}) magnitudes of CS\,Cha as calibration. We then also list absolute magnitudes for which we assumed a distance of 165$\pm$30\,pc. For conversion to physical fluxes we used a HST/STIS Vega spectrum as well as the filter curves of SPHERE and NACO. \\ We note that we find a clear systematic uncertainty in the SPHERE H-band observations, induced by the poor observing conditions. In particular the coherence time of the atmosphere degraded during the sequence, with longer values at the start than at the end of the sequence. However, our flux calibration frames were only taken at the end of the sequence, i.e. in the worst observing conditions, and thus have lower Strehl than the previous science images in the sequence. Thus using them for the flux measurement of the companion during the whole sequence over-predicts the companion flux. To estimate this systematic effect we sub-divided the science sequence into four equally long bins, which we reduced individually in order to detect the companion in each bin. We then measured the relative loss of signal in the companion due to changing weather conditions between all bins. We found a deviation of 0.46\,mag between the first and the last bin. We consider this as an additional error term for the lower limit (since we know the direction of the effect) of the companion flux in H-band.\\ As mentioned earlier the companion was not detected in our NACO L-band observation. We thus evaluated the detection limit of our observation. The detection performances reached by our observation were estimated by computing 2D detection limit maps, at 5$\sigma$ in terms of L$_p$ contrast with respect to the primary. We computed the pixel-to-pixel noise within a sliding box of 1.5 $\times$ 1.5 FWHM. 
The detection limits were then derived by taking the ADI flux loss using fake planet injection and the transmission of the neutral-density filter into account, and were normalized by the unsaturated PSF flux. Our final detection limit map is shown in Fig.~\ref{naco-l-limit} and the computed detection limit at the companion position is given in table~\ref{tab: photometry}. We use the WISE (\citealt{2012wise.rept....1C}) W1 magnitude as a close proxy for the L-band magnitude of the primary star to convert contrast limits to apparent and absolute magnitude limits. \subsection{WFPC2 photometry and detection limits} To estimate the brightness of the companion in the WFPC2 F814W filter, several analysis steps were necessary. The primary star was saturated in the long exposure in which we detected the companion. To enable a relative measurement of the companion brightness we thus first determined the brightness of the primary star in the exposure. For this purpose we used TinyTim (\citealt{2011SPIE.8127E..0JK}), a program to generate HST point spread functions based on the instrument setup, target spectral type, time of observations and position on the detector. We created a matching PSF for the WFPC2 F814W observations and then fitted this theoretical PSF to the unsaturated flanks of the CS\,Cha PSF by application of a scaling factor. We then used this scaled theoretical PSF for the relative brightness measurement with the companion.\\ The photometry of the companion in the WFPC2 image is challenging since it is contaminated by the bright diffraction spike of the primary star. Even after subtraction of a reference star, residuals of this diffraction spike are still visible around the companion position. Due to the low S/N of the detection and the under-sampling of the HST PSF, we decided against PSF fitting photometry in this case and instead applied aperture photometry.
For this purpose we measured the flux of the companion in a 3$\times$3 pixel box centered on the brightest pixel of the companion PSF. We then estimated the local background by measuring with the same box moved 3 pixels in the radial direction towards and away from the central star along the diffraction spike. The average of both measurements was then subtracted from the companion measurement. To estimate the uncertainty of the background measurement we computed the standard deviation in the background apertures and multiplied it by the surface area of the aperture. In addition to the uncertainty of the background we took into account the read noise of the WFPC2 planetary camera for a gain setting of 7\,e$^-$/DN. We used a read noise of 5\,e$^-$/pix. Overall the measurement is strongly dominated by the uncertainty of the background, which is a factor of 4 higher than the estimated read noise. We give our result for the relative brightness measurement in magnitudes in table~\ref{tab: photometry}.\\ To convert this relative measurement to an apparent magnitude of the companion, we determined the Vega magnitude of CS\,Cha in the F814W filter. For this purpose we calculated the total flux of CS\,Cha in the F814W filter using the filter curve and the spectral energy distribution of CS\,Cha, shown in Fig.~\ref{sed}. We then converted this to a Vega magnitude by comparison with the flux of Vega in the same filter. We also give apparent and absolute magnitudes for the companion in table~\ref{tab: photometry}, along with the physical flux of the companion in the F814W filter.\\ In the F606W filter the companion was not detected. We thus estimated detection limits at the companion position. For this purpose we measured the standard deviation at the companion position in a 3$\times$3 pixel aperture. We again used TinyTim to create an unsaturated reference PSF scaled to the primary star brightness on the detector.
Given the noise at the companion position and using the primary star as reference we then computed the limiting magnitudes for a 5\,$\sigma$ detection. The result is given in table~\ref{tab: photometry}. \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{naco_l.pdf} \caption[]{Detection limit map derived from our NACO L$_p$-band observations. Detection limits are given in relative contrast to the primary star. We mark the expected position of the companion with a black, dashed circle. The companion was not detected in these observations and should thus exhibit a contrast larger than 8.2\,mag relative to CS\,Cha.} \label{naco-l-limit} \end{figure} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{sed_lambda_updated.pdf} \caption[]{Spectral energy distribution of CS\,Cha (blue dots) and its companion (red dots and triangles). Pointing down triangles denote upper limits. Spectral flux densities were computed from broad band photometry using a Vega spectrum and the broad band filter curves. All values for the companion are given in table~\ref{tab: photometry}.} \label{sed} \end{figure} \section{Polarization of the companion} \label{comp: pol-degree} \begin{figure} \centering \includegraphics[width=0.9\columnwidth]{cha_cloud_pol.pdf} \caption[]{Reproduction of Fig.~2 from \cite{1997A&AS..122...95C}. Shown is a visual polarization map of the Cha\,I dark cloud based on the 33 stars they observed. The average angle and degree of polarization are indicated by the solid line vector field. We show the position of CS\,Cha in this map to indicate the expected degree of linear polarization introduced by the Cha\,I dark cloud in the optical.} \label{polmap} \end{figure} Since we detected the companion both in the total intensity and in the polarized intensity images in our SPHERE/IRDIS J and H-band epochs, we could calculate the degree of linear polarization of the companion. For this purpose we used aperture photometry in both images in each band.
We used aperture photometry over PSF fitting photometry due to the slight change in the companion PSF during the double difference steps of the DPI reduction. We checked in J-band that PSF fitting photometry and aperture photometry give consistent results in the total intensity image. We found that the results were consistent within 0.01\,mag. We used an aperture radius of 3\,pix in J-band, which corresponds to the full width at half maximum of 2.77\,pix as measured by fitting a Moffat profile to the stellar PSF. In H-band we used a value of 4\,pix due to the poorer observing conditions. As in the PSF fitting photometry, we first subtracted the radial symmetric bright stellar halo from the intensity images by rotating them 180$^\circ$ and subtracting them from themselves. We then estimated the local background with two sub-apertures in each band. In the J-band case, the companion is slightly contaminated by a stellar diffraction spike. We thus used two sub-apertures in radial direction along this spike. In H-band we used two azimuthal sub-apertures at the same separation from the central star and offset by a few degrees from the companion position.\\ The measurement in polarized intensity was performed in the same way as in the intensity image. However, the measurements were actually performed in the Stokes Q and U images rather than in the combined polarized intensity image, since all signal becomes positive in this image and thus even background noise might give a spurious polarization signal.\\ We find a degree of linear polarization in J-band of 13.7$\pm$0.4\,\% and an angle of linear polarization of 153.0$^\circ \pm$0.8$^\circ$. Our H-band results are consistent with these measurements. We find a degree of linear polarization of 14.1$\pm$1.4\,\% and an angle of linear polarization of 154.0$^\circ \pm$2.9$^\circ$. The uncertainties in the H-band measurements are higher due to the poorer signal-to-noise compared to the J-band data. 
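Combining the aperture-summed Stokes fluxes into a degree and angle of linear polarization amounts to the standard relations $p=\sqrt{Q^2+U^2}/I$ and $\theta=\tfrac{1}{2}\arctan(U/Q)$. A minimal sketch; the numerical fluxes are hypothetical, and the angle zero-point and sign convention depend on the instrument:

```python
import math

def linear_polarization(q, u, i):
    """Degree (fraction) and angle (deg, folded into [0, 180)) of linear
    polarization from aperture-summed Stokes Q, U and total intensity I.
    The angle zero-point and sign convention are instrument-dependent."""
    p = math.hypot(q, u) / i
    theta = 0.5 * math.degrees(math.atan2(u, q)) % 180.0
    return p, theta

# Hypothetical aperture sums, in units of the total intensity:
p, theta = linear_polarization(q=-0.12, u=0.07, i=1.0)
```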
In both cases the error bars are strongly dominated by measurement uncertainties (due to photon, speckle and background noise), while the instrument model allows for a factor of $\sim$10 higher accuracy.\\ We now need to investigate if the polarization of the companion is intrinsic to the object, i.e. either due to scattered light from the primary stars or a central object within the companion itself, or if it is caused by interstellar dust between Earth and the CS\,Cha system. This is of particular importance since CS\,Cha is indeed located in close proximity to, or behind, the Cha\,I dark cloud. Detailed optical polarization measurements of the region have been performed by \cite{1997A&AS..122...95C}. In Fig.~\ref{polmap} we show their optical polarization map of the Cha\,I cloud region and superimpose the position of CS\,Cha. Using the nine stars in their study located closest to the position of CS\,Cha, we find an average maximum polarization degree of 6.7$\pm$1.7\,\% at a peak wavelength of 0.65\,$\mu m$. The average angle of polarization in the I-band that they measure for the same stars is 132.08$^\circ \pm$12.9$^\circ$. Using Serkowski's empirical law (\citealt{1975ApJ...196..261S}) we can extrapolate the expected polarization degree to the J and H-bands: \begin{equation} p(\lambda) = p(\lambda_\mathrm{max}) \exp \left[ -K \ln^2 (\lambda / \lambda_\mathrm{max})\right] \end{equation} where $p(\lambda)$ is the polarization degree at the wavelength $\lambda$. The factor $K$ was empirically determined by \cite{1982AJ.....87..695W} to depend on the peak wavelength of the polarization degree: \begin{equation} K = 1.86 \, \lambda_\mathrm{max} - 0.1 \end{equation} Using these relations with the average values from \cite{1997A&AS..122...95C} we find that the degree of linear polarization could lie between 5.5\,\% and 3.3\,\% for the J-band and between 3.4\,\% and 2.0\,\% for the H-band.
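This extrapolation can be written out numerically. A small sketch using the cloud-average values quoted above (wavelengths in micron, polarization degree in per cent):

```python
import math

def serkowski(lam, p_max, lam_max):
    """Serkowski law with K from the Wilking et al. relation
    K = 1.86 * lam_max - 0.1; wavelengths in micron, p in per cent."""
    k = 1.86 * lam_max - 0.1
    return p_max * math.exp(-k * math.log(lam / lam_max) ** 2)

# Cloud averages from Covino et al.: p_max = 6.7 +/- 1.7 %, lam_max = 0.65
p_j = serkowski(1.245, 6.7, 0.65)  # central value at the SPHERE BB-J band
p_h = serkowski(1.625, 6.7, 0.65)  # central value at the SPHERE BB-H band
```

Evaluating the same expression at $p(\lambda_\mathrm{max})\pm1\sigma$ roughly recovers the J and H-band ranges quoted above.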
These degrees of polarization are significantly lower than what we find for the companion, giving a first indication that the polarization of the companion is indeed intrinsic and not caused by interstellar dust.\\ To test this more rigorously we measured the degree of linear polarization of the unresolved primary stars. For this purpose we used an annulus at the bright speckle halo that marks the adaptive optics correction radius and that contains only stellar light. We find that the primary stars have a degree of linear polarization of 0.57$\pm$0.28\,\% in J-band with an angle of polarization of 133.1$^\circ \pm$8.2$^\circ$. For the H-band we find similar values of 0.34$\pm$0.02\,\% and 141.1$^\circ \pm$5.4$^\circ$ for the degree and angle of linear polarization respectively. Both of these values are consistent with a previous measurement of CS\,Cha in the optical by \cite{2000A&AS..144..285Y}, who find a degree of linear polarization of 0.7\,\% (but did not provide uncertainties). The degree of polarization that we find for the primary stars is much lower than suggested by the optical data of \cite{1997A&AS..122...95C} in combination with Serkowski's law. It could be that the average value we assumed is not a good proxy for the cloud density at the position of CS\,Cha or that CS\,Cha is located slightly in front of the cloud. In any case, the low degree of stellar polarization strongly suggests that the high degree of polarization found for the companion is intrinsic to the object and not caused by interstellar dust if we assume both objects are located at the same distance, as suggested by their common proper motion.\\ This conclusion is additionally supported by the disagreement between the angle of polarization of the companion and that of the stellar sources. In both bands the angle of linear polarization of the stellar binary is consistent within 1\,$\sigma$ with the average polarization angle in the region as determined by \cite{1997A&AS..122...95C}.
In our data we find that the companion polarization angle deviates by $\sim22^\circ$ (2.6$\sigma$) from the stellar polarization angle in J-band and by $\sim15^\circ$ (2.4$\sigma$) in H-band. Thus it seems again plausible that the causes of the polarization of the stellar binary and the companion are different and that the companion polarization is not caused by interstellar dust.\\ Given the angle of polarization of the companion, we can finally try to understand which source of illumination dominates, assuming polarization by single scattering of light. If the companion is primarily illuminated by the central stellar binary we would expect its angle of linear polarization to be azimuthal with respect to the binary position. The expected angle of linear polarization for azimuthally scattered light at the companion position is 171.7$^\circ \pm$0.1$^\circ$. Comparing this to the more accurate angle of linear polarization in J-band, we find a significant deviation of 18.7$^\circ \pm$0.8$^\circ$. We can thus conclude that the origin of the polarized light is not (entirely) single-scattered light emitted by the primary stars. It is of course still possible that the linear polarization that we measure is a superposition of scattered stellar and companion emission. However, given the angle of polarization we can already conclude that the companion object contains a central source massive enough that we can detect its emission.\\ Polarization can give us important information about the structure of the atmosphere of low-mass objects, as well as their direct environment. Polarization has indeed been measured for field brown dwarfs previously (see e.g. \citealt{2002A&A...396L..35M}, \citealt{2005ApJ...621..445Z} and \citealt{2013A&A...556A.125M}), but had so far not been detected for companions to nearby stars (see e.g. \citealt{2016ApJ...820..111J}, \citealt{2017arXiv170907519V}).
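The azimuthal expectation used above (for single scattering, polarization perpendicular to the radius vector from the star) can be sketched as follows; the offset coordinates are hypothetical, not the measured companion position:

```python
import math

def azimuthal_pol_angle(dx_east, dy_north):
    """Expected angle of linear polarization (deg, in [0, 180)) for
    azimuthally oriented polarization at an offset from the star:
    the companion's position angle (East of North) plus 90 degrees."""
    pa = math.degrees(math.atan2(dx_east, dy_north)) % 360.0
    return (pa + 90.0) % 180.0

# A source due North of the star (dx=0, dy>0) should be polarized East-West:
angle = azimuthal_pol_angle(0.0, 1.0)  # 90 degrees
```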
This is to the best of our knowledge the first time a faint and thus likely low-mass companion to a nearby star was detected in polarized light and its degree of polarization measured. We discuss the implications for the object in detail in section~\ref{nature-section}. \begin{table*}[t] \small \centering \caption{Photometric measurements of the companion. The apparent magnitudes in J, H and K$_s$-band were calculated using the closest 2MASS magnitude of CS\,Cha as calibration. The apparent magnitude in the HST F814W filter was computed using the theoretical Vega magnitude of CS\,Cha in this band given its SED. The absolute magnitudes were computed from the apparent magnitudes assuming a distance of 165$\pm$30\,pc. We give the central wavelength and spectral width of all filters along with the measurements. Spectral flux densities were computed using the filter curves of the instruments as well as a Vega spectrum taken with HST/STIS.} \begin{tabular}{@{}llcccccc@{}} \hline \hline Instrument & Filter & $\lambda_c$ [$\mu$m] & $\Delta \lambda$ [$\mu$m] & $\Delta$mag & app. magnitude & abs. 
magnitude & F$_\lambda$ [W$\cdot$m$^{-2}\cdot\mu$m$^{-1}$]\\ \hline HST/WFPC2 & F606W & 0.5997 & 0.1502 & $>$8.9 & $>$20.4 & $>$14.3 & $<2.03\cdot10^{-16}$ \\ HST/WFPC2 & F814W & 0.8012 & 0.1539 & 9.81$\pm$0.48 & 19.71$\pm$0.48 & 13.62$\pm$0.62 & $(1.37\pm0.76)\cdot10^{-16}$ \\ SPHERE & BB-J & 1.245 & 0.240 & 10.05$\pm$0.21 & 19.16$\pm$0.21 & 13.07$\pm$0.45 & $(6.31\pm1.39)\cdot10^{-17}$ \\ SPHERE & BB-H & 1.625 & 0.290 & 9.20$^{+0.61}_{-0.15}$& 17.65$^{+0.62}_{-0.16}$& 11.56$^{+0.74}_{-0.43}$ & $(2.54^{+0.41}_{-1.95})\cdot10^{-16}$ \\ NACO & Ks & 2.18 & 0.35 & 9.21$\pm$0.16 & 17.40$\pm$0.16 & 11.32$\pm$0.43 & $(3.44\pm0.56)\cdot10^{-17}$ \\ NACO & Lp & 3.80 & 0.62 & $>$8.2 & $>$16.4 & $>$10.3 & $<1.35\cdot10^{-17}$\\ \hline\end{tabular} \label{tab: photometry} \end{table*} \section{The circumbinary disk around CS\,Cha} \subsection{Position angle and inclination} As visible in Fig.~\ref{disk-main}, we resolve for the first time a small disk around the central stellar binary in the CS\,Cha system. The disk appears compact, smooth and close to face-on. From our scattered light images we can extract the orientation of the disk. For this purpose we measure the disk diameter in radial disk profiles with orientations between 0$^\circ$ and 360$^\circ$ in steps of 2$^\circ$. The resulting disk diameter as a function of orientation was then fitted with an ellipse. The disk diameter was defined in our radial profiles as the separation between the two outermost points at which the disk flux reaches a certain threshold. To determine this threshold we measured the standard deviation of the background outside of the disk signal and set the threshold to a multiple of this standard deviation. In practice we found a small dependency of the recovered disk orientation on the threshold value.
We thus used multiples between 5 and 100 in steps of 2 and considered the recovered median values for disk inclination and position angle, and the standard deviation between these values, as the uncertainty of our measurement. Assuming a radially symmetric disk that only appears elliptical due to its inclination relative to us, we find an inclination of 24.2$^\circ \pm$3.1$^\circ$ and a position angle of 75.6$^\circ \pm$2.2$^\circ$ from our J-band observation. This disk position angle is consistent with the position angle of the suspected jet emission of $\sim162^\circ$ detected by \cite{2014ApJ...795....1P}, since the jet position angle should be offset by 90$^\circ$ from the disk major axis. The H-band observation has much lower signal-to-noise than the J-band observation and suffers from convolution with a rather distorted PSF (see Fig.~\ref{app: sphere_stellar}). We find an inclination of 34.9$^\circ \pm$10.6$^\circ$ and a position angle of 86.1$^\circ \pm$2.2$^\circ$ for this data set. The $\sim$10$^\circ$ larger position angle can be explained by the elongated PSF shape and orientation of this observation. We thus consider the J-band measurements as final values for inclination and position angle. \subsection{Inner and outer radius} \label{disk-radius} To measure the outer radius of the disk we considered a radial profile along the major axis as determined in the previous section. We then computed the radial extent at which the disk signal is for the first time 5$\sigma$ above the image background value. We again used the J-band images, due to their higher quality. We found an outer radius of scattered light of 337\,mas, i.e. 55.6\,au at a distance of 165\,pc. This is consistent with the upper limit of 169\,au given by \cite{2016ApJ...823..160D} from their unresolved ALMA observations. Note that we are only tracing small dust grains at the disk surface, so it is possible that the disk has a larger size but is partially self-shadowed.
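The deprojection underlying the inclination estimate in the previous subsection assumes an intrinsically circular disk, so that $i=\arccos(b/a)$ for fitted ellipse semi-axes $a\geq b$. A minimal sketch:

```python
import math

def inclination_from_ellipse(a_major, b_minor):
    """Inclination (deg) of an intrinsically circular disk whose projected
    outline was fitted with an ellipse of semi-axes a_major >= b_minor."""
    return math.degrees(math.acos(b_minor / a_major))

# An apparent axis ratio b/a of ~0.91 corresponds to the ~24 degree
# inclination found in the J-band fit.
```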
Another possibility is that the disk outer extent is larger, but that it is below the noise floor in our images due to the 1/r$^2$ drop-off of the stellar irradiation.\\ We show an azimuthally averaged radial profile of the disk in Fig.~\ref{disk: profile}. In this profile a decline in brightness inside of $\sim$115\,mas is visible. To investigate if this is a tentative detection of a cavity, we compared the radial disk profile with a model profile of the coronagraph attenuation. The NIR APLC coronagraph normalization profile was calculated based on IRDIS DB\_H23 dual-band imaging observations of the 0.6" diameter disk of Ceres, performed on the 14th of December 2016. This was carried out in the N\_ALC\_YJH\_S coronagraph imaging mode and the Ceres disk was nodded off-center by 490 mas to provide a non-coronagraphic reference. This was used to produce a 2D attenuation profile of the coronagraph for an extended, incoherent source. Monochromatic Fourier modeling of the three-plane APLC coronagraph was also performed, using the APO1 SPHERE amplitude apodiser, ACL2 (185 mas diameter) focal-plane mask and NIR Lyot stop including dead actuator masks (\citealt{2011ExA....30...59G}, \citealt{2016JATIS...2b5003S}). This model confirmed that the observed Ceres attenuation profile is nearly diffraction-limited and azimuthally symmetric. The radial profile outside of 85 mas is dominated by direct throughput of the target, while that inside 85 mas is dominated by internally scattered light in the instrument (for full results see Wilby et al., in prep.). The close agreement between the forward model and observed data allows the H23-band profile to be extrapolated to J-band via an equivalent model at 1.26\,$\mu$m. This was then used to correct the radial CS Cha profile for coronagraph attenuation.\\ As visible in Fig.~\ref{disk: profile}, after the correction with the coronagraph throughput profile, no significant decline in flux is visible outside the coronagraphic mask. 
We can thus put an upper limit on the size of the inner cavity of the CS Cha disk of 15.3\,au (92.5\,mas at 165\,pc) from the scattered light imaging (tracing small dust grains). \begin{figure*} \centering \subfloat[][]{ \includegraphics[scale=0.4]{CSCha_average_profile_mike_last_reduction_revised.pdf} \label{profile1} } \subfloat[][]{ \includegraphics[scale=0.4]{CSCha_average_profile_JH_revised.pdf} \label{profile2} } \caption[]{\textit{Left:} Azimuthal average of the polarized intensity profile of the circumbinary disk around CS\,Cha in J-band (red squares). The profile was measured in the Q$_\phi$ image, while the estimated uncertainties were determined in the U$_\phi$ image. We indicate the radius of the coronagraphic mask with the black dotted line. In addition, we show the throughput curve of the utilized coronagraph as discussed in section \ref{disk-radius} (green solid line). Finally we show the azimuthal disk profile corrected by the coronagraph throughput (blue diamonds). \textit{Right:} Azimuthal average of the polarized intensity profile of the circumbinary disk around CS\,Cha in J-band (blue solid line) and H-band (red dash-dotted line). Angular separations were converted to projected separations using the distance of 165\,pc.} \label{disk: profile} \end{figure*} \section{The nature of the companion} \label{nature-section} To understand the nature of this new companion, we compare its SED to known substellar objects in Chamaeleon as well as theoretical model atmospheres. We then use the astrometry over a 19\,yr baseline to determine if it is possible to constrain the companion mass from the orbital motion. Finally we use our own radiative transfer models to explain the photometry and degree of linear polarization of the companion.
\subsection{A planetary mass object on a wide orbit?} \label{comp:nature-discussion} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{sed_small_compare_updated.pdf} \caption[]{Spectral energy distribution of the CS\,Cha companion (red dots and triangles). Pointing down triangles denote upper limits. We show the known substellar companion CT\,Cha\,b (\citealt{2008A&A...491..311S}, \citealt{2015ApJ...801....4W}) as well as the free floating planetary mass object in Chamaeleon Cha\,J11110675-7636030 (\citealt{2017AJ....154...46E}) for comparison.} \label{sed-small} \end{figure} \emph{Photometry}\\ \\ In Fig.~\ref{sed} we show the measured SED of the companion and compare it with theoretical models of low mass substellar objects calculated with the \texttt{Phoenix} (\citealt{2008ApJ...675L.105H}) atmosphere code using the AMES-Dusty models (\citealt{2000ApJ...542..464C}, \citealt{2001ApJ...556..357A}) for an age of 2\,Myr as input. We tentatively explored a mass range between 2\,M$_\mathrm{Jup}$ and 20\,M$_\mathrm{Jup}$. The closest fit is achieved with a 5\,M$_\mathrm{Jup}$ planet corresponding to an effective temperature of 1700\,K. However, it is clearly visible that even this best fit does not properly explain the measured photometry of the companion. While the J-H color may be explained by such an object (taking some red-ward shift due to extinction by circum-planetary material into account), the model clearly over-predicts the flux in K and L-band by an order of magnitude. A significantly lower mass object of 2\,M$_\mathrm{Jup}$ corresponding to an effective temperature of 1100\,K could explain the K-band photometry, but still significantly over-predicts the L-band flux and is not compatible with the flux at shorter wavelengths. In general there is no model that can explain all photometric data points and upper limits.
This is a strong indication that we are looking at an object that is either significantly more complex than a "naked" planetary photosphere, or that the object (or the primary) is for some reason strongly variable. Variability is indeed possible since all observation epochs were taken months and sometimes decades apart.\\ One explanation for the peculiar shape of the SED could be a companion with a small (unresolved) surrounding disk. There are in fact two comparably faint objects known in Chamaeleon around which circumplanetary disks are expected. One of them is the wide direct imaging companion to the young T Tau star CT\,Cha (\citealt{2008A&A...491..311S}). CT\,Cha\,b shows Pa\,$\beta$ emission in the J-band (\citealt{2008A&A...491..311S}), as well as strong H\,$\alpha$ emission in the R-band (\citealt{2015ApJ...801....4W}), both strong indicators of ongoing accretion of material onto the companion. The companion mass is estimated to be 9-35\,M$_\mathrm{Jup}$ with a temperature range between 2500\,K and 2700\,K (\citealt{2008A&A...491..311S}, \citealt{2014A&A...562A.127B}, \citealt{2015ApJ...801....4W}). We show the near infrared spectrum of CT\,Cha\,b along with optical photometry in Fig.~\ref{sed-small}. We overplot the photometry of the companion to CS\,Cha for comparison. While R, I and H-band photometry are comparable in both objects, the J and K-band fluxes of CT\,Cha\,b are significantly larger than for the CS\,Cha companion. A second comparison object is the recently discovered free floating planetary mass object Cha\,J11110675-7636030 (\citealt{2017AJ....154...46E}), for which we also show available photometry in Fig.~\ref{sed-small}. Assuming an age range of 1-3\,Myr and using a variety of planet evolutionary models, \cite{2017AJ....154...46E} find a mass range of 3-6\,M$_\mathrm{Jup}$ for this object. They note that the mid-IR photometry suggests the existence of excess emission best explained by circum-planetary material.
The object shows J-K colors similar to the CS\,Cha companion, and is also consistent with the L-band non-detection. The H-band photometry of both objects, on the other hand, differs significantly. For both comparison objects CT\,Cha\,b and Cha\,J11110675-7636030 we have no information on the geometry of the surrounding circum-planetary material. In particular we do not know the inclination of these inferred disks. It is possible that the companion around CS\,Cha is indeed more massive than both these objects, but is strongly extincted by a highly inclined circum-companion disk. This scenario is further supported by the high degree of linear polarization that we find for the companion. We thus explore several models with circum-companion material in section~\ref{model-section}.\\ \\ \emph{Astrometry}\\ \\ Since we have an observational baseline of $\sim$19\,yr, we attempted to fit the orbital motion of the companion around the primary stars. For this purpose we used the Least-Squares Monte-Carlo (LSMC) approach as described in \cite{2013MNRAS.434..671G}. We generated 10$^7$ random orbit solutions from uniform priors and then used these as starting points for a least squares minimization with the Levenberg-Marquardt algorithm. In contrast to \cite{2013MNRAS.434..671G}, we did not assume a system mass but left it as a free parameter. To limit the large parameter space we constrained the semi-major axis to values between 0.5\,arcsec and 3.0\,arcsec. This seems justified given the current position of the companion at $\sim$1.3\,arcsec and the fact that we see no significant change in separation between astrometric epochs. In addition, we limited the total system mass to values between 0.9\,$M_\odot$ and 2.0\,$M_\odot$. The lower end of this mass interval is determined by the lower limit of the combined mass of the central binary star, i.e. the case in which the companion mass is negligible compared to the primary mass and lies in the planet or brown dwarf regime.
The upper end is given by twice the upper limit of the central binary mass, i.e. the case in which the companion would have roughly one solar mass. We do not expect the companion to be more massive than the primary stars, since the resolved circumbinary disk would otherwise likely be truncated to an even smaller outer radius.\\ In Fig.~\ref{companion:lsmc} we show the resulting semi-major axis, inclination and mass versus eccentricity distributions of the 1\% best fitting orbits. Since the uncertainties of the NACO and HST epochs are large compared to the SPHERE measurements, the fits are strongly dominated by the latter.\\ We find that the current astrometric epochs do not allow the mass of the companion to be constrained, since we find valid orbital solutions for the full range of input masses. However, we can make a few observations about the system architecture. If the companion is indeed a Jovian planet or brown dwarf, then we can conclude that it must be on an eccentric orbit, with a lower limit on the eccentricity between 0.2 and 0.26 depending on the central stars' masses. In fact the total system mass should be above 1.4\,$M_\odot$ to allow for circular orbits. In this case the companion would be a low mass star with a mass between 0.4\,$M_\odot$ and 0.5\,$M_\odot$. Independent of the mass, we find an upper limit for the eccentricity of 0.8. This upper limit is, however, introduced by our artificial cut-off of the semi-major axis at 3\,arcsec. If we allow for larger semi-major axes, then we find even more eccentric orbits. This correlation between semi-major axis and eccentricity is indeed common for orbits which are not well covered by observations (e.g. \citealt{2014MNRAS.444.2280G}). Overall we find a peak of the eccentricity distribution at $\sim$0.6. The vast majority of these eccentric orbits exhibit a face-on inclination.
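The LSMC strategy itself (many random starting points drawn from uniform priors, each refined with a Levenberg-Marquardt least squares step) can be illustrated on a toy problem. The sinusoid below merely stands in for the far more involved Keplerian sky-plane model, and all parameter values are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy data standing in for the astrometric epochs.
t = np.linspace(0.0, 10.0, 40)
y = 1.5 * np.sin(0.8 * t + 0.3) + rng.normal(0.0, 0.05, t.size)

def residuals(params, t, y):
    amp, freq, phase = params
    return amp * np.sin(freq * t + phase) - y

# LSMC: draw random starting points from uniform priors, refine each with
# Levenberg-Marquardt, then inspect the distribution of the best solutions.
starts = rng.uniform([0.1, 0.1, 0.0], [5.0, 3.0, 2.0 * np.pi], size=(200, 3))
fits = [least_squares(residuals, s, args=(t, y), method="lm") for s in starts]
fits.sort(key=lambda f: f.cost)
best = fits[0]   # recovers (1.5, 0.8, 0.3) up to sign/phase degeneracies
```

In the actual fit the residuals compare predicted and observed on-sky positions, and the parameter vector holds the orbital elements plus the total system mass.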
\\ It is interesting to investigate if orbits co-planar with the resolved circumbinary disk are possible, since this could give an indication of the formation history. We find that such co-planar orbits indeed exist. However, regardless of the total system mass there are no circular (e\,=\,0) co-planar orbits recovered. Overall the distributions of total mass and eccentricity closely match the non-coplanar case.\\ In Fig.~\ref{orbits} we show the three best fitting orbit solutions that were recovered by our LSMC fit as well as the best fitting solutions for a circular, co-planar and low mass (companion mass below 0.03\,M$_\odot$) orbit. The respective orbital elements are given in table~\ref{tab: orbit-elements}. The best fitting orbits are not co-planar and exhibit eccentricities between 0.41 and 0.63. Since most of these orbits are seen face-on there would be a significant misalignment between the inclination of the resolved circumbinary disk and the orbital plane, as well as a misalignment with a putative highly inclined circum-companion disk. Such spin-orbit and spin-spin misalignments in multiple systems are indeed predicted by hydrodynamic simulations of stellar formation in clusters (see e.g. \citealt{2012MNRAS.419.3115B, 2018MNRAS.475.5618B}) and were more recently observed in multiple systems with ALMA (see the case of IRAS\,43, \citealt{2016ApJ...830L..16B}). The total system masses for the best fitting orbits lie between 1.28\,$M_\odot$ and 1.84\,$M_\odot$, which puts the companion in the low stellar mass regime. However, we stress that lower (e.g. planetary) masses for the companion cannot be ruled out with the existing astrometry. One example of an orbital solution that fits the astrometry and requires only a companion mass below 0.03\,M$_\odot$ is shown in Fig.~\ref{orbits}. In general these best fitting orbits may still change significantly with the availability of new high-precision astrometric epochs in the future.
Thus, while the recovered distributions of orbital elements are meaningful, we caution against over-interpreting these specific orbit solutions.\\ \\ \begin{figure*} \centering \subfloat[][]{ \includegraphics[scale=0.3]{a_vs_e_corr_10mio_tobi_chi3_3_arcsec.pdf} \label{a_vs_e} } \subfloat[][]{ \includegraphics[scale=0.3]{i_vs_e_corr_10mio_tobi_chi3_3_arcsec.pdf} \label{i_vs_e} } \subfloat[][]{ \includegraphics[scale=0.3]{e_vs_mass_corr_10mio_tobi_chi3_3_arcsec.pdf} \label{e_vs_mass} } \caption[]{\textit{Left:} Semi-major axis versus eccentricity distribution of all recovered orbit solutions for the companion following the LSMC approach. Shown are the 1\% best fitting orbits. \textit{Middle:} Same as left, but for eccentricity versus inclination of the orbital plane. \textit{Right:} Same as left, but for eccentricity versus total system mass.} \label{companion:lsmc} \end{figure*} \begin{figure} \centering \includegraphics[width=0.95\columnwidth]{orbit_inset2_updated.pdf} \caption[]{Best fitting orbits to the current astrometry of the companion as recovered by our LSMC fit. The inset in the upper left is zoomed-in on the data points.} \label{orbits} \end{figure} \begin{table} \centering \caption{Orbit elements of the three best-fitting orbits shown in Fig.~\ref{orbits}, as well as the best fitting circular (c.), co-planar (c.-p.) and low mass (l.m.) orbits.} \begin{tabular}{@{}l@{\hspace{2pt}}cccccc@{}} \hline \hline & 1 & 2 & 3 & c. & c.-p. & l.m.
\\ \hline a\,["] & 1.82 & 2.65 & 1.43 & 1.46 & 1.48 & 2.14 \\ m\,[M$_\odot$] & 1.66 & 1.28 & 1.84 & 1.96 & 1.96 & 1.02 \\ e & 0.49 & 0.64 & 0.41 & 0 & 0.45 & 0.56 \\ P\,[yr] & 4048.4 & 8076.2 & 2667.0 & 2678.6 & 2731.6 & 6579.6 \\ i\,[deg] & 0 & 0 & 0 & 45.2 & 21.6 & 0 \\ $\Omega$\,[deg] & 200.0 & 185.4 & 190.0 & 110.1 & 75.3 & 184.0 \\ $\omega$\,[deg] & 337.0 & 3.3 & 327.7 & 275.7 & 83.3 & 0 \\ T$_0$\,[yr] & 1651.9 & 1668.1 & 1602.1 & 3011.5 & 4363.7 & 1593.6 \\ \hline\end{tabular} \label{tab: orbit-elements} \end{table} \subsection{Detection of a circum-companion disk?} \label{model-section} To test whether the peculiar measurements of the companion can be explained by a substellar object surrounded by a disk, we aimed to model its photometric (SED) and polarimetric (degree of polarization) properties using the radiative transfer code RADMC3D. In all our models, we consider astronomical silicates (\citealt{2001ApJ...548..296W}), and use a single dust size for simplicity. We again consider the AMES-DUSTY atmosphere models as input for the central object. We run three different families of models. (a) Our first model includes a disk around a substellar companion. The circumplanetary disk extends up to 2 au. This maximum outer radius was inferred from the fact that we do not resolve the companion. Its density is described as: \begin{equation} \rho_{\rm{disk}} (r,z) = \frac{\Sigma(r)}{\sqrt{2\pi}H_{\rm{p}}(r)} \exp\left(\frac{-z^{2}}{2H_{\rm{p}}(r)^{2}}\right) \end{equation} where $\Sigma(r)$ is the surface density and $H_{\rm{p}}(r)$ is the pressure scale height. Both quantities are described as power laws in radius, with exponents $\zeta$ (flaring) and $p$. The model parameters are given in Table~\ref{tab:modelparam}. The model has a complex parameter space, which we explored qualitatively by varying the mass (and thus the luminosity) of the central object, the grain size, and the disk mass and inclination.
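A minimal numerical sketch of this disk density law; all parameter values below are illustrative placeholders, not the fitted values of Table~\ref{tab:modelparam}:

```python
import numpy as np

# Illustrative placeholders, not the paper's fitted model values.
R_OUT = 2.0      # au, outer radius from the non-detection of an extended disk
SIGMA0 = 1.0     # surface-density normalization at R_OUT (arbitrary units)
P_EXP = -1.0     # surface-density power-law exponent p
H0 = 0.05        # au, pressure scale height at R_OUT
ZETA = 1.15      # flaring exponent zeta

def rho_disk(r, z):
    """Gaussian vertical profile with power-law surface density and
    power-law (flared) pressure scale height, as in the equation above."""
    sigma = SIGMA0 * (r / R_OUT) ** P_EXP
    h_p = H0 * (r / R_OUT) ** ZETA
    return sigma / (np.sqrt(2.0 * np.pi) * h_p) * np.exp(-z**2 / (2.0 * h_p**2))
```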
To produce a significant level of polarization (above 5\%), the disk must be strongly inclined (seen close to edge-on), in turn extinguishing the thermal emission from the companion and the innermost disk radii. To match the SED at high inclinations we thus must increase the mass of the central object. An increase in disk mass has an effect on the SED of the central object similar to that of an increase in inclination. Finally, the grain size can be varied to modulate the polarization efficiency. We considered grain sizes of 0.5\,$\mu m$, 1\,$\mu m$ and 2\,$\mu m$. We show a sketch of the model, along with the best fitting results for a 5\,M$_\mathrm{Jup}$ and a 20\,M$_\mathrm{Jup}$ companion in the left column of Fig.~\ref{models}. For the lower mass we require a low disk inclination in order to get enough flux from the companion in the near-infrared. However, this model under-predicts the I-band flux and over-predicts the L-band flux, in addition to not producing significant polarization. For the higher mass we find a much better fit. We can increase the disk inclination to much higher values, which matches the J and H-band polarization well. Furthermore, the resulting SED is a close fit to all photometric measurements of the companion, excluding the H-band. We have also investigated whether we can derive an upper mass limit for the companion by placing a 72\,M$_\mathrm{Jup}$ companion in the center of the disk. We found that even for high inclinations, such a model severely over-predicts the K and L-band flux.\\ Note that in this model, we do not consider the binary as a source of irradiation. The only way that the companion would scatter a significant amount of light from the central binary, at that distance, would be if it were located outside the plane of the circumbinary disk. Otherwise the light from the central binary would be blocked by the disk.
We therefore tested the same model, but with an additional irradiation source (the central binary) at 214\,au, with the companion and its disk placed outside the plane of the circumbinary disk. We find that the scattered light signal from the central binary alone is between 2-3 orders of magnitude fainter than our measurements (see Fig.~\ref{app: binary_irradiation} for comparison). It thus only has a marginal influence on our modeling results and was ignored for simplicity. (b) We then changed our models to test a different geometry, and considered an envelope of dust grains surrounding the companion, as this should enhance the amount of scattered light. The density structure is given by: \begin{equation} \rho_{\rm{env}} (r,z) = \rho_{0}\left( \frac{\sqrt{r^{2} + z^{2}}}{ \sqrt{R_{\rm{out}}^{2} + z_{\rm{out}}^{2}}} \right) ^{q} \end{equation} where $\rho_{0}$ is the density of the envelope at its outer radius. We chose the mass of the envelope so that its optical depth would be the same as in the disk model. Since in this model we do not have a disk to modulate the flux of the companion, we only tested models for a 5\,M$_\mathrm{Jup}$ central object, which provided the closest match to the companion SED. The results are shown in the middle column of Fig.~\ref{models}. We find a similar match to the SED as in the disk model for a low mass companion, but our models underestimate the degree of polarization, with values of at most a few \% ($\sim$7\% for micron-sized dust grains). (c) Our final model is a combination of the previous two. We consider a companion surrounded by a disk plus an additional envelope. For this model, we consider a 20\,M$_\mathrm{Jup}$ companion since it provided the best fit for the disk-only model. The density at each (r,z) is taken as the maximum of $\rho_{\rm{disk}}$ and $\rho_{\rm{env}}$. Note that the mass of the envelope is negligible compared to that of the disk ($\sim$0.4\%).
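The envelope law and the maximum-density combination of model (c) can be sketched in the same spirit; $\rho_0$, the outer radii and the exponent below are illustrative placeholders, not the model values:

```python
import numpy as np

# Illustrative placeholders, not the paper's model values.
RHO0 = 1.0e-3    # envelope density at its outer radius (arbitrary units)
R_OUT = 2.0      # au
Z_OUT = 0.5      # au
Q_EXP = -1.5     # envelope power-law exponent q

def rho_env(r, z):
    """Power-law envelope from the equation above; equals RHO0 at the
    outer radius (R_OUT, Z_OUT)."""
    return RHO0 * (np.hypot(r, z) / np.hypot(R_OUT, Z_OUT)) ** Q_EXP

def rho_total(r, z, rho_disk):
    """Model (c): maximum of a disk density (any callable) and the envelope."""
    return np.maximum(rho_disk(r, z), rho_env(r, z))
```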
This configuration allows us to obtain a large degree of polarization, by increasing the amount of scattered light with the envelope, while reducing the total intensity from the central object with an inclined disk. In the right column of Fig.~\ref{models}, we show the results for three models with different grain sizes. Note that we also varied the inclination of the disk in order to obtain a good fit of the data. Although none of our models fit both the photometry and the level of polarization perfectly, we find that a disk composed of 1\,$\mu$m sized dust grains and a high inclination of 80$^\circ$ is consistent with the observed photometry. This model still under-predicts the polarization in the J-band by a factor of $\sim$1.7. Our model using smaller grains, on the other hand, fits the HST and SPHERE J-band photometry slightly better, while it misses the SPHERE H-band and NACO L-band measurements. Smaller grains also lead to a dramatic over-prediction of the degree of polarization in the near-infrared. Grains larger than 1\,$\mu$m do not contribute significantly to the degree of polarization in the J and H-bands. Given these results it is conceivable that a more complex grain size distribution (instead of a single grain size) including grain sizes between 0.1\,$\mu$m and 1\,$\mu$m may be able to reproduce the degree of polarization as well as the photometry. However, we would like to point out that the parameter space is complex and degenerate between multiple parameters, such as companion mass, disk inclination and dust grain size. Thus we do not claim that the disk plus dust envelope model with the given parameters is the only model that can reproduce our measurements. Additional measurements are needed before an attempt is made to constrain the nature of the companion to CS\,Cha further.
An observation with SPHERE/ZIMPOL to detect the companion in optical polarized light could help to constrain the dust grain sizes as well as the presence of a dust envelope. An ALMA observation on the other hand may constrain the mm-dust mass at the companion position and thus indirectly the mass of the companion itself.\\ From the angle of polarization we can deduce the geometry of such a system. The angle of polarization will mostly be determined by the region of the unresolved disk from which we receive the largest amount of polarized light. In the disk-only model, this is the earth-facing, forward-scattering side of the disk, and in the disk+envelope model these are the "poles" of the circular envelope away from the disk. In both cases we would thus expect that the angle of polarization is aligned or closely aligned with the position angle of the circum-companion disk. We note that this scenario would change in the presence of an outflow which dominates as the source of scattered light. In such a case we would expect the angle of polarization to be perpendicular to the disk plane (\citealt{1989AJ.....98.1368T}). However, we have not modelled such a scenario.
We have, in general, not included this geometrical consideration in our models since the degree of polarization and the photometry are independent of the disk position angle.\\ \begin{table}[!h] \begin{center} \caption{Radiative transfer model parameters.} \begin{tabular}{lccc} \hline \hline Model & (a) Disk & (b) Envelope & (c) Disk \\ Parameters & & & \& Envelope \\ \hline M$_{\rm{comp}}$ [M$_{\rm{Jup}}$]& 5/20 & 5 & 20 \\ T$_{\rm{eff}}$ [K] & 1580/2500 & 1580 & 2500 \\ R$_{\rm{comp}}$ [R$_{\odot}$] & 0.17/0.25 & 0.17 & 0.25\\ R$_{\rm{in}}$ [au] & 0.003 & 0.003 & 0.003 \\ R$_{\rm{out}}$ [au] & 2 & 2 & 2 \\ M$_{\rm{disk}}$ [M$_{\odot}$]& $1.9\times10^{-7}$ & - & $1.9\times10^{-7}$ \\ M$_{\rm{env}}$ [M$_{\odot}$]& - & $8.5\times10^{-9}$ & $2.3\times10^{-10}$ \\ $\rho_{0}$ [g/cm$^{3}$]& - & $1\times10^{-16}$ & $5\times10^{-17}$\\ H$_{\rm{p}}$(R$_{\rm{out}}$)/R$_{\rm{out}}$ & 0.18 & - & 0.18 \\ $\zeta$ & 0.25 & - & 0.25 \\ $p$ & -1 & - & -1 \\ $q$ & - & -1 & -1 \\ \hline \end{tabular} \label{tab:modelparam} \end{center} \end{table} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{models_new_updated.pdf} \caption[]{\textit{1st row:} Sketches of the model families described in section \ref{model-section}. We show in all cases a cross-section. \textit{2nd row:} Photometry of the companion along with the model photometry for different model parameters. The legend gives information about the assumed companion mass, the size of the considered dust grains, as well as the circumplanetary disk inclination (in models a) and c)). \textit{3rd row:} Same as the second row, but for the degree of linear polarization. Colors and line styles represent the same models as in the previous row.} \label{models} \end{figure*} \section{Summary and conclusions} We observed the CS\,Cha system for the first time in high resolution polarimetry with SPHERE/IRDIS in the J and H-band. We resolved a circumbinary disk with an outer extent of 55.6\,au in scattered light.
The disk cavity predicted by previous studies was not detected due to the limited inner working angle of the coronagraph. The upper limit for the radius of the disk cavity is 15.3\,au, consistent with previous models by \cite{2016MNRAS.458.1029R} using unresolved Herschel data. We find that the disk has an inclination of 24.2$^\circ \pm$3.1$^\circ$.\\ Outside of the disk, at a projected separation of 214\,au, we find a faint companion with an extreme degree of linear polarization. To our knowledge this is the first faint and likely very low mass companion to a nearby star that has been discovered in polarized light. With HST and NACO archival data we show with high confidence that the companion is co-moving with the primary stars, and is thus bound to the system, placing it at the same distance and age as CS\,Cha. The complex photometry of the companion could not be explained with current atmosphere models. If just the J and H-band were considered, a 5\,M$_\mathrm{Jup}$ mass might be inferred. However, this does not fit the photometry in other bands, in particular the non-detection in the L-band. Furthermore, a "naked" substellar companion is expected to have a low intrinsic polarization. \cite{2017arXiv170609427S} showed recently that the expected degree of linear polarization from such a companion due to rotational oblateness or patchy cloud cover should not exceed 3\,\% and is typically lower\footnote{See also the previous work by \cite{2001ApJ...561L.123S}, who find a similar range for the degree of linear polarization.}. Thus we suggest that we are looking at a companion with a surrounding disk or dust envelope. We explored the wide parameter space for such a model with the radiative transfer code RADMC3D. We find that we can explain the companion SED and polarization reasonably well, with either a highly inclined disk around a 20\,M$_\mathrm{Jup}$ object or with a disk and additional dust envelope around an object of the same mass.
This puts the companion clearly in the substellar regime: either a very low mass brown dwarf or a high mass planet.\\ From our orbit fit to the available astrometry over a time baseline of 19\,yr, we can conclude that the orbit of the companion is likely eccentric, with a minimum eccentricity of 0.3. This gives some indication of how the companion may have formed. For an in-situ formation, either by core accretion or by gravitational collapse in the outer circumbinary disk, one would not expect an eccentric orbit. The strong misalignment of the circumbinary and the circum-companion disk also does not fit these scenarios. However, the eccentricity may be explained by dynamical interaction with the unresolved stellar binary. The two systems could be caught in Kozai-Lidov type resonances, effectively exchanging relative inclination and eccentricity (see e.g. \citealt{2005ApJ...627.1001T}). Another possibility for an eccentric orbit would be formation at close separations in the circumbinary disk and a subsequent dynamical scattering event, in which again the central binary may have played a role. However, in such a scenario one would expect that the companion lost its surrounding disk and that some sign of perturbation would be visible in the circumbinary disk. Neither seems to be the case.\\ While for typical planet formation scenarios the location and eccentricity of the orbit of the CS\,Cha companion are problematic, this is less so for a more star-like formation by collapse in the molecular cloud in which the CS\,Cha binary also formed.
In such a case the misaligned disks around the companion and the stellar sources would also not be problematic, as many such examples are known, most prominently the HK\,Tau system (\citealt{1998ApJ...502L..65S}), which has a configuration similar to that of the CS\,Cha system.\\ To better constrain the mass and properties of the companion and its surrounding disk, additional observational data are necessary. In particular, ALMA observations will make it possible to detect the amount of mm-sized dust around the companion and will likely reveal its true nature. Additional SPHERE/ZIMPOL observations would help to determine the grain size distribution and also potentially whether the disk or disk-plus-envelope scenario best explains the system configuration. \\ Only a few other systems are known to harbor a sub-stellar companion with a disk around it, such as the FW\,Tau system (\citealt{2014ApJ...781...20K, 2015ApJ...798L..23K}) or the 1SWASP\,J140747.93-394542.6 system (\citealt{2012AJ....143...72M}, \citealt{2015MNRAS.446..411K}). The former was confirmed by ALMA observations, while the latter was detected in transit. However, the CS\,Cha system is the only system in which a circumplanetary disk is likely present as well as a resolved circumstellar disk. It is also, to the best of our knowledge, the first circumplanetary disk directly detected around a sub-stellar companion in polarized light, constraining its geometry. Once the system is well understood it might be considered a benchmark system for planet and brown dwarf formation scenarios. \begin{acknowledgements} We thank an anonymous referee for significantly improving our original manuscript. We acknowledge I. Pascucci, M. Min, M. Hogerhijde, C. Dominik and G. Muro-Arena for interesting discussions. MB acknowledges funding from ANR of France under contract number ANR-16-CE31-0013 (Planet Forming Disks). AJ acknowledges the support by the DISCSIM project, grant agreement 341137 funded by the European Research Council under ERC-2013-ADG.
JO acknowledges support from the Universidad de Valpara\'iso and from ICM N\'ucleo Milenio de Formaci\'on Planetaria, NPF. The research of FS leading to these results has received funding from the European Research Council under ERC Starting Grant agreement 678194 (FALCONER). SPHERE is an instrument designed and built by a consortium consisting of IPAG (Grenoble, France), MPIA (Heidelberg, Germany), LAM (Marseille, France), LESIA (Paris, France), Laboratoire Lagrange (Nice, France), INAF - Osservatorio di Padova (Italy), Observatoire de Geneve (Switzerland), ETH Zurich (Switzerland), NOVA (Netherlands), ONERA (France) and ASTRON (Netherlands) in collaboration with ESO. SPHERE was funded by ESO, with additional contributions from CNRS (France), MPIA (Germany), INAF (Italy), FINES (Switzerland) and NOVA (Netherlands). SPHERE also received funding from the European Commission Sixth and Seventh Framework Programmes as part of the Optical Infrared Coordination Network for Astronomy (OPTICON) under grant number RII3-Ct-2004-001566 for FP6 (2004-2008), grant number 226604 for FP7 (2009-2012) and grant number 312430 for FP7 (2013-2016). This research has made use of the SIMBAD database as well as the VizieR catalogue access tool, operated at CDS, Strasbourg, France. This research has made use of NASA's Astrophysics Data System Bibliographic Services. Finally CG would like to thank Donna Keeley for language editing of the manuscript. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The local theory of linear ordinary differential equations exists in two closely related but different flavors. First, one can consider systems of first order linear equations near a singular point. Such systems form an infinite-dimensional space on which several groups of gauge transformations act naturally. Then the classification problem arises: what is the simplest normal form to which a given system can be reduced by a gauge transformation. This theory is well developed; in particular, delicate results explaining the difference between formal and convergent classification (the Stokes phenomenon) were obtained half a century ago. Another flavor of the theory deals with (scalar) higher order linear differential equations involving only one unknown function. Formally such equations can be reduced to systems of first order equations and vice versa, but the natural group action is lost by such reduction. Instead a notion of Weyl equivalence can be introduced, which makes the classification problem meaningful once again. The two theories are closely parallel (but clearly different) for the mildest type of singularities, the Fuchsian (regular) ones, as was shown in \cite{shira}. In this paper we discuss the theorem by Bernard Malgrange \cite{malgrange}, which is an analogue of the theorem on formal diagonalization of non-resonant irregular singularities \cite{thebook}*{Theorem 20.7}. We start with a brief summary of the theory of systems of first order equations. For simplicity, from the very beginning we concentrate on the formal case, leaving the issue of convergence for remarks. \subsection{Systems of first order linear ordinary differential equations} Denote by $\C[[t]]$ the differential ring of formal Taylor series and by $\Bbbk=\C[t^{-1}][[t]]$ its quotient differential field of formal Laurent series with the usual derivation $\d=\frac{\mathrm d}{\mathrm dt}$.
A \emph{system of first order linear ordinary differential equations} over $\Bbbk$ is defined by an $n\times n$ matrix $M=\{M_{ij}\}\in\Mat(n,\Bbbk)$ and has the form \begin{equation*} \tfrac{\mathrm d}{\mathrm dt} x_i=\sum _{j=1}^n M_{ij}(t)x_j,\qquad i=1,\dots,n. \end{equation*} It is more convenient to write this equation in the matrix form with respect to the unknown $n\times n$-matrix function $X$, specifically singling out the order of the pole of the coefficient matrix as follows, \begin{equation}\label{ls-2} t^{1+r}\,\tfrac{\mathrm d}{\mathrm dt} X=A(t)X,\quad r\in\Z_+,\ A=A_0+A_1t+A_2t^2+\cdots\in\Mat(n,\C[[t]]), \end{equation} where the leading matrix $A_0$ of the matrix formal Taylor series $A(t)$ is assumed to be nonzero. The integer $r\ge 0$ is called the \emph{Poincar\'e index} of the system \eqref{ls-2}; if $r=0$, the system is called \emph{Fuchsian}. The group of \emph{formal gauge transformations} $\GL(n,\C[[t]])$ acts naturally on linear systems of the form \eqref{ls-2} by ``change of variables'': if $H(t)=H_0+H_1t+H_2t^2+\cdots$, $\det H_0\ne 0$, is a formal matrix series, then the transformed system for the new ``unknown'' matrix $Y=H(t)X$ takes the form $t^{1+r}\,\tfrac{\mathrm d}{\mathrm dt} Y=B(t)Y$ with the new matrix coefficient $B(t)=t^{r+1}(\tfrac{\mathrm d}{\mathrm dt}H)H^{-1}+HA(t)H^{-1}$. The natural question is to describe the orbits of this action, in particular, to determine the ``simplest'' form to which a given system can be reduced by a suitable formal gauge transformation. This question is almost completely settled for Fuchsian systems, including the issue of convergence for holomorphic systems and holomorphic gauge transformations. The question for non-Fuchsian systems is much more subtle, especially the issue of convergence, yet the first step of the formal classification is rather simple.
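The gauge rule can be verified symbolically. The following sketch (an illustration, not from the paper) picks an arbitrary invertible $H$ and coefficient matrix $A$ with $r=1$, and checks the operator identity $B(t)H=t^{r+1}\tfrac{\mathrm d}{\mathrm dt}H+HA(t)$, which is equivalent to $Y=H(t)X$ solving the transformed system whenever $X$ solves \eqref{ls-2}.

```python
import sympy as sp

t = sp.symbols('t')
r = 1  # Poincare index of the toy example

# an invertible formal gauge (det H(0) = 1 != 0) and a coefficient matrix
H = sp.Matrix([[1 + t, t], [t**2, 1]])
A = sp.Matrix([[1, t], [0, 2]])

# transformed coefficient matrix for the change of unknown Y = H X
B = t**(r + 1) * H.diff(t) * H.inv() + H * A * H.inv()

# B H - t^{r+1} H' - H A must vanish identically
residual = sp.simplify(B * H - t**(r + 1) * H.diff(t) - H * A)
print(residual)  # prints the zero matrix
```

The same computation with any other invertible $H_0$ confirms that the gauge action depends only on the jet of $H$ entering through $H'H^{-1}$ and conjugation.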
\begin{Def} A non-Fuchsian system \eqref{ls-2} is \emph{resonant}, if among the eigenvalues $\l_1,\dots,\l_n$ of the leading matrix $A_0$ some coincide (occur with zero difference). Otherwise (when all eigenvalues are pairwise different) the system is \emph{non-resonant}, see \cite{thebook}*{\parasymbol 20C}. \end{Def} \begin{Rem} For Fuchsian systems the resonance condition means that some of the eigenvalues differ by a natural number, i.e., $\l_i-\l_j\in\N$ for some $i\ne j$. \end{Rem} \begin{Thm}\label{thm:diag} A non-resonant non-Fuchsian system can be formally diagonalized, i.e., there exists a formal gauge transformation such that the corresponding transform $B(t)$ becomes a diagonal matrix. \end{Thm} In other words, in the non-resonant case the system can be decomposed into a Cartesian product of one-dimensional equations. Appearance of resonances (multiple eigenvalues of $A_0$) leads to a more involved formal normal form. The analytic reasons for the divergence (in general) of the diagonalizing gauge transform (the Stokes phenomenon) are also well understood, see \cite{thebook}*{\parasymbol 16 and \parasymbol 20}. \subsection{Higher order linear operators}\label{sec:operators} The other flavor of the theory deals with linear equations involving only one unknown (scalar) function $u$, but several of its derivatives. For simplicity we will consider only the \emph{homogeneous} equations of this type, which can always be written in the form \begin{equation}\label{lode} a_0(t)u^{(n)}+a_1(t)u^{(n-1)}+\cdots+a_{n-2}(t)u''+a_{n-1}(t)u'+a_n(t)u=0, \end{equation} where $a_0,\dots,a_n\in\Bbbk$ are the coefficients, defined modulo multiplication by a nonzero Laurent series from $\Bbbk$. In particular, one can assume that the leading coefficient $a_0$ is identically one, or on the contrary, that all $a_0,\dots,a_n$ are formal Taylor series (not involving negative powers of $t$).
However, it turns out that the smart choice, simplifying many formulations, is to use a different derivation when expressing a linear dependence between the derivatives of the unknown function. The equation \eqref{lode} can be rewritten in the operator form. Denote by $\d\:\Bbbk\to\Bbbk$, $\d\:u\mapsto\tfrac{\mathrm d u}{\mathrm dt}$, the standard derivation, and identify each element $a\in\Bbbk$ with a ``zero order operator'' $u\mapsto au$. Then the left hand side of \eqref{lode} can be interpreted as the result of applying the differential operator \begin{equation}\label{lodo} L=\sum_{j=0}^n a_j\d^{n-j},\qquad a_0,\dots, a_n\in\Bbbk, \end{equation} to the unknown function $u$ (which may well be in any extension of the field $\Bbbk$). The Leibniz rule implies the commutation law \begin{equation}\label{leib} \d\, t^k=t^k\d+kt^{k-1},\qquad k\in\Z. \end{equation} Denote by $\eu$ the Euler derivation, \begin{equation}\label{euler} \eu=t\,\d=t\,\tfrac{\mathrm d}{\mathrm dt}. \end{equation} Then any linear operator of the form \eqref{lodo} can be re-expanded as the sum \begin{equation}\label{lodo-1} L=\sum_{j=0}^n b_j(t)\eu^{n-j}, \qquad b_j\in\Bbbk, \end{equation} where the coefficients $b_j\in\Bbbk$, as before, are defined modulo a common nonzero factor from $\Bbbk$. The $\C$-linear space of such operators will be denoted $\Bbbk[\eu]$. It is a non-commutative algebra with respect to the operation of composition. The commutation law in $\Bbbk[\eu]$ is given by the formula \begin{equation}\label{comm} \eu^j t^k=t^k(\eu+k)^j, \qquad k\in\Z,\ j\in\Z_+, \end{equation} which looks especially simple compared with the law \eqref{leib} extended to arbitrary monomials.
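Both commutation laws are easy to confirm with a computer algebra system. The sketch below (illustrative only) verifies \eqref{comm} for $j=1$ with a symbolic exponent $k$, together with the re-expansion identity $t^2\d^2=\eu^2-\eu$, a typical step in passing from \eqref{lodo} to \eqref{lodo-1}.

```python
import sympy as sp

t, k = sp.symbols('t k')
u = sp.Function('u')(t)

eu = lambda f: t * sp.diff(f, t)  # the Euler derivation eu = t d/dt

# commutation law (j = 1): eu (t^k u) = t^k (eu + k) u
lhs = eu(t**k * u)
rhs = t**k * (eu(u) + k * u)
print(sp.simplify(lhs - rhs))  # 0

# re-expansion of d^2 in terms of eu: t^2 u'' = (eu^2 - eu) u
diff2 = t**2 * sp.diff(u, t, 2) - (eu(eu(u)) - eu(u))
print(sp.simplify(diff2))  # 0
```

Iterating `eu` in this way reproduces the general pattern $t^j\d^j=\eu(\eu-1)\cdots(\eu-j+1)$ behind the canonical representation.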
\begin{Def} A \emph{canonical representation} of an $n$th order linear differential operator is the representation \eqref{lodo-1} in which: \begin{itemize} \item all coefficients $b_0,\dots,b_n\in\C[[t]]$ are formal Taylor series, not involving negative powers of $t$, and at least one of them has a nonzero free term, $b_j(0)\ne0$; \item all coefficients $b_0,\dots,b_n$ appear to the left of the symbols of the iterated Euler derivations $\eu^j$. \end{itemize} \end{Def} Alternatively, any operator in the canonical representation can be expanded as an infinite series of the form \begin{equation}\label{oper-series} L=\sum_{j\ge 0}t^j p_j(\eu),\qquad p_j\in\C[\eu],\ \max_j\deg_\eu p_j=n=\ord L. \end{equation} \begin{Def}\label{def:Fuchsian} An operator is Fuchsian, or \emph{regular}, if in any canonical representation the leading coefficient is nonvanishing, $b_0(0)\ne0$. Otherwise it is called \emph{irregular}. The expansion \eqref{oper-series} corresponds to a Fuchsian operator if and only if $\deg_\eu p_0=n=\ord L$. \end{Def} \begin{Rem}\label{rem:reduction} A Fuchsian equation \eqref{lodo-1} can be reduced to a Fuchsian system \eqref{ls-2} in the standard way by introducing the formal variables $x_j=\eu^{j-1}u$, $j=1,\dots,n$. The condition of regularity is defined for equations with meromorphic coefficients in terms of the growth rate of their solutions. For scalar equations regularity is equivalent to Fuchsianity. Conversely, a Fuchsian system can be written in the (matrix) operator form as $(\eu-A)X=0$, which \emph{mutatis mutandis} is Fuchsian in the sense of the above Definition. \end{Rem} \subsection{Weyl equivalence} The (infinite-dimensional $\C$-linear) space of differential operators admits no natural action of a gauge transformation group that would be large enough (changes of variable of the form $v=h(t)u$ with a formal series $h\in\C[[t]]$ are obviously insufficient for a meaningful classification).
Instead one can use the fact that differential operators form a \emph{noncommutative algebra}. The following definition was suggested in \cite{shira} based on the fundamental work by {\O}.~Ore \cite{ore}. \begin{Def} Two linear ordinary differential operators $L,M\in\Bbbk[\eu]$ are \emph{Weyl equivalent}, if there exist two \emph{Fuchsian} operators $H,K\in\Bbbk[\eu]$ such that: \begin{enumerate} \item $MH=KL$, and \item $\gcd (H,L)=1$, i.e., there is no nontrivial operator $P\in\Bbbk[\eu]$ such that both $H$ and $L$ are divisible (from the right) by $P$. \end{enumerate} \end{Def} Informally, two operators are Weyl equivalent if there exists a Fuchsian operator $u\mapsto v=Hu$ which maps in a bijective way solutions of the equation $Lu=0$ to solutions of the equation $Mv=0$. It is not obvious why this is indeed an equivalence relation (in particular, why it is symmetric), yet this can be verified \cite{ore,shira}. It turns out that the Weyl classification of \emph{Fuchsian} operators is very similar to the classification of Fuchsian systems of linear equations up to gauge equivalence. In particular, see \cite{shira}: \begin{itemize} \item in the generic (non-resonant) case a Fuchsian operator is Weyl equivalent to an Euler operator from $\C[\eu]$ (i.e., with constant coefficients); \item in the resonant case the normal form is a composition of \emph{polynomial} first order operators $\eu-\l_j(t)$, $\l_j\in\C[t]$, with the degrees of the polynomials $\l_j$ depending on the combinatorial structure of resonances; \item the normal form is Liouville integrable; \item for a Fuchsian operator $L$ with holomorphic (convergent) coefficients, the normal form and the conjugating operators $H,K$ also have holomorphic coefficients. \end{itemize} \subsection{Non-commutative factorization} The mere possibility of non-commutative factorization of Fuchsian operators into factors of first order is a simple fact.
It suffices to note that any Fuchsian equation always has a solution of the form $u(t)=t^\l v(t)$ with $\l\in\C$ and an invertible series $v\in\C[[t]]$ (in the analytic category this solution is generated by an eigenvector of the monodromy operator). Such a solution immediately produces a right Fuchsian factor of order 1 for the corresponding operator. The difficult part of \cite{shira} is to reduce \emph{simultaneously} all factors to polynomial form by a suitable Weyl equivalence. In this paper we discuss a much simpler question on the \emph{possibility} of the noncommutative factorization of irregular differential operators. To the best of our knowledge, this problem was first addressed by B.~Malgrange, who in 1979 sketched a solution in a preprint published only in 2008 \cite{malgrange}. Soon after, a different proof based on valuation theory was published by P.~Robba \cite{robba}. In both cases the answer was given in terms of the Newton diagram of the differential operator, yet the proof was essentially noncommutative. An analogous question in the commutative \emph{algebra of pseudopolynomials} $\C[[t]][\xi]$ was first studied by I.~Newton in 1676 in a letter to H.~Oldenburg that was published only in 1960, according to \cite{arnold,brieskorn}. Newton invented his method of a rotating ruler, which today is formalized using the Newton polygon (resp., Newton diagram), to solve this problem. Even in the commutative case Newton's solution was considerably involved, see \cite{vain-tren} for a modern exposition; an appropriate modification of this proof allows one to treat also the noncommutative case of differential operators, see the excellent textbook \cite{vdp-sing}. However, the modern techniques of singularity theory (blow-up) allow one to obtain the same results in a much simpler way.
In our paper we develop a formal technique which allows us to transfer \emph{all} results for commutative pseudopolynomials to the noncommutative case of differential operators, and we outline the similarity between the respective results. In particular, we prove a result that is a direct analogue of Theorem~\ref{thm:diag} (the necessary definitions are introduced below). \begin{Thm} Let $L$ be a single-slope differential operator with the rational slope $r=p/q$, $\gcd(p,q)=1$, and assume that the roots $\l_1,\dots,\l_m\in\C$ of the corresponding characteristic polynomial are nonresonant, i.e., pairwise different. Then the operator $L$ can be formally decomposed as a noncommutative product of $m$ irreducible operators, $L=L_1\cdots L_m$, with the same slope $r$, having the form $L_j=t^p\eu^q-\l_j+\cdots$. \end{Thm} Note that if the slope is integer, then $q=1$ and the irreducible factors are of order $1$. The general factorization statement is given in Theorem~\ref{thm:main} below: its structure is completely analogous to the structure of the classical factorization theorem for pseudopolynomials. We start with a brief recap of the commutative theory in the form most suitable for our purposes. \subsection{Acknowledgements} This paper appeared after a thorough rethinking of the thesis of the first author \cite{leanne}. We are grateful to many friends and colleagues who came out with most helpful remarks after hearing conference presentations of the results, especially Jean-Pierre Ramis, Michael Singer, Daniel Bertrand, Gal Binyamini and Dmitry Novikov. The second author is the incumbent of the Gershon Kekst Chair of Mathematics. \section{Pseudopolynomials and their factorization} \subsection{Pseudopolynomials} A \emph{pseudopolynomial} is a family of polynomials of degree $\le n=\deg P$ formally depending on a local parameter $t\in(\C,0)$.
The space of pseudopolynomials can be naturally identified with the commutative algebra that is, in a sense, a crossbreed between the algebra of polynomials and the algebra of formal Taylor series, \begin{equation}\label{comm-alg} \^\Cs=\C[\xi]\otimes_\C\C[[t]]. \end{equation} Each element of this algebra can be expanded into the formal series \begin{equation}\label{taylor} P(t,\xi)=\sum_{j=0}^\infty t^j p_j(\xi),\qquad p_j\in\C[\xi],\quad \deg P=\sup_{j}\deg p_j<+\infty \end{equation} (note the boundedness of $\deg p_j$) or as a formal double sum \begin{equation}\label{double} P(t,\xi)=\sum_{(i,j)\in S}c_{ij}t^j\xi^i,\qquad S\subset \Z^2_+. \end{equation} The set $S\subset\Z^2_+$, which belongs to the vertical strip $0\le i\le n=\deg P$, is called the \emph{support} of the pseudopolynomial $P$ and denoted by $\supp P$. \begin{Def}\label{def:NPC} The Newton polygon $\D_P\subseteq\R_+^2$ of a pseudopolynomial $P\in\^\Cs$ is the minimal closed convex set containing the origin $(0,0)$ and the support $\supp P$, which is invariant by the vertical translation $(i,j)\mapsto (i,j+1)$. \end{Def} One can immediately see that the boundary of any Newton polygon consists of two vertical rays over the points $i=0$ and $i=n$ and the graph of a convex piecewise linear function $\chi_P\:[0,n]\to\R_+$, called the \emph{gap function}. This function is non-decreasing, which implies the following obvious conclusion. \begin{figure} \centering \includegraphics[width=0.4\hsize]{NewtonDiag1}\\ \caption{Newton diagram of a pseudopolynomial}\label{fig:ND} \end{figure} \begin{Prop}\label{prop:left} If $(i,j)\in \D_P$ and $0\le i'<i$, then $(i',j)\in\D_P$.
\qed \end{Prop} \begin{Rem}\label{rem:local} This definition is an obvious modification of the standard notion of the Newton polygon $\D_f$ for Taylor series $f$ from $\C[[x,y]]$, defined as the minimal closed convex set which contains the support $\supp f$ and is invariant by the shifts $(i,j)\mapsto (i,j+1)$ and $(i,j)\mapsto (i+1,j)$, see \cite{wall}. It suffices to notice that for a pseudopolynomial $P(t,\xi)$ of degree $n$ the Laurent series $f(x,y)=y^n P(x,\frac 1y)$ does not involve negative powers of $y$. The (usual) Newton polygon for $f$ is obtained by reflection of $\D_P$ in the horizontal axis and shift upwards by $n$. \end{Rem} The following properties of the gap function immediately follow from its construction: \begin{enumerate} \item $\chi$ is defined on $[0,n]$ and $\chi(0)=0$; \item $\chi$ is convex, monotone non-decreasing and piecewise-linear (more accurately, piecewise-affine); \item $\chi$ may be non-differentiable at a point $i\in[0,n]$ if and only if $i$ and $\chi(i)$ are both integer numbers (in which case the point $(i,\chi(i))\in\D_P$ is called a \emph{corner point} or a \emph{vertex} of $\D_P$). \end{enumerate} \begin{Rem} The inverse to the gap function is the smallest concave majorant of the \emph{degree function} $j\mapsto \deg p_j$ derived from the expansion \eqref{taylor}. \end{Rem} \begin{Def} The union of all finite edges of the Newton polygon $\D_P$ is called the \emph{Newton diagram} of the pseudopolynomial $P$ and denoted by $\G_P$. Thus the Newton diagram is the graph of the gap function, and the Newton polygon is its epigraph. \end{Def} \begin{Def}\label{def:admissible} A closed convex polygon $\D\subset\R^2_+$ which is the epigraph of a convex piecewise-linear function $\chi=\chi_\D$ as above, is called \emph{admissible}. The function $\chi=\chi_\D$ will be called the \emph{gap function} for $\D$.
The collection of the different slopes (derivatives) of the affine pieces of the function $\chi$, all of them nonnegative rational numbers, will be called the \emph{Poincar\'e spectrum} $\PS(\D)\subset\Q_+$ of the polygon $\D$. \end{Def} \begin{Def} We call a pseudopolynomial $P$ (resp., its Newton polygon $\D$) a \emph{single-slope} pseudopolynomial (resp., polygon), if its Poincar\'e spectrum consists of a single value, $\PS(P)=\{\rho\}$. The corresponding gap function is then linear on the whole segment $[0,n]$, $\chi_P(i)=\rho i$, $\rho\in\Q_+$. The value $\rho=0$ is not excluded. \end{Def} \begin{Ex}\label{ex:FuchsianPP} By this definition $\PS(P)=\{0\}$ if and only if $\deg P=\max_j \deg p_j=\deg p_0$. We call such (single-slope) pseudopolynomials \emph{Fuchsian}, cf.~with Definition~\ref{def:Fuchsian}. \end{Ex} \begin{Rem} The reason why the collection of slopes is referred to as the Poincar\'e spectrum is as follows. A linear system \eqref{ls-2} of Poincar\'e rank $r\in\Z_+$ can be written as $t^r\eu X=A(t)X$, and after reduction to a scalar equation as explained in Remark~\ref{rem:reduction}, it will generically produce a single-slope operator with the integer slope $r$. \end{Rem} \subsection{Newton polygon of a product} The key property of the Newton polygon is its ``logarithmic behavior'' with respect to multiplication in $\^\Cs$, which generalizes the geometry of exponents in the identity $\xi^n\xi^m=\xi^{n+m}$. \begin{Prop}\label{prop:minksum} For any $P,Q\in\^\Cs$, \begin{equation}\label{NPlog} \D_{PQ}=\D_P+\D_Q, \end{equation} where the right hand side is the Minkowski sum $\{u+v\:u\in \D_P,\ v\in \D_Q\}$. \qed \end{Prop} For monomials this follows from the identity for their (one-point) supports, $\supp (t^{i+i'}\xi^{j+j'})=\supp (t^i\xi^j)+\supp (t^{i'}\xi^{j'})$. This immediately implies the inclusions \begin{equation}\label{additive} \supp(PQ)\subseteq\supp (P)+\supp (Q),\qquad\text{hence}\qquad \D_{PQ}\subseteq\D_P+\D_Q.
\end{equation} The inclusion for supports can be strict, since a lattice point from $\supp (P)+\supp (Q)$ can be represented in several possible ways as the sum of points from $\supp(P)$ and $\supp(Q)$. A cancellation of different contributions is possible, so that the corresponding coefficient of $PQ$ could be zero. The not-so-obvious claim is that the coefficients corresponding to the \emph{corner} points of $\D_P+\D_Q$ cannot vanish because of such cancellation. \begin{Cor} $$ \PS(PQ)=\PS(P)\cup \PS(Q).\qed $$ \end{Cor} As follows from Proposition~\ref{prop:minksum}, the problem of factorization of a pseudopolynomial $R\in\^\Cs$ reduces (although it is \emph{not equivalent}) to the problem of representing the Newton polygon $\D=\D_R$ as the Minkowski sum of two \emph{admissible} polygons, $\D=\D'+\D''$. The admissibility constraints (nonnegativity, vertices only at the integer points of the lattice, two vertical bounding rays etc.) imply the following two geometrically rather obvious statements. \begin{Lem} An admissible \textup(in the sense of Definition~\ref{def:admissible}\textup) polygon is \emph{indecomposable}, i.e., cannot be represented as a Minkowski sum of two nontrivial admissible polygons, if and only if it has a single slope and the non-vertical edge carries no lattice points of $\Z^2_+$ in its interior. Any admissible polygon can be decomposed into the Minkowski sum of indecomposable polygons. \end{Lem} \begin{proof} It can be immediately verified that any admissible polygon can be decomposed into the Minkowski sum of the single-slope polygons. The claim of (in)decomposability for the single-slope polygons is essentially a one-dimensional statement about lattice segments in $\Z^1$. \end{proof} \subsection{Quasihomogeneous pseudopolynomials}\label{sec:qhg} Let $w\in\Q_+$ be a nonnegative rational number and $\wgt=\wgt_w\:\Z^2\to\Q$ the weight function which associates to a monomial $t^j\xi^i$ the weight $\wgt(t^j\xi^i)=\wgt(i,j)=j-wi$.
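The combinatorial objects introduced so far (support, gap function, Poincar\'e spectrum) are easy to compute for a concrete pseudopolynomial. The following is a minimal sketch in Python, not part of the paper's formalism: it assumes a finite support and ignores the vertical rays of the polygon, so it only recovers the Newton diagram and its slopes.

```python
from fractions import Fraction

def gap_function(support):
    """Corner points and slopes of the gap function of a pseudopolynomial.

    `support` is a set of pairs (i, j): i = degree in xi, j = order in t.
    Returns the vertices of the lower convex boundary of the support and
    the list of slopes between them (the Poincare spectrum, when all the
    slopes are nonnegative).  Vertical rays of the polygon are ignored.
    """
    best = {}                            # minimal t-order for each xi-degree
    for i, j in support:
        best[i] = min(j, best.get(i, j))
    pts = sorted(best.items())
    hull = []                            # Andrew's monotone chain, lower hull
    for x3, y3 in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop the middle point if it lies on or above the chord
            if (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append((x3, y3))
    slopes = [Fraction(y2 - y1, x2 - x1)
              for (x1, y1), (x2, y2) in zip(hull, hull[1:])]
    return hull, slopes

# Fuchsian example: deg p_0 is maximal, so the spectrum is {0}
_, spectrum = gap_function({(0, 0), (1, 0), (2, 0), (1, 3)})
```

For the support $\{(0,0),(2,1),(3,3)\}$ the function returns the two slopes $1/2$ and $2$, one for each edge of the corresponding Newton diagram.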
\begin{Def} A pseudopolynomial is called $w$-quasihomogeneous, if all its monomials have the same $w$-weight $\alpha\in\Q$, $$P_\alpha=\sum_{(i,j)\:\wgt_w(i,j)=\alpha}c_{ij}t^j\xi^i.$$ One can instantly see that the support of a quasihomogeneous polynomial belongs to a line with the slope $w$ and is finite (i.e., $P_\alpha\in\C[t,\xi]$ is a genuine polynomial). \end{Def} A quasihomogeneous polynomial of \emph{weight zero} is essentially a polynomial of a single variable. If $w=p/q$ is an irreducible fraction, then all monomials of weight zero are necessarily powers of the generating monomial $t^p\xi^q$ of weight zero, thus $P_0(t,\xi)=\sigma(t^p\xi^q)$ for some $\sigma=\sigma_P\in\C[\l]$. It always factors into linear factors: if $\l_1,\dots,\l_k$ are the complex roots of $\sigma$, then $$ P_0(t,\xi)=c\prod_{s=1}^k (\l_s-t^p\xi^q),\qquad c\in\C,\ c\ne0,\quad \sigma_P(\l_s)=0. $$ An arbitrary quasihomogeneous (pseudo)polynomial $P_\alpha$ of weight $\alpha$ can be represented as a nontrivial monomial of weight $\alpha$ times a quasihomogeneous polynomial of weight zero. To make this representation unique, we will require that this quasihomogeneous polynomial is \emph{without zero roots}, $$ P_\alpha(t,\xi)=ct^j\xi^i\cdot \prod_{s=1}^k (\l_s-t^p\xi^q),\qquad c\prod_s\l_s\ne 0,\quad \sigma_P(\l_s)=0,\ \wgt(i,j)=\alpha. $$ \begin{Def}\label{def:char-poly} The univariate polynomial $\sigma=\sigma_P$ introduced by the above construction, is called the \emph{characteristic polynomial} of the quasihomogeneous pseudopolynomial $P$. Its (nonzero) roots are called \emph{characteristic numbers}. \end{Def} \subsection{Graded algebra of pseudopolynomials} Let, as before, $w\in\Q_+$ be a rational weight and $\wgt(\cdot)$ the corresponding weight function. The algebra $\^\Cs$ is naturally \emph{graded} by this weight, i.e., represented as a countable direct sum, \begin{equation}\label{graded} \^\Cs=\bigoplus_{\alpha\in\Q}\Cs_\alpha,\qquad \Cs_\alpha=\{P\in\^\Cs :\wgt P=\alpha\}.
\end{equation} The index $\alpha$ effectively ranges over the set $\Z_+-w\Z_+\subset\Q$ which is completely ordered (i.e., it is discrete, bounded from below and unbounded from above). All other terms $\Cs_\alpha$ are trivial. This grading agrees with the structure of the algebra: \begin{equation}\label{gr-alg} \Cs_\alpha\cdot\Cs_\beta\subseteq\Cs_{\alpha+\beta},\qquad \forall \alpha,\beta\in\Q. \end{equation} Consequently, any pseudopolynomial $P\in\^\Cs$ can be expanded as a series $P=\sum_{\alpha\in\Q}P_\alpha$, which is in general infinite but always has a well-defined $w$-\emph{leading term} $P_*$ of the minimal weight $\alpha_*=\min \wgt\big|_{\D_P}$. It is tempting to use the imperfect notation $P=P_*+\cdots$ to state the corresponding fact. \subsection{Commutative factorization problem}\label{sec:comm-factorization} We will focus on the following special form of the factorization problem for pseudopolynomials: given $P\in\^\Cs$ with a Newton polygon $\D=\D_P$ and an admissible decomposition $\D=\D'+\D''$ into the Minkowski sum, construct a factorization $P=QR$ with $\D_Q=\D'$, $\D_R=\D''$. Some additional assumptions will be required. \begin{Ex} Assume that $P$ is a Fuchsian pseudopolynomial with pairwise distinct characteristic numbers $\l_1,\dots,\l_d$. Then its Newton polygon decomposes as the Minkowski sum of $d$ identical copies of $[0,1]\times\R_+$, so one can expect that it factors as a product of $d$ linear pseudopolynomials of the form $P(t,\xi)=c(t)\prod_{s=1}^d (\xi-\boldsymbol \l_s(t))$. This is indeed the case, as follows from the (formal) Implicit Function Theorem: if all roots of $p_0$ are simple, $p_0'(\l_s)\ne 0$, then each root can be expressed as $\xi_s=\boldsymbol\l_s(t)\in\C[[t]]$, $\boldsymbol\l_s(0)=\l_s$. \end{Ex} \subsubsection{Factorization in $\C[[x,y]]$} The factorization problem for pseudopolynomials can be instantly reduced to that for formal series in two variables, as mentioned in Remark~\ref{rem:local}.
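The Implicit Function Theorem argument in the Example above is effective: each series $\boldsymbol\l_s(t)$ can be computed by Newton iteration on truncated Taylor series, with every step doubling the accuracy. The following minimal Python sketch (not part of the text; the quadratic pseudopolynomial $P(t,\xi)=\xi^2-3\xi+2+t$ is a hypothetical example) lifts the simple root $\xi=1$ of $p_0$:

```python
from fractions import Fraction

N = 6                                    # all series are truncated mod t^N

def mul(a, b):
    c = [Fraction(0)] * N
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            if i + j < N:
                c[i + j] += x * y
    return c

def add(a, b):
    return [x + y for x, y in zip(a, b)]

def inv(a):
    """Multiplicative inverse of a power series with a[0] != 0."""
    b = [1 / a[0]] + [Fraction(0)] * (N - 1)
    for k in range(1, N):
        b[k] = -sum(a[j] * b[k - j] for j in range(1, k + 1)) / a[0]
    return b

def const(x):
    return [Fraction(x)] + [Fraction(0)] * (N - 1)

def evaluate(poly, x):
    """Horner evaluation of a polynomial in xi with series coefficients."""
    r = const(0)
    for c in reversed(poly):
        r = add(mul(r, x), c)
    return r

# P(t, xi) = xi^2 - 3 xi + (2 + t), coefficients listed by xi-power
P = [add(const(2), [Fraction(0), Fraction(1)] + [Fraction(0)] * (N - 2)),
     const(-3), const(1)]
dP = [const(-3), const(2)]               # dP/dxi

lam = const(1)                           # simple root of p_0 = xi^2 - 3 xi + 2
for _ in range(4):                       # Newton step: lam -= P(lam)/P'(lam)
    lam = add(lam, [-c for c in mul(evaluate(P, lam), inv(evaluate(dP, lam)))])
```

After the iteration $P(t,\boldsymbol\l(t))\equiv 0\bmod t^6$, and the computed branch is $\boldsymbol\l(t)=1+t+t^2+2t^3+5t^4+14t^5+\cdots$ (the coefficients of the closed form $(3-\sqrt{1-4t})/2$).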
The factorization problem for such objects is well known, see \cite{wall}. If the series were convergent, then this would be the problem of determining all irreducible branches of the germ of a planar analytic curve $\{f(x,y)=0\}\subset(\C^2,0)$. The answer is determined by the (classical) Newton diagram of the germ $f$, which is the graph of a piecewise affine convex function $\chi_f\:\R^+\to\R^+$ which \emph{decreases} to zero at some point $\le d$, cf.~with Remark~\ref{rem:local}. The slopes of this function are negative; as before, $f$ is called \emph{single-slope} if its Newton diagram consists of a single edge. \begin{Thm}\label{thm:locA} A formal series $f\in\C[[x,y]]$ admits factorization into single-slope series $f=f_1\cdots f_m$. \qed \end{Thm} With a single-slope series $f$ one can associate its leading part (a quasihomogeneous polynomial), the corresponding characteristic polynomial $\sigma=\sigma_f(\l)$ and its (nonzero) roots $\l_1,\dots,\l_d$, the \emph{characteristic numbers}, exactly as in \secref{sec:qhg}. The only difference is that the weights assigned to $x,y$ are both natural numbers. Obviously, $\sigma_{f_1f_2}=\sigma_{f_1}\sigma_{f_2}\in\C[\l]$. \begin{Thm}\label{thm:locB} Assume that the characteristic numbers of a single-slope series $f\in\C[[x,y]]$ form two disjoint groups, so that $\sigma_f(\l)=\sigma_1(\l)\sigma_2(\l)$ and $\gcd(\sigma_1,\sigma_2)=1$. Then $f$ admits factorization $f=f_1f_2$ so that $\sigma_{f_i}=\sigma_i$, $i=1,2$. \qed \end{Thm} \begin{Cor} Any single-slope series can be factored as a product of terms, each having a single characteristic number, possibly with nontrivial multiplicity. \qed \end{Cor} These theorems immediately imply the following two factorization results for pseudopolynomials. \begin{Thm}\label{thm:ppA} Any pseudopolynomial $P\in\^\Cs$ admits factorization into single-slope terms $P=P_1\cdots P_m$.
\end{Thm} \begin{Thm}\label{thm:ppB} Assume that the characteristic numbers of a single-slope pseudopolynomial $P\in\^\Cs$ form two disjoint groups, so that $\sigma_P$ factors as $\sigma_P(\l)=\sigma_1(\l)\sigma_2(\l)$ with $\gcd(\sigma_1,\sigma_2)=1$. Then $P$ admits factorization $P=P_1P_2$ so that $\sigma_{P_i}=\sigma_i$, $i=1,2$. \end{Thm} \begin{Cor} Any single-slope pseudopolynomial can be factored as a product of terms, each having a single characteristic number, possibly with nontrivial multiplicity. \end{Cor} \begin{proof}[Proof by reduction to the local case] For a pseudopolynomial $P(t,\xi)\in\^\Cs$ of degree $n$ denote $f(x,y)=y^n P(x,1/y)$. Then $f$ is a formal series in $x$ and a polynomial in $y$, that is, an element from $\C[[x,y]]$. Let $f=f_1f_2$ be the factorization of $f$ in the assumptions of Theorem~\ref{thm:locA} (Theorem~\ref{thm:locB}, respectively). By the (formal) Weierstrass preparation theorem, one can assume that, modulo an invertible series, the $f_i$ are polynomials in $y$ of degrees $n_1,n_2$ respectively, with $n_1+n_2=n$. The invertible series must also be polynomial in $y$ of degree $0$, that is, a formal series from $\C[[x]]$. Setting $P_i(t,\xi)=\xi^{n_i}f_i(t,1/\xi)$ gives the required factorization of $P$. \end{proof} \subsubsection{About the proofs}\begin{small} The modern proof of Theorems~\ref{thm:ppA} and~\ref{thm:ppB} relies on desingularization, a sequence of rational monomial transformations which simplify the curve (or the formal series). These transformations (blow-ups) have the form \begin{equation}\label{bup} (x,y)\longmapsto (x,y/x)\qquad\text{or}\qquad (x,y)\longmapsto (x/y,y) \end{equation} and act on the support of a series by an affine transformation which allows one to extract factors of the form $x^p$ or $y^q$.
For instance, if $\D_f$ has a single ``homogeneous edge'' connecting the vertices $(0,p)$ and $(p,0)$ and the corresponding characteristic numbers $\l_1,\dots,\l_p$ are pairwise different, then after a single blow-up one can refer to the implicit function theorem for the proof that $f$ admits factorization into terms corresponding to nonsingular branches, \begin{equation*} f(x,y)=\prod_{i=1}^p (x-\boldsymbol \l_i(x)y),\qquad \boldsymbol \l_i\in\C[[x]],\quad \boldsymbol \l_i(0)=\l_i. \end{equation*} In the case of multiple characteristic values one has to refer to the Weierstrass preparation theorem instead of the implicit function theorem. A single-slope series with the single edge connecting $(0,p)$ and $(q,0)$ requires several blow-ups whose number and types are determined by the Euclidean algorithm for computing $\gcd(p,q)$. Slightly more delicate considerations are required when $f$ has more than one slope, but the idea remains the same. \par\end{small} \section{Homological equation and its solvability} In this section we return to the algebra of pseudopolynomials $\^\Cs=\C[[t]][\xi]$ and attempt to construct factorization in this algebra directly, following Newton's ideas. \subsection{Formal factorization}\label{sec:formfact} Consider an admissible polygon $\D\subset\R_+^2$ and the weight function $\wgt=\wgt_w\:\Z^2\to\Q$ associated with a rational weight $w\in\Q_+$. Denote \begin{equation}\label{qhn} \Cs_\alpha(\D)=\{P\in\Cs_\alpha : \supp P\subseteq\D\}. \end{equation} Then the property \eqref{gr-alg} can be refined as follows: for any two admissible polygons $\D',\D''\subseteq\R^2_+$ and any $\alpha,\beta\in\Q_+$, \begin{equation}\label{gr-alg-D} \Cs_\alpha(\D')\cdot\Cs_\beta(\D'')\subseteq\Cs_{\alpha+\beta}(\D'+\D''). \end{equation} Let $P\in\^\Cs$ be a pseudopolynomial expanded into $w$-quasihomogeneous terms as $P=\sum_\gamma P_\gamma$, and assume that $\D=\D_P=\D'+\D''$ is the admissible decomposition of its Newton polygon.
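For concrete supports, the $w$-quasihomogeneous components of this expansion and the characteristic polynomial of Definition~\ref{def:char-poly} can be read off directly. A minimal Python sketch (not part of the paper's formalism; finite supports only, with normalization and sign conventions simplified):

```python
from fractions import Fraction
from collections import defaultdict

def quasi_components(P, w):
    """Split a polynomial {(i, j): coeff} into w-quasihomogeneous parts,
    grouped by the weight j - w*i  (i = xi-degree, j = t-order)."""
    parts = defaultdict(dict)
    for (i, j), c in P.items():
        parts[Fraction(j) - w * Fraction(i)][(i, j)] = c
    return dict(parts)

def characteristic_poly(part, w):
    """Coefficients of sigma(lam), reading a quasihomogeneous part along
    its support line in steps of the weight-zero monomial t^p xi^q,
    where w = p/q is irreducible (base point: lowest xi-degree)."""
    q = w.denominator
    i0 = min(i for i, j in part)
    coeffs = {(i - i0) // q: c for (i, j), c in part.items()}
    return [coeffs.get(s, 0) for s in range(max(coeffs) + 1)]

# single-slope example with w = 1: the weight-zero part is
# 2 + 3 (t xi) + (t xi)^2, plus one extra monomial of weight 1
P = {(0, 0): 2, (1, 1): 3, (2, 2): 1, (0, 1): 7}
parts = quasi_components(P, Fraction(1))
sigma = characteristic_poly(parts[Fraction(0)], Fraction(1))   # [2, 3, 1]
```

Here `sigma` represents $2+3\l+\l^2$, whose roots are the characteristic numbers of the weight-zero part (up to the sign convention used in the text).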
A factorization of the form $P=QR$ can be achieved by two formal expansions $Q=\sum_\alpha Q_\alpha$, $R=\sum_\beta R_\beta$, if and only if \begin{equation}\label{factor-comm} P_\gamma=\sum_{\alpha+\beta=\gamma}Q_\alpha R_\beta,\qquad Q_\alpha\in\Cs_\alpha(\D'),\ R_\beta\in\Cs_\beta(\D''). \end{equation} Denote the leading terms of the three pseudopolynomials by $P_*,Q_*,R_*$ respectively (of weights $\gamma_*=\min \wgt\big|_\D$, $\alpha_*=\min \wgt\big|_{\D'}$, $\beta_*=\min \wgt\big|_{\D''}$) and assume that \begin{equation}\label{factor-seed} P_*=Q_*R_*\in\Cs_{\gamma_*}(\D). \end{equation} Then \eqref{factor-comm} becomes an infinite \emph{triangular} system of \emph{linear algebraic equations} with respect to the unknown terms $Q_\alpha,R_\beta$ from the corresponding finite-dimensional linear spaces $\Cs_\alpha(\D')$ and $\Cs_\beta(\D'')$. Indeed, each equation can be rewritten as \begin{equation}\label{homolog} Q_*R_{\gamma-\alpha_*}+Q_{\gamma-\beta_*}R_*=P_\gamma-\sum_{\alpha>\alpha_*,\ \beta>\beta_*}Q_\alpha R_\beta. \end{equation} The condition on the weights in the right hand side means that it involves only the terms $Q_\alpha$ (resp., $R_\beta$) of weight strictly less than $\gamma-\beta_*$ (resp., $\gamma-\alpha_*$). If these terms were already determined recursively from the equations \eqref{homolog} solved for all smaller values of $\gamma$, then the right hand side is known and we can study the solvability of the equation in the weight $\gamma$ as well. The equation \eqref{factor-seed} serves as a base for this inductive process. Solvability of these equations depends on the following data: the weight $w$, the two admissible polygons $\D',\D''$ and the initial quasihomogeneous polynomials $Q_*,R_*$ of the appropriate weights.
Denote by $\H$ the linear operator (more precisely, a family (sequence) of linear operators $\H_\gamma$, $\gamma\in\Q_+$) \begin{equation}\label{hom-op} \H\:\Cs_{\gamma-\alpha_*}(\D'')\times \Cs_{\gamma-\beta_*}(\D')\to\Cs_\gamma(\D'+\D''),\qquad (U,V)\mapsto Q_*U+R_*V. \end{equation} \begin{Def}\label{def:homolog} The equation(s) $\H(U,V)=W$ is called the \emph{homological equation} associated with the data $\mathscr H=(w,\D',\D'',Q_*,R_*)$. The homological equation is called \emph{solvable}, if each operator $\H_\gamma$ is \emph{surjective} for all $\gamma\ge\gamma_*=\alpha_*+\beta_*$. \end{Def} This equation can be considered as the linearization of the nonlinear equation $P=QR$ at the ``point'' $Q_*,R_*$ in the same way as it appears in the theory of local normal forms of vector fields etc., see \cite{thebook}*{\parasymbol 4}. Its solvability very strongly depends on the corresponding data $\mathscr H$, in particular, on the choice of the seed polynomials $Q_*,R_*$. \subsection{Examples} We start with the extreme case where $w=0$; it shows that the ``Fuchsian'' part of a pseudopolynomial can always be factored out. \begin{Ex}\label{ex:horizontal} Let $w=0$. Then $\wgt=\deg_t$, and the quasihomogeneous components are of the form $P_j=t^j p_j(\xi)$, $j=0,1,2,\dots$. Let $d=\deg p_0<n=\deg P$. Since the Newton diagram of $P$ contains a nontrivial horizontal segment of length $d$, we have $\D_P=\D'+\D''$, where $\D'$ is a vertical semistrip $[0,d]\times\R_+$. In other words, the gap function $\chi_{\D'}$ vanishes identically on $[0,d]$, and $\chi_{\D''}$ is strictly positive on $(0,n-d]$, i.e., the corresponding Newton diagram has only nonzero slopes. Let $Q_*=P_*=p_0(\xi)$, $R_*=1$.
Substituting the expansions $$ Q=p_0(\xi)+\sum_{j=1}^\infty t^j q_j(\xi),\quad \deg q_j\le d,\qquad R=1+\sum_{j=1}^\infty t^j r_j(\xi),\quad \deg r_j\le n-d $$ into \eqref{factor-comm}, we obtain an infinite series of identities in $\C[\xi]$, \begin{equation}\label{fuchs-out} \begin{aligned} p_0&=p_0,\\ p_1&=q_1+r_1p_0,\\ p_2&=q_2+r_1q_1+r_2p_0,\\ \dots&{\makebox[0.3\columnwidth]{\dotfill}}\\ p_j&=q_j+r_1q_{j-1}+\cdots+r_j p_0. \end{aligned} \end{equation} The initial identity is trivially satisfied. The requirement that the support of $Q$ belongs to $\D'$ means that $\deg q_j\le d$ (the second requirement is then automatically satisfied). The system \eqref{fuchs-out} can be inductively solved with respect to $q_j,r_j$ by the division with remainder of the polynomial $p_j-\sum_{k=1}^{j-1}r_kq_{j-k}$ by $p_0$. The remainder term $q_j$ can be guaranteed to be of degree $\le d-1$ (and then it will be uniquely determined), while $r_j$ will be the respective quotient. This gives a direct proof of a particular case of Theorem~\ref{thm:ppA}. \end{Ex} \subsection{Sylvester map} As was explained in \secref{sec:qhg}, quasihomogeneous polynomials can be expressed as univariate polynomials in the basic monomial of weight zero. An analog of the homological equation \eqref{homolog} for univariate polynomials looks as follows. Denote by $\C_n[\l]$ the linear space of polynomials of degree $\le n-1$, so that $\dim_\C\C_n[\l]=n$, and assume that $\deg q_*=n$, $\deg r_*=m$. Then there is a linear map, called the \emph{Sylvester map}, \begin{equation}\label{sylv} \boldsymbol S\:\C_m[\l]\times\C_n[\l]\to \C_{m+n}[\l], \qquad (u,v)\longmapsto q_*u+r_*v \end{equation} (the matrix of this map in the natural basis is the Sylvester matrix of the two polynomials $q_*,r_*$). It is well known that the Sylvester map is bijective if and only if $\gcd(q_*,r_*)=1$.
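The bijectivity criterion is easy to check in coordinates: the determinant of the Sylvester matrix is (up to sign) the resultant of the two polynomials, and it vanishes exactly when they have a common root. A minimal Python sketch with hypothetical quadratic polynomials (not taken from the text):

```python
from fractions import Fraction

def sylvester_matrix(q, r):
    """Matrix of the Sylvester map (u, v) -> q*u + r*v from C_m x C_n to
    C_{m+n}; q, r are coefficient lists (constant term first) with
    deg q = n = len(q) - 1, deg r = m = len(r) - 1.  Row k collects the
    coefficient of lam^k."""
    n, m = len(q) - 1, len(r) - 1
    size = m + n
    S = [[Fraction(0)] * size for _ in range(size)]
    for k in range(m):                     # columns for u = lam^k, k < m
        for s, c in enumerate(q):
            S[k + s][k] = Fraction(c)
    for k in range(n):                     # columns for v = lam^k, k < n
        for s, c in enumerate(r):
            S[k + s][m + k] = Fraction(c)
    return S

def det(M):
    """Exact determinant over the rationals, by Gaussian elimination."""
    M = [row[:] for row in M]
    d = Fraction(1)
    for col in range(len(M)):
        pivot = next((r for r in range(col, len(M)) if M[r][col]), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            M[col], M[pivot] = M[pivot], M[col]
            d = -d
        d *= M[col][col]
        for r in range(col + 1, len(M)):
            f = M[r][col] / M[col][col]
            for c in range(col, len(M)):
                M[r][c] -= f * M[col][c]
    return d

# q = lam^2 - 1 and lam^2 - 4 are coprime; lam^2 - 3 lam + 2 shares the
# root lam = 1 with q, so the corresponding Sylvester map degenerates
d1 = det(sylvester_matrix([-1, 0, 1], [-4, 0, 1]))   # nonzero
d2 = det(sylvester_matrix([-1, 0, 1], [2, -3, 1]))   # zero
```

The exact rational arithmetic avoids any numerical thresholding when deciding whether the map is bijective.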
However, it is very difficult to apply this result to the study of the homological equation \eqref{homolog}: the dimensions $\dim\Cs_\alpha(\D)$ depend on the weight $\alpha$ in a rather irregular way. In general, the homological operator $\H_\gamma$ acts between spaces of different dimensions, thus proving its surjectivity directly is problematic. However, it follows indirectly from the factorization results for pseudopolynomials established in \secref{sec:comm-factorization}. \subsection{Solvability of the homological equation}\label{sec:solvability} Let $\mathscr H=(w,\D',\D'',Q_*,R_*)$ be the data defining the homological operator $\H$. Consider first the case where one of the polygons, say $\D'$, is single-slope, $\PS(\D')=\{\rho\}$, and choose the weight $w=\rho$. Then $\alpha_*=0$, and $Q_*$ is a quasihomogeneous polynomial of weight zero with nonzero characteristic roots. If $\rho\notin \PS(\D'')$, then $\beta_*\le 0$, the weight achieves its minimum at a corner point and the leading term $R_*$ is a (nontrivial) monomial. \begin{Thm}\label{thm:homA} If $w=\rho\notin \PS(\D'')$, i.e., the polygons $\D',\D''$ have no common slope, then all homological operators are surjective and the homological equation is solvable in any weight $\gamma$. \end{Thm} The second case deals with factorization of the quasihomogeneous polynomials into terms of lower weight. Assume that $\PS(\D')=\PS(\D'')=\{\rho\}$ and the weight is chosen accordingly, $w=\rho$. Then $\alpha_*=\beta_*=0$, and the corresponding characteristic polynomials $\sigma'=\sigma_{Q_*}$ and $\sigma''=\sigma_{R_*}$ are defined as in Definition~\ref{def:char-poly}. \begin{Thm}\label{thm:homB} If $\gcd(\sigma',\sigma'')=1$, i.e., the two characteristic polynomials have no common roots, then all homological operators are surjective and the homological equation is solvable in any weight $\gamma$.
\end{Thm} \begin{proof}[Proof of both theorems] Consider a pseudopolynomial $P$ with the leading part $P_*=Q_*R_*$ with $Q_*,R_*$ as in, say, Theorem~\ref{thm:homA}. By Theorem~\ref{thm:ppA}, $P$ admits factorization of the form $P=(Q_*+\cdots)(R_*+\cdots)$ \emph{regardless} of the higher terms of $P$. Substituting this factorization, we see that for each weight $\gamma>\gamma_*$ the homological equation \eqref{homolog} admits a solution for \emph{some} right hand side. Yet since the term $P_\gamma$ can be changed \emph{arbitrarily} without affecting reducibility, we conclude that the equation $\H_\gamma(U,V)=W$, see \eqref{hom-op}, is solvable for \emph{any} $W$. In exactly the same way Theorem~\ref{thm:homB} follows from Theorem~\ref{thm:ppB}. \end{proof} \begin{Rem}\label{rem:convergence} Theorems~\ref{thm:ppA} and~\ref{thm:ppB} describe factorization of the pseudopolynomials both in the formal context (as stated) and in the analytic context. Consider the commutative algebra $\Cs=\C[\xi]\otimes_\C\mathscr O(t)$, cf.~with \eqref{comm-alg}, where $\mathscr O(t)$ is the algebra of germs of analytic functions at $(\C,0)$, which can be identified with the algebra of convergent Taylor series; the corresponding objects are called \emph{analytic pseudopolynomials}. Then each analytic pseudopolynomial $P\in\Cs$ can be factored as a product of two analytic pseudopolynomials $Q,R\in\Cs$. Moreover, among the (many) formal solutions $(Q,R)$ constructed using the homological equations, one can always find a convergent solution.
\end{Rem} \section{Weyl algebra and factorization of differential operators} \subsection{Weyl algebra} Motivated by the arguments from \secref{sec:operators} on various representations of linear ordinary differential operators, we introduce the (formal) Weyl algebra $\W$ as the algebra of formal series\footnote{The classical Weyl algebra is \emph{generated} by two symbols with the same commutation relation, so consists of noncommutative \emph{polynomials} in these variables.} in the two non-commuting variables $t,\eu$ related by the commutation identity \eqref{comm}, which are in fact \emph{polynomials} in $\eu$. Using the commutation rule, any element $L\in\W$ can be reduced to the infinite formal sum \begin{equation}\label{formal-oper} L=L(t,\eu)=\sum_{(i,j)\in S}c_{ij} t^j\eu^i,\qquad S=\supp L\subset \{0,\dots,n\}\times\Z_+,\ c_{ij}\in\C\ssm\{0\}, \end{equation} where all powers of $t$ always occur to the left of the powers of $\eu$ (the canonical representation). The integer $n=\ord L$ is the order of the operator $L$, and $S$ is called its support. The Newton polygon $\D_L$ is obtained from the support in exactly the same way as in the commutative case (convex hull and invariance by translations). Because of the non-commutativity of $\W$, in general $\supp (LM)\not\subseteq \supp (L)+\supp (M)$. However, the identity \eqref{comm} implies that \begin{equation}\label{e-decrease} t^j\eu^i\cdot t^{j'}\eu^{i'}=t^{j+j'}\eu^{i+i'}+\sum_{k<i+i'}c_{kl}\,t^l\eu^k. \end{equation} This together with Proposition~\ref{prop:left} proves that \begin{equation}\label{minksum-W} \forall L,M\in\W\qquad \D_{LM}=\D_L+\D_M \end{equation} (cf.~with Proposition~\ref{prop:minksum}).
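The reduction to canonical form is effective. The identity \eqref{comm} is not reproduced in this excerpt; the sketch below \emph{assumes} it is the Euler-operator relation $\eu t=t(\eu+1)$, so that $\eu^i t^{j'}=t^{j'}(\eu+j')^i$, which is consistent with the shape of \eqref{e-decrease}:

```python
from math import comb
from collections import defaultdict

def compose(L, M):
    """Compose two operators given in canonical form as {(i, j): c},
    encoding sum c t^j e^i, assuming the Euler-operator commutation
    e t = t (e + 1), hence e^i t^j2 = t^j2 (e + j2)^i."""
    R = defaultdict(int)
    for (i, j), a in L.items():
        for (i2, j2), b in M.items():
            # expand (e + j2)^i by the binomial formula
            for k in range(i + 1):
                R[(k + i2, j + j2)] += a * b * comb(i, k) * j2 ** (i - k)
    return {key: c for key, c in R.items() if c}

# e * t = t e + t = t (e + 1): the highest-order term t e survives, and
# the correction t carries a strictly smaller power of e, cf. (e-decrease)
result = compose({(1, 0): 1}, {(0, 1): 1})   # {(1, 1): 1, (0, 1): 1}
```

All the correction monomials produced by `compose` have the same $t$-power and strictly smaller $\eu$-power than the leading term, exactly as \eqref{e-decrease} asserts.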
\begin{Def} For any $L\in\W$ with the canonical representation \eqref{formal-oper} the pseudopolynomial $P=P(t,\xi)=\sum_{\supp L}c_{ij}t^j\xi^i$ with the same coefficients $c_{ij}$ will be called\footnote{The classical notion of the symbol of a differential operator collects only the terms involving the highest order derivatives.} the \emph{pseudosymbol} of $L$ and denoted by $\PP_L$. Conversely, for a pseudopolynomial $P=P(t,\xi)=\sum c_{ij}t^j\xi^i \in\^\Cs$ we will denote by $P(t,\eu)\in\W$ the result of substituting $\eu$ for $\xi$, $L=\sum_{i,j}c_{ij}t^j\eu^i$. \end{Def} Needless to say, the pseudosymbol is by no means functorial: in general $\PP_{LM}\ne\PP_L\PP_M$ and $P(t,\eu)Q(t,\eu)\ne PQ(t,\eu)$. The correspondence $\W\to\^\Cs$, $L\mapsto\PP_L$ allows one to associate with operators from $\W$ all the notions that were introduced for the pseudopolynomials. Thus we define Fuchsian operators, single-slope operators, the Poincar\'e spectrum, etc. Obviously, the pseudosymbol of a Fuchsian operator is a Fuchsian pseudopolynomial. \subsection{Filtration of the Weyl algebra} Let $w\in\Q_+$ be a rational weight and $\wgt_w(\cdot)$ the corresponding weight function. However (unfortunately), since $\PP_{LM}\ne \PP_L\PP_M$, we do not have a grading of $\W$ by the weights, only a \emph{filtration}. Recall that each grading of an algebra, in particular, the grading $\Cs(\D)=\bigoplus_\alpha\Cs_\alpha(\D)$, canonically defines a filtration by subspaces $$ \U_\alpha(\D)=\bigoplus_{\gamma\ge \alpha}\Cs_\gamma(\D),\qquad \alpha,\gamma\in\Q. $$ This filtration is monotone decreasing, $\U_\alpha(\D)\subseteq\U_\beta(\D)$ if $\alpha\ge\beta$, and satisfies the condition $\U_\alpha(\D)\cdot\U_\beta(\D)\subseteq\U_{\alpha+\beta}(\D)$. Conversely, the grading can be restored from the filtration as follows, \begin{equation}\label{filtograd} \Cs_\alpha(\D)=\U_\alpha(\D)/\U_\alpha^+(\D), \qquad\text{where}\qquad \U_\alpha^+(\D)=\bigcup_{\gamma>\alpha}\U_\gamma(\D).
\end{equation} \begin{Def} Let $\alpha\in\Q_+$ be a rational number and $\D$ an admissible polygon. We define $\W_\alpha(\D)$ as the subspace \begin{equation}\label{w-space} \W_\alpha(\D)=\{L\in\W: \PP_L\in\U_\alpha(\D)\}. \end{equation} In other words, $\W_\alpha(\D)$ denotes the $\C$-space of operators from $\W$ whose pseudosymbol contains only terms of weight $\alpha$ and higher. \end{Def} By definition, $\W_\alpha(\D)\subseteq\W_\beta(\D)$ if $\alpha\ge\beta$, so the spaces $\W_\alpha(\D)$ form a decreasing filtration of $\W(\D)$. This filtration agrees with the composition in $\W$ in the sense that \begin{equation}\label{filter} \W_\alpha(\D)\cdot\W_\beta(\D)\subseteq\W_{\alpha+\beta}(\D)\qquad \forall \alpha,\beta\in\Q, \end{equation} cf.~with \eqref{gr-alg}. Indeed, after reducing the composition of operators $L,M$ of weights $\alpha,\beta$ respectively to the canonical representation, where all powers of $t$ occur to the left of all powers of $\eu$, we affect only terms of weight strictly greater than $\alpha+\beta$, as follows from \eqref{e-decrease} (recall that the weight of $\eu$ is equal to $-w\le0$). Recall that for any choice of the weight we used \emph{all} rational numbers for labeling in the graded algebra $\^\Cs=\bigoplus_{\alpha\in\Q}\Cs_\alpha$: the homogeneous spaces $\Cs_\alpha$ could be nonzero only for countably many values forming an arithmetic progression (depending on $w$). In the same way the decreasing filtration of $\W$ by $\W_\alpha$ has ``jumps'' only at these values. \begin{Prop} For any rational $\alpha\in\Q$, \begin{equation}\label{quotient} \W_\alpha(\D)/\W^+_\alpha(\D)=\Cs_\alpha(\D),\qquad\text{where}\quad \W_\alpha^+(\D)=\bigcup_{\gamma>\alpha}\W_{\gamma}(\D). \end{equation} \end{Prop} \begin{proof} This follows from \eqref{filtograd} and the definition of the subspaces $\W^+_\alpha(\D)$. \end{proof} \subsection{Factorization in the Weyl algebra} Assume that $L\in\W$, $\ord L=n$ and $\D_L=\D'+\D''$.
We look for conditions guaranteeing that $L$ can be decomposed as $L=MN$ with $M,N\in\W$ and $\D_M=\D'$, $\D_N=\D''$. Choose a weight $w\in\Q$ and expand $L$ as the series $L=\sum_\gamma P_\gamma(t,\eu)$, where $P_\gamma$ are the corresponding quasihomogeneous components of the pseudopolynomial $P=\PP_L\in\^\Cs(\D)$. We will look for the factorization $L=MN$ defined by indeterminate pseudopolynomials $Q=\PP_M\in\Cs(\D')$, $R=\PP_N\in\Cs(\D'')$, constructing them inductively and mimicking the formal arguments from \secref{sec:formfact}. All notations will be kept as similar as possible to the commutative case. We assume that both $Q,R$ are expanded as sums of quasihomogeneous components $Q=\sum Q_\alpha$, $R=\sum R_\beta$. The leading quasihomogeneous terms $Q_*,R_*$ of the minimal weights $\alpha_*,\beta_*$ respectively, must yield factorization of the leading term $P_*$. Fix them and consider the equation $\PP_L=\PP_{MN}$. Since $\W$ is non-commutative, the right hand side is not equal to $\PP_M\PP_N$, but for any $\alpha,\beta$ it follows from \eqref{e-decrease} that \begin{equation}\label{pssymb-weight} Q_\alpha(t,\eu)R_\beta(t,\eu)=(Q_\alpha R_\beta)(t,\eu)\bmod \W^+_\gamma,\qquad \gamma=\alpha+\beta, \end{equation} that is, after reducing the composition of operators to the canonical form, the result will have the same leading terms of weight $\gamma=\alpha+\beta$ as if the algebra were commutative.
This means that the pseudopolynomials $Q_\alpha$, $R_\beta$ can be inductively defined from the infinite ``triangular'' system of equations of the form \begin{equation}\label{triang-w} P_\gamma=\sum_{\alpha+\beta=\gamma}Q_\alpha R_\beta + S_\gamma, \qquad Q_\alpha\in\Cs_\alpha(\D'),\ R_\beta\in\Cs_\beta(\D''), \end{equation} cf.~with \eqref{factor-comm}, where $S_\gamma\in\Cs_\gamma(\D)$ is the collection of terms accumulated from re-expansion of the terms $Q_{\alpha'},R_{\beta'}$ with $\alpha'+\beta'<\gamma$, which were already found by the induction hypothesis. The equations \eqref{triang-w} are identical to the equations \eqref{factor-comm}, and their solvability depends only on the properties of $Q_*,R_*$ and the Newton polygons $\D',\D''$ as described in \secref{sec:solvability}. In particular, Theorems~\ref{thm:homA} and~\ref{thm:homB} imply the following results. \begin{Thm}\label{thm:operA} If $\D_L=\D'+\D''$ and the admissible polygons $\D',\D''$ have no common slope, then the operator $L$ admits a formal decomposition $L=MN$ with $M\in\W(\D')$, $N\in\W(\D'')$. \end{Thm} \begin{Thm}\label{thm:operB} If $\D$ is a single-slope admissible polygon and $L\in\W(\D)$ has the characteristic polynomial $\sigma=\sigma_L\in\C[\l]$, then for any factorization $\sigma=\sigma'\sigma''$ with mutually prime polynomials $\sigma',\sigma''$ one can find a formal factorization $L=MN$ by two single-slope operators such that $\sigma_M=\sigma'$, $\sigma_N=\sigma''$. \end{Thm} \begin{proof}[Proof of both Theorems] Each equation in the infinite series \eqref{triang-w} is of the form \eqref{homolog}, with the only difference being an extra term $S_\gamma$ coming from the preceding equations. Its solvability follows from the surjectivity of the corresponding homological operator associated with the data $\mathscr H=(w,\D',\D'',Q_*,R_*)$. \end{proof} As an immediate corollary to these two theorems, we have the following result on reducibility.
\begin{Def} A differential operator $L\in\W$ is called \emph{monic}, if it has a single slope, and the corresponding characteristic polynomial $\sigma_L$ has a single root. \end{Def} \begin{Thm}\label{thm:main} Any differential operator $L\in\W$ admits a decomposition into the non-commutative product of monic operators. \end{Thm} \subsection{Remark on the convergence} It is imperative to stress that all results on factorization of the differential operators, unlike their counterparts on pseudopolynomials, are only formal (cf.~with Remark~\ref{rem:convergence}). Technically, the difference between the two theories can be attributed to the fact that the passage from grading to filtration results in the growth of the number of terms in the right hand side of the homological equation \eqref{triang-w} compared with \eqref{factor-comm}. However, the issue of the divergence of formal transformations diagonalizing (say, in the non-resonant case) irregular singularities was studied in detail, and geometric obstructions were identified as Stokes matrices \cite{thebook}*{\parasymbol 20G}. The ambitious goal behind this paper and its precursor \cite{shira} is to identify analytic obstructions to the formal Weyl classification and formal factorization in a similar form, as a suitable cocycle over a punctured neighborhood of $0\in\C$. However, this project is still in its rudimentary stage. \begin{bibdiv} \begin{biblist} \bib{arnold}{book}{ author={Arnold, V. I.}, title={Huygens and Barrow, Newton and Hooke}, note={Pioneers in mathematical analysis and catastrophe theory from evolvents to quasicrystals; Translated from the Russian by Eric J. F.
Primrose}, publisher={Birkh\"auser Verlag, Basel}, date={1990}, pages={118}, isbn={3-7643-2383-3}, review={\MR{1078625}}, doi={10.1007/978-3-0348-9129-5}, } \bib{brieskorn}{book}{ author={Brieskorn, Egbert}, author={Kn\"orrer, Horst}, title={Plane algebraic curves}, series={Modern Birkh\"auser Classics}, note={Translated from the German original by John Stillwell; [2012] reprint of the 1986 edition}, publisher={Birkh\"auser/Springer Basel AG, Basel}, date={1986}, pages={x+721}, isbn={978-3-0348-0492-9}, review={\MR{2975988}}, doi={10.1007/978-3-0348-5097-1}, } \bib{sqh}{article}{ author={Greuel, Gert-Martin}, author={Pfister, Gerhard}, title={On moduli spaces of semiquasihomogeneous singularities}, conference={ title={Algebraic geometry and singularities}, address={La R\'abida}, date={1991}, }, book={ series={Progr. Math.}, volume={134}, publisher={Birkh\"auser, Basel}, }, date={1996}, pages={171--185}, review={\MR{1395180}}, } \bib{thebook}{book}{ author={Ilyashenko, Yulij}, author={Yakovenko, Sergei}, title={Lectures on analytic differential equations}, series={Graduate Studies in Mathematics}, volume={86}, publisher={American Mathematical Society, Providence, RI}, date={2008}, pages={xiv+625}, isbn={978-0-8218-3667-5}, review={\MR{2363178 (2009b:34001)}}, } \bib{kamgarpour}{article}{ author={Kamgarpour, Masoud}, author={Wheatherhog, Samuel}, title={A New Approach to Jordan Decomposition for Formal Differential Operators}, journal={\texttt{ArXiv}}, volume={1702.03608v1}, year={2017}, month={2}, pages={1--12}, note={Preprint published on February 13, 2017.}, } \bib{malgrange}{article}{ author={Malgrange, Bernard}, title={Sur la r\'eduction formelle des \'equations diff\'erentielles \'a singularit\'es irr\'eguli\`eres}, journal={Pr\'epublication de l'Inst. 
Fourier, Grenoble}, date={1979}, reprint={ title={Singularit\'es irr\'eguli\`eres}, series={Documents Math\'ematiques}, volume={5}, author={Deligne, Pierre}, author={Malgrange, Bernard}, author={Ramis, Jean-Pierre}, publisher={Soci\'et\'e Math\'ematique de France}, date={2007}, isbn={978-2-85629-241-9}, review={\MR{2387754}}, pages={97--107}, } } \bib{leanne}{thesis}{ author={Mezuman, Leanne}, school={Weizmann Institute of Science}, year={2017}, title={Classification of non-Fuchsian linear differential equations}, type={M.Sc.~thesis} } \bib{mero-flat}{article}{ author={Novikov, Dmitry}, author={Yakovenko, Sergei}, title={Lectures on meromorphic flat connections}, conference={ title={Normal forms, bifurcations and finiteness problems in differential equations}, }, book={ series={NATO Sci. Ser. II Math. Phys. Chem.}, volume={137}, publisher={Kluwer Acad. Publ., Dordrecht}, }, date={2004}, pages={387--430}, review={\MR{2085816 (2005f:34255)}}, } \bib{ore}{article}{ author={Ore, \Ore ystein}, title={Theory of noncommutative polynomials}, journal={Ann. of Math. (2)}, volume={34}, date={1933}, number={3}, pages={480--508}, issn={0003-486X}, review={\MR{1503119}}, doi={10.2307/1968173}, } \bib{vdp-sing}{book}{ author={van der Put, Marius}, author={Singer, Michael F.}, title={Galois theory of linear differential equations}, series={Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences]}, volume={328}, publisher={Springer-Verlag, Berlin}, date={2003}, pages={xviii+438}, isbn={3-540-44228-6}, review={\MR{1960772}}, doi={10.1007/978-3-642-55750-7}, } \bib{robba}{article}{ author={Robba, P.}, title={Lemmes de Hensel pour les op\'erateurs diff\'erentiels. Application \`a la r\'eduction formelle des \'equations diff\'erentielles}, language={French}, journal={Enseign. Math. 
(2)}, volume={26}, date={1980}, number={3-4}, pages={279--311 (1981)}, issn={0013-8584}, review={\MR{610528}}, } \bib{shira}{article}{ author={Tanny, Shira}, author={Yakovenko, Sergei}, title={On local Weyl equivalence of higher order Fuchsian equations}, journal={Arnold Math. J.}, volume={1}, date={2015}, number={2}, pages={141--170}, issn={2199-6792}, review={\MR{3370063}}, doi={10.1007/s40598-015-0014-6}, } \bib{vain-tren}{book}{ author={Vainberg, M. M.}, author={Trenogin, V. A.}, title={Theory of branching of solutions of non-linear equations}, note={Translated from the Russian by Israel Program for Scientific Translations}, publisher={Noordhoff International Publishing, Leyden}, date={1974}, pages={xxvi+485}, review={\MR{0344960}}, } \bib{wall}{book}{ author={Wall, C. T. C.}, title={Singular points of plane curves}, series={London Mathematical Society Student Texts}, volume={63}, publisher={Cambridge University Press, Cambridge}, date={2004}, pages={xii+370}, isbn={0-521-83904-1}, isbn={0-521-54774-1}, review={\MR{2107253}}, doi={10.1017/CBO9780511617560}, } \end{biblist} \end{bibdiv} \end{document}
\section{Introduction} Chord-wise elasticity of airfoils is common in biology and in applications such as energy harvesting and sails, and more recently in morphing wing sections, which change their shape continuously \citep{macphee2016fluid,tiomkin2017stability,tang2018aeroelastic}. In this work we examine a chord-wise elastic airfoil in potential flow, clamped to a rigid supporting beam at an arbitrary location along the camber. The configuration can be viewed as a problem of a rear cantilevered elastic sheet connected to an inverted front sheet. Under the simplifying limits of thin airfoil theory and the Euler-Bernoulli beam model, we aim to study the dynamics and instabilities of such configurations. Of primary interest are aerodynamic interactions between the front and rear segments of the elastic airfoil, and the effect of such interactions on the onset of instability. The configuration examined in the current study is particularly relevant to shape-morphing airfoils involving chord-wise elasticity. Shape-morphing airfoils are currently extensively studied due to their potential to enhance the performance of aircraft structures and energy harvesting systems \citep[see ][ among many others]{nguyen2015aeroelastic,takahashi2016development,moosavian2017parametric}. Common current approaches to shape-morphing of airfoils include piezoelectric actuation, shape memory alloys, pneumatic artificial muscles, as well as deployable and foldable structures (see detailed discussions in \cite{thill2008morphing}, \cite{barbarino2011review} and references therein). Realization of shape-morphing airfoils is commonly accompanied by increased chord-wise elasticity, which, for sufficiently soft airfoils, may govern the aeroelastic response of the structure. In addition, chord-wise elasticity is a governing mechanism in tension-dominated membrane wings, common in biology and sail-like structures \citep{tiomkin2013membrane}.
Previous research on membrane wings includes \cite{song2008aeromechanics}, who performed an extensive experimental study, as well as a comparison to a theoretical model of a wing camber under aerodynamic loading. Membrane wings were shown to provide greater lift, and improved lift slopes, due to modification of the camber at different angles of attack. \cite{alon2017steady} presented a framework for the analysis of the aeromechanics of membrane wings. A similar approach was used by \cite{tiomkin2017stability} to examine the stability of membrane wings as a function of the mass and tension of the membrane. Distributed actuation of membrane wings with variable compliance was studied experimentally by \cite{curet2014aerodynamic}, and later numerically by \cite{buoso2015electro}. Both works showed that increased aerodynamic efficiency may be obtained by leveraging distributed actuation of membrane wings. The current work is also relevant to the field of cantilevered elastic sheets, or flags, in uniform flow \citep{eloy2008aeroelastic,alben2008flapping,manela2009forced,alben2015flag,mougel2016synchronized}. The dynamics and stability of configurations in which the flow impinges on the clamped end of the sheet have attracted significant interest \citep[we refer the reader to][ for a recent review]{shelley2011flapping}. In addition, in recent years interest has emerged in the study of the inverted sheet configuration, where the flow impinges on the free end of the elastic sheet \citep{kim2013flapping,gilmanov2015numerical,gurugubelli2015self,sader2016stability,sader2016stabilitya}. These works are mainly motivated by the reduced flow speed required to induce self-oscillations in inverted sheets, which is relevant to energy harvesting applications. The aim of the current work is to connect regular and inverted cantilevered elastic sheets, and to examine the effect of the interaction between the sheets on the stability of the entire configuration.
The structure of this work is as follows: In \S 2 we define the problem and obtain the governing aeroelastic equation. In \S 3 we present several steady-state solutions based on regular asymptotic expansions and inverse solutions (solving for the actuation required to obtain pre-defined deformations). The results are compared with numerical calculations. In \S 4 we examine the effect of solid inertia on transient dynamics by applying multi-scale asymptotic expansions. We obtain stability requirements from the compatibility equations. Concluding remarks are provided in \S 5. \section{Problem Formulation and Scaling} We examine the stability and dynamic response of an elastic two-dimensional airfoil actuated by the pressure field of an external potential laminar flow. The elastic deformation of the airfoil is modelled by the Euler-Bernoulli equation, which is coupled to aerodynamic forces calculated by thin airfoil theory. The examined configuration is illustrated in figure \ref{Figure1}. \begin{figure} \centering \includegraphics[width=1\textwidth]{Figure_1.eps} \caption{Illustration of the examined configuration. $w_0$ (dashed line) is the camber of the airfoil at rest. $w$ (solid line) is the total camber including elastic deformation. The black rectangle marks a rigid support beam to which the front and rear sections of the airfoil are clamped. The airfoil is sufficiently thin so as to allow the use of the thin airfoil and Euler-Bernoulli beam approximations.} \label{Figure1} \end{figure} We denote by $w_0(x)$ the camber at rest, by $d_e(x,t)$ the elastic deformation due to aerodynamic forces, and by $d_a(x,t)$ the forced actuation of the elastic airfoil. The total deformation from the initial state is $d(x,t)=d_e(x,t)+d_a(x,t)$ and the total camber is $w(x,t)=w_0(x)+d(x,t)$. The chord length is $c$. The $x$-coordinate is defined by the edges of the camber at rest so that $w_0(0)=w_0(c)=0$.
The angle-of-attack is $\alpha$ and the velocity far from the airfoil is $(u_\infty\cos(\alpha),u_\infty\sin(\alpha))$. The airfoil is clamped by a rigid support beam at $x=x_c$. The parameters $s$ and $q$ denote the sheet stiffness and the aerodynamic loading per unit length in the perpendicular $z$-direction. The parameters $\mu_s$ and $r_d$ denote the mass per unit area and the elastic damping. The elastic deformation of the airfoil can be described by the Euler-Bernoulli equation \begin{subequations} \label{governing_eq_dimenional} \begin{equation} \frac{\partial^2}{\partial x^2}\left[s\frac{ \partial^2 }{\partial x^2 } \left(w-w_0-d_a\right) \right]+r_d \frac{\partial w}{\partial t}=-\mu_s \frac{ \partial^2 w}{\partial t^2 }+q \end{equation} where $q$ is the aerodynamic load (per unit length in the $z$-direction). We aim to focus on the onset of instabilities, and simplify the governing equations by applying quasi-steady-state aerodynamic calculations. While such simplifications are commonly used \citep[e.g.][]{dowell1967generalized,fitt2001unsteady,mougel2016synchronized,sader2016large}, this assumption limits the validity of the results described in this work to configurations with negligible effects of vortex shedding (see the discussion on the validity of this approximation in \S 2 of \cite{dowell1974aeroelasticity} and \S 5-6 of \cite{bisplinghoff2013aeroelasticity}). Under this approximation, the aerodynamic load $q$ can be expressed as \begin{multline} \label{ThinWing} q=2\rho_\infty u_\infty^2 \Big\{\left[\alpha-\frac{1}{\pi}\int_0^\pi \left(\frac{\partial w}{\partial x} +\frac{1}{u_\infty} \frac{\partial w}{\partial t}\right) d\theta \right] \cotT+ \\ \sum_{n=1}^{\infty}\left[\frac{2}{\pi}\int_0^\pi \left(\frac{\partial w}{\partial x} +\frac{1}{u_\infty} \frac{\partial w}{\partial t} \right) \cos(n\theta)d\theta\right] \sin(n\theta) \Big\}.
\end{multline} \end{subequations} where the auxiliary coordinate $\theta$ is defined by $x=c(1-\cos(\theta))/2$ \citep[e.g.][]{johnston2004review}. Hereafter, capital letters denote normalized variables and asterisk superscripts denote characteristic values (i.e., the normalized function $F$ is defined by $F=f/f^*$, where $f^*$ is a characteristic value of the dimensional function $f$). We define the normalized axial coordinate $X={x}/{c}=(1-\cos(\theta))/{2}$, normalized time $T={t}/{t^*}$, normalized camber at rest $W_0(X)={w_0(x)}/{w_0^*}$, normalized actuation of the profile $D_A(X,T)=d_a(x,t)/d_a^*$, normalized total deformation $D(X,T)= d(x,t)/{d^*}$, normalized rigidity $S(X)={s(x)}/{s^*}$, normalized damping per unit length $B={r_d}/{r^*_d}$, and normalized mass-per-unit-length $ M_s(X)={\mu_s(x)}/{\mu_s ^*}$. Substituting normalized variables and coordinates into (\ref{governing_eq_dimenional}) yields the normalized governing partial integro-differential equation \begin{multline}\label{main_equation} \frac{\partial^2}{\partial X^2}\left[S(X)\frac{\partial^2 }{\partial X^2}\left(D-\Pi_1D_A\right)\right] + \Pi_2 B \frac{\partial D}{\partial T} +\Pi_3 M_s (X)\frac{\partial^2 D}{\partial T^2}\\ =\left[\Pi_4-\frac{1}{\pi} \int_0^\pi \left(\Pi_5 \frac{\partial W_0}{\partial X}+ \Pi_6 \frac{\partial D}{\partial X}+ \Pi_7 \frac{\partial D}{\partial T} \right) d\theta\right]\cotT+\\ \sum_{n=1}^{\infty}{\left[\frac{2}{\pi} \int_0^\pi \left(\Pi_5 \frac{\partial W_0}{\partial X}+ \Pi_6 \frac{\partial D}{\partial X}+ \Pi_7 \frac{\partial D}{\partial T}\right) \cos(n\theta)d\theta\right] \sin(n \theta)} \end{multline} where $\Pi_1-\Pi_7$ are dimensionless ratios defined by \begin{gather} \Pi_1=\frac{d_a^*}{d^*},\quad \Pi_2=\frac{r^* c^4}{s^* t^*},\quad \Pi_3=\frac{\mu_s^* c^4 }{t^{*2} s^*},\quad \Pi_4=\frac{2\rho_\infty u_\infty^2 c^4\alpha}{s^* d^*}, \nonumber\\ \Pi_5=\Pi_4 \frac{w_0^*}{c\alpha},\quad \Pi_6=\Pi_4 \frac{d^*}{c\alpha},\quad \Pi_7=\Pi_4 \frac{d^*}{u_\infty
t^*\alpha}.\label{ratios} \end{gather} The dimensionless number $\Pi_1$ represents the ratio of the deformation due to actuation to the total deformation of the wing. The dimensionless numbers $\Pi_2-\Pi_7$ are all scaled by elastic bending forces, where $\Pi_2$ represents scaled damping, and $\Pi_3$ represents scaled inertia. $\Pi_4-\Pi_7$ represent scaled aerodynamic forces due to the angle-of-attack of a flat camber ($\Pi_4$), the camber curvature at rest ($\Pi_5$), the camber deformation ($\Pi_6$), and the transient motion of the camber ($\Pi_7$). The governing integro-differential equation (\ref{main_equation}) is supplemented by the boundary conditions at the support beam $X=X_c$ \begin{subequations}\label{BCs} \begin{equation}\label{BC_XC} D(X=X_c,T)=\frac{\partial D(X=X_c,T)}{\partial X}=0 \end{equation} together with zero moment and shear at $X=0$ and $X=1$ \begin{equation}\label{BC_ends} \left[\frac{\partial^2 }{\partial X^2}(D-\Pi_1 D_A)\right]_{X=0,1}=\frac{\partial}{\partial X} \left[S\frac{\partial^2 }{\partial X^2}(D-\Pi_1 D_A)\right]_{X=0,1}=0 \end{equation} \end{subequations} and the initial conditions \begin{equation}\label{IVs} D(X,T=0)=F_1(X),\quad \frac{\partial D(X,T=0)}{\partial T}=F_2(X). \end{equation} \section{Steady-state solutions: Regular asymptotics, inverse solutions and numerical verification} We start by examining the simple case of steady-state deformations, and compare the results to numerical calculations using a commercially available code (COMSOL Multiphysics \textregistered 5.2a).
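As a sanity check on the scaling, the dimensionless ratios in (\ref{ratios}) can be evaluated directly from the characteristic scales. The following Python sketch implements the definitions verbatim; the function name and all sample values are illustrative assumptions, not values from the paper:

```python
def dimensionless_ratios(rho_inf, u_inf, c, alpha,
                         s_star, t_star, d_star, da_star,
                         w0_star, r_star, mu_star):
    """Dimensionless groups Pi_1..Pi_7 of equation (ratios).

    All arguments are the characteristic (starred) scales of the problem.
    """
    pi1 = da_star / d_star                                 # actuation vs. total deformation
    pi2 = r_star * c**4 / (s_star * t_star)                # scaled damping
    pi3 = mu_star * c**4 / (t_star**2 * s_star)            # scaled inertia
    pi4 = 2.0 * rho_inf * u_inf**2 * c**4 * alpha / (s_star * d_star)  # flat-camber load
    pi5 = pi4 * w0_star / (c * alpha)                      # camber-at-rest load
    pi6 = pi4 * d_star / (c * alpha)                       # deformation load
    pi7 = pi4 * d_star / (u_inf * t_star * alpha)          # transient load
    return pi1, pi2, pi3, pi4, pi5, pi6, pi7
```

Note that the relations $\Pi_5/\Pi_4=w_0^*/(c\alpha)$ and $\Pi_7/\Pi_6=c/(u_\infty t^*)$ follow immediately from the definitions and can serve as quick consistency checks.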
In this limit, the characteristic time-scale $t^*$ is required to be sufficiently large so that \begin{equation} \Pi_2,\, \Pi_3,\, \Pi_7 \ll 1, \end{equation} reducing (\ref{main_equation}) to the quasi-steady equation \begin{multline}\label{SteadyGov} \frac{\partial^2}{\partial X^2}\left[S(X)\frac{\partial^2 }{\partial X^2}\left(D-\Pi_1D_A\right)\right] =\left[\Pi_4-\frac{1}{\pi} \int_0^\pi \left(\Pi_5 \frac{\partial W_0}{\partial X}+ \Pi_6 \frac{\partial D}{\partial X} \right) d\theta\right]\cotT+\\ \sum_{n=1}^{\infty}{\left[\frac{2}{\pi} \int_0^\pi \left(\Pi_5 \frac{\partial W_0}{\partial X}+ \Pi_6 \frac{\partial D}{\partial X} \right) \cos(n\theta)d\theta\right] \sin(n \theta)}, \end{multline} along with the boundary conditions (\ref{BCs}). (The requirements for stability of such steady-state solutions will be examined in \S 4.) \subsection{Asymptotic expansion for the limit of small deflections} We define the small parameter \begin{equation} \varepsilon=\frac{\Pi_6}{\Pi_4+\Pi_5} \end{equation} (where $\Pi_2,\, \Pi_3,\, \Pi_7 \ll \varepsilon \ll 1$) representing the ratio between aerodynamic forces due to elastic deflection of the camber and the sum of aerodynamic forces due to the angle-of-attack ($\Pi_4$) and the undeformed camber ($\Pi_5$).
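The small parameter and the ordering it requires can be checked numerically; a minimal Python sketch (the function name and the numbers in the test are illustrative assumptions, and the strict inequalities stand in for the asymptotic ordering):

```python
def expansion_parameter(pi2, pi3, pi4, pi5, pi6, pi7):
    """Small parameter eps = Pi_6 / (Pi_4 + Pi_5) of the steady expansion,
    together with a necessary-condition check of the ordering
    Pi_2, Pi_3, Pi_7 << eps << 1."""
    eps = pi6 / (pi4 + pi5)
    ordering_holds = max(pi2, pi3, pi7) < eps < 1.0
    return eps, ordering_holds
```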
Substituting the expansion \begin{equation} D = \sum_{n=0}^{\infty} \varepsilon^n D_n \label{asymptotic_expansion} \end{equation} into (\ref{SteadyGov}), along with $\Pi_6=\varepsilon (\Pi_4+\Pi_5)$, yields the leading-order equation governing $D_0$ \begin{subequations}\label{Steady_Asymp} \begin{multline}\label{le_ord} \frac{\partial^2}{\partial X^2}\left[S(X)\frac{\partial^2 }{\partial X^2}\left(D_0-\Pi_1D_A\right)\right] = \left[ \Pi_4-\frac{1}{\pi} \int_0^\pi \left( \Pi_5 \frac{\partial W_0}{\partial X} \right) d\theta \right]\cotT \\+ \sum_{n=1}^{\infty} {\left[\frac{2}{\pi} \int_0^\pi \left( \Pi_5 \frac{\partial W_0}{\partial X} \right) \cos(n\theta)d\theta\right] \sin(n \theta)}, \end{multline} while the higher-order $O(\varepsilon^n)$ equations governing $D_n$ are given by \begin{multline}\label{n_ord} \frac{\partial^2}{\partial X^2}\left(S\frac{\partial^2 D_n}{\partial X^2}\right)= (\Pi_4+\Pi_5) \Big[ \left(-\frac{1}{\pi} \int_0^\pi \frac{\partial D_{n-1} }{\partial X} d\theta\right)\cotT+ \\ \sum_{l=1}^{\infty}{\left(\frac{2}{\pi} \int_0^\pi \frac{\partial D_{n-1} }{\partial X} \cos(l\theta)d\theta\right) \sin(l \theta)} \Big]. \end{multline} \end{subequations} Since the integrals in (\ref{Steady_Asymp}) depend on known terms, $D_n$ may be computed from (\ref{Steady_Asymp}) by direct integration with respect to $X$. Due to the clamping at $X=X_c$, the curvature will be discontinuous.
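At each order, this iteration only requires the Glauert coefficients of the previous-order slope. A minimal numerical sketch of the load assembly in (\ref{n_ord}), assuming $\cotT$ stands for $\cot(\theta/2)$ as in standard thin airfoil theory; the grid size, mode count, and helper names are arbitrary choices, and the prefactor $\Pi_4+\Pi_5$ is omitted:

```python
import numpy as np

def trapz(f, x):
    """Trapezoidal quadrature (kept explicit for portability)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

def thin_airfoil_load(dDdX, theta, n_modes=8):
    """Thin-airfoil load generated by a slope dD/dX sampled on a theta grid:
    -(1/pi) [int dDdX dtheta] cot(theta/2) + sum_l A_l sin(l*theta),
    with A_l = (2/pi) int dDdX cos(l*theta) dtheta, as in the order-n equation."""
    A0 = trapz(dDdX, theta) / np.pi
    load = -A0 / np.tan(theta / 2.0)          # cot(theta/2) term
    for l in range(1, n_modes + 1):
        Al = 2.0 / np.pi * trapz(dDdX * np.cos(l * theta), theta)
        load += Al * np.sin(l * theta)
    return load
```

For the slope $\partial D/\partial X=\cos\theta$ the only surviving coefficient is $A_1=1$, so the computed load reduces to $\sin\theta$.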
Applying the boundary conditions at the free ends $X=0,1$, the leading-order curvature due to elastic deflections is \begin{subequations}\label{Steady_Asymp_curve} \begin{multline}\label{le_sol_pertubations} \frac{\partial^2 D_0}{\partial X^2}=\Pi_1\frac{\partial^2 D_A}{\partial X^2}+\frac{1}{S}\int_0^\theta \int_0^{\theta^{*}} \Bigg[ \left( \frac{\Pi_4}{4}-\frac{\Pi_5}{4\pi} \int_0^\pi {\frac{\partial W_0}{\partial X} }d\theta \right) \cot \left(\frac{{\theta^{**}}}{2}\right) \\+ \frac{\Pi_5}{2\pi } \sum_{n=1}^\infty \left( \int_0^\pi {\frac{\partial W_0}{\partial X}\cos(n\theta) }d\theta \right) \sin(n{\theta^{**}}) \Bigg] \sin({\theta^{**}})d{\theta^{**}} \sin({\theta^{*}})d{\theta^{*}} \\ +\frac{\pi H(X-X_c)}{4S} \left\{ \Pi_4 \left( \cos(\theta) -\frac{1}{2} \right) +\frac{\Pi_5}{\pi} \int_0^\pi \frac{\partial W_0}{\partial X} \left[\left( \cos({\theta^{*}}) -1 \right) \cos(\theta)- \frac{\cos(2{\theta^{*}})-1}{2}\right]d{\theta^{*}} \right\}, \end{multline} and the higher-order corrections are given by \begin{multline}\label{he_sol_pertubations} \frac{\partial^2 D_n}{\partial X^2}= \frac{\Pi_4+\Pi_5}{4\pi S} \int_0^\theta \int_0^{\theta^{*}} \Bigg[ \left( - \int_0^\pi {\frac{\partial D_{n-1}}{\partial X} }d\theta \right) \cot \left(\frac{{\theta^{**}}}{2}\right) \\+2 \sum_{l=1}^\infty \left( \int_0^\pi {\frac{\partial D_{n-1}}{\partial X}\cos(l\theta) }d\theta \right) \sin(l{\theta^{**}}) \Bigg] \sin({\theta^{**}})d{\theta^{**}} \sin({\theta^{*}})d{\theta^{*}} +\\ \frac{\Pi_4+\Pi_5}{4S} \left\{\int_0^\pi \frac{\partial D_{n-1}}{\partial X} \left[\left( \cos({\theta^{*}}) -1\right) \cos(\theta)- \frac{\cos(2{\theta^{*}})-1}{2}\right]d{\theta^{*}} \right\} H(X-X_c), \end{multline} where $H(X)$ is the Heaviside step function and $\theta^{*}$, $\theta^{**}$ are auxiliary integration coordinates.
\end{subequations} Results (\ref{Steady_Asymp_curve}) for the steady deflection of an actuated airfoil are presented in figure \ref{numeric_validations}a, and compared to numerical simulations of potential flow over an elastic NACA-2412 airfoil. Good agreement is evident. Details regarding the physical and geometric parameters, the numerical scheme, as well as discussion, are presented below in \S 3.3. \subsection{Inverse solutions} For known fluid and airfoil properties, distributed actuation can be applied to achieve reduced aeroelastic deflection of soft airfoils, or to create transition between two predefined cambers. By setting the deformation $D$ and solving for the actuation $D_A$, it is possible to solve (\ref{SteadyGov}) without application of asymptotic expansions. Cancellation of steady aeroelastic deflection by distributed actuation is immediately calculated from (\ref{main_equation}) by setting $D=0$, yielding $D_A$ by \begin{multline}\label{trans_sol} \frac{\partial^2 D_A}{\partial X^2}= -\frac{1}{\Pi_1 S(X)}\int_0^\theta \int_0^{\theta^{*}} \Bigg[ \left( \frac{\Pi_4}{4}-\frac{\Pi_5}{4\pi} \int_0^\pi {\frac{\partial W_0}{\partial X} }d\theta \right) \cot \left(\frac{{\theta^{**}}}{2}\right) \\+ \frac{\Pi_5}{2\pi} \sum_{n=1}^\infty \left( \int_0^\pi {\frac{\partial W_0}{\partial X}\cos(n\theta) }d\theta \right) \sin(n{\theta^{**}}) \Bigg] \sin({\theta^{**}})d{\theta^{**}} \sin({\theta^{*}})d{\theta^{*}} +\frac{H(X-X_c)}{4\Pi_1 S(X)}\cdot\\ \left\{ \Pi_4 \left( \cos(\theta) -\frac{1}{2} \right) +\Pi_5 \int_0^\pi \frac{\partial W_0}{\partial X} \left[\left( \cos({\theta^{*}}) -1\right) \cos(\theta)- \frac{\cos(2{\theta^{*}})-1}{2}\right]d{\theta^{*}} \right\}. \end{multline} (Additional integration coefficients are obtained from clamping boundary conditions (\ref{BC_XC}) at $X=X_c$.) 
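The double integration with clamping constants can be illustrated numerically. A brief sketch using cumulative trapezoidal integration of a prescribed curvature (the constant-curvature input in the usage example is purely illustrative):

```python
import numpy as np

def integrate_curvature(X, curvature, Xc):
    """Recover a deflection from its curvature by two cumulative
    integrations, fixing both integration constants with the clamping
    conditions value = slope = 0 at X = Xc (equation BC_XC)."""
    def cumtrapz0(f):
        # cumulative trapezoidal integral, starting from zero at X[0]
        return np.concatenate(([0.0], np.cumsum(0.5 * (f[1:] + f[:-1]) * np.diff(X))))
    ic = int(np.argmin(np.abs(X - Xc)))
    slope = cumtrapz0(curvature)
    slope -= slope[ic]            # slope vanishes at the support
    defl = cumtrapz0(slope)
    defl -= defl[ic]              # deflection vanishes at the support
    return defl
```

For a constant curvature of $2$ the recovered deflection is $(X-X_c)^2$, which vanishes together with its slope at the support.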
Similarly, a transition between two predefined cambers $W_1(X)$ and $W_2(X)$ can be readily obtained by the following scheme: (I) setting $D_A=0$ and $W_0=W_1-D\Pi_6/\Pi_5$ in (\ref{SteadyGov}) and solving for $D$. In this case the RHS integral in (\ref{SteadyGov}) contains only $W_1$. Hence, the equation is no longer an integro-differential equation and $D$ can be calculated by integration. From $D$, the camber at rest $W_0$ for which the profile $W_1$ is achieved with $D_A=0$ is obtained. (II) After calculating $W_0$, $D_A$ can be calculated by substituting $D\Pi_6/\Pi_5=W_2-W_0$ into (\ref{SteadyGov}) and solving for $D_A$, yielding the camber $W_2$. Figure \ref{numeric_validations}b presents an example of cancellation of aeroelastic deformation, and figure \ref{numeric_validations}c presents a transition between two predefined cambers. Both configurations are compared with numerical calculations, showing good agreement. A detailed description of the numerical scheme, the physical and geometric parameters, and discussion, are presented below. \subsection{Numerical validation} In this section we present illustrative examples of the results from \S 3.1 and \S 3.2, and compare the analysis to numerical calculations. We focus on a NACA-2412 airfoil geometry with a chord of $c=1\,m$, clamped at $x_c=0.25\,m$, with Young's modulus $E=8\,MPa$. The solid density is $\rho_s=1600\, kg/m^3$ and the density of the fluid is $\rho_\infty=1.006\, kg/m^3$ (taken from the standard atmosphere model for an altitude of $2\, km$). The angle-of-attack is $\alpha=5^\circ$, and the uniform potential flow velocity is $u_\infty=40\, m/s$. The specific method by which actuation is achieved is not essential to the analysis, and here we arbitrarily chose camber actuation by a distribution of pressurized internal chambers, commonly used in the field of soft robotics. This actuation approach is known as embedded fluidic networks or pneumatic artificial muscles \citep{thill2008morphing}.
A description of the relation between the function $d_a$ and the pressure and geometry of the chambers is presented in \cite{matia2015dynamics} as a long-wave approximation by \begin{equation} \frac{\partial^2d_a}{\partial x^2}=\phi(x)\psi(p_c,x), \end{equation} where $\phi$ represents the channel density ($1/\phi(x)$ is the distance between the channels) and $\psi(p_c,x)$ is the total change in slope $\partial d_a/\partial x$ due to the actuated channel, where $p_c$ is the pressure within the channels. In the presented calculations the channel cross-section is a circle of diameter $h/5$ with its center located $2h/7$ above the midplane, where $h=h(x)$ is the local thickness of the airfoil. For the above parameters, and the limit of $p_c/E\ll1$, $\psi$ is approximated as $\psi\approx 0.1741(p_c/E)$ \citep[see][for a detailed description]{matia2015dynamics}. The numerical calculations utilized a commercially available code (COMSOL Multiphysics \textregistered 5.2a), with a grid consisting of $10^3$ first-order unstructured triangular elements with an average element quality of $0.94$ for the fluidic domain and $10^3$ second-order unstructured triangular elements with an average element quality of $0.9$ for the solid domain. The size of the rectangular domain was $8c\times10c$, rotated by $\alpha$ so that the velocity condition at the front boundary is perpendicular to the boundary. The model included $10^4$ degrees of freedom. All our solutions converged by at least $6$ orders of magnitude from the value given at the initial condition. In the first step, the solver computed the flow field, allowing the deformations to develop and stabilize. Then, in the second step, internal pressure was applied within the chambers. Figure \ref{numeric_validations} presents a comparison between analytic and numerical results.
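Inverting this relation gives the channel pressure required to produce a target actuation curvature. A small sketch under the stated limit $p_c/E\ll1$ (the channel-density value used in the test is hypothetical, and the 0.1 validity threshold is an arbitrary guard, not a value from the paper):

```python
def required_channel_pressure(curvature_target, phi, E, coeff=0.1741):
    """Channel pressure p_c producing d^2(d_a)/dx^2 = phi * psi with
    psi ~= coeff * p_c / E, valid only for p_c/E << 1."""
    p_c = curvature_target * E / (phi * coeff)
    if p_c / E >= 0.1:  # guard: the linearized psi no longer applies
        raise ValueError("p_c/E is not small; the linearized psi is invalid")
    return p_c
```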
Panel (a) presents the difference between the numerical calculation $w_n$ and the analytic results $w_a$ computed by the asymptotic scheme (\ref{Steady_Asymp_curve}) for a distributed actuation of $\partial^2 d_a/\partial x^2=1.9425$. The channel distribution is presented in the inset, where the channels are pressurized at $p_c=150\,kPa$. The difference between the results is presented for no correction (solid line), the leading-order correction (dashed line) and the first-order correction (dotted line). The asymptotic scheme clearly reduces the discrepancy between the analysis and the numerical computation with increasing order of the correction terms. Panels (b) and (c) in figure \ref{numeric_validations} present a comparison between numerical calculations and the analytic inverse solutions described in \S 3.2. Panel (b) presents deformation cancellation, which determines the required actuation, and thus the geometry of the chambers (see inset in panel b; the channels are pressurized at $p_c=146.5\,kPa$). The camber at rest is marked by a solid line, and the deformed unactuated camber is denoted by a dashed line. The numerical calculation for a camber actuated according to (\ref{trans_sol}) is marked by a dotted line. Clear agreement between the numerical calculation and the analytic results is evident. Panel (c) presents the transformation of the NACA2412 camber into the NACA4412 camber (the channels are pressurized at $p_c=896.2\,kPa$). Good agreement between the analysis and the numerical computations is evident for this case as well. \begin{figure} \centering \includegraphics[width=1\linewidth]{Figure_2.eps} \caption{Numerical validation of the results in \S 3.1 and \S 3.2. Insets show the airfoil shapes, with embedded-fluidic-network distributed actuation (circles), as described in \S 3.3.
Panel (a) presents results for the asymptotic expansion in \S 3.1 in the form of the spatial error of the leading- and first-order solutions relative to the numerical solution ($w_a$ is the analytic solution, $w_n$ is the numerical calculation). Panel (b) presents results for cancellation of aeroelastic deformation by actuation, showing the camber shapes of the original, deformed, and actuated airfoil. Good agreement can be seen. Panel (c) presents results of using actuation to transform a NACA2412 airfoil into a NACA4412 airfoil. Good agreement is seen between the camber shapes of the analytic and numerical solutions.}\label{numeric_validations} \end{figure} \section{Transient solutions: Multi-scale expansion and stability analysis} This section examines transient dynamics and stability in the limit of small structural damping, defined as \begin{equation} \Pi_2=\varepsilon\bptwo, \end{equation} and small aerodynamic forces due to elastic deflection, defined as \begin{equation} \Pi_6=\varepsilon \bpsix ,\quad \Pi_7=\varepsilon \bpsev \end{equation} where $\bar \Pi_2, \bar \Pi_6, \bar \Pi_7\sim O(1)$. For this limit, an order-of-magnitude analysis yields \begin{equation} t^*=c^2\sqrt{\frac{\mu_s}{s^*}},\quad d^*=\frac{2\rho_\infty u_\infty^2 c^3}{s^*}\max{[c\alpha,w_0^*]} \end{equation} and thus hereafter $\Pi_3=1$. In addition, for simplicity, we focus on constant sheet stiffness and mass-per-unit-length: $S(X)=1,\,M_s(X)=1$. Dynamics and stability requirements will be obtained via the compatibility equations of the multi-scale expansions for the different spatial oscillation modes. We refer the reader to \cite{hinch1991perturbation} and \cite{bender2013advanced} for a discussion of the method of multi-scale asymptotic expansions.
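These scale choices can be verified directly; a short sketch implementing $t^*$ and $d^*$ as defined above and confirming that $\Pi_3=1$ by construction (the sample values in the test are illustrative):

```python
import math

def transient_scales(c, mu_s_star, s_star, rho_inf, u_inf, alpha, w0_star):
    """Characteristic time and deflection scales of the transient analysis.
    With t* = c^2 sqrt(mu*/s*), the inertia number Pi_3 = mu* c^4/(t*^2 s*)
    equals one identically."""
    t_star = c**2 * math.sqrt(mu_s_star / s_star)
    d_star = 2.0 * rho_inf * u_inf**2 * c**3 / s_star * max(c * alpha, w0_star)
    pi3 = mu_s_star * c**4 / (t_star**2 * s_star)
    return t_star, d_star, pi3
```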
\subsection{Multiple-scale asymptotic expansion} We introduce a slow time-scale $T_1=\varepsilon T$ as well as an asymptotic expansion with respect to $\varepsilon$, \begin{subequations}\label{expansion} \begin{equation} D(X,T)=D_0(X,T_0,T_1)+\varepsilon D_1(X,T_0,T_1)+O(\varepsilon^2) \end{equation} and thus \begin{equation} \frac{\partial}{\partial T}=\frac{\partial}{\partial T_0}+\varepsilon \frac{\partial}{\partial T_1}. \end{equation} \end{subequations} Substituting (\ref{expansion}) into (\ref{main_equation}), the leading- and first-order equations are \begin{multline}\label{Leading_order_equation} \frac{\partial^4 D_0}{\partial X^4}+ \frac{ \partial^2 D_0}{\partial T_0^2} =\Pi_1 \frac{\partial^4 D_A}{\partial X^4} +\Pi_4 \cotT + \\\Pi_5 \left[ \left( -\frac{1}{\pi}\int_0^\pi \frac{\partial W_0}{\partial X}d\theta \right) \cotT +\sum_{n=1}^\infty \left( \frac{2}{\pi}\int_0^\pi \frac{\partial W_0}{\partial X} \cos(n\theta) d \theta \right) \sin(n\theta) \right] \end{multline} and \begin{multline}\label{first_order_equation} \frac{\partial^4 D_1}{\partial X^4}+ \frac{ \partial^2 D_1}{\partial T_0^2} =-\bptwo B \frac{\partial D_0}{\partial T_0} -2 \frac{\partial^2 D_0}{\partial T_0 \partial T_1}\\ +\bpsix \left[ \left( -\frac{1}{\pi}\int_0^\pi \frac{\partial D_0}{\partial X}d\theta \right) \cotT +\sum_{n=1}^\infty \left( \frac{2}{\pi}\int_0^\pi \frac{\partial D_0}{\partial X} \cos(n\theta) d \theta \right) \sin(n\theta) \right]\\ +\bpsev \left[ \left( -\frac{1}{\pi}\int_0^\pi \frac{\partial D_0}{\partial T_0}d\theta \right)\cotT +\sum_{n=1}^\infty \left( \frac{2}{\pi}\int_0^\pi \frac{\partial D_0}{\partial T_0} \cos(n\theta) d \theta \right) \sin(n\theta) \right]. \end{multline} \begin{subequations} Leading-order boundary and initial conditions for $D_0$ are identical to (\ref{BCs}).
The first-order equation is supplemented by the homogeneous first-order boundary conditions \begin{equation} D_1\biggr\rvert_{X=X_C}=\frac{\partial D_1}{\partial X}\biggr\rvert_{X=X_C} = \frac{\partial^2 D_1}{\partial X^2}\biggr\rvert_{X=0,1}= \frac{\partial^3 D_1}{\partial X^3}\biggr\rvert_{X=0,1}=0, \end{equation} and the initial conditions \begin{equation} D_1(X,T_0=0,T_1=0)=0,\qquad \frac{\partial D_1(X,T_0=0,T_1=0)}{\partial T_0}=0. \end{equation} \end{subequations} The airfoil is connected to a rigid support at $X=X_c$, and we denote hereafter the upstream segment $0<X<X_c$ by the superscript $(0)$ and the downstream segment $X_c<X<1$ by the superscript $(1)$. In addition, we define the auxiliary coordinates $\xi^{(m)}$ as \begin{equation}\label{xi_def} \xi^{(m)}=\frac{X-X_c}{m-X_c}= \frac{\cos(\theta_c)-\cos(\theta)}{2m-1+\cos(\theta_c)}, \end{equation} where $m\in\{0,1\}$. \subsection{Leading-order solution} Although rather tedious, the solution of the leading-order equation can be obtained by standard methods of homogenization and separation of variables \citep{kreyszig2010advanced}, and we therefore present the results directly. The leading-order solution $D_0^{(m)}$ can be expressed by \begin{equation}\label{LE_sol} D_0^{(m)}=\sum_{n=1}^\infty \Theta_n^{(m)}(T_0,T_1) \Xi_n(\xi^{(m)})+D_{0,SS}(\xi^{(m)})+V(\xi^{(m)},T_0,T_1). \end{equation} The function $V^{(m)}$ is the boundary condition homogenization function, given by \begin{equation}\label{V_def} V^{(m)}=-\frac{\Pi_1 (m-X_c)^2\xi^{(m)2}}{6} \left[ 3\frac{\partial ^2 D_A}{\partial X^2}\biggr\rvert_{X=m} -(3-\xi^{(m)})\left[m+(-)^m X_c\right]\frac{\partial ^3 D_A}{\partial X^3}\biggr\rvert_{X=m} \right].
\end{equation} The $\Xi_n(\xi^{(m)})$ functions are the spatial eigenmodes \begin{equation} \Xi_n(\xi^{(m)})=\cos(\lambda_n \xi^{(m)})-\cosh(\lambda_n\xi^{(m)})-\frac{\cos(\lambda_n)+\cosh(\lambda_n)}{\sin(\lambda_n)+\sinh(\lambda_n)} \left[\sin(\lambda_n\xi^{(m)})-\sinh(\lambda_n\xi^{(m)})\right] \end{equation} and $\Theta_n^{(m)}(T_0,T_1)$ are the temporal eigenmodes \begin{subequations} \label{Theta_def} \begin{equation} \label{Theta_hom} \Theta_n^{(m)}(T_0,T_1)=C_n^{(m)}(T_1)\cost +S_n^{(m)}(T_1)\sint + \Theta_{n,A}^{(m)}, \end{equation} where $\Theta_{n,A}^{(m)}$, which represents the influence of the actuation, reads \begin{equation}\label{Theta_a} \Theta_{n,A}^{(m)}= \frac{1}{\beta^{(m)2}_n}\sint \circledast \left[\int_0^1 \left(\Pi_1 \frac{\partial^4 D_A}{\partial X^4} - \frac{\partial^2 V^{(m)}}{\partial T_0^2} \right) \Xi_n(\xi^{(m)})d\xi^{(m)} \right] \end{equation} ($\circledast$ denotes convolution with respect to $T_0$). \end{subequations} \begin{subequations}\label{IVS_C&S} Substituting the initial conditions (\ref{IVs}c) into $D_0^{(m)}$, we obtain the initial values of $C_n^{(m)}(T_1)$, $S_n^{(m)}(T_1)$ by \begin{equation} C_n^{(m)}(0)=\int_0^1 \left[F_1(\xi^{(m)})-D_{0,SS}(\xi^{(m)})-V(\xi^{(m)},0,0)\right] \Xi_n(\xi^{(m)}) d\xi^{(m)} \end{equation} \begin{equation} S_n^{(m)}(0)=\int_0^1 \left[F_2(\xi^{(m)})- \frac{\partial V(\xi^{(m)},0,0)}{\partial T_0}\right] \Xi_n(\xi^{(m)}) d\xi^{(m)}. \end{equation} \end{subequations} The eigenvalues $\lambda_n$ are the positive roots of the transcendental equation \begin{equation}\label{trans_eq} \cos(\lambda_n)\cosh(\lambda_n)=-1 \end{equation} and $\beta_n^{(m)}$ are related to the solution of the transcendental equation (\ref{trans_eq}) via \begin{equation} \label{freq_relation} \beta_n^{(m)}= \frac{\lambda_n}{m+(-)^m X_c}.
\end{equation} Finally, the function $D_{0,SS}$ is the steady-state leading-order solution, which is readily obtained from \begin{subequations} \begin{multline} \frac{\partial^4 D_{0,SS}}{\partial X^4}= \Pi_4 \cotT +\\ \Pi_5 \left[ \left( -\frac{1}{\pi}\int_0^\pi \frac{\partial W_0}{\partial X}d\theta \right) \cotT +\sum_{n=1}^\infty \left( \frac{2}{\pi}\int_0^\pi \frac{\partial W_0}{\partial X} \cos(n\theta) d \theta \right) \sin(n\theta) \right] \end{multline} with boundary conditions \begin{equation} \frac{\partial^2 D_{0,SS}}{\partial X^2}\biggr\rvert_{X=m}=\frac{\partial^3 D_{0,SS}}{\partial X^3}\biggr\rvert_{X=m}= D_{0,SS}\biggr\rvert_{X=X_c}=\frac{\partial D_{0,SS}}{\partial X}\biggr\rvert_{X=X_c}=0. \end{equation} \end{subequations} \subsection{Identification of secular terms} Substituting the above leading-order solution into the equation for the first-order correction, equation (\ref{first_order_equation}) becomes \begin{multline} \label{first_order} \frac{\partial^4 D_1^{(m)}}{\partial X^4} + \frac{\partial^2 D_1^{(m)} }{\partial T_0^2}=\\ -2 \sum_{n=1}^\infty \frac{\partial^2 \Theta_n^{(m)} }{\partial T_0 \partial T_1} \Xi_n(\xi^{(m)}) -\bptwo B \frac{\partial V^{(m)}}{\partial T_0} -\bptwo \sum_{n=1}^\infty B_n \frac{\partial \Theta_n^{(m)}}{\partial T_0}\Xi_n(\xi^{(m)}) \\+ \bpsix \Biggr[-\frac{1}{\pi} \left[ \sum_{n=1}^\infty \left(\Theta_n^{(0)} \int_0^{\theta_c}\frac{\partial \Xi_n^{(0)}}{\partial X}d\theta+ \Theta_n^{(1)} \int_{\theta_c}^\pi \frac{\partial \Xi_n^{(1)}}{\partial X}d\theta \right) \right]\cotTm \\+\frac{2}{\pi} \sum_{l=1}^\infty \left[ \sum_{n=1}^\infty \left(\Theta_n^{(0)} \int_0^{\theta_c}\frac{\partial \Xi_n^{(0)}}{\partial X} \cos(l\theta) d\theta +\Theta_n^{(1)} \int_{\theta_c}^\pi\frac{\partial \Xi_n^{(1)}}{\partial X} \cos(l\theta) d\theta \right) \right] \sin(l\thm) \Biggr]\\ +\bpsev \Biggr[-\frac{1}{\pi} \left[ \sum_{n=1}^\infty \left(\frac{\partial \Theta_n^{(0)}}{\partial T_0} \int_0^{\theta_c}\Xi_n^{(0)}d\theta +\frac{\partial \Theta_n^{(1)}}{\partial T_0} \int_{\theta_c}^\pi
\Xi_n^{(1)}d\theta \right) \right]\cotTm \\+\frac{2}{\pi} \sum_{l=1}^\infty \left[ \sum_{n=1}^\infty \left(\frac{\partial \Theta_n^{(0)}}{\partial T_0} \int_0^{\theta_c}\Xi_n^{(0)}\cos(l\theta) d\theta +\frac{\partial \Theta_n^{(1)}}{\partial T_0} \int_{\theta_c}^\pi \Xi_n^{(1)}\cos(l\theta) d\theta \right) \right] \sin(l\thm) \Biggr]\\ -\frac{ \cotTm}{\pi}\int_0^\pi \left[ \bpsix \left(\frac{\partial D_{0,SS}}{\partial X}+\frac{\partial V}{\partial X} \right) +\bpsev \frac{\partial V}{\partial T_0} \right] d\theta \\ + \sum_{l=1}^\infty \sin(l\thm)\frac{2}{\pi} \int_0^\pi \left[ \bpsix \left(\frac{\partial D_{0,SS}}{\partial X}+\frac{\partial V}{\partial X} \right) +\bpsev \frac{\partial V}{\partial T_0} \right] \cos(l\theta) d\theta. \end{multline} Since the boundary conditions for $D_1^{(m)}$ are homogeneous, we suggest a solution of the form \begin{equation} D_1^{(m)}(\xi^{(m)},T_0,T_1)=\sum_{n=1}^\infty\Xi_n(\xi^{(m)})Q_n^{(m)}(T_0,T_1) \end{equation} where $\Xi_n(\xi)$ are identical to the eigenmodes of the leading-order solution. Hence, the LHS of equation (\ref{first_order}) becomes \begin{equation} \frac{\partial^4 D_1^{(m)}}{\partial X^4} + \frac{\partial^2 D_1^{(m)} }{\partial T_0^2}= \sum_{n=1}^\infty \left[ Q_n^{(m)}\beta^{(m)4}_n+\frac{\partial^2 Q_n^{(m)}}{\partial T_0^2} \right] \Xi_n(\xi^{(m)}).
\end{equation} We define the operator \begin{multline}\label{operator_defenition} \mathcal{N}\left[{F(\xi^{(j)})};{G(\xi^{(i)})}\right]= \int_0^1\bigg[-\frac{1}{\pi} \left( \int_{\theta^{(j)}(\xi^{(j)}=1-j)}^{\theta^{(j)}(\xi^{(j)}=j)} F(\xi^{(j)}) d\theta^{(j)} \right)\cotTj \\ +\frac{2}{\pi}\sum_{k=1}^\infty \left( \int_{\theta^{(j)}(\xi^{(j)}=1-j)}^{\theta^{(j)}(\xi^{(j)}=j)} F(\xi^{(j)}) \cos(k\theta^{(j)})d\theta^{(j)} \right)\sin(k\theta^{(i)}) \bigg]G(\xi^{(i)}) d\xi^{(i)} \end{multline} and for brevity, we hereafter denote the auxiliary scalars $\mathcal{I}^{k,(i)}_{n,(j)}$ and $\mathcal{J}^{k,(i)}_{n,(j)}$ as \begin{subequations}\label{Aux_scalars} \begin{equation} \mathcal{I}^{k,(i)}_{n,(j)}= \mathcal{N}\left[{\Xi_n(\xi^{(j)})};{\Xi_k(\xi^{(i)})}\right] \end{equation} and \begin{equation} \mathcal{J}^{k,(i)}_{n,(j)}= \mathcal{N}\left[\frac{\partial \Xi_n(\xi^{(j)})}{\partial X};\Xi_k(\xi^{(i)})\right] \end{equation} \end{subequations} representing interaction between the sheets' structural modes and the accompanying pressure distribution due to quasi-static aerodynamic modes. 
Multiplying equation (\ref{first_order}) by the spatial eigenmodes $\Xi_k(\xi^{(m)})$, interchanging the order of summation, integrating from $\xi^{(m)}=0$ to $1$ and applying orthonormality of the eigenmodes yields \begin{multline}\label{first_order_full_eq} Q_k^{(m)}\beta^{(m)4}_k+ \frac{\partial^2 Q_k^{(m)}}{\partial T_0^2}=\\ -\bptwo B_k \int_0^1 \frac{\partial V^{(m)}}{\partial T_0}\Xi_k(\xi^{(m)})d\xi^{(m)} -\bptwo B_k \frac{\partial \Theta_k^{(m)}}{\partial T_0} -2 \frac{\partial ^2 \Theta_k^{(m)}}{\partial T_0 \partial T_1} \\+ \sum_{n=1}^\infty \left[\bpsev \frac{\partial \Theta_n^{(0)}}{\partial T_0} \mathcal{I}^{k,(m)}_{n,(0)}+\bpsev \frac{\partial \Theta_n^{(1)}}{\partial T_0} \mathcal{I}^{k,(m)}_{n,(1)}+ \bpsix \Theta_n^{(0)} \mathcal{J}^{k,(m)}_{n,(0)}+ \bpsix \Theta_n^{(1)} \mathcal{J}^{k,(m)}_{n,(1)} \right]\\ +\bigg[-\frac{ 1}{\pi}\int_0^\pi \left[ \bpsix \left(\frac{\partial D_{0,SS}}{\partial X}+\frac{\partial V}{\partial X} \right) +\bpsev \frac{\partial V}{\partial T_0} \right] d\theta \Bigg] \int_0^1 \cotTm \Xi_k(\xi^{(m)}) d\xi^{(m)} + \\ \sum_{l=1}^\infty \bigg[\frac{2}{\pi} \int_0^\pi \left[ \bpsix \left(\frac{\partial D_{0,SS}}{\partial X}+\frac{\partial V}{\partial X} \right) +\bpsev \frac{\partial V}{\partial T_0} \right] \cos(l\theta) d\theta \Bigg] \int_0^1 \sin(l\thm) \Xi_k(\xi^{(m)}) d\xi^{(m)}. \end{multline} Equation (\ref{first_order_full_eq}) governs the temporal evolution of each of the spatial eigenmodes. In order to identify secular terms \citep[which are solutions of the homogeneous equation, see][Ch.
11]{bender1978advanced}, equation (\ref{Theta_def}) is substituted into equation (\ref{first_order_full_eq}), yielding \begin{multline}\label{temp_ref} Q_k^{(m)}\beta^{(m)4}_k+ \frac{\partial^2 Q_k^{(m)}}{\partial T_0^2}= -\bptwo B_k \int_0^1 \frac{\partial V^{(m)}}{\partial T_0}\Xi_k(\xi^{(m)})d\xi^{(m)}\\ - \bptwo B_k \beta_k^{(m)2} \left[ S_k^{(m)}(T_1) \cosk - C_k^{(m)}(T_1) \sink \right] \\ -2\beta_k^{(m)2} \left[ \frac{\partial S_k^{(m)}(T_1) }{\partial T_1} \cosk - \frac{\partial C_k^{(m)}(T_1)}{\partial T_1} \sink \right] \\+ \sum_{n=1}^\infty \Bigg\{ \bpsev \beta_n^{(0)2} \bigg[ S_n^{(0)}(T_1) \costz - C_n^{(0)}(T_1) \sintz \bigg] \mathcal{I}^{k,(m)}_{n,(0)} + \\ \bpsev \beta_n^{(1)2} \bigg[ S_n^{(1)} (T_1)\costo - C_n^{(1)} (T_1)\sinto \bigg] \mathcal{I}^{k,(m)}_{n,(1)} + \\ \bpsix \bigg[ C_n^{(0)}(T_1)\costz +S_n^{(0)}(T_1)\sintz \bigg] \mathcal{J}^{k,(m)}_{n,(0)}+\\ \bpsix \bigg[ C_n^{(1)}(T_1) \costo +S_n^{(1)}(T_1)\sinto\bigg] \mathcal{J}^{k,(m)}_{n,(1)} \Bigg\} + \hat{F}^{(m)}_k. \end{multline} The function $\hat{F}^{(m)}_k$ includes the effect of continuous actuation of the airfoil (i.e. shape modification of the airfoil camber at rest by $D_A$). These terms cannot be classified as secular (or not secular) for a general function $D_A$, and need to be assessed separately for any specific actuation function. We thus define $\hatS \sinkin$ and $\hatC \coskin$ as the secular terms of $\hat{F}^{(m)}_k$, and the remaining terms as $R_k^{(m)}(T_0)$.
The function $\hat{F}^{(m)}_k$ thus reads \begin{multline}\label{Fhat} \hat{F}^{(m)}_k=-\bptwo B_k \int_0^1 \frac{\partial V^{(m)}}{\partial T_0}\Xi_k(\xi^{(m)})d\xi^{(m)} -\bptwo B_k \frac{\partial \Theta_{k,A}^{(m)}}{\partial T_0} +\\ \sum_{n=1}^\infty \Bigg[ \bpsev\beta_n^{(0)2} \frac{\partial \Theta_{n,A}^{(0)}}{\partial T_0} \mathcal{I}^{k,(m)}_{n,(0)} + \bpsev\beta_n^{(1)2} \frac{\partial \Theta_{n,A}^{(1)}}{\partial T_0} \mathcal{I}^{k,(m)}_{n,(1)} + \bpsix \Theta_{n,A}^{(0)} \mathcal{J}^{k,(m)}_{n,(0)}+ \bpsix \Theta_{n,A}^{(1)} \mathcal{J}^{k,(m)}_{n,(1)} \Bigg] \\+ \bigg[-\frac{ 1}{\pi}\int_0^\pi \left[ \bpsix \left( \frac{\partial D_{0,SS}}{\partial X}+\frac{\partial V}{\partial X} \right) +\bpsev \frac{\partial V}{\partial T_0} \right] d\theta \Bigg] \int_0^1 \cotTm \Xi_k(\xi^{(m)}) d\xi^{(m)} + \\ \sum_{l=1}^\infty \bigg[\frac{2}{\pi} \int_0^\pi \left[ \bpsix \left( \frac{\partial D_{0,SS}}{\partial X}+\frac{\partial V}{\partial X} \right) +\bpsev \frac{\partial V}{\partial T_0} \right] \cos(l\theta) d\theta \Bigg] \int_0^1 \sin(l\thm) \Xi_k(\xi^{(m)}) d\xi^{(m)}\\ =R_k^{(m)}+\hatS \sink + \hatC \cosk.
\end{multline} Substituting (\ref{Fhat}), equation (\ref{temp_ref}) thus takes the form \begin{multline}\label{secular_terms_iden} \frac{\partial^2 Q_k^{(m)}}{\partial T_0^2} +\beta^{(m)4}_k Q_k^{(m)}= R_k^{(m)} +\\ \left[ \bptwo B_k\beta_k^{(m)2} C_k^{(m)} (T_1) +2\beta_k^{(m)2} \frac{\partial C_k^{(m)} (T_1) }{\partial T_1} +\hatS \right] \sink- \\ \left[ \bptwo B_k\beta_k^{(m)2} S_k^{(m)} (T_1) +2\beta_k^{(m)2} \frac{\partial S_k^{(m)}(T_1)}{\partial T_1} +\hatC \right]\cosk +\\ \sum_{n=1}^\infty \Bigg[ \bigg[\bpsix S_n^{(0)} (T_1) \mathcal{J}^{k,(m)}_{n,(0)} -\bpsev \beta_n^{(0)2} C_n^{(0)}(T_1) \mathcal{I}^{k,(m)}_{n,(0)} \bigg] \sintz + \\ \bigg[ \bpsev \beta_n^{(0)2} S_n^{(0)}(T_1) \mathcal{I}^{k,(m)}_{n,(0)} +\bpsix C_n^{(0)}(T_1) \mathcal{J}^{k,(m)}_{n,(0)} \bigg] \costz + \\ \bigg[\bpsix S_n^{(1)}(T_1) \mathcal{J}^{k,(m)}_{n,(1)} - \bpsev \beta_n^{(1)2} C_n^{(1)}(T_1) \mathcal{I}^{k,(m)}_{n,(1)} \bigg]\sinto+\\ \bigg[\bpsev \beta_n^{(1)2} S_n^{(1)}(T_1) \mathcal{I}^{k,(m)}_{n,(1)} +\bpsix C_n^{(1)}(T_1) \mathcal{J}^{k,(m)}_{n,(1)} \bigg]\costo \Bigg], \end{multline} allowing clear identification of the secular terms of the temporal equation for each of the spatial eigenmodes. \subsection{Compatibility equations} From equation (\ref{secular_terms_iden}) we can obtain compatibility equations and calculate the leading-order temporal behaviour of the orthogonal spatial modes. Depending on the value of $X_c$, the interaction between the front (segment $(0)$) and rear (segment $(1)$) parts of the wing can be separated into three distinct cases, using the relation (\ref{freq_relation}): (I) No identical natural oscillation frequencies of the two segments, i.e. $ \beta_k^{(0)} \ne \beta_l^{(1)}\, \forall\, l,k $. (II) A single identical natural oscillation frequency, i.e.
$ \beta_k^{(0)} = \beta_l^{(1)}$, for a single combination of $l\ne k$ (this occurs for an infinite number of discrete values of $X_c= {\lambda_k}/{(\lambda_k+\lambda_l)}$, for positive roots $\lambda$ of (\ref{trans_eq})). (III) All frequencies coincide, i.e. $ \beta_k^{(0)} = \beta_k^{(1)}\, \forall\, k$ (this occurs only for $X_c=0.5$). For all cases, the compatibility equation can be obtained by substituting the terms in (\ref{secular_terms_iden}) into a system of first-order ordinary differential equations for $S_n^{(m)}$ and $C_n^{(m)}$. This yields a compatibility condition in the form \begin{equation}\label{compatabily_matrix_corm} \frac{\partial\boldsymbol{F}(T_1)}{\partial T_1}= [\boldsymbol{A}] \boldsymbol{F}(T_1) + \boldsymbol{b}. \end{equation} Solutions of (\ref{compatabily_matrix_corm}) are given by \begin{equation}\label{ODEs_solution} \boldsymbol{F}=[\hat{\boldsymbol{A}}(T_1)] \left( [\hat{\boldsymbol{A}}(0)]^{-1} \boldsymbol{F}(0) +\int_0^{T_1}[\hat{\boldsymbol{A}}(\tau)]^{-1} \boldsymbol{b} d\tau \right) \end{equation} where $[\hat{\boldsymbol{A}}(T_1)]$ is a fundamental matrix solution, given by \begin{equation} [\hat{\boldsymbol{A}}(T_1)]= \begin{bmatrix} \vdots & \vdots & & \vdots \\ \boldsymbol{v}_1 e^{\sigma_1 T_1},& \boldsymbol{v}_2 e^{\sigma_2 T_1},& \cdots, & \boldsymbol{v}_n e^{\sigma_n T_1}\\ \vdots & \vdots & & \vdots \end{bmatrix} \end{equation} and $\sigma_i$ and $\boldsymbol{v}_i$ ($i\in\{1,2,...\}$) are eigenvalues and associated eigenvectors of $[\boldsymbol{A}]$, respectively. Therefore, the temporal eigenmodes involve terms such as $e^{\sigma_i T_1}\cos\left({\beta^{(m)2}_n T_0}\right)$. Thus, stability requires $\Real{[\sigma_i]}<0$, and the slow-scale modulation of the elastic-inertial oscillations due to aeroelastic dynamics is related to $\Imag{[\sigma_i]}$. Solutions of the compatibility equation for cases I, II and III are presented below.
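The case classification above hinges on the roots $\lambda_n$ of the transcendental equation (\ref{trans_eq}) and on the discrete resonant clamping locations $X_c=\lambda_k/(\lambda_k+\lambda_l)$. Both are easily computed numerically; the following is a minimal sketch in which the bracketing intervals around $(2n-1)\pi/2$ are an assumption based on the asymptotic spacing of the roots of the clamped-free beam equation:

```python
import numpy as np
from scipy.optimize import brentq

# Roots of the clamped-free beam equation cos(lam)*cosh(lam) = -1.
# The n-th root approaches (2n - 1)*pi/2, which provides a bracket.
f = lambda lam: np.cos(lam) * np.cosh(lam) + 1.0
lam = [brentq(f, (n - 0.5) * np.pi - 1.0, (n - 0.5) * np.pi + 1.0)
       for n in range(1, 5)]

# Resonant clamping locations X_c = lam_k / (lam_k + lam_l), at which
# front-segment mode k and rear-segment mode l share a natural frequency.
resonant_Xc = {(k + 1, l + 1): lam[k] / (lam[k] + lam[l])
               for k in range(4) for l in range(4)}
```

For equal mode numbers the resonant location collapses to $X_c=0.5$, which is case (III) above.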
\subsubsection{Case (I): $X_c\neq {\lambda_k} /(\lambda_k+\lambda_l)$} \begin{subequations}\label{no_fr} In the absence of identical oscillation frequencies of the front and rear parts of the airfoil, compatibility equations may be obtained separately for each segment. The matrix $[\boldsymbol{A}]$ for segment $(m)$ in this case is thus \begin{equation} [\boldsymbol{A}]= \frac{1}{2} \begin{bmatrix} \bpsev \mathcal{I}^{k,(m)}_{k,(m)} -\bptwo B_k & -{\bpsix \mathcal{J}^{k,(m)}_{k,(m)}}/{\beta_k^{(m)2}} \\ {\bpsix \mathcal{J}^{k,(m)}_{k,(m)}}/{\beta_k^{(m)2}} & \bpsev \mathcal{I}^{k,(m)}_{k,(m)} -\bptwo B_k \end{bmatrix} \end{equation} and the vectors $\boldsymbol{F}$ and $\boldsymbol{b}$ are given by \begin{equation} \boldsymbol{F}= \begin{bmatrix} C_k^{(m)} \\ S_k^{(m)} \end{bmatrix}, \qquad \boldsymbol{b}= -\frac{1}{2\beta_k^{(m)2}} \begin{bmatrix} \hatS \\ \hatC \end{bmatrix}. \end{equation} \end{subequations} The eigenvalues of $[\boldsymbol{A}]$ are thus \begin{equation}\label{eigenvals} \frac{\bpsev \mathcal{I}^{k,(m)}_{k,(m)} -\bptwo B_k}{2} \pm i\frac{\bpsix \mathcal{J}^{k,(m)}_{k,(m)}}{2\beta_k^{(m)2}} \end{equation} with associated eigenvectors $\begin{bmatrix} \pm i, & 1 \end{bmatrix}^t$. The real part of the eigenvalues (\ref{eigenvals}) represents the growth or decay of a given mode and the imaginary part represents the additional slow oscillation due to aeroelastic effects.
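The closed-form eigenvalues (\ref{eigenvals}) and eigenvectors $[\pm i,\ 1]^t$ can be checked against a direct numerical eigen-decomposition of the $2\times2$ matrix; in the sketch below all parameter values are hypothetical placeholders, not values from the analysis:

```python
import numpy as np

# Hypothetical stand-ins: P7I = Pi_7 * I^{k,(m)}_{k,(m)}, P2B = Pi_2 * B_k,
# P6J = Pi_6 * J^{k,(m)}_{k,(m)}, beta2 = beta_k^{(m)2}.
P7I, P2B, P6J, beta2 = 0.3, 0.1, 0.8, 4.0

A = 0.5 * np.array([[P7I - P2B, -P6J / beta2],
                    [P6J / beta2, P7I - P2B]])

# Closed form: (P7I - P2B)/2 +/- i * P6J / (2 * beta2).
sigma = (P7I - P2B) / 2 + 1j * P6J / (2 * beta2)
eigvals = sorted(np.linalg.eigvals(A), key=lambda z: z.imag)
```

The skew-symmetric off-diagonal structure is what makes the real part (growth or decay) depend only on $\mathcal{I}$ and damping, and the imaginary part (slow modulation) only on $\mathcal{J}$.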
The homogeneous part of the solution for $\boldsymbol{F}$, with substituted initial values (\ref{IVS_C&S}), is \begin{multline}\label{no_identical_homogeneous_sol} \begin{bmatrix} C_k^{(m)} \\ S_k^{(m)} \end{bmatrix} = \begin{bmatrix} C_k^{(m)}(0) \cos \left(\frac{\Pi_6 \mathcal{J}^{k,(m)}_{k,(m)}}{2\beta_k^{(m)2}} T\right) -S_k^{(m)}(0) \sin \left(\frac{\Pi_6 \mathcal{J}^{k,(m)}_{k,(m)}}{2\beta_k^{(m)2}} T\right) \\ S_k^{(m)}(0) \cos\left(\frac{\Pi_6 \mathcal{J}^{k,(m)}_{k,(m)}}{2\beta_k^{(m)2}} T\right) + C_k^{(m)}(0) \sin\left(\frac{\Pi_6 \mathcal{J}^{k,(m)}_{k,(m)}}{2\beta_k^{(m)2}} T\right) \end{bmatrix} e^{\frac{\Pi_7 \mathcal{I}^{k,(m)}_{k,(m)} -\Pi_2 B_k}{2}T}. \end{multline} The values of $\mathcal{I}^{k,(m)}_{k,(m)}$ (representing growth or decay of instability) and $\mathcal{J}^{k,(m)}_{k,(m)}$ (representing slow modulation frequencies) are defined in equation (\ref{operator_defenition}) and are functions of $X_c$ only. The first three modes are presented in figure \ref{IJ_values}. The parameter $\mathcal{I}^{k,(0)}_{k,(0)}$, representing the stability of the front region for mode $k$, is positive (increasing instability) for odd modes, and negative for even modes. In contrast, the rear part $\mathcal{I}^{k,(1)}_{k,(1)}$ is inherently stable for all modes, representing aeroelastic damping. Thus, instability emanates from odd modes of the front region of the wing. The parameters $\mathcal{J}^{k,(0)}_{k,(0)}$ and $\mathcal{J}^{k,(1)}_{k,(1)}$ associated with slow aeroelastic frequencies increase, as expected, as the elastic region they represent shortens due to larger values of $X_c$.
\begin{figure} \centering \includegraphics[width=0.8\textwidth]{Figure_3.eps} \caption{Values of $\mathcal{I}^{k,(0)}_{k,(0)}$ (panel (a)), $\mathcal{I}^{k,(1)}_{k,(1)}$ (panel (b)), $\mathcal{J}^{k,(0)}_{k,(0)}$ (panel (c)) and $\mathcal{J}^{k,(1)}_{k,(1)}$ (panel (d)), as a function of the clamping location $X_c$, for $k\in\left\{1,2,3\right\}$ (see (\ref{operator_defenition})). Values of $\mathcal{I}^{k,(m)}_{k,(m)}$ represent the self-stability of each mode. It can be seen that the downstream part ($m=1$) is stable, as expected, while the upstream part ($m=0$) is stable for even modes, and unstable for odd modes. Values of $\mathcal{J}^{k,(m)}_{k,(m)}$ represent the corresponding modulation frequencies.}\label{IJ_values} \end{figure} \subsubsection{Cases (II) and (III): $X_c={\lambda_k} /(\lambda_k+\lambda_l)$} \begin{subequations}\label{one_fr} In Case (II) the frequency of mode $k$ of the front segment $\beta_k^{(0)}$ equals the frequency of mode $l$ of the rear segment $\beta_l^{(1)}$.
Thus, a single compatibility condition governs mode $k$ of the front segment and mode $l$ of the rear segment, yielding the $[\boldsymbol{A}]$ matrix of \begin{equation}\label{nkeigen} [\boldsymbol{A}]= \frac{1}{2} \begin{bmatrix} \bpsev \mathcal{I}^{k,(0)}_{k,(0)} -\bptwo B_k & -{\bpsix \mathcal{J}^{k,(0)}_{k,(0)}}/{\tilde{\beta}^2}& \bpsev \mathcal{I}^{k,(0)}_{l,(1)} & -{\bpsix \mathcal{J}^{k,(0)}_{l,(1)}}/{\tilde{\beta}^2} \\ {\bpsix \mathcal{J}^{k,(0)}_{k,(0)}}/{\tilde{\beta}^2}& \bpsev \mathcal{I}^{k,(0)}_{k,(0)} -\bptwo B_k & {\bpsix \mathcal{J}^{k,(0)}_{l,(1)}}/{\tilde{\beta}^2}& \bpsev \mathcal{I}^{k,(0)}_{l,(1)} \\ \bpsev \mathcal{I}^{l,(1)}_{k,(0)} & -{\bpsix \mathcal{J}^{l,(1)}_{k,(0)}}/{\tilde{\beta}^2}& \bpsev \mathcal{I}^{l,(1)}_{l,(1)} -\Pi_2 B_l & -{\bpsix \mathcal{J}^{l,(1)}_{l,(1)}}/{\tilde{\beta}^2} \\ {\bpsix \mathcal{J}^{l,(1)}_{k,(0)}}/{\tilde{\beta}^2}& \bpsev \mathcal{I}^{l,(1)}_{k,(0)} & {\bpsix \mathcal{J}^{l,(1)}_{l,(1)}}/{\tilde{\beta}^2}& \bpsev \mathcal{I}^{l,(1)}_{l,(1)} -\bptwo B_l \end{bmatrix} \end{equation} where $\beta_k^{(0)}=\beta_l^{(1)}=\tilde{\beta}$, and the related $\boldsymbol{F}$ and $\boldsymbol{b}$ vectors are \begin{equation} \boldsymbol{F}= \begin{bmatrix} C_k^{(0)} \\ S_k^{(0)} \\ C_l^{(1)} \\ S_l^{(1)} \end{bmatrix}, \qquad \boldsymbol{b}= -\frac{1}{2\tilde{\beta}^2} \begin{bmatrix} \hat{S}_k^{(0)} \\ \hat{C}_k^{(0)} \\ \hat{S}_l^{(1)} \\ \hat{C}_l^{(1)} \end{bmatrix}. \end{equation} \end{subequations} Matrix $[\boldsymbol{A}]$ thus involves operator products such as $\mathcal{I}^{k,(0)}_{l,(1)}$, representing interaction between the elastic oscillation mode $k$ of the front segment and the aerodynamic forces due to mode $l$ oscillations of the rear segment. Unlike the previous case (I), we cannot deduce stability directly from the values of terms such as $\mathcal{I}^{k,(0)}_{l,(1)}$ and require computation of the eigenvalues of $[\boldsymbol{A}]$. 
Similarly, case (III) occurring for $X_c=0.5$ represents identical frequencies of the front and the rear parts for all modes, i.e. $\beta_k^{(0)} = \beta_k^{(1)} = \beta_k = 2\lambda_k\, \forall\,k$. In this case, for all modes $k$, matrix $[\boldsymbol{A}]$ is of the form \begin{subequations}\label{all_fr} \begin{equation}\label{all_frA} [\boldsymbol{A}]= \frac{1}{2} \begin{bmatrix} \bpsev \mathcal{I}^{k,(0)}_{k,(0)} -\bptwo B_k & -{\bpsix \mathcal{J}^{k,(0)}_{k,(0)}}/{\beta_k^2} & \bpsev \mathcal{I}^{k,(0)}_{k,(1)} & -{\bpsix \mathcal{J}^{k,(0)}_{k,(1)}}/{\beta_k^2} \\ {\bpsix \mathcal{J}^{k,(0)}_{k,(0)}}/{\beta_k^2} & \bpsev \mathcal{I}^{k,(0)}_{k,(0)} -\bptwo B_k & {\bpsix \mathcal{J}^{k,(0)}_{k,(1)}}/{\beta_k^2} & \bpsev \mathcal{I}^{k,(0)}_{k,(1)} \\ \bpsev \mathcal{I}^{k,(1)}_{k,(0)} & -{\bpsix \mathcal{J}^{k,(1)}_{k,(0)}}/{\beta_k^2} & \bpsev \mathcal{I}^{k,(1)}_{k,(1)} -\bptwo B_k & -{\bpsix \mathcal{J}^{k,(1)}_{k,(1)}}/{\beta_k^2} \\ {\bpsix \mathcal{J}^{k,(1)}_{k,(0)}}/{\beta_k^2} & \bpsev \mathcal{I}^{k,(1)}_{k,(0)} & {\bpsix \mathcal{J}^{k,(1)}_{k,(1)}}/{\beta_k^2} & \bpsev \mathcal{I}^{k,(1)}_{k,(1)} -\bptwo B_k \end{bmatrix} \end{equation} and the vectors $\boldsymbol{F}$ and $\boldsymbol{b}$ are \begin{equation} \boldsymbol{F}= \begin{bmatrix} C_k^{(0)} \\ S_k^{(0)} \\ C_k^{(1)} \\ S_k^{(1)} \end{bmatrix}, \qquad \boldsymbol{b}= -\frac{1}{2\beta_k^2} \begin{bmatrix} \hat{S}_k^{(0)} \\ \hat{C}_k^{(0)} \\ \hat{S}_k^{(1)} \\ \hat{C}_k^{(1)} \end{bmatrix}. \end{equation} \end{subequations} In this case matrix $[\boldsymbol{A}]$ involves the operator products such as $\mathcal{I}^{k,(0)}_{k,(1)}$, representing interaction between the elastic oscillation mode $k$ of the front segment and the aerodynamic forces due to the same mode $k$ of the rear segment. 
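In (\ref{all_frA}) the damping contributes the identical term $-\bptwo B_k$ to every diagonal entry, so $[\boldsymbol{A}]$ equals the undamped ($B_k=0$) matrix minus $(\bptwo B_k/2)\boldsymbol{I}$, and all its eigenvalues are shifted to the left by that amount. A quick numerical check of this shift property, using a random stand-in for the undamped matrix (the actual entries depend on the $\mathcal{I}$ and $\mathcal{J}$ products):

```python
import numpy as np

rng = np.random.default_rng(1)
A0 = rng.standard_normal((4, 4))  # stand-in for the undamped matrix [A] (B_k = 0)
shift = 0.25                      # stand-in for Pi_2 * B_k / 2, hypothetical value
A = A0 - shift * np.eye(4)        # damping enters only on the diagonal

e0 = np.sort_complex(np.linalg.eigvals(A0))
e = np.sort_complex(np.linalg.eigvals(A))
# The damped system is stable iff shift exceeds the maximal real part of e0.
```

This is why the eigenvalues can be computed once for $B=0$ and the damping required for stability read off directly.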
\subsection{Stability condition} The compatibility equations derived in \S 4.4 yield the growth or decay of the various modes of the rear and front segments, or of the interaction between the modes of the segments, for arbitrary values of $X_c$ via computation of the eigenvalues of $[\boldsymbol{A}]$. Furthermore, the damping terms, denoted by $B$, appear only on the diagonal of $[\boldsymbol{A}]$. Thus, we can compute the eigenvalues $\sigma$ from the characteristic polynomial $p(\sigma)=|[A]-\sigma I|$ for $B=0$, and then directly obtain the value of ${\Pi_2 B}/{2}$ required for stability. We denote hereafter by $\tilde\sigma$ the eigenvalue, calculated for $B=0$, with the maximal real part. This eigenvalue corresponds to the mode, or interaction between two modes, with the maximal growth rate, which eventually dominates the dynamics of the configuration. Thus, the stability condition is \begin{equation} \frac{\Pi_2 B}{2}=\frac{c^2 r_d}{2\sqrt{s^*\mu_s^*}}>\Real{[\tilde\sigma]}. \end{equation} For case (I), $X_c\neq{\lambda_k}/(\lambda_k+\lambda_l)$, the dimensional stability condition (using (\ref{ratios})) for spatial mode $k$ and segment $(m)$ is \begin{equation} 2\rho_\infty u_\infty \mathcal{I}^{k,(m)}_{k,(m)} -r^*_{d,k} <0 \end{equation} where $r^*_{d,k}$ is the dimensional modal damping coefficient for spatial mode $k$. The dimensional aeroelastic modulation frequencies are \begin{equation} \frac{\rho_\infty u_\infty^2 c } {\beta_k^{(m)2} \sqrt{\mu^*_s s^*} } \mathcal{J}^{k,(m)}_{k,(m)} \end{equation} and the value of $\tilde\sigma$ is immediately obtained from (\ref{eigenvals}) as $\tilde\sigma=\Pi_7\mathcal{I}^{k,(m)}_{k,(m)}/2$. From figure \ref{IJ_values}, the most unstable mode is the first mode of the front region, and the stability condition for the whole system is \begin{equation} \textcolor{black}{u_\infty < \frac{r^*_{d,1}}{2\rho_\infty \mathcal{I}^{1,(0)}_{1,(0)} }}.
\end{equation} For cases (II) and (III), $X_c={\lambda_k}/(\lambda_k+\lambda_l)$, $\tilde\sigma$ needs to be computed from (\ref{nkeigen}) or (\ref{all_frA}). \begin{figure} \centering \includegraphics[width=0.8\textwidth]{Figure_4.eps} \caption{Real and imaginary parts of $\tilde\sigma$ vs. $X_c$ for $\bar \Pi_7=1$ (panels (a),(b)), $\bar\Pi_7=0.1$ (panels (c),(d)) and $\bar\Pi_7=0.05$ (panels (e),(f)). In all cases $\bar\Pi_6=1$, taking four modes into account. Panels ((a),(c),(e)) present the growth rate of the dominant mode and panels ((b),(d),(f)) present the modulation frequency associated with the dominant mode. Dominant modes associated with interaction between the front and rear modes are presented by vertical lines due to the sudden change in $\tilde\sigma$ at $X_c={\lambda_k}/(\lambda_k+\lambda_l)$. Numbers $(n,k)$ above the vertical lines mark the interaction between front mode $n$ and rear mode $k$ which yields the dominant instability growth rate. } \label{stability_graph} \end{figure} Figure \ref{stability_graph} presents the real and imaginary parts of $\tilde\sigma$ vs. $X_c$ for $\Pi_3=1$ and $\bar\Pi_6=1$ for various values of $\bar\Pi_7$, taking four modes into account ($\bar\Pi_7=1$ in panels (a,b), $\bar\Pi_7=0.1$ in panels (c,d) and $\bar\Pi_7=0.05$ in panels (e,f)). Panels (a,c,e) present the growth rate of the dominant mode while panels (b,d,f) present the modulation frequency associated with the dominant mode. Without interaction between the modes, the dominant mechanism of aeroelastic instability in all presented cases emanates from the first mode of the front part of the airfoil. The interaction between the front and rear modes for $X_c={\lambda_k}/(\lambda_k+\lambda_l)$ is presented by vertical lines corresponding to the sudden change in $\tilde\sigma$ at the locations of $X_c$ in which the rear and front modes have identical natural frequencies.
Above the vertical lines, the numbers $(n,k)$ mark the interaction between front mode $n$ and rear mode $k$ which yields the dominant instability growth rate. For $\bar\Pi_7=1$ the interaction dynamics yield only a minor effect on the stability condition (see panel (a)). However, as $\bar\Pi_7$ decreases in comparison to $\bar\Pi_6$, the effect of interaction between modes becomes significant and must be considered when analyzing the stability of such configurations (see panels (c,e)). In addition, the effect of interaction between the front and rear segments increases as $X_c$ decreases and the growth rate of the first front mode decreases. In contrast with the modulation frequencies associated with the first mode of the front segment, instability associated with interaction between modes does not modulate the fast-time oscillation frequency (see panels (d,f)). \subsection{Dynamic response} \subsubsection{Initial conditions} We present the transient response of an elastic airfoil, clamped at $X_c\neq {\lambda_k} /(\lambda_k+\lambda_l)$, to initial conditions which differ from the steady-state solution. Based on \S 4.4, the vector $\boldsymbol{b}$ of the evolution equation is \begin{equation}\label{no_act_b} \boldsymbol{b}= -\frac{1}{2\beta_k^{(m)2}} \begin{bmatrix} \hatS \\ \hatC \end{bmatrix}= \begin{bmatrix} 0\\0 \end{bmatrix}.
\end{equation} Substituting (\ref{no_act_b}) and the compatibility condition (\ref{no_identical_homogeneous_sol}) into the leading-order solution (\ref{LE_sol}) yields the multi-scale leading-order solution $D_0^{(m)}$ as \begin{subequations}\label{ssss} \begin{multline} D_0^{(m)}=D_{0,SS}^{(m)}+ \sum_{n=1}^\infty \Bigg\{ \Xi_n(\xi^{(m)}) \exp\left[{\frac{\left(\Pi_7 \mathcal{I}^{n,(m)}_{n,(m)} -\Pi_2 B_n\right)T}{2}}\right] \times \\ \bigg[ \left( C_n^{(m)}(0) \cos\left(\frac{\Pi_6 \mathcal{J}^{n,(m)}_{n,(m)}T}{2\beta_n^{(m)2}} \right) -S_n^{(m)}(0) \sin\left(\frac{\Pi_6 \mathcal{J}^{n,(m)}_{n,(m)}T}{2\beta_n^{(m)2}} \right) \right) \cos \left(\beta_n^{(m)2}T\right) \\+ \left( S_n^{(m)}(0) \cos\left(\frac{\Pi_6 \mathcal{J}^{n,(m)}_{n,(m)}T}{2\beta_n^{(m)2}} \right) +C_n^{(m)}(0)\sin\left(\frac{\Pi_6 \mathcal{J}^{n,(m)}_{n,(m)}T}{2\beta_n^{(m)2}} \right) \right) \sin \left(\beta_n^{(m)2}T\right) \bigg] \Bigg\}, \end{multline} where \begin{equation} C_n^{(m)}(0)=\int_0^1 \left[F_1(\xi^{(m)})-D_{0,SS}(\xi^{(m)})\right]\Xi_n(\xi^{(m)})d\xi^{(m)} \end{equation} and \begin{equation} S_n^{(m)}(0)=\frac{1}{\beta^{(m)2}_n}\int_0^1 F_2(\xi^{(m)})\Xi_n(\xi^{(m)})d\xi^{(m)}. \end{equation} \end{subequations} Figure \ref{no_inlet_graphs} presents the lift coefficient (defined as $C_l=2l/\rho_\infty u_\infty^2c$, where $l$ is lift-per-unit-span) vs. time computed from (\ref{ssss}) for a NACA4405 elastic airfoil, with dimensional average Young's modulus $E_{eq}=10^9[Pa]$, rigidity $s=E_{eq}(0.05c)^3/12[Pa\times m^2]$, solid mass per-unit-length $\mu_{s,eq}=270 [{kg}/m]$, chord length $c=1[m]$, air density $\rho_\infty= 1.2 [{kg}/{m^3}]$, air speed $u_\infty=30[{m}/{s}]$, and angle-of-attack $\alpha=3^\circ$. The above dimensional parameters yield the dimensionless ratios $\Pi_2=0.01$, $\Pi_4=1$, $\Pi_5=0.76$, $\Pi_6=0.21$, and $\Pi_7=0.04$.
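The quoted rigidity follows directly from the stated Young's modulus, thickness and chord, and the lift coefficient is a simple rescaling of the lift per unit span; a quick arithmetic sketch (the sample lift value passed to the helper is hypothetical):

```python
# Dimensional parameters quoted for the NACA4405 example
E_eq = 1.0e9      # average Young's modulus [Pa]
c = 1.0           # chord length [m]
rho_inf = 1.2     # air density [kg/m^3]
u_inf = 30.0      # air speed [m/s]

s = E_eq * (0.05 * c) ** 3 / 12.0  # bending rigidity s = E_eq (0.05 c)^3 / 12 [Pa m^2]

def lift_coefficient(l):
    """C_l = 2 l / (rho_inf u_inf^2 c), with l the lift per unit span [N/m]."""
    return 2.0 * l / (rho_inf * u_inf**2 * c)
```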
The airfoil is initially at rest, $D(T=0,X)=0$ and $\partial D/\partial T (T=0,X)=0$, and deformations occur due to aerodynamic loads on the profile. Panel (a) presents a stable configuration with $x_c=0.2c$, and panels (b) and (c) present closeups on early and late stages. Panel (d) presents an unstable configuration with $X_c=0.4c$, and similarly panels (e) and (f) present closeups. For both configurations, early times ($t\leq50[s]$) involve gradual decay of the initial excitation due to the effect of the external loads. Closeups (b) and (e) show multiple overlapping modes during the initial gradual decay. For the stable configuration, all modes continue to decay at late times, as evident in panel (c), while in the unstable configuration a single mode grows and dominates the dynamics, as evident in panel (f). \begin{figure} \centering \includegraphics[width=1\textwidth]{Figure_5.eps} \caption{Lift coefficient $C_l$ vs. time computed for an elastic unactuated NACA4405 airfoil clamped at $x_c=0.2c$ (stable configuration, panels (a-c)) and $x_c=0.4c$ (unstable configuration, panels (d-f)). Relevant parameters are $E_{eq}=10^9[Pa]$, $s=E_{eq}(0.05c)^3/12[Pa\times m^2]$, $\mu_{s,eq}=270 [{kg}/m]$, $c=1[m]$, $\rho_\infty= 1.2 [{kg}/{m^3}]$, $u_\infty=30[{m}/{s}]$, and $\alpha=3^\circ$, yielding dimensionless numbers $\bptwo=0.01$, $\Pi_4=1$, $\Pi_5=0.76$, $\bpsix=0.21$, and $\bpsev=0.04$. Initial conditions are $D(T=0,X)=0$ and $\partial D/\partial T (T=0,X)=0$.} \label{no_inlet_graphs} \end{figure} \subsubsection{Oscillatory actuation} We present the effect of distributed actuation of the form \begin{equation}\label{DA_a} D_A(X,T_0)=-\frac{1}{2} \left(X-X_c\right)^2 \sin\left(\kappa T_0\right), \end{equation} which may represent uniformly distributed shape-morphing actuation methods.
Substituting (\ref{DA_a}) into (\ref{V_def}), the corresponding homogenization function $V^{(m)}$ is \begin{equation} V^{(m)}=\frac{\Pi_1}{2} \left(X-X_c\right)^2 \sin\left(\kappa T_0\right) =\frac{\Pi_1}{2} \left(m-X_c\right)^2 \xi^{(m)2} \sin\left(\kappa T_0\right), \end{equation} and substituting into (\ref{Theta_a}) yields \begin{equation} \Theta_{n,A}^{(m)}= \frac{\Pi_1 \kappa^2 \left(m-X_c\right)^2}{2 \beta_n^{(m)2}} \left[ \sint \circledast \sin\left(\kappa T_0\right) \right] \int_0^1 \Xi_n(\xi^{(m)})\xi^{(m)2}d\xi^{(m)}. \end{equation} The convolution product for a specific mode $k$ and segment $(m)$ depends on the actuation frequency. For an actuation frequency that is not a natural mode frequency, i.e. $\kappa \ne \beta_n^{(m)2}$, we obtain \begin{equation}\label{notfr} \sint \circledast \sin\left(\kappa T_0\right)= \frac{\kappa \sint -\beta_n^{(m)2} \sin\left(\kappa T_0\right)}{\kappa^2 -\beta_n^{(m)4}}. \end{equation} However, as $\kappa \rightarrow \beta_n^{(m)2}$, (\ref{notfr}) becomes singular, and the convolution yields the limit \begin{equation} \sint \circledast \sint= \frac{1}{2}\left[ \frac{1}{\beta_n^{(m)2}}\sint -T_0 \cost\right] \end{equation} which, as expected, modifies the compatibility equation (appearing within $\boldsymbol{b}$).
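Both convolution results can be checked by direct quadrature of $\int_0^{T}\sin\bigl(\omega(T-\tau)\bigr)\sin(\kappa\tau)\,d\tau$, where $\omega$ stands for the frequency $\beta_n^{(m)2}$; a short sketch with arbitrary numerical values:

```python
import numpy as np
from scipy.integrate import quad

omega, kappa, T = 2.0, 0.5, 3.0  # omega plays the role of beta_n^{(m)2}

def conv(w, k, t):
    """(sin(w T0) convolved with sin(k T0) in T0), evaluated at T0 = t."""
    return quad(lambda tau: np.sin(w * (t - tau)) * np.sin(k * tau), 0.0, t)[0]

# Non-resonant identity: (kappa sin(omega T) - omega sin(kappa T)) / (kappa^2 - omega^2)
non_res = (kappa * np.sin(omega * T) - omega * np.sin(kappa * T)) / (kappa**2 - omega**2)

# Resonant limit kappa -> omega: (1/2) [sin(omega T)/omega - T cos(omega T)]
res = 0.5 * (np.sin(omega * T) / omega - T * np.cos(omega * T))
```

The secularly growing term $T_0\cos(\beta_n^{(m)2}T_0)$ in the resonant limit is what feeds back into $\boldsymbol{b}$.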
Thus, for the non-resonance input frequency case $\kappa \ne \beta_n^{(m)2}$, vector $\boldsymbol{b}$ is \begin{equation} \boldsymbol{b}= -\frac{1}{2\beta_k^{(m)2}} \begin{bmatrix} {\bpsix \mathcal{J}^{k,(m)}_{k,(m)}}/{\beta_k^{(m)2}} \\ \bpsev \beta_k^{(m)2} \mathcal{I}^{k,(m)}_{k,(m)} +\bptwo B_k \end{bmatrix} \frac{\Pi_1 \kappa^3 (m-X_c)^2}{2(\beta_k^{(m)4}-\kappa^2)}\int_0^1 \Xi_k (\xi^{(m)})\xi^{(m)2}d\xi^{(m)}, \end{equation} and for the resonance input frequency $\kappa = \beta_n^{(m)2}$, vector $\boldsymbol{b}$ is \begin{multline} \boldsymbol{b}= \begin{bmatrix} (\bptwo B_k-\bpsev \mathcal{I}^{k,(m)}_{k,(m)}\beta_k^{(m)2})\beta_k^{(m)2}\\ \bpsix \mathcal{J}^{k,(m)}_{k,(m)} \end{bmatrix}T_0 \frac{\Pi_1 (m-X_c)^2}{4}\int_0^1 \Xi_k (\xi^{(m)})\xi^{(m)2}d\xi^{(m)}+\boldsymbol{c}, \end{multline} where $\boldsymbol{c}$ is a vector containing only constants. The compatibility condition (\ref{compatabily_matrix_corm}) for nonzero $\boldsymbol b$ is given by (\ref{ODEs_solution}), where the system's fundamental matrix $\left[\boldsymbol{\hat{A}}(T_1)\right]$ is \begin{equation} \left[\boldsymbol{\hat{A}}(T_1)\right]= \begin{bmatrix} -\sin\left(\frac{\bpsix \mathcal{J}^{n,(m)}_{n,(m)}}{2} T_1\right) & \cos\left(\frac{\bpsix \mathcal{J}^{n,(m)}_{n,(m)}}{2} T_1\right) \\ \cos\left(\frac{\bpsix \mathcal{J}^{n,(m)}_{n,(m)}}{2} T_1\right) & \sin\left(\frac{\bpsix \mathcal{J}^{n,(m)}_{n,(m)}}{2} T_1\right) \end{bmatrix} e^{\frac{\bpsev \mathcal{I}^{n,(m)}_{n,(m)} -\bptwo B_n}{2}T_1}. \end{equation} \begin{figure} \centering \includegraphics[width=1\textwidth]{Figure_6.eps} \caption{Lift coefficient $C_l$ vs. time for oscillatory actuation $D_A(X,T)=-\left(X-X_c\right)^2 \sin\left(\kappa T_0\right)/2$ computed for an elastic NACA4405 airfoil clamped at $x_c=0.2c$ (stable configuration, panels (a-c)) and $x_c=0.4c$ (unstable configuration, panels (d-f)). Actuation frequency is $\kappa=0.5$ and actuation amplitude is $\Pi_1=0.92$.
All other parameters and initial conditions are identical to figure \ref{no_inlet_graphs}.} \label{Sine_inlet_graphs} \end{figure} Figure \ref{Sine_inlet_graphs} presents oscillations of the lift coefficient (defined as $C_L=2l/\rho_\infty u_\infty^2$, where $l$ is the lift per unit span) vs. time for a NACA4405 elastic airfoil actuated in the form (\ref{DA_a}) with nondimensional frequency $\kappa=0.5$ ($\kappa\neq\beta_n^{(m)2}$) and amplitude $\Pi_1=d_a^*/d^*=0.92$. Initial conditions, as well as all dimensional parameters, are identical to those presented in \S 4.6.1. In panels (a-c) $x_c=0.2c$ and in panels (d-f) $x_c=0.4c$. Panel (a), and closeups (b) and (c), present a stable configuration. Initially, the forced actuation is negligible compared with the modes excited by the aerodynamic forcing. This is reversed at late times, where all natural oscillation modes decay, leaving $D_a$ as the dominant oscillation. Panel (d) and closeups (e) and (f) present an unstable configuration (due to the location $x_c=0.4c$ and the first mode of the front segment). Initial decay of multiple modes is observed, similarly to figure \ref{no_inlet_graphs}, transitioning to gradual growth of a single dominant mode coupled with the actuation forcing at late times. \section{Concluding remarks} This work presented analysis and numerical calculations of a shape-morphing soft two-dimensional airfoil in potential flow. The airfoil was modelled as two cantilevered elastic sheets connected to a rigid support at an arbitrary chordwise location. Steady-state and transient solutions are presented, based on regular and matched asymptotics, respectively. Stability conditions, and initial dynamics of stable and unstable configurations, are obtained from the compatibility equations of the different spatial modes. The maximal stable speed is presented as a function of elastic damping, fluid density and location of clamping.
Focus is given to the interaction between the front and rear segments, which is shown to be a dominant instability mechanism for a set of discrete clamping locations. The presented results lay a theoretical foundation for the realization of shape-morphing soft airfoils. The most limiting simplifying assumption used in the current study was neglecting vortex shedding effects. While this assumption is commonly used, and significantly simplifies the analysis of such configurations, future research is required to assess the effect of vortex shedding on such soft airfoils. In addition, in the current study the clamping location, $X_c$, is taken as constant for a given configuration. However, the main role of $X_c$ in the analysis is in setting the natural frequencies of the front and rear segments. Significant interaction effects occur when one of the spatial modes of the front segment and one of the modes of the rear segment have identical frequencies. Since actuation of soft airfoils is expected to change the properties of the front and rear segments, and thus their natural frequencies, instability due to coincidence of front and rear natural frequencies may occur during actuation. \acknowledgments{We thank Dr. Sonya Tiomkin and Prof. Daniella Raveh for helpful discussions.}
\section{Introduction} \label{sec:introduction} The security of our communication schemes is of significant concern --- Big Brother is often watching! While much attention focuses on schemes that aim to hide the \emph{content} of communication, in many scenarios, the \emph{fact} of communication should also be kept secret. For example, a secret agent being caught communicating with an accomplice potentially has drastic consequences --- merely ensuring secrecy does not guarantee undetectability. This observation has drawn attention to the problem of \emph{covert communication}. In a canonical information-theoretic setting for this problem, a transmitter Alice {\it may} wish to transmit messages to a receiver Bob over a noisy channel, and remains \emph{silent} otherwise. An adversary, James, eavesdrops on her transmission through another noisy channel. The communication goals are twofold. Firstly, the communication should be \emph{covert}, i.e., James should be unable to reliably distinguish whether or not Alice is transmitting. Simultaneously, it should also be \emph{reliable}, i.e., Bob should be able to correctly estimate Alice's transmission with a high probability of success. Recent literature~\cite{BasGT:12a,bash2015quantum,7407378,tahmasbi2017second,7447769,CheBJ:13} has quite successfully characterized the information-theoretic fundamental limits on the total amount of covert communication possible from Alice to Bob. Specifically, it turns out that no more than $c_{p,q}\sqrt{n}$ bits may be covertly transmitted from Alice to Bob over $n$ channel uses, where $c_{p,q}$ is an explicitly characterizable constant depending on the channels from Alice to Bob and James. This sub-linear throughput (as opposed to the linear throughput in most communication settings) results from the stringent requirement on Alice's transmissions imposed by the need to remain covert --- she must ``whisper'', so to speak (pun intended).
Indeed, most of her transmitted codewords must have low Hamming weight (${\cal O}(\sqrt{n})$). Prior information-theoretic work on covert communication largely focuses on channels with random noise to both Bob and James. While such channel models are appropriate for passive eavesdroppers, a truly malicious adversary might wish to also actively disrupt any potential communication even when he is unable to detect if transmission has indeed taken place. To model this scenario, in this work, we take a somewhat {\it coding-theoretic} view --- we let the channel from Alice to James be probabilistic, but we allow James to try to {\it jam} the channel to Bob adversarially, as a function of his noisy observations of Alice's potential transmissions. Semi-formally, in our setting, Alice's channel input is a length-$n$ binary vector $\X$. The channel from Alice to James is a binary symmetric channel with transition probability $q$ (i.e., BSC($q$)). James uses his observation $\mathbf{Z}$ in two ways --- to detect if communication is being attempted via an estimator $\Phi$, and to choose a binary jamming vector $\textbf{S}$ of Hamming weight at most $pn$ --- Bob receives the vector $\Y=\X\oplus\textbf{S}$. We denote the channel from Alice to Bob as ADVC($p\vert q$). When Alice is silent, $\X$ {\it must} be the all-zeros vector $\mathbf{0}$; when Alice is \emph{active}, the $\X$ she transmits may be a function of the message she wishes to transmit. Alice and Bob's encoding/decoding procedures are known to all parties. We measure covertness via a {\it hypothesis-testing metric} --- we say that the communication is $(1-\epsilon_d)$-covert if irrespective of James' estimator $\Phi$, his probability of {\it false alarm} plus his probability of {\it missed detection} is always lower-bounded by $1-\epsilon_d$. Secondly, we require reliability --- Bob should be able to reconstruct Alice's transmission with high probability (w.h.p.) regardless of James' jamming strategy.
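The square-root law has a simple heuristic explanation: through a BSC($q$), a codeword of Hamming weight $w$ shifts the expected Hamming weight of James' observation by $(1-2q)w$, while under silence the observed weight fluctuates with standard deviation $\sqrt{nq(1-q)} = \Theta(\sqrt{n})$; covertness therefore requires $w = {\cal O}(\sqrt{n})$. The back-of-the-envelope computation below uses illustrative parameter values, not values from this work.

```python
import math

# Detectability of a weight-w codeword observed through BSC(q):
# mean weight shift (1 - 2q) * w versus the noise-floor standard
# deviation sqrt(n * q * (1 - q)) of the weight under silence.
n, q = 10_000, 0.1
sigma = math.sqrt(n * q * (1 - q))   # weight fluctuation when Alice is silent

for w in (10, 100, 1000):            # candidate codeword weights
    shift = (1 - 2 * q) * w          # expected weight increase when active
    print(f"w={w:4d}  shift/sigma={shift / sigma:.2f}")
```

With these numbers $\sigma = 30$, so a weight-$1000$ codeword shifts the observed weight by more than $25$ standard deviations and is trivially detectable, whereas codewords of weight ${\cal O}(\sqrt{n})$ stay within a constant number of standard deviations of the noise floor.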
Unfortunately, in our setting this turns out to be impossible --- as we show in our first main result, the fact that the noise $\textbf{S}$ on Bob's channel is adversarially chosen (rather than random, as in the classical setting, e.g.~\cite{BasGT:12a,7407378,tahmasbi2017second,7447769}) implies that James can ensure that {\it any} such communication protocol must be either non-covert or unreliable. This is true even if James has computational restrictions, or is required to behave causally~\cite{chen2015characterization}. This is in stark contrast to the probabilistic channel setting wherein covert communication is possible for a wide range of parameters. Hence, we mildly relax our problem --- prior to transmission, Alice and Bob secretly share a $\Delta(n)$-bit randomly generated \emph{shared key} that is unknown to James. It turns out (as another result in this work shows) that a modest value of\footnote{All logarithms in this paper are binary.} $\Delta(n) = 6\log(n)$ suffices\footnote{Using a finer analytical technique provided by a recent work~\cite{Shared2019}, one may show that even $\Delta(n) = (2.5+\delta)\log(n)$ (where $\delta >0$ can be made arbitrarily small) suffices.} (and $\Delta(n) \ge \frac{1}{2}\log(n)$ is necessary) to instantiate throughput scaling as $r^\ast_{\Delta(n),\epsilon_d}(p,q)\sqrt{n}$, for a constant $r^\ast_{\Delta(n),\epsilon_d}(p,q)$ that we explicitly and tightly characterize (provide matching inner and outer bounds) for a wide range of parameters $\Delta(n)$ (shared key size), $\epsilon_d$ (covertness parameter), and $p$, $q$ (noise parameters). Hence in these parameter regimes the amount $\Delta(n) \in {\cal O}(\log(n))$ of shared key required to initiate reliable and covert communication scales much more gracefully than the amount of communication ${\cal O}(\sqrt{n})$ thereby instantiated.
When the size of the shared key is ``moderate'' (in the regime $\Delta(n) \in (\Omega(\log(n)),{\cal O}(\sqrt{n}))$), we provide inner and outer bounds on the information-theoretically optimal throughput, and a larger amount of shared key in general yields a better inner bound. In contrast, increasing the amount of shared key $\Delta(n)$ leads to diminishing returns in the regime $\Delta(n) \in \omega(\sqrt{n})$ --- the optimal throughput possible when $\Delta(n) \in \omega(\sqrt{n})$ is the same as when $\Delta(n)= \infty$, and we are able to fully characterize this optimal throughput, as our inner and outer bounds match in this regime. Our achievability schemes make no computational or causality assumptions on James. While the achievability schemes alluded to in the paragraphs above are existential and therefore have high computational complexity for Alice/Bob, when $\Delta(n) \in \Theta(\sqrt{n}\log(n))$ we demonstrate a {\it computationally efficient} communication scheme (i.e., the computational complexity for Alice/Bob is polynomial in the blocklength $n$) which makes no computational or causality assumptions on James, and achieves within a constant factor of the information-theoretically optimal throughput. If the computational complexity of James is further restricted to polynomial time, then the aforementioned communication scheme can be implemented with a much smaller amount of shared key. \subsection{Related Work \& Comparisons} \noindent{\bf Covert Communication:} Bash et al.~\cite{BasGT:12a} were the first to study covert communication for additive white Gaussian noise (AWGN) channels in an information-theoretic setting and demonstrate a {\it square-root law} --- communication that is simultaneously covert and reliable is possible when the message length is $\mathcal{O}(\sqrt{n})$ bits and shared key is available. Subsequently, Che et al.
showed that for BSCs, as long as James has a noisier channel than Bob, no shared key is necessary~\cite{CheBJ:13,CheBCJ:14a,CheBCJ:14b,CheSBCJA:14}. Bloch et al.~\cite{7407378,bloch2017optimal} and Wang et al.~\cite{7447769} then derived tight capacity characterizations for general discrete memoryless channels (DMCs) and AWGN channels. The work in~\cite{7407378} also showed that the amount of shared key needed when Bob has a noisier channel than James is ${\cal O}(\sqrt{n})$. While prior work on covert communication focuses on random noise channels (e.g., BSCs, AWGNs, and DMCs), to the best of our knowledge, our work is the first to examine covert communication over adversarial channels. \noindent{\bf Random noise vs adversarial noise channels:} In the non-covert setting, much work has focused on two classes of noisy channels --- {\it random noise} channels and {\it adversarial noise} channels. The capacities of random noise channels have been fully characterized by Shannon in his seminal work~\cite{shannon2001mathematical}. In contrast, though many upper and lower bounds (sometimes, but not always, matching) have been derived for a variety of special adversarial jamming models, a tight capacity characterization for general adversarial channels (also called Arbitrarily Varying Channels (AVCs) in the information theory literature --- see~\cite{lapidoth1998reliable} for an excellent survey) is still elusive. One way to classify adversarial models is via the adversary's knowledge level of the transmitted codeword $\X$.
Models of interest include the {\it classical/omniscient} adversarial model~\cite{gilbert1952comparison, varshamov1957estimate, mceliece1977new} (full knowledge of $\X$), the {\it myopic} adversarial model~\cite{sarwate_avc_2012, sarwate_coding_2010, dey2015sufficiently,zhang2018quadratically} (noisy observations of $\X$), the {\it oblivious} adversarial model~\cite{lapidoth1998reliable, langberg2008oblivious, guruswami2010codes} (no knowledge of $\X$) and the {\it causal} adversarial model~\cite{chen2015characterization, dey2016bit, dey_improved_2012} (causal observations of $\X$). Also, the {\it computationally bounded} adversary model~\cite{gopalanerror, micali2005optimal} considers settings wherein Alice/Bob/James are all computationally bounded. \noindent {\bf Arbitrarily Varying Channels (AVCs):} At a high level, reliable communication in the model considered in this work is closely related to communication over an AVC~\cite{blackwell_capacities_1960, csiszar_capacity_1988, lapidoth1998reliable} with stringent input constraints. Indeed, the impossibility result we present in Theorem~\ref{thm:converse1} is motivated by the {\it symmetrizability} condition for AVCs. \begin{enumerate} \item {\it Myopic adversaries with shared key:} These are AVC problems first explicitly considered by Sarwate~\cite{sarwate_coding_2010} wherein James only observes a noisy version $\mathbf{Z}$ of $\X$ (for instance through a BSC$(q)$) before deciding on his jamming vector $\textbf{S}$. Sarwate~\cite{sarwate_coding_2010} provided a tight characterization of the throughput in such settings over general DMCs in the presence of an unlimited-sized shared key --- as such, the model therein has strong connections to the problem we consider. Indeed, the converse we present in Theorem~\ref{thm:upperbound} relies heavily on the information-theoretic framework for impossibility results in AVCs in general and~\cite{sarwate_coding_2010} in particular.
\item {\it Myopic adversaries without shared key:} Problems concerning myopic adversaries {\it without} shared key between Alice and Bob~\cite{dey2015sufficiently} are considerably more challenging than when shared key is available. However, if the adversary is {\it sufficiently myopic}, i.e., the noise $q$ on the BSC($q$) to James is strictly larger than the fraction $p$ of bit flips he can impose on the channel to Bob, and there are no constraints on Alice's transmissions, the capacity of such a channel has been shown to exactly equal that of a BSC$(p)$. \end{enumerate} {\it Remark:} Despite similarities, the main focus of most work in the AVC literature differs from this work in the following aspects: (i) an unlimited-sized shared key between Alice and Bob is assumed, as opposed to the careful classification of achievabilities/converses obtained in our work pertaining to differing-sized shared keys; (ii) covertness is not considered as in this work; (iii) the stringent requirements in channel inputs enforced due to covertness imply that some of the analytical techniques used in the AVC literature do not translate to our setting; and (iv) no effort is made to consider computational restrictions on Alice/Bob/James, unlike in our work.\\ \noindent {\bf List decoding:} One of the primitives our achievability schemes rely heavily on is that of {\it list decoding}~\cite{elias_list_1957, guruswami2004list, sarwate_list-decoding_2012}. Results in this subset of the literature guarantee that even in the presence of omniscient adversaries, Bob is able to localize Alice's transmission to a small (often constant-sized) list at a communication rate approaching that of a corresponding random noise channel. However, we note that the ``usual'' list decoding model does not translate to our setting due to the severity of the constraint on Alice's transmissions imposed by covertness. 
Hence in our work we prove a novel version of list decoding for such input-constrained channels, in which we rely heavily on James' myopicity. \noindent {\bf Usage of shared key:} One pathway to achievability schemes for AVCs (e.g.~\cite{langberg2004private}) is to ensure that Bob can list-decode to a small list, and then to use the key shared with Alice to disambiguate this list down to a unique message. There are multiple such schemes in the literature, including computationally efficient schemes~\cite{cramer2008detection}. \noindent {\bf Permutation-based coding:} Another idea in the literature that has borne multiple dividends (e.g.~\cite{ahlswede_elimination_1978-1, langberg2004private}) in the context of code design for AVCs (especially computationally efficient codes, e.g.~\cite{lipton1994new, guruswami2010codes}) and even in covert communication from a source-resolvability perspective~\cite{bash2015hiding, 7407378} is that of {\it permutation-based coding}. Alice and Bob generate a small (polynomial-size) set $\Pi$ (known also to James) of randomly sampled permutations as part of code-design, and then use their shared key to pick a particular permutation $\pi$ that is unknown to James. Alice then transmits the codeword $\pi(\X)$, and Bob attempts to decode $\pi^{-1}(\Y)$. In several problems it can be shown that the effect of this permutation $\pi$ is to ``scramble'' James' jamming action, hence making him behave essentially like i.i.d. noise. In our work we show that similar ideas work even in the presence of a myopic and computationally unbounded jammer James, and result in a computationally efficient communication scheme for Alice and Bob. \section{Model}\label{sec:model} Random variables are denoted by uppercase letters, e.g., $X$, and their realizations are denoted by lowercase letters, e.g., $x$. Sets are denoted by calligraphic letters, e.g., $\mathcal{X}$. Vectors of length $n$ are denoted by boldface letters, e.g., $\X$ and $\mathbf{x}$.
The $i$-th locations of $\X$ and $\mathbf{x}$ are denoted by $X_i$ and $x_i$ respectively. The $Q$-function takes the form \begin{align} Q(x) = \frac{1}{\sqrt{2\pi}} \int_{x}^{\infty}\exp\left(-\frac{u^2}{2}\right)du. \end{align} \noindent{\bf Encoder:} Let $n$ denote the blocklength (number of channel uses) of Alice's communication. Alice's {\it encoder} $\Psi(.,.,.)$ takes three inputs\footnote{In some scenarios in the literature, in addition to the three inputs below, the encoder also incorporates additional private randomness (known {\it a priori} only to Alice, but not to Bob or James). Indeed, in some communication scenarios~\cite{dey2016bit} it can be shown that the throughput in the presence of such private randomness is strictly higher than in its absence. However, since in this work such types of encoders do not help, we ignore this potential flexibility in code design.}: (i) the single bit {\it transmission status} $T$: Alice's silence is denoted by $T=0$ whereas $T=1$ denotes that she is {\it active}.\footnote{Note that no assumptions are made about any probability distribution on $T$.} (ii) the {\it message} $M$, which is either $0$ (if Alice is silent), or uniformly distributed over $\{1,2,\ldots,N\}$ (if Alice is active). (iii) the $\Delta(n)$-bit {\it shared key} $K$ distributed uniformly over $\{0,1\}^{\Delta(n)}$. Prior to transmission, only Alice knows the transmission status $T$ and message $M$, and both Alice and Bob know the key $K$ --- James is {\it a priori} ignorant of all three. If $T=0$, then Alice's encoder $\Psi(0,.,.)$ {\it must} output $\X = {\mathbf 0}$, a length-$n$ vector consisting entirely of zeros. On the other hand if $T = 1$, then Alice's encoder $\Psi(1,.,.)$ may output an arbitrary length-$n$ binary vector $\X$. The collection of all outputs of Alice's encoder $\Psi(1,.,.)$ is called the {\it codebook}, denoted by $\C$. This encoder is known {\it a priori} to all parties (Alice, Bob, and James).
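The $Q$-function defined earlier in this section is the standard Gaussian tail probability; numerically it is most conveniently evaluated through the complementary error function rather than by direct integration. A minimal sketch, using the standard identity $Q(x)=\tfrac{1}{2}\operatorname{erfc}(x/\sqrt{2})$:

```python
import math

# Gaussian tail Q(x) = (1/sqrt(2*pi)) * integral_x^inf exp(-u^2/2) du,
# evaluated via the identity Q(x) = erfc(x / sqrt(2)) / 2.
def Q(x: float) -> float:
    return 0.5 * math.erfc(x / math.sqrt(2.0))

assert abs(Q(0.0) - 0.5) < 1e-12             # half the Gaussian mass lies above 0
assert abs(Q(1.0) + Q(-1.0) - 1.0) < 1e-12   # symmetry: Q(-x) = 1 - Q(x)
```

This form also behaves well in the far tail, where naive numerical quadrature of the defining integral loses accuracy.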
The {\it relative throughput} of the code is defined as $r \triangleq (\log N )/ \sqrt{n}$. \begin{figure} \begin{center} \includegraphics[scale=0.55]{system.pdf} \caption{System diagram.} \label{fig:system} \end{center} \end{figure} \noindent {\bf James' observations:} James receives the vector $\mathbf{Z} = \X \oplus \bar{\mathbf{S}}$, where $\bar{S}_i$ is a Bernoulli$(q)$ random variable. Hence James' observed vector $\mathbf{Z}$ is the output of a BSC$(q)$ channel to which the input is Alice's transmission $\X$. On the basis of this observation $\mathbf{Z}$ and his knowledge of Alice's encoder $\Psi(.,.,.)$, James, as described below, (i) estimates Alice's transmission status $T$, and (ii) generates a jamming vector $\mathbf{S}$ to disrupt communication. \noindent{\textbf{Estimator:}} James' {\it estimator} $\Phi(.): \left\{0, 1\right\}^{n} \rightarrow \left\{0, 1\right\}$ estimates Alice's transmission status $T$ as $\That = \Phi(\mathbf{Z})$. We use a hypothesis-testing metric (defined below) to measure covertness: \begin{definition}[\textbf{Covertness}]\label{def:covert} Let $P_{\emph{FA}}(\Phi) \triangleq \mathbb{P}_{K,\bar{\mathbf{S}}}(\That = 1| \T = 0)$ and $P_{\emph{MD}}(\Phi) \triangleq \mathbb{P}_{M,K,\bar{\mathbf{S}}}(\That = 0| \T = 1)$ respectively be the probability of false alarm and the probability of missed detection of an estimator $\Phi$. The communication is said to be ($1-\epsilon_d$)-covert if\footnote{Note that even if James ignores the knowledge of $\mathbf{Z}$, a na\"ive estimator $\tilde{\Phi}$ (which always outputs $\widehat{\T} = 0$ or $\widehat{\T}=1$) also guarantees $P_{\text{FA}}(\tilde{\Phi}) + P_{\text{MD}}(\tilde{\Phi}) = 1$. Therefore, Definition~\ref{def:covert} implies that James' optimal estimator $\Phi^\ast$ cannot be much better than the na\"ive estimator $\tilde{\Phi}$.
} \begin{align} \lim_{n \to \infty} \min_{\Phi} \{ P_{\emph{FA}}(\Phi) + P_{\emph{MD}}(\Phi) \} \ge 1 - \epsilon_d, \end{align} where the minimum is over all possible estimators $\Phi$. \end{definition} For the optimal estimator $\Phi^\ast$, $P_{\text{FA}}(\Phi^\ast) + P_{\text{MD}}(\Phi^\ast) = 1 - \mathbb{V}(Q_0(\mathbf{Z}),Q_1(\mathbf{Z}))$, where $\mathbb{V}(Q_0(\mathbf{Z}),Q_1(\mathbf{Z}))$ is the variational distance between the two distributions (corresponding to $T=0$ and $T=1$, respectively) on James' observation $\mathbf{Z}$. In general the computational complexity of implementing the optimal estimator $\Phi^\ast$ is high (potentially $\exp(n)$), and analyzing its performance can be tricky. \noindent{\textbf{Jamming function:}} As a function of his observation $\mathbf{Z}$ and his knowledge of Alice's encoding function $\Psi(.,.,.)$, James chooses a {\it jamming function} to output a length-$n$ binary {\it jamming vector} $\mathbf{S}$ of Hamming weight at most $pn$. In general James' jamming function corresponds to a conditional probability distribution $W_{\mathbf{S}|\mathbf{Z},\C}$ that stochastically maps James' observations to his jamming vector $\mathbf{S}$. Note that $W_{\mathbf{S}|\mathbf{Z},\C}$ generates an $n$-letter distribution over length-$n$ binary sequences $\mathbf{S}$, given James' length-$n$ observation $\mathbf{Z}$ and his knowledge of Alice and Bob's code $\C$. \noindent{\textbf{Decoder:}} Bob receives the length-$n$ binary vector $\Y = \X \oplus \textbf{S}$, and then applies his {\it decoding function} $\Gamma(.,.): \{0, 1\}^n \times \{0, 1\}^{\Delta(n)} \rightarrow \{0\} \cup \{1, 2, \ldots , N\}$ to produce his {\it message reconstruction} $\widehat{M}$ from his observed vector $\Y$ and the shared key $K$.
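The identity $P_{\text{FA}}(\Phi^\ast) + P_{\text{MD}}(\Phi^\ast) = 1 - \mathbb{V}(Q_0,Q_1)$ invoked above can be verified by brute force on a toy observation alphabet; the distributions below are illustrative examples, not quantities from the model.

```python
from itertools import product

# Toy check: min over all deterministic tests of P_FA + P_MD equals 1 - V(Q0, Q1).
Q0 = [0.5, 0.3, 0.2]   # distribution of Z when T = 0 (illustrative)
Q1 = [0.2, 0.2, 0.6]   # distribution of Z when T = 1 (illustrative)

V = 0.5 * sum(abs(a - b) for a, b in zip(Q0, Q1))   # variational distance

best = min(
    sum(q0 for q0, phi in zip(Q0, rule) if phi == 1)      # false alarm
    + sum(q1 for q1, phi in zip(Q1, rule) if phi == 0)    # missed detection
    for rule in product([0, 1], repeat=len(Q0))           # all deterministic estimators
)
assert abs(best - (1 - V)) < 1e-9
```

Randomized estimators cannot do better, since the objective is linear in the decision rule; the minimizing rule outputs $\widehat{T}=1$ exactly where $Q_1 \ge Q_0$, matching the hypothesis-testing estimator described in the sequel.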
\noindent{\textbf{Probability of decoding error:}} Bob's probability of error is defined as\footnote{The two terms correspond to Bob's decoder making an error in each of two scenarios: when Alice is silent, and when she is active.} \begin{align} P_{\text{err}} \triangleq \max_{W_{\mathbf{S}|\mathbf{Z},\C}} \left( \mathbb{P}_{K,\bar{\mathbf{S}},\mathbf{S}} (\widehat{M} \neq 0 | T = 0) + \mathbb{P}_{M,K,\bar{\mathbf{S}},\mathbf{S}} (\widehat{M} \neq M | T = 1) \right). \label{eq:pe} \end{align} \noindent{\it Remarks:} Note that the probability as defined in~\eqref{eq:pe} is maximized over the $n$-letter distribution $W_{\mathbf{S}|\mathbf{Z},\C}$. This is to indicate that there may (or may not) be a stochastic component to the jamming function James uses to generate $\mathbf{S}$ from his observation $\mathbf{Z}$. Hence we include an averaging over $\mathbf{S}$. \noindent{\textbf{Achievable relative throughput/covert capacity:}} For any $p,q \in (0,\frac{1}{2})$, $\Delta(n)\geq 0$, and $\epsilon_d \in (0,1)$, a relative throughput $r_{\Delta(n),\epsilon_d}(p,q)$ is said to be {\it achievable} if there exists an infinite sequence of codes with $\Delta(n)$ bits of shared key such that each of the codes in the sequence has relative throughput at least $r_{\Delta(n),\epsilon_d}(p,q)$, $\lim_{n\rightarrow \infty }P_{\text{err}} = 0$, and ensures the communication is $(1-\epsilon_d)$-covert. Then the {\it covert capacity}\footnote{Note that the covert capacity defined here depends on the amount of shared key available.} $r^{\ast}_{\Delta(n),\epsilon_d}(p,q)$ is defined as the supremum over all possible achievable relative throughputs. \noindent{{\bf Positive throughput region:}} For any $\Delta(n)$ and $\epsilon_d \in (0,1)$, the {\it positive throughput region} $\mathcal{R}^{+}_{\Delta(n),\epsilon_d}$ is defined as the collection of values $(p,q)$ such that the covert capacity $r^{\ast}_{\Delta(n),\epsilon_d}(p,q)$ is positive.
\section{Main Results} \label{sec:result} We now summarize the main contributions of this work. There are at least two types of estimators and jamming functions James can use, each of which results in a non-trivial restriction on the reliable and covert throughput obtainable from Alice to Bob. Perhaps surprisingly, there is a unified achievability scheme that Alice and Bob can use that meets these constraints for a wide range of parameters of interest, and thereby shows that these types of estimators/jamming functions are in some sense optimal from James' perspective. \\ \noindent $\bullet$ {\bf Weight-detector:} This estimator (with computational complexity ${\cal O}(n)$) merely computes the Hamming weight of the observed $\mathbf{Z}$, and if this is significantly higher than expected ($qn+c_t\sqrt{n}$ for some constant $c_t$), then James estimates\footnote{Even though this estimator is a sub-optimal proxy to the Hypothesis-testing estimator, it has been shown in~\cite{CheBJ:13,7407378} to be ``good enough'' from James' perspective, in the sense that it constrains Alice's throughput to the same extent as does the Hypothesis-testing estimator, which is known~\cite{neyman1992problem} to be optimal.} $\widehat{T}=1$. \\ \noindent $\bullet$ {\bf Hypothesis-testing estimator:} James first computes two distributions $Q_0(\mathbf{Z})$ and $Q_1(\mathbf{Z})$, which respectively correspond to the distributions of $\mathbf{Z}$ when $T = 0$ and $T = 1$. This (optimal) estimator $\Phi^\ast$ outputs $\widehat{T}=1$ if $Q_1(\mathbf{z}) \ge Q_0(\mathbf{z})$, and outputs $\widehat{T}=0$ if $Q_0(\mathbf{z}) > Q_1(\mathbf{z})$. Note that this estimator potentially has computational complexity $\exp(n)$ for James.\\ \noindent $\bullet$ {\bf Oblivious jamming:} This jamming strategy ignores James' channel observations $\mathbf{Z}$, and chooses $\mathbf{S}$ as a binary addition of multiple (at most ${\cal O}(\sqrt{n})$) codewords from the codebook. 
Since Bob's observation is a sum of Alice's transmission $\X$ and James' jamming vector $\mathbf{S}$, this jamming strategy attempts to confuse Bob as to what Alice truly transmitted. Note that this jamming strategy can be implemented by James causally, with computational complexity at most $\sqrt{n}$ times the computational complexity of Alice's encoder. This converse is presented in Theorem~\ref{thm:converse1}.\\ \noindent $\bullet$ {\bf Myopic jamming:} In this jamming strategy, even if Alice's transmission is covert and hence James is unsure whether or not Alice is active, James nonetheless uses his observations in $\mathbf{Z}$ to guess which channel uses correspond to potential $1$’s in Alice’s transmitted codeword if she indeed is active. He then preferentially flips these bits --- specifically, if $Z_i =1$ then he flips the corresponding $X_i$ with probability about $p/q$, but if $Z_i = 0$ he does not flip $X_i$. Note that this jamming strategy can be implemented by James causally, with computational complexity linear in $n$. This converse is presented in Theorem~\ref{thm:upperbound}. For any channel parameters $p, q \in (0,\frac{1}{2})$ and covertness parameter $\epsilon_d \in (0,1)$, the following definitions help characterize upper and lower bounds on the throughput. \begin{definition}[Weight normalized mutual information and code-weight parameter] For any $\epsilon_d \in (0,1)$ and $p,q \in (0,\frac{1}{2})$ such that $p \le q$, the weight normalized mutual information for Bob and James are respectively defined as \begin{align} &I_B(p,q) \triangleq \frac{p(q-1)}{q}\log \left(\frac{(q-p+pq)(1-p)}{p^2(1-q)}\right) +\log\left(\frac{q-p+pq}{pq}\right), \mbox{ and} \\ &I_J(q) \triangleq (1-2q)\log \left(\frac{1-q}{q}\right), \end{align} and the code weight parameter $t(q,\epsilon_d)$ equals \begin{align} t(q,\epsilon_d) \triangleq \frac{2 \sqrt{q (1-q)}}{1-2q}\cdot Q^{-1}\left(\frac{1-\epsilon_d}{2}\right). 
\label{eq:q} \end{align} \end{definition} The parameter $t(q,\epsilon_d)$ is independent of the blocklength $n$ and corresponds to the normalized average weight of our codewords: roughly speaking, ``most'' codewords have Hamming weight about $t(q,\epsilon_d)\sqrt{n}$. Following the techniques in~\cite{tahmasbi2017second}, it has been optimized to be as large as possible while still ensuring $(1-\epsilon_d)$-covertness. The quantity $I_J(q)$ denotes the mutual information (times the normalization $t(q,\epsilon_d)\sqrt{n}$) corresponding to the BSC$(q)$ from Alice to James, derived by taking the appropriate Taylor series expansion of the mutual information between $\X$ and $\mathbf{Z}$. The quantity $I_B(p,q)$ denotes the mutual information (times the normalization $t(q,\epsilon_d)\sqrt{n}$) of the worst i.i.d. channel inducible from Alice to Bob due to an i.i.d. myopic jamming strategy employed by James. As outlined in Theorem~\ref{thm:upperbound}, this corresponds to the asymmetric channel arising from James flipping $X_i$ with probability approximately $p/q$ only within the support of $\mathbf{Z}$ --- hence James concentrates his bit-flip power in bits he observes to be likelier to correspond to actual transmissions from Alice. While the mutual information from Alice to Bob in the presence of such an i.i.d. jamming strategy clearly serves as an outer bound on Alice's achievable throughput, it is perhaps more surprising that this is also achievable by our codes in a wide range of parameter regimes (corresponding to the {\it achievable positive throughput region} presented in Theorem~\ref{thm:achievable} below). \subsection{Impossibility of covert communication with $\Delta(n) < \frac{1}{2}\log (n)$} When the amount of shared key is less than $\frac{1}{2}\log{n}$, if James employs a weight-detector with an appropriate threshold, combined with an oblivious jamming strategy, it turns out that he can ensure that the probability of decoding error is bounded away from zero.
Roughly speaking, since Alice's codebook consists mostly of low-weight codewords, James is able to confuse Bob by choosing a jamming vector that is the binary addition of multiple potential codewords --- ``spoofs'' --- so that Bob is unable to disambiguate Alice's true $\X$ from among the cacophony of spoofs. The following theorem makes the above claim precise, and the proof of Theorem~\ref{thm:converse1} can be found in Section~\ref{sec:converse}. \begin{theorem} \label{thm:converse1} Let $\epsilon_d\in(0,1)$ and $\Delta(n)<\frac{1}{2}\log(n)$. For every sequence of codes $\{\C_n\}$ of blocklength $n$, message length $\log{N} = r\sqrt{n}$, and encoding complexity $f_{\C}(n)$, at least one of the following is true: \begin{enumerate} \item ($\C_n$ is not covert) There exists a detector $\Phi$ with computational complexity $\mathcal{O}(n)$ such that $P_{\emph{FA}}(\Phi)+P_{\emph{MD}}(\Phi)<1-\epsilon_d$. In particular, $\Phi$ can be chosen to be the weight-detector $\Phi_{\rho}$ for an appropriately set threshold $\rho$. \item ($\C_n$ is not reliable) There exists a constant $\eta=\eta(\epsilon_d,p,q)$ and a causal jamming strategy $W_{\mathbf{S}|\mathbf{Z},\C}$ with computational complexity $\mathcal{O}(\sqrt{n} f_{\C}(n))$, such that the probability of error is bounded from below as \begin{equation} P_{\emph{err}} \ge \mathbb{P}_{M,K,\bar{\mathbf{S}},\mathbf{S}}(M\neq\widehat{M} |T=1)\geq 1-\max\left\{\frac{2}{N},\frac{2^{\Delta(n)}\eta}{\sqrt{n}}\right\}.\label{eq:errorsmallshared}\end{equation} In particular, $W_{\mathbf{S}|\mathbf{Z},\C}$ may be chosen as the oblivious jamming strategy $W^{\emph{(ob)}}_{\mathbf{S}|\mathbf{Z},\C}$. \end{enumerate} \end{theorem} \begin{remark} The lower bound on the probability of error in~\eqref{eq:errorsmallshared} is valid for all values of $\Delta(n)$. However, it is non-vanishing only if $\Delta(n)<\frac{1}{2}\log(n)$.
\end{remark} \begin{table}[] \scriptsize \centering \caption{Summary of main results} \begin{tabular}{|l|l|l|l|l|} \hline Theorem & Amount of shared key $\Delta(n)$ & Relative throughput $r$ & \begin{tabular}[c]{@{}l@{}}Enc/Dec\\ Complexity\end{tabular} & \begin{tabular}[c]{@{}l@{}}Complexity of\\ adversary's attack\end{tabular} \\ \hline Thm~\ref{thm:converse1} (Converse) & less than $\frac{1}{2}\log (n) $ & $0$ & N/A & $\mathcal{O}(\sqrt{n}f_{\C}(n))$, causal \\ \hline Thm~\ref{thm:upperbound} (Converse) & arbitrary & $t(q,\epsilon_d)I_B(p,q)$ & N/A & $\mathcal{O}(n)$, causal \\ \hline Thm~\ref{thm:achievable} (Achievability) & at least $6\log(n)$ & $t(q,\epsilon_d)I_B(p,q)$ & $2^{\mathcal{O}(\sqrt{n})}$ & arbitrary \\ \hline Thm~\ref{thm:efficient} (Achievability) & $\Omega(\sqrt{n}\log (n))$ & \begin{tabular}[c]{@{}l@{}}$\frac{t(q,\epsilon_d)}{\rho^\ast} C_{\text{BAC}}(p,q)$, where $\rho^{\ast}$ and \\ $C_{\text{BAC}}(p,q)$ are defined later \end{tabular} & poly$(n)$ & arbitrary \\ \hline \end{tabular} \end{table} \subsection{An upper bound on the covert capacity for any $\Delta(n)$} Next, we obtain an upper bound on the covert capacity that holds regardless of the amount of shared key available. Our strategy here is to bound the throughput of any simultaneously covert and reliable code by first showing that the average Hamming weight of codewords from such a code must be bounded from above by an appropriate function of the covertness parameter. Next, since the transmitted message must also be reliably decoded under all jamming strategies, this gives an upper bound on the number of distinct messages possible. To bound the number of codewords, we analyze Bob's reliability with respect to the mutual information $t(q,\epsilon_d)I_B(p,q)\sqrt{n}$ of the channel induced under James' myopic jamming strategy $W_{\mathbf{S}|\mathbf{Z},\C}^{\text{(my)}}$. The proof of Theorem~\ref{thm:upperbound} can be found in Section~\ref{sec:upperbound}.
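The weight bound above can be made concrete numerically. The following is a short sketch (standard-library Python; the function name \texttt{t\_weight} and the sample values of $q$ and $\epsilon_d$ are ours, purely for illustration) of the code-weight design parameter $t(q,\epsilon_d)=\frac{2\sqrt{q(1-q)}}{1-2q}\, Q^{-1}\!\left(\frac{1-\epsilon_d}{2}\right)$ given in the table of parameters, using $Q^{-1}(y)=\Phi^{-1}(1-y)$:

```python
from statistics import NormalDist

def t_weight(q: float, eps_d: float) -> float:
    """Code-weight design parameter t(q, eps_d) from the parameter table.

    Q^{-1}(y) is the inverse Gaussian tail function, Q(x) = 1 - Phi(x),
    so Q^{-1}(y) = Phi^{-1}(1 - y).
    """
    q_inv = NormalDist().inv_cdf(1.0 - (1.0 - eps_d) / 2.0)
    return 2.0 * (q * (1.0 - q)) ** 0.5 / (1.0 - 2.0 * q) * q_inv

# The average codeword weight scales as t(q, eps_d) * sqrt(n): a more
# lenient covertness requirement (larger eps_d) permits heavier codewords.
assert t_weight(0.25, 0.05) > t_weight(0.25, 0.02) > 0.0
```

Note the monotonicity checked at the end: relaxing covertness (increasing $\epsilon_d$) increases $t(q,\epsilon_d)$, consistent with the discussion of Fig.~\ref{fig:epsilon}.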
\begin{theorem}\label{thm:upperbound} Let $\epsilon_d\in(0,1)$ and $p,q \in (0,\frac{1}{2})$. For every sequence $\{\Delta(n)\}$, \begin{enumerate} \item if $q \le p$, then $r^{\ast}_{\Delta(n),\epsilon_d}(p,q) = 0$ (corresponds to the region below the blue dashed line in Fig.~\ref{fig:regime}); \item if $p < q$, then $r^{\ast}_{\Delta(n),\epsilon_d}(p,q) \leq t (q,\epsilon_d) I_B(p,q)$ (corresponds to the region above the blue dashed line in Fig.~\ref{fig:regime}). \end{enumerate} \end{theorem} \begin{figure} \begin{center} \includegraphics[scale=0.35]{regime.pdf} \caption{This figure shows the achievable positive throughput regions for different values of $\Delta(n)$. (1) For any $\Delta(n) \in (\Omega(\log(n)),o(\sqrt{n}))$, as shown via Theorem~\ref{thm:achievable}, covert communication is possible above the red curve.
(2) When $\Delta(n) \in \Omega(\sqrt{n})$, as shown via Theorem~\ref{thm:achievable}, the achievable positive throughput region increases. The two black curves delineate the achievable positive throughput regions for $\Delta(n) = 0.015\sqrt{n}$ and $0.03 \sqrt{n}$ respectively. The achievable positive throughput regions for each corresponding $\Delta(n)$ are now above the respective black curves. (3) Regardless of the amount of shared key $\Delta(n)$, no covert communication is possible below the blue dashed line corresponding to $p=q$. This is in contrast to ``classical'' covert communication~\cite{7407378} in the presence of a passive adversary (rather than an actively jamming adversary), wherein increasing amounts of shared key allow for covert communication even when $p>q$, i.e., even when the channel from Alice to Bob has more bit-flips than the channel from Alice to James. The reason is that when $p>q$, the class of channels James can induce from Alice to Bob contains one with zero channel capacity, even though $p<\frac{1}{2}$.} \label{fig:regime} \end{center} \end{figure} \subsection{Achievability of covert communication with $\Delta(n)\ge 6\log(n)$} Next, we give an achievability result based on low-weight random codes and list decoding. The crux of our proof is a novel {\it myopic list-decoding lemma} described in the introduction, and formally presented in Claims~\ref{claim:ratio1} and~\ref{claim:ratio2} in Section~\ref{sec:reliability}. This lemma first demonstrates that for the parameter regime under consideration, with high probability, from James' perspective there are multiple (roughly $\exp(\mathcal{O}(\sqrt{n}))$) equally likely transmissions by Alice --- hence James has a large ``uncertainty set''.
It then shows that, averaged over the uncertainty set, regardless of James' specific choice of jamming vector $\mathbf{S}$, very few codewords $\X$ in James' uncertainty set are ``killed'' by $\mathbf{S}$ --- a codeword $\X$ is killed if, when Bob attempts to list-decode the corresponding $\Y = \X \oplus \mathbf{S}$, his list size is ``too large'' (larger than some polynomial --- say $n^2$). Hence, with high probability over the randomness in which $\X$ in the uncertainty set is instantiated, James is unable to force too large a list on Bob. To complete the argument we show that the dominant error event (among all joint distributions James can induce between $\mathbf{Z}$ and $\mathbf{S}$) corresponds to James behaving in the i.i.d. manner specified in the myopic jamming strategy. Bob is then able to use the ${\cal O}(\log(n))$-sized shared key to disambiguate the list down to a unique element via a hashing scheme. In the following, we present the {\it achievable positive throughput regions} $\underbar{{\cal R}}^+_{\Delta(n),\epsilon_d}(p,q)$, i.e., the parameter regimes in which our codes have positive throughput. The achievable positive throughput regions $\underbar{{\cal R}}^+_{\Delta(n),\epsilon_d}(p,q)$ are subsets of the true positive throughput regions ${\cal R}^+_{\Delta(n),\epsilon_d}(p,q)$. \begin{theorem}\label{thm:achievable} Let $\epsilon_d\in(0,1)$, $p,q \in (0,\frac{1}{2})$, and $\Delta(n) \ge 6\log(n)$. For three different regimes of $\Delta(n)$, the achievable positive throughput regions $\underbar{{\cal R}}^+_{\Delta(n),\epsilon_d}(p,q)$ are given by \begin{enumerate} \item {\bf small-sized key:} $\underbar{{\cal R}}^+_{\Delta(n),\epsilon_d}(p,q) \triangleq \left\{(p,q): p < q \mbox{ and } I_B(p,q) >I_J(q) \right\}$ if $\Delta(n) \in (\Omega(\log(n)), o(\sqrt{n}))$.
\item {\bf moderate-sized key:} $\underbar{{\cal R}}^+_{\Delta(n),\epsilon_d}(p,q) \triangleq \left\{(p,q): p < q \mbox{ and } I_B(p,q)+\frac{\sigma}{t(q,\epsilon_d)} >I_J(q) \right\}$ if $\Delta(n) = \sigma \sqrt{n}$ for a constant $\sigma > 0$. \item {\bf large-sized key:} $\underbar{{\cal R}}^+_{\Delta(n),\epsilon_d}(p,q) \triangleq \left\{(p,q): p < q \right\}$ if $\Delta(n) \in \omega(\sqrt{n})$. \end{enumerate} For any $\Delta(n)$ and $(p,q) \in \underbar{{\cal R}}^+_{\Delta(n),\epsilon_d}(p,q)$, the relative throughput \begin{align} r_{\Delta(n),\epsilon_d}(p,q)= t(q,\epsilon_d) I_B(p,q) \label{eq:long} \end{align} is achievable, which implies that the covert capacity $r^{\ast}_{\Delta(n),\epsilon_d}(p,q) = t(q,\epsilon_d) I_B(p,q)$ (since~\eqref{eq:long} meets the outer bound derived in Theorem~\ref{thm:upperbound}). Both encoding and decoding may be performed with complexity $\exp(\mathcal{O}(\sqrt{n}))$. \end{theorem} The proof of Theorem~\ref{thm:achievable} is included in Section~\ref{sec:achievability}. For any $0 < p < q < \frac{1}{2}$, to achieve relative throughput $t(q,\epsilon_d) I_B(p,q)$, the minimum size of the shared key is $\Delta(n) = \mathcal{O}(\log(n))+ [t(q,\epsilon_d)(I_J(q)-I_B(p,q))]^+\sqrt{n}$, where $x^+ \triangleq \max(0,x)$. The intuition behind this scaling of $\Delta(n)$ is as follows: \begin{enumerate} \item When the BSC$(q)$ channel from Alice to James is worse (has lower mutual information) than the worst channel he can instantiate from Alice to Bob, then ${\cal O}(\log(n))$ bits of shared key suffice for our scheme to work. \item Conversely, if James can make the channel from Alice to Bob worse than the channel to himself, then Alice and Bob need a larger shared key (equaling at least the mutual information difference between the two channels) to cause James' uncertainty set to be large enough for the myopic list-decoding lemma (Claims~\ref{claim:ratio1} and~\ref{claim:ratio2}) to hold.
Structurally, this phenomenon in the presence of an active adversary James is intriguingly reminiscent of the phenomenon observed in~\cite{7407378}, which shows that covert communication in the presence of a passive adversary is possible if and only if the key-rate exceeds the normalized mutual information difference between the main channel and the eavesdropped channel. \end{enumerate} Fig.~\ref{fig:regime} graphically represents the numerics of Theorems~\ref{thm:upperbound} and~\ref{thm:achievable}. Note that $\Delta(n) \in \omega(\sqrt{n})$ behaves the same as $\Delta(n) = \infty$ (by comparing Theorems~\ref{thm:upperbound} and~\ref{thm:achievable}), since both the achievable positive throughput region and the covert capacity are independent of $\Delta(n)$ as long as $\Delta(n) \in \omega(\sqrt{n})$. Also, note that the achievability and the converse may not match when $\Delta(n) \in (\Omega(\log(n)), o(\sqrt{n}))$ or $\Delta(n) = \sigma \sqrt{n}$ (for some small $\sigma > 0$). \subsection{Computationally efficient codes with $\Delta(n)\in \Omega(\sqrt{n}\log{n})$} This result presents computationally efficient encoding and decoding schemes when the amount of shared key is $\Omega(\sqrt{n}\log{n})$. Consider the binary asymmetric channel (BAC) with input alphabet $\mathcal{X}=\{0,1\}$, output alphabet $\mathcal{Y}=\{0,1\}$, and bit-flip probabilities $W_{Y|X}(1|0)=p$ and $W_{Y|X}(0|1)= \frac{(1-q)p}{q}$. This corresponds to the channel from Alice to Bob induced by the myopic jamming strategy. Let $C_{\text{BAC}}(p,q)\triangleq \max_{p(X)}I(X;Y)$ denote the channel capacity, and let Bernoulli$(\rho^\ast)$ be the input distribution that achieves the maximum value of $I(X;Y)$. \begin{theorem}\label{thm:efficient} Let $\epsilon_d \in(0,1)$, $0 < p < q < \frac{1}{2}$, and $r< \frac{t(q,\epsilon_d)}{\rho^\ast}C_{\emph{BAC}}(p,q)$.
There exists a sequence of codes $\{\C_n\}$ of blocklength $n$ and relative throughput $r$, and an $n_0$ such that for every $n\geq n_0$, \begin{enumerate} \item $\C_n$ is $(1-\epsilon_d)$-covert; \item $\C_n$ ensures the probability of error $P_{\emph{err}} \le \epsilon_n$, where $\lim_{n \to \infty} \epsilon_n = 0$; \item $\C_n$ can be encoded and decoded with ${\operatorfont{poly}}(n)$ complexity. \end{enumerate} \end{theorem} The proof of Theorem~\ref{thm:efficient} can be found in Section~\ref{sec:efficient}. This scheme works via the permutation-based coding described in the introduction --- using her shared key, Alice permutes her codeword (of Hamming weight $t(q,\epsilon_d)\sqrt{n}$) uniformly at random among all length-$n$ binary sequences of Hamming weight $t(q,\epsilon_d)\sqrt{n}$. As argued in~\cite{7407378}, such a source-resolvability scheme results in covertness with respect to James. Moreover, as argued in~\cite{lipton1994new,langberg2004private}, such codes also work well to scramble James' bit-flips, making his actions behave in an i.i.d. manner. Note that the above theorem does not place any computational restrictions on James. If we further require James to be of at most polynomial complexity, the amount of shared key needed can be significantly reduced, assuming the existence of cryptographic pseudorandom generators (PRGs). This leads to the following corollary. \begin{corollary} \label{cor:prg}Let $\epsilon_d \in(0,1)$, $0 < p < q < \frac{1}{2}$, and $r< \frac{t(q,\epsilon_d)}{\rho^\ast}C_{\emph{BAC}}(p,q)$. Let $\Delta(n)\in\Omega(n^\xi)$, where $\xi \in (0,1)$ can be chosen arbitrarily small.
There exists a sequence of codes $\{\C_n\}$ of blocklength $n$ and relative throughput $r$, and an $n_0$ such that for every $n\geq n_0$, \begin{enumerate} \item For every polynomial time detector $\Phi$, $P_{\emph{FA}}(\Phi)+P_{\emph{MD}}(\Phi)>1-\epsilon_d$; \item $\C_n$ ensures the probability of error $P_{\emph{err}} \le \epsilon_n$, where $\lim_{n \to \infty} \epsilon_n = 0$; \item $\C_n$ can be encoded and decoded with ${\operatorfont{poly}}(n)$ complexity. \end{enumerate} \end{corollary} \begin{figure*} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{fixed_p.pdf} \caption{The two sets of curves show covert capacities as functions of $q$ for two fixed values of $p$ ($p = 0.15$ and $p = 0.3$) and different amounts of shared key $\Delta(n)$ ($\Delta(n)=o(\sqrt{n}), \Delta(n)=0.015\sqrt{n},$ and $\Delta(n) = \infty$).} \label{fig:fixed_p_delta} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.9\textwidth]{fixed_q.pdf} \caption{The two sets of curves show covert capacities as functions of $p$ for two fixed values of $q$ ($q = 0.25$ and $q = 0.4$) and different amounts of shared key $\Delta(n)$ ($\Delta(n)=o(\sqrt{n}), \Delta(n)=0.01\sqrt{n},$ and $\Delta(n) = \infty$).} \label{fig:fixed_q_delta} \end{subfigure} \caption{Since the covert capacity region would require a three-dimensional plot ($p$ and $q$ along the $x$ and $y$ axes respectively, and the relative throughput along the $z$ axis) that is hard to digest, we instead present here cross-sections of our partial characterization of the capacity region. The plot in Fig.~\ref{fig:fixed_p_delta} shows inner and outer bounds on the optimal relative throughput curves for two values of $p$ ($p=0.15$ and $p=0.3$), and that in Fig.~\ref{fig:fixed_q_delta} shows the corresponding curves for two values of $q$ ($q=0.25$ and $q=0.4$); the covertness parameter is $\epsilon_d = 0.02$ for each of these curves.
For each of these values of $p$/values of $q$, the blue dashed curves indicate outer bounds on the covert capacity, and indeed, these are attainable via matching achievability schemes when $\Delta(n) \in \omega(\sqrt{n})$ --- i.e., effectively unlimited-sized shared keys. As alluded to in the achievable positive throughput region plot in Fig.~\ref{fig:regime}, note the impact of increasing values of $\Delta(n)$ --- the achievable positive throughput region increases, and the corresponding throughput achievable by our coding scheme in Theorem~\ref{thm:achievable} tracks the blue curve corresponding to having unbounded shared keys. The red curve corresponds to the relative throughput attainable by our coding scheme for any value of $\Delta(n) \in (\Omega(\log(n)), o(\sqrt{n}))$, and the black curve corresponds to the attainable relative throughput for $\Delta(n)=0.015\sqrt{n}$ in Fig.~\ref{fig:fixed_p_delta} and $\Delta(n)=0.01\sqrt{n}$ in Fig.~\ref{fig:fixed_q_delta}.} \label{fig:fix_pq} \end{figure*} \subsection{Graphical representation of capacities}\label{appendix:figure} Fig.~\ref{fig:fix_pq} and Fig.~\ref{fig:epsilon} give a graphical representation of the covert capacities. 
\begin{figure*} \centering \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.9\linewidth]{fixed_q_epsilon.pdf} \caption{Covert capacities as functions of $p$ for fixed $q$ ($q = 0.4$) and different covertness parameters $\epsilon_d$.} \label{fig:fixed_q_epsilon} \end{subfigure} \begin{subfigure}{0.45\textwidth} \centering \includegraphics[width=0.9\linewidth]{fixed_p_epsilon.pdf} \caption{Covert capacities as functions of $q$ for fixed $p$ ($p = 0.15$) and different covertness parameters $\epsilon_d$.} \label{fig:fixed_p_epsilon} \end{subfigure} \caption{The covertness parameter $\epsilon_d$ also has an impact on the covert capacity --- as shown in Fig.~\ref{fig:fixed_q_epsilon} and Fig.~\ref{fig:fixed_p_epsilon}, increasing $\epsilon_d$ increases the covert capacity, since Alice's codebook can comprise of somewhat ``heavier'' codewords.} \label{fig:epsilon} \end{figure*} \begin{table}[] \small \centering \caption{Table of parameters} \begin{tabular}{|l|l|l|l|} \hline \textbf{Symbol} & \textbf{Description} & \textbf{Equality/Range} & \textbf{Section} \\ \hline $M$ & Message & $M \in \{1,2,\ldots,N\}$ & Section~\ref{sec:model} \\ \hline $T$ & Transmission status & $T \in \{0,1\}$ & Section~\ref{sec:model} \\ \hline $K$ & Shared key & $K \in \{0,1\}^{\Delta(n)}$ & Section~\ref{sec:model} \\ \hline $p$ & ADVC$(p)$ -- channel from Alice to Bob & $0 \le p \le 0.5$ & Section~\ref{sec:introduction} \\ \hline $q$ & BSC$(q)$ -- channel from Alice to James & $0 \le q < 0.5$ &Section~\ref{sec:introduction} \\ \hline $\epsilon_d$ & Covertness parameter & $\epsilon_d > 0$ & Section~\ref{sec:model} \\ \hline $\Delta(n)$ & Size of shared key & N/A & Section~\ref{sec:model} \\ \hline $t(q,\epsilon_d)$ & Code-weight design parameter & $t(q,\epsilon_d) = \frac{2 \sqrt{q (1-q)}}{1-2q}\cdot Q^{-1}\left(\frac{1-\epsilon_d}{2}\right)$ & Section~\ref{sec:result} \\ \hline $\rho$ & Normalized code-weight design parameter & $\rho = t(q,\epsilon_d)/\sqrt{n}$ & 
Section~\ref{sec:result} \\ \hline $\X$ & Codeword & $\X \in \{0,1\}^n$ & Section~\ref{sec:model} \\ \hline $\mathbf{Z}$ & James' received vector & $\mathbf{Z} \in \{0,1\}^n$ & Section~\ref{sec:model} \\ \hline $\mathbf{S}$ & James' jamming vector & $\mathbf{S} \in \{0,1\}^n$ & Section~\ref{sec:model} \\ \hline $\Y$ & Bob's received vector & $\Y \in \{0,1\}^n$ & Section~\ref{sec:model} \\ \hline $R$ & Rate & $R = (\log N)/n$ & Section~\ref{sec:model} \\ \hline $r$ & Relative throughput & $r = (\log N)/\sqrt{n}$ & Section~\ref{sec:model} \\ \hline $r^{\ast}_{\Delta(n),\epsilon_d}(p,q)$ & Covert capacity & N/A & Section~\ref{sec:model} \\ \hline $\mathcal{R}^+_{\Delta(n),\epsilon_d}(p,q)$ & Positive throughput region & N/A & Section~\ref{sec:model} \\ \hline $\underbar{\cal R}^+_{\Delta(n),\epsilon_d}(p,q)$ & Achievable positive throughput region & N/A & Section~\ref{sec:model} \\ \hline $Q_0(\mathbf{Z})$ & Innocent distribution of $\mathbf{Z}$ ($T = 0$) & N/A & Section~\ref{sec:model} \\ \hline $Q_1(\mathbf{Z})$ & Active distribution of $\mathbf{Z}$ ($T = 1$) & N/A & Section~\ref{sec:model} \\ \hline $I_J(q)$ & Weight normalized mutual information & N/A & Section~\ref{sec:result} \\ \hline $I_B(p,q)$ & Weight normalized mutual information & N/A & Section~\ref{sec:result} \\ \hline \end{tabular} \end{table} \newpage \section{Proof of Theorem~\ref{thm:converse1}} \label{sec:converse} We now show that if $\Delta(n) < \frac{1}{2}\log (n)$, the probability of error is bounded from below by $1-\max\left\{\frac{2}{N},\frac{2^{\Delta(n)}\eta}{\sqrt{n}}\right\}$, for some constant $\eta$ independent of $n$. First note that due to the covertness constraint, most of the codewords have Hamming weight $\mathcal{O}(\sqrt{n})$, otherwise Alice's transmission status can be detected by James' weight-detector. Since James is able to flip $\mathcal{O}(n)$ bits, he can apply an oblivious jamming strategy --- generate his jamming vector by selecting $\mathcal{O}(\sqrt{n})$ codewords. 
Since the number of possible values of the shared key is $2^{\Delta(n)} < \sqrt{n}$, he can select codewords in the following way (without loss of generality we assume $\mathbf{x}(m_0,k_0)$ --- the codeword corresponding to message $m_0$ and shared key $k_0$ --- is transmitted): \begin{enumerate} \item For each value of $k \in \{ 1, \ldots, 2^{\Delta(n)}\}$, James randomly chooses $b = \min\left\{\frac{\mathcal{O}(\sqrt{n})}{2^{\Delta(n)}}, \frac{N}{2}\right\}$ messages $m_1, m_2, \ldots, m_b$, and uses Alice's encoding function to obtain codewords $\mathbf{x}(m_1,k), \mathbf{x}(m_2,k), \ldots, \mathbf{x}(m_b,k)$. \item Let $\mathcal{S}_k \triangleq \{ \mathbf{x}(m_1,k), \mathbf{x}(m_2,k), \ldots, \mathbf{x}(m_b,k)\}$. James' jamming vector $\mathbf{s}$ equals $\oplus_k \oplus_{\mathbf{x} \in \mathcal{S}_k} \mathbf{x}$, i.e., the binary addition of all selected codewords. Bob's observation $\mathbf{y}$ equals the binary addition of $\mathbf{x}(m_0,k_0)$ and $\mathbf{s}$. \end{enumerate} Now let us focus on the set $\mathcal{S}_{k_0}$. We define a modified set \begin{align} \widehat{\mathcal{S}}_{k_0} \triangleq \begin{cases} \mathcal{S}_{k_0} \setminus \{\mathbf{x}(m_0,k_0)\}, \text{ if } \ \mathbf{x}(m_0,k_0) \in \mathcal{S}_{k_0}, \\ \mathcal{S}_{k_0} \cup \{\mathbf{x}(m_0,k_0)\}, \text{ if } \ \mathbf{x}(m_0,k_0) \notin \mathcal{S}_{k_0}. \end{cases} \end{align} We assume there is an oracle who reveals to Bob the value of $k_0$, the set $\widehat{\mathcal{S}}_{k_0}$, all the sets $\mathcal{S}_{k}$ for $k \ne k_0$ selected by James, and whether or not Alice's true codeword lies in the set $\widehat{\mathcal{S}}_{k_0}$. Note that the oracle only strengthens Bob, since he can recover the received vector from the oracle-revealed information. Thus, Bob's probability of decoding error with the knowledge of the oracle information is no larger than that without it.
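The codeword-selection procedure above admits a compact illustration. The following is a toy, non-asymptotic Python sketch (our own parameter values and helper names; channel noise between Alice and Bob is omitted for simplicity) of James XORing $b$ codewords per key value into his jamming vector:

```python
import random

# Toy instance of the oblivious "spoofing" attack: for each key value k,
# James picks b codewords from Alice's public codebook and XORs all of
# them into his jamming vector s.  Bob then receives y = x(m0,k0) XOR s.
random.seed(0)
n, num_keys, b, N = 64, 4, 3, 16   # toy blocklength, 2^{Delta(n)}, b, N
weight = 6                         # low codeword weight, on the order of sqrt(n)

def random_codeword():
    x = [0] * n
    for i in random.sample(range(n), weight):
        x[i] = 1
    return tuple(x)

# codebook[(m, k)] = x(m, k); known to everyone, including James
codebook = {(m, k): random_codeword() for m in range(N) for k in range(num_keys)}

def xor(a, c):
    return tuple(ai ^ ci for ai, ci in zip(a, c))

s = (0,) * n
for k in range(num_keys):
    for m in random.sample(range(N), b):   # the set S_k of b spoofs for key k
        s = xor(s, codebook[(m, k)])

m0, k0 = 0, 0
y = xor(codebook[(m0, k0)], s)             # Bob's observation in this toy case
assert sum(s) <= num_keys * b * weight     # total jamming weight stays small
```

The final assertion mirrors the point of the construction: since each spoof is low-weight and only about $\sqrt{n}$ spoofs are XORed in, the jamming vector respects James' $\mathcal{O}(n)$ bit-flip budget.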
If $\mathbf{x}(m_0,k_0) \in \widehat{\mathcal{S}}_{k_0}$, from Bob's point of view the true message is uniformly distributed over the set $\widehat{\mathcal{S}}_{k_0}$, since he cannot distinguish between the following $(b+1)$ equally likely events: \begin{itemize} \item $\mathcal{E}_{m_0}$: Alice transmits $\mathbf{x}(m_0,k_0)$ and James selects $\left\{\mathbf{x}(m_1,k_0), \mathbf{x}(m_2,k_0), \ldots, \mathbf{x}(m_b,k_0) \right\}$. \item $\mathcal{E}_{m_i}$ ($i \ne 0$): Alice transmits $\mathbf{x}(m_i,k_0)$ and James selects $\left\{\mathbf{x}(m_0,k_0), \mathbf{x}(m_1,k_0), \ldots,\mathbf{x}(m_b,k_0) \right\}\setminus \{\mathbf{x}(m_i,k_0)\}$. \end{itemize} Similarly, if $\mathbf{x}(m_0,k_0) \notin \widehat{\mathcal{S}}_{k_0}$, from Bob's point of view the true message is uniformly distributed over the set $\widehat{\mathcal{S}}_{k_0}^c$ (the complement of $\widehat{\mathcal{S}}_{k_0}$). These imply that the probability of decoding error (when $T = 1$) is bounded from below by $1 - \max\left\{\frac{2^{\Delta(n)}\eta}{\sqrt{n}}, \frac{2}{N}\right\}$, for some $\eta > 0$. \section{Proof of Theorem~\ref{thm:upperbound}} \label{sec:upperbound} The upper bound in Theorem~\ref{thm:upperbound} is obtained by considering a specific myopic jamming strategy performed by James, as described in the following. This strategy leads to an artificial binary asymmetric channel (BAC) between Alice and Bob, and in turn limits the message size of any code that simultaneously ensures $(1-\epsilon_d)$-covertness and a small probability of error $P_{\text{err}}$. \subsection{A myopic jamming strategy} Consider the jamming strategy $W^{\text{(my)}}_{\mathbf{S}|\mathbf{Z},\C}$ described as follows. For each $i \in \{1, \ldots, n \}$, James does not flip bit $X_i$ if the corresponding $Z_i$ equals $0$, and flips it with probability approximately $p/q$ if the corresponding $Z_i$ equals $1$. This ensures that his bit-flips are stochastically distributed in the support of the $\mathbf{Z}$ vector.
Since the $\mathbf{Z}$ vector is correlated with Alice's transmission $\X$ via a BSC$(q)$, this ensures that James' jamming vector $\mathbf{S}$ is likelier to flip $1$'s in $\X$ to $0$'s, than it is to flip $0$'s in $\X$ to $1$'s. More precisely, let $\nu = n^{-1/3}$ be a slackness parameter. For any $i \in \{1, \ldots, n \}$, $$ S_i = \begin{cases} 0, &\mbox{ with probability $1$ if $Z_i = 0$, }\\ 0, &\mbox{ with probability $1-\frac{p(1 -\nu)}{q}$ if $Z_i = 1$, }\\ 1, &\mbox{ with probability $\frac{p(1 -\nu)}{q}$ if $Z_i = 1$. } \end{cases} $$ Note that generating $\mathbf{S}$ in the i.i.d. manner specified above may in general result in James' exceeding his jamming budget $pn$. However, by setting the slackness parameter $\nu = n^{-1/3}$, we ensure with probability at least $1 - \exp(-\mathcal{O}(n^{\frac{1}{3}}))$, the Hamming weight of $\mathbf{S}$ is bounded from above by $pn$. By using this strategy, James induces a BAC from Alice to Bob with channel transition probabilities \begin{align*} &\wy(0|0) = 1-p(1-\nu), &\wy(1|0) &= p(1-\nu), \\ &\wy(0|1) = \frac{(1-q)p}{q}(1-\nu), &\wy(1|1) &= 1-\frac{(1-q)p}{q}(1-\nu). \end{align*} Note that when $q<\frac{1}{2}$, $\wy(0|1) > \wy(1|0)$, which means that the probability of a bit-flip is higher when $X_i = 1$, than when $X_i = 0$. \subsection{Converse with respect to the BAC} Though the error criterion of interest in this work is the average probability of error $P_{\text{err}}$ defined in~\eqref{eq:pe}, to prove the upper bound in Theorem~\ref{thm:upperbound}, we take a detour by introducing another error criterion --- the {\it max-average probability of error} \begin{align} \widetilde{P}_{\text{err}} \triangleq \max_{k} \{ \mathbb{P}(M \ne \widehat{M}|K=k,T = 1) + \mathbb{P}(\widehat{M} \ne 0 | T = 0)\}, \end{align} which is maximized over the shared key and averaged over the message. 
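The BAC crossover probabilities displayed above follow by averaging the myopic flip rule over the BSC$(q)$ statistics of $\mathbf{Z}$: when $X_i=0$ we have $Z_i=1$ with probability $q$, and when $X_i=1$ with probability $1-q$. A minimal Python sketch checking this computation (the function name and the sample values of $p$, $q$, $n$ are ours, purely illustrative):

```python
# Sanity check: the rule "flip with prob. p(1-nu)/q whenever Z_i = 1"
# induces exactly the BAC crossover probabilities stated for the myopic
# jamming strategy.
def induced_crossovers(p, q, nu):
    flip_given_z1 = p * (1 - nu) / q
    w_10 = q * flip_given_z1            # P(Y != X | X = 0) = p(1 - nu)
    w_01 = (1 - q) * flip_given_z1      # P(Y != X | X = 1) = (1-q)p(1-nu)/q
    return w_10, w_01

p, q, nu = 0.1, 0.3, 64 ** (-1 / 3)     # nu = n^(-1/3) for a toy n = 64
w_10, w_01 = induced_crossovers(p, q, nu)
assert abs(w_10 - p * (1 - nu)) < 1e-12
assert abs(w_01 - (1 - q) * p * (1 - nu) / q) < 1e-12
assert w_01 > w_10   # asymmetry: 1 -> 0 flips are likelier whenever q < 1/2
```

The last assertion is the asymmetry noted above: for $q<\frac{1}{2}$ the factor $(1-q)/q$ exceeds one, so bit-flips fall disproportionately on positions where $X_i=1$.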
Lemma~\ref{lemma:reduction} below, which is adapted from~\cite[Lemma 6]{zhang2019covert}, establishes a useful connection between the two error criteria. For the benefit of the reader, we provide a detailed proof of Lemma~\ref{lemma:reduction} in the supplementary document~\cite{zhang2019supp}. \begin{lemma}[Adapted from~\cite{zhang2019covert}] \label{lemma:reduction} Suppose a code $\C$, which contains $2^{\Delta(n)}$ sub-codes of size $N$, guarantees $(1-\epsilon_d)$-covertness and $P_{\emph{err}} \le \epsilon_n$. Then, there exists another code $\C'$ containing $2^{\Delta'(n)}$ sub-codes of size $N'$ that guarantees $(1-\epsilon_d)$-covertness and $\widetilde{P}_{\emph{err}} \le \epsilon'_n$. In particular, \begin{align*} \lim_{n \to \infty} \epsilon_n = \lim_{n \to \infty} \epsilon'_n = 0, \ \ \lim_{n \to \infty} \frac{\log N}{\sqrt{n}} = \lim_{n \to \infty} \frac{\log N'}{\sqrt{n}}, \ \ \lim_{n \to \infty} \frac{\Delta(n)}{\Delta'(n)} = 1. \end{align*} \end{lemma} First, we provide an upper bound on the number of bits that can be reliably and covertly transmitted under the max-average probability of error $\widetilde{P}_{\text{err}}$. Then, we use a reduction argument to show that the aforementioned upper bound is also valid under the average probability of error $P_{\text{err}}$. The proof of the converse leverages a combination of techniques from~\cite{7407378, tahmasbi2017second, zhang2019covert}. \subsubsection{Converse under $\widetilde{P}_{\text{err}}$} Consider any code $\C$ containing $2^{\Delta(n)}$ sub-codes of size $N$ (indexed by $\{\C_i\}_{i=1}^{2^{\Delta(n)}}$) that ensures $(1-\epsilon_d)$-covertness and a vanishing max-average probability of error $\widetilde{P}_{\text{err}} \le \epsilon_n$, where $\lim_{n \to \infty} \epsilon_n = 0$. We first find an upper bound on the maximum weight of codewords in a suitable sub-code.
Specializing~\cite[Lemma 12]{tahmasbi2017second} to our setting, we obtain that for any $\gamma\in(0,1)$, there exists a subset $\C^\gamma$ of $\C$ such that \begin{enumerate} \item $|\C^{\gamma}|\geq \gamma|\C|$ \item there is a constant $c_0$ such that \begin{align*} w(\C^\gamma)\triangleq\frac{\max_{\mathbf{x}\in\C^\gamma} \text{wt}_H(\mathbf{x})}{\sqrt{n}}\leq \frac{2\sqrt{q(1-q)}}{1-2q} Q^{-1}\left(\frac{1-\epsilon_d}{2}-\frac{c_0}{\sqrt{n}}-\gamma\right). \end{align*} \end{enumerate} For each sub-code $\C_i$ ($i \in \{ 1, \ldots, 2^{\Delta(n)}\}$), the intersection between $\C^{\gamma}$ and $\C_{i}$ is denoted by $\C_i^{\gamma}$. Note that there must exist a sub-code $\C_i$ such that the size of $\C_i^{\gamma}$ is at least $\gamma N$, and we denote this sub-code by $\C_{i^*}$. Let $\gamma = \max \{\sqrt{\epsilon_n}, \exp(-n^{\frac{1}{2}-\varepsilon})\}$ for some small $\varepsilon > 0$. Since the average probability of error of $\C_{i^*}$ is at most $\epsilon_n$, the average probability of error of $\C_{i^*}^\gamma$, denoted by $\epsilon_n'$, is bounded from above as $$\epsilon_n' \le \epsilon_n/\gamma = \min\{\sqrt{\epsilon_n}, \epsilon_n \exp(n^{\frac{1}{2}-\varepsilon})\} \le \sqrt{\epsilon_n},$$ which is due to the fact that $|\C_{i^*}^\gamma|\cdot \epsilon_n' \le |\C_{i^*}|\cdot \epsilon_n$ if Bob simply employs the decoding rule for $\C_{i^*}$. Let $\widetilde{M}$ be the uniformly distributed random variable that corresponds to the message in $\C_{i^*}^\gamma$, $\bar{X}$ be the random variable distributed according to $\text{Bernoulli}(w(\C^\gamma)/\sqrt{n})$, $\bar{Y}$ be the random variable corresponding to the output of the BAC $\wy$ with input $\bar{X}$. 
By standard information inequalities, we have \begin{align} \log N +\log{\gamma}\le H(\widetilde{M}) &= I(\widetilde{M};\Y K) + H(\widetilde{M}|\Y K) \label{eq:home} \\ & \le I(\widetilde{M};\Y K) + \epsilon_n'\cdot \log (\gamma N) + 1 \label{eq:hom1} \\ & = I(\widetilde{M};\Y | K) + \epsilon_n'\cdot \log (\gamma N) + 1 \label{eq:hom2} \\ &\le I(\widetilde{M}K;\Y) + \epsilon_n'\cdot \log (\gamma N) + 1 \label{eq:hom3} \\ &\le I(\X;\Y) + \epsilon_n'\cdot \log (\gamma N) + 1 \label{eq:home4} \\ &\le \sum_{i=1}^n I(X_i;Y_i) + \epsilon_n'\cdot \log (\gamma N) + 1 \\ &\leq n I(\bar{X};\bar{Y}) + \epsilon_n'\cdot \log (\gamma N) + 1. \label{eq:home6} \end{align} Inequality~\eqref{eq:hom1} follows from Fano's inequality, and equality~\eqref{eq:hom2} holds since $I(\widetilde{M};\Y K) = I(\widetilde{M};\Y|K)+I(\widetilde{M};K)$ and $\widetilde{M}$ is independent of $K$. Similarly, inequality~\eqref{eq:hom3} holds since $I(\widetilde{M};\Y|K) = I(\widetilde{M}K;\Y)-I(K;\Y) \le I(\widetilde{M}K;\Y)$. Inequality~\eqref{eq:home4} is due to the data processing inequality, and~\eqref{eq:home6} is due to the concavity of mutual information with respect to the marginal distributions. Hence, we have \begin{align} \frac{\log N}{\sqrt{n}} \le \frac{1}{1-\epsilon_n'}\sqrt{n}I(\bar{X};\bar{Y}) + \frac{1-(1-\epsilon'_n)\log \gamma}{(1-\epsilon_n')\sqrt{n}}, \end{align} where $\frac{1-(1-\epsilon'_n)\log \gamma}{(1-\epsilon_n')\sqrt{n}}$ goes to zero as $n \to \infty$, since $\gamma = \max \{\sqrt{\epsilon_n}, \exp(-n^{\frac{1}{2}-\varepsilon})\}$. The mutual information $I(\bar{X};\bar{Y})$ of the BAC can be approximated as \begin{align*} \lefteqn{\sqrt{n}I(\bar{X};\bar{Y})}\\ & = \sqrt{n}(H(\bar{Y}) - H(\bar{Y}|\bar{X})) \\ &\overset{\text{$n \to \infty$}}= \frac{w(\C^\gamma) p(q-1)}{q}\log \left(\frac{(1-p)(q-p+pq)}{p^2(1-q)}\right) + w(\C^\gamma) \log \left(\frac{q-p+pq}{pq}\right) \\ &\overset{\text{$n \to \infty$}}= t(q,\epsilon_d) I_B(p,q).
\end{align*} Therefore, we obtain that $\lim_{n \to \infty} \frac{\log N}{\sqrt{n}} \le t(q,\epsilon_d) I_B(p,q)$. \vspace{0.1cm} \subsubsection{Converse under $P_{\text{err}}$ via reduction} Now we use a reduction argument, based on Lemma~\ref{lemma:reduction}, to show that the converse result also holds under $P_{\text{err}}$ (the probability of error of interest in this work). Suppose there exists a code $\C$, which contains $2^{\Delta(n)}$ sub-codes of size $N$, that ensures $(1-\epsilon_d)$-covertness and $P_{\text{err}} \le \epsilon_n$. If $\lim_{n \to \infty} \frac{\log N}{\sqrt{n}} > t(q,\epsilon_d) I_B(p,q)$, then a contradiction arises since Lemma~\ref{lemma:reduction} says that there exists another code $\C'$ containing $2^{\Delta'(n)}$ sub-codes of size $N'$ that ensures $(1-\epsilon_d)$-covertness, $\widetilde{P}_{\text{err}} \le \epsilon'_n$ (where $\lim_{n \to \infty} \epsilon'_n = 0$), and \begin{align*} \lim_{n \to \infty} \frac{\log N'}{\sqrt{n}} = \lim_{n \to \infty} \frac{\log N}{\sqrt{n}} > t(q,\epsilon_d) I_B(p,q). \end{align*} Therefore, we conclude that any code $\C$ that ensures $(1-\epsilon_d)$-covertness and a vanishing average probability of error $P_{\text{err}}$ must satisfy $$\lim_{n \to \infty} \frac{\log N}{\sqrt{n}} \leq t(q,\epsilon_d) I_B(p,q).$$ This completes the proof of Theorem~\ref{thm:upperbound}. \section{Proof of Theorem~\ref{thm:achievable}}\label{sec:achievability} When the amount of shared key $\Delta(n) \in (\Omega(\log(n)),o(\sqrt{n}))$ (the small-sized key regime\footnote{In fact, as we shall see, $\Delta(n) = 6\log(n)$ suffices.}), Theorem~\ref{thm:achievable} indicates that the optimal throughput $t(q,\epsilon_d) I_B(p,q)$ is achievable as long as $I_J(q)<I_B(p,q)$, and this is the main focus of this section. We introduce our coding scheme in Sub-section~\ref{sec:code}, and sketch the proofs of reliability and covertness in Sub-sections~\ref{sec:reliability} and~\ref{sec:covertness}, respectively.
After proving the above achievability results for the small-sized key regime, it is relatively straightforward to extend them to the moderate-sized and large-sized key regimes, and we discuss such extensions in detail in Sub-section~\ref{sec:large}. Moreover, we also provide the detailed proofs of several technical lemmas (Lemmas~\ref{lemma:error1}-\ref{lemma:error3}, which are important for proving reliability) in Sub-sections~\ref{sec:reliability1}-\ref{sec:reliability3}, respectively. \subsection{Coding scheme} \label{sec:code} \noindent{\textbf{Polynomial hash function:}} Let $\Delta(n) = 6\log(n)$. Alice and Bob partition the $6\log(n)$ bits of shared key $K$ into two equal parts, $K_1$ and $K_2$, each containing $3\log(n)$ bits. Let \begin{align} L \triangleq n^3, \end{align} so that both $K_1$ and $K_2$ can be viewed as elements of the finite field $\mathbb{F}_{L}$. Let $l \triangleq r\sqrt{n}/(3\log(n))$. The message $M$ is partitioned into $l$ chunks $M_1, M_2, \ldots, M_l$ of $3\log(n)$ bits each. Likewise, each message chunk $M_i$ is also viewed as an element of $\mathbb{F}_{L}$. Alice uses the message $M$ and the shared key $K = (K_1,K_2)$ to compute a hash $G$ based on the {\it polynomial hash function}, which is defined as \begin{align} G = G_K(M) \triangleq K_2 + \sum_{u=1}^l K_1^{u} M_{u}, \label{eq:hash} \end{align} where the additions and the multiplications are performed over $\mathbb{F}_{L}$. Note that this usage of the shared key is distinct from the manner in which it is used in a wiretap secrecy setting. In particular, in a wiretap secrecy setting, it is highly unlikely that a single codeword could correspond to many message-key pairs, while in our constructions, each codeword corresponds to multiple different message-key pairs. 
This property is critical since it ensures that part of the shared key ($K_1$ in this work) is uniformly distributed from James' perspective even if he gains some information from his received vector, and this uniformity is critical in the list decoding argument. \noindent{\textbf{Codebook generation:}} Let the relative throughput $r = t(q,\epsilon_d)I_B(p,q) - \delta$ (where $\delta > 0$ can be chosen arbitrarily small). For each message-hash pair $(i,j) \in \{0,1\}^{r\sqrt{n}}\times \{0,1\}^{3\log(n)}$, we generate a length-$n$ codeword $\mathbf{x}_{ij}$ according to $P_{\X} \triangleq \prod_{i=1}^n P_X$, where $P_X$ is a Bernoulli$(\rho)$ distribution with \begin{align} \rho \triangleq \frac{t(q,\epsilon_d)}{\sqrt{n}} = \frac{2 \sqrt{q (1-q)}}{(1-2q)\sqrt{n}}\cdot Q^{-1}\left(\frac{1-\epsilon_d}{2}\right). \end{align} For different message-hash pairs, the codewords are generated independently. The codebook is the collection of the codewords $\mathbf{x}_{ij}$ for all $(i,j)$. \noindent{\textbf{Encoder:}} To encode a message $M=i$, Alice uses the shared key $K=k$ and the polynomial hash function to compute a hash $j = G_k(i)$. She then transmits the codeword $\mathbf{x}_{ij}$ to Bob. \begin{claim} \label{claim:remark} For any message $M = i$ and hash $G=j$, the number of shared keys $K=(K_1,K_2)$ that are consistent with $(i,j)$ equals $L = n^3$, i.e., \begin{align} \sum_{k \in (\mathbb{F}_{L})^2} \mathbbm{1}\left\{j=G_k(i)\right\} = L. \label{eq:qinaide} \end{align} \end{claim} As is common in the literature, we assume the message $M$ and the shared key $K$ are uniformly distributed, and $M$ and $K$ are generated independently. 
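As a sanity check, the polynomial hash~\eqref{eq:hash} and the key-counting identity~\eqref{eq:qinaide} can be illustrated with a small Python sketch. This is purely illustrative and not part of the construction: arithmetic is carried out modulo a small prime standing in for the extension field $\mathbb{F}_L$ with $L = n^3$, and all parameter values are toy choices.

```python
def poly_hash(msg_chunks, k1, k2, modulus):
    """Polynomial hash G_K(M) = K_2 + sum_u K_1^u * M_u over a prime field.

    (The paper works over the extension field F_L with L = n^3; a prime
    modulus is used here only for simplicity of illustration.)
    """
    g = k2
    power = 1
    for m_u in msg_chunks:
        power = (power * k1) % modulus  # K_1^u
        g = (g + power * m_u) % modulus
    return g

P = 101                    # toy prime standing in for L
msg = [7, 42, 13]          # toy message chunks, viewed as field elements
j = poly_hash(msg, k1=5, k2=9, modulus=P)

# Key-counting check: for a fixed (message, hash) pair, exactly L keys
# (k1, k2) are consistent, since k2 is uniquely determined by each k1.
consistent = sum(1 for k1 in range(P) for k2 in range(P)
                 if poly_hash(msg, k1, k2, P) == j)
print(consistent == P)  # True
```

For each of the $L$ choices of $k_1$, the value $k_2 = j - \sum_u k_1^u m_u$ is uniquely determined, which is exactly the counting argument behind~\eqref{eq:qinaide}.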
These assumptions together with~\eqref{eq:qinaide} imply that each codeword $\mathbf{x}_{ij}$ is equally likely to be transmitted, since \begin{align} \mathbb{P}(M=i,G=j) &= \sum_{k \in (\mathbb{F}_{L})^2} \mathbb{P}(M=i,K=k,G=j) \\ &= \sum_{k \in (\mathbb{F}_{L})^2} \mathbb{P}(M=i) \mathbb{P}(K=k) \mathbb{P}(G=j|M=i,K=k) \\ &=\frac{1}{N} \frac{1}{L^2} \sum_{k \in (\mathbb{F}_{L})^2} \mathbbm{1}\left\{j=G_k(i)\right\} \\ &=\frac{1}{NL}. \end{align} \qed \noindent{\textbf{Decoding rule:}} Given a received vector $\mathbf{y}$, the list decoder $\mathcal{L}(\mathbf{y})$ contains all the codewords satisfying the following constraints: \begin{align} \mathcal{L}(\mathbf{y}) \triangleq \left\{ \mathbf{x}: \begin{array}{ll} nf^{xy}_{10}(\mathbf{x},\mathbf{y}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\varepsilon_1) \\ nf^{xy}_{11}(\mathbf{x},\mathbf{y}) > \rho n\left(1-\frac{p(1-q)}{q}\right)(1-\varepsilon_2) \end{array} \right\}. \label{eq:list_decoder} \end{align} In this work we set $\varepsilon_1 = \frac{1}{\log(n)}$ and $\varepsilon_2 =\frac{p-pq}{(q-p+pq)\log(n)}$, and explain the reason for such choices in Sub-section~\ref{sec:reliability}. The decoding rule is as follows: \begin{enumerate} \item output all the codewords satisfying the list decoding rule~\eqref{eq:list_decoder} to $\mathcal{L}(\mathbf{y})$; \item decode $\widehat{M} = i$ if $\mathbf{x}_{ij}$ (for some $j$) is the unique codeword in $\mathcal{L}(\mathbf{y})$ that is consistent with the shared key $k$ (i.e., $j = G_k(i)$). Decode $\widehat{M} =0$ if no codeword in $\mathcal{L}(\mathbf{y})$ is consistent with $k$. Declare an error otherwise. \end{enumerate} \noindent{\textbf{Decoding error events:}} When Alice is active $(T = 1)$, we suppose $M = i, K=k, G=j=G_k(i)$ without loss of generality. 
The decoding error $\mathcal{E}_1$ occurs if the transmitted codeword $\mathbf{x}_{ij}$ is not the unique codeword in the list $\mathcal{L}(\mathbf{y})$ that is consistent with the shared key $k$, i.e., \begin{align} \mathcal{E}_1:\big\{ \{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{y})\} \text{ or } \{ \exists (i',j')\ne(i,j): \mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y}) \text{ and } j' = G_k(i') \} \big\}. \end{align} When Alice is silent $(T = 0)$, the decoding error $\mathcal{E}_0$ occurs if there exists a codeword $\mathbf{x}_{ij} \in \mathcal{L}(\mathbf{y})$ such that $\mathbf{x}_{ij}$ is consistent with the shared key $k$, i.e., \begin{align} \mathcal{E}_0: \big\{ \exists (i,j): \mathbf{x}_{ij} \in \mathcal{L}(\mathbf{y}) \text{ and } j = G_k(i) \big\}. \end{align} \subsection{Proof sketch of reliability} \label{sec:reliability} \subsubsection{Transmission status $T = 1$} Let $\mathcal{E}_\text{list}$ be the error event corresponding to the list decoder, which occurs if one of the following two events occurs: \begin{itemize} \item $\mathcal{E}_\text{list}^{(1)}$: the transmitted codeword $\mathbf{x}_{ij}$ does not belong to $\mathcal{L}(\mathbf{y})$; \vspace{2pt} \item $\mathcal{E}_\text{list}^{(2)}$: the number of codewords $\mathbf{x}_{i'j'}$ (for $(i',j') \ne (i,j)$) falling into $\mathcal{L}(\mathbf{y})$ is at least $n^2$. \end{itemize} Generally speaking, we hope that the list contains the correct codeword $\mathbf{x}_{ij}$, and also that the list size is kept as small as possible (no larger than $n^2$). Lemmas~\ref{lemma:error1} and~\ref{lemma:error2} below respectively show that with high probability over the code design, a randomly chosen code $\C$ ensures that the probabilities of the error events $\mathcal{E}_\text{list}^{(1)}$ and $\mathcal{E}_\text{list}^{(2)}$ go to zero as $n$ goes to infinity. 
\begin{restatable}{lemma}{errorone} \label{lemma:error1} With probability at least $1 - \exp(-\mathcal{O}(n^{1/4}))$ over the code design, a randomly chosen code $\C$ ensures \begin{align} \mathbb{P}(\mathcal{E}_{\emph{list}}^{(1)}) \le 3\exp(-n^{1/8}). \notag \end{align} \end{restatable} \begin{restatable}{lemma}{errortwo} \label{lemma:error2} With probability at least $1 - \exp(-\mathcal{O}(\sqrt{n}))$ over the code design, a randomly chosen code $\C$ ensures \begin{align} \mathbb{P}(\mathcal{E}_{\emph{list}}^{(2)}) \le \exp(-n^{1/4}). \notag \end{align} \end{restatable} Combining Lemmas~\ref{lemma:error1} and~\ref{lemma:error2}, we have \begin{align} \mathbb{P}(\mathcal{E}_\text{list}) = \mathbb{P}(\mathcal{E}_\text{list}^{(1)} \cup \mathcal{E}_\text{list}^{(2)}) &\le \mathbb{P}(\mathcal{E}_\text{list}^{(1)}) + \mathbb{P}(\mathcal{E}_\text{list}^{(2)}) \notag \\ &\le 3\exp(-n^{1/8}) + \exp(-n^{1/4}). \label{eq:list} \end{align} Secondly, even if the list decoder does not make an error, one still needs to worry about the situation in which more than one codeword in $\mathcal{L}(\mathbf{y})$ is consistent with the shared key $k$. In particular, the transmitted codeword $\mathbf{x}_{ij}$ is consistent with $k$ since $j = G_k(i)$ by the definition of the encoder, so we hope none of the other codewords $\mathbf{x}_{i'j'} \ne \mathbf{x}_{ij}$ are consistent with $k$. We denote the complement of $\mathcal{E}_{\text{list}}$ by $\mathcal{E}_{\text{list}}^c$, which means that the transmitted codeword $\mathbf{x}_{ij} \in \mathcal{L}(\mathbf{y})$, and the number of codewords (other than $\mathbf{x}_{ij}$) falling into $\mathcal{L}(\mathbf{y})$ is bounded from above by $n^2$. Lemma~\ref{lemma:error3} below shows that as long as the list decoder is ``well-behaved'' (i.e., $\mathcal{E}_{\text{list}}$ does not occur), the probability of decoding error $\mathbb{P}(\mathcal{E}_1|\mathcal{E}_{\text{list}}^c)$ will be negligible. 
\begin{restatable}{lemma}{errorthree} \label{lemma:error3} Conditioned on $\mathcal{E}_{\text{list}}^c$, the error event $\mathcal{E}_1$ occurs with probability (over the shared key $K$) at most $\mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right)$. \end{restatable} We provide the detailed proofs of Lemmas~\ref{lemma:error1}-\ref{lemma:error3} in Sub-sections~\ref{sec:reliability1}-\ref{sec:reliability3}. Combining Lemmas~\ref{lemma:error1}-\ref{lemma:error3} and inequality~\eqref{eq:list}, we have the following lemma. \begin{lemma} \label{lemma:e1} When Alice is active ($T = 1$), with probability at least $1 - \exp(-\mathcal{O}(n^{1/4}))$ over the code design, a randomly chosen code $\C$ ensures a vanishing probability of decoding error, i.e., $$\mathbb{P}(\mathcal{E}_1) = \max_{W_{\mathbf{S}|\mathbf{Z},\C}} \mathbb{P}(\widehat{M} \neq M | T = 1) \le \mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right).$$ \end{lemma} \noindent{\it Proof:} By the total probability theorem, we have \begin{align} \mathbb{P}(\mathcal{E}_1) &= \mathbb{P}(\mathcal{E}_\text{list})\mathbb{P}(\mathcal{E}_1|\mathcal{E}_\text{list}) + \mathbb{P}(\mathcal{E}^c_\text{list})\mathbb{P}(\mathcal{E}_1|\mathcal{E}^c_\text{list}) \\ &\le \mathbb{P}(\mathcal{E}_\text{list}) + \mathbb{P}(\mathcal{E}_1|\mathcal{E}^c_\text{list}) \\ &\le 3\exp(-n^{1/8}) + \exp(-n^{1/4}) + \mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right) \\ &= \mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right). \end{align} \qed \begin{figure} \begin{center} \includegraphics[scale=0.33]{roadmap.PNG} \caption{A road-map for the proof of reliability.} \label{fig:map} \end{center} \end{figure} \subsubsection{Transmission status $T = 0$} We provide an upper bound on the probability of error $\mathbb{P}(\mathcal{E}_0)$ as follows. 
\begin{lemma} \label{lemma:e0} With probability at least $1 - \exp(-\mathcal{O}(\sqrt{n}))$ over the code design, a randomly chosen code $\C$ ensures a vanishing probability of decoding error $\mathbb{P}(\mathcal{E}_0)$, i.e., $$\mathbb{P}(\mathcal{E}_0) = \max_{W_{\mathbf{S}|\mathbf{Z},\C}} \mathbb{P} (\That \neq 0 | \T = 0) \le \mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right).$$ \end{lemma} The proof of Lemma~\ref{lemma:e0} is similar to that of Lemma~\ref{lemma:e1}. When $\T = 0$, no codeword is transmitted by Alice, and the list decoder makes an error $\mathcal{E}_{\text{list}}$ if and only if more than $n^2$ codewords fall into the list. Similar to Lemma~\ref{lemma:error2}, we argue that with probability at least $1 - \exp(-\mathcal{O}(\sqrt{n}))$ over the code design, a randomly chosen code $\C$ ensures a vanishing probability of list-decoding error. This can be proved by simply reusing the proof of Lemma~\ref{lemma:error2} in Sub-section~\ref{sec:reliability2}, by noting that a length-$n$ zero vector can be viewed as a typical codeword, as defined in~\eqref{eq:typical_x}. Secondly, conditioned on $\mathcal{E}_{\text{list}}^c$, the probability (over the shared key) that more than one codeword in the list satisfies the polynomial hash function is at most $\mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right)$. This completes the proof sketch of Lemma~\ref{lemma:e0}. Therefore, \begin{align} P_{\text{err}} \le \mathbb{P}(\mathcal{E}_0) + \mathbb{P}(\mathcal{E}_1) \le \mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right). \end{align} \subsection{Proof of covertness} \label{sec:covertness} Note that the proof of covertness directly follows from prior work on covert communication over probabilistic channels, since James' observation $\mathbf{Z}$ (which is used to estimate Alice's transmission status) depends only on the probabilistic wiretap channel BSC($q$), and is independent of the adversarial jamming structure. 
Hence, we only provide a high-level proof sketch, and refer the interested readers to~\cite{CheBJ:13,7407378,7447769,TahmasbiB17} for detailed proofs. The proof of covertness essentially connects to the analysis of the distributions of James' channel outputs $\mathbf{Z}$. Let $Q_0(\mathbf{Z})$ be the $n$-letter innocent distribution of James' channel output $\mathbf{Z}$ when Alice is silent ($\T = 0$), and $Q_1(\mathbf{Z})$ be the $n$-letter active distribution of James' channel output $\mathbf{Z}$ when Alice is transmitting ($\T = 1$). A standard statistical argument~\cite{lehmann2006testing} shows that the optimal estimator $\widehat{\Phi}$ satisfies $P_{\text{FA}}(\widehat{\Phi}) + P_{\text{MD}}(\widehat{\Phi}) = 1 - \mathbb{V}(Q_0(\mathbf{Z}), Q_1(\mathbf{Z}))$, where $\mathbb{V}(Q_0(\mathbf{Z}), Q_1(\mathbf{Z}))$ is the {\it variational distance} between the two distributions. Therefore, to prove $(1-\epsilon_d)$-covertness, it suffices to show \begin{align} \lim_{n \to \infty} \mathbb{V}(Q_0(\mathbf{Z}), Q_1(\mathbf{Z})) \le \epsilon_d. \end{align} The $n$-letter innocent distribution $Q_0(\mathbf{Z})$ is an i.i.d. Bernoulli$(q)$ product distribution (so that $\text{wt}_H(\mathbf{Z})$ follows a Binomial$(n,q)$ distribution), with \begin{align} Q_0(\mathbf{z}) \triangleq W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{0}) = q^{\text{wt}_H(\mathbf{z})}(1-q)^{(n-\text{wt}_H(\mathbf{z}))},\ \forall \mathbf{z} \in \{0,1\}^n. \end{align} The $n$-letter active distribution $Q_1(\mathbf{Z})$ depends on the specific codebook, and is given by \begin{align} Q_1(\mathbf{z}) \triangleq \sum_{i=1}^N \sum_{j=1}^L \frac{1}{NL} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}), \ \forall \mathbf{z}\in \{0,1\}^n. 
\label{eq:p1} \end{align} For the purpose of analysis, we also define an \emph{$n$-letter ensemble-averaged active distribution} $\mathbb{E}_{\C}\left(Q_1(\mathbf{Z})\right)$, which is essentially the active distribution $Q_1(\mathbf{Z})$ averaged over all the possible codebooks, as \begin{align} \mathbb{E}_{\C}\left(Q_1(\mathbf{z})\right) \triangleq \mathbb{E}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \frac{1}{NL} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij})\right) &= \sum_{i=1}^N \sum_{j=1}^L \frac{1}{NL} \sum_{\mathbf{x}_{ij} \in \{0,1\}^n} P_{\X}(\mathbf{x}_{ij}) W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \\ &=\sum_{\mathbf{x} \in \{0,1\}^n} P_{\X}(\mathbf{x}) W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}). \end{align} To prove $\mathbb{V}(Q_0, Q_1) \le \epsilon_d$, we first note that $\mathbb{V}(Q_0, Q_1)$ is no larger than $\mathbb{V}(Q_0, \mathbb{E}_{\C}(Q_1)) + \mathbb{V}(\mathbb{E}_{\C}(Q_1), Q_1)$ by the {\it triangle inequality}, and then bound the two terms from above separately. \begin{itemize} \item Following the lead of~\cite{tahmasbi2017second} (based on the {\it Berry-Esseen theorem}), it has been proved that by setting the code-weight parameter $t(q,\epsilon_d) = \frac{2 \sqrt{q (1-q)}}{1-2q}\cdot Q^{-1}\left(\frac{1-\epsilon_d}{2}\right)$, $$\lim_{n \to \infty} \mathbb{V}(Q_0, \mathbb{E}_{\C}(Q_1)) \le \epsilon_d.$$ \item As long as $r\sqrt{n} + \Delta(n)$ (the normalized size of the code) is greater than $t(q,\epsilon_d) I_J(q)\sqrt{n}$ (the mutual information from Alice to James), with high probability over the code design, the output distribution $Q_1$ induced by the randomly chosen code $\C$ is indistinguishable from the ensemble-averaged active distribution $\mathbb{E}_{\C}(Q_1)$, i.e., $\lim_{n \to \infty} \mathbb{V}(\mathbb{E}_{\C}(Q_1), Q_1) = 0$. This result was discovered independently by~\cite{CheBJ:13} based on the type class decompositions, and by~\cite{7407378} based on the {\it channel resolvability}. 
\end{itemize} For any values of $(p,q)$ such that $I_B(p,q) > I_J(q) > 0$, the coding scheme described above then ensures $(1-\epsilon_d)$-covertness, since the relative throughput $r = t(q,\epsilon_d) I_B(p,q) - \delta$ and $\delta > 0$ can be chosen arbitrarily small. This completes the proof sketch of covertness, as well as the proof of the achievability result for the small-sized key regime in Theorem~\ref{thm:achievable}. \subsection{Achievability scheme with moderate-sized and large-sized key} \label{sec:large} We now provide a modified coding scheme for the case in which the amount of shared key is moderate or large. First, let $\Delta(n) = \sigma \sqrt{n} + 6\log (n)$ for some constant $\sigma > 0$ (which asymptotically equals $\sigma \sqrt{n}$ when $n$ is sufficiently large). Alice and Bob generate a public code $\C$ that contains $2^{\sigma \sqrt{n}}$ sub-codes, and each sub-code (containing $r\sqrt{n}$ message bits and $6\log(n)$ bits of shared key) is generated independently according to the codebook generation process described in Sub-section~\ref{sec:code}. Again, the relative throughput $r = t(q,\epsilon_d)I_B(p,q) - \delta$ for some arbitrarily small $\delta > 0$. The extra $\sigma \sqrt{n}$ bits of shared key are used by Alice and Bob to select which sub-code to use during transmission, and the selected one is kept secret from James. It is worth noting that each sub-code also uses $6\log(n)$ bits of shared key, which is critical for list decoding. From Bob's perspective, the size of the selected sub-code is small enough so that he can reliably decode (the proof follows from Sub-section~\ref{sec:reliability}). From James' perspective, the size of the public code is sufficiently large, since he does not know the shared key and the sub-code used by Alice and Bob. 
In particular, the normalized size of the public code roughly equals $$(r+\sigma)\sqrt{n} = (I_B(p,q)t(q,\epsilon_d)+\sigma)\sqrt{n},$$ which is greater than $t(q,\epsilon_d) I_J(q)\sqrt{n}$ (the criterion for achieving covertness provided in Sub-section~\ref{sec:covertness}) as long as $I_B(p,q) + \frac{\sigma}{t(q,\epsilon_d)} > I_J(q)$. This implies the achievability result for the moderate-sized key regime in Theorem~\ref{thm:achievable}. Further, we note that in the regime $\Delta(n) \in \omega(\sqrt{n})$ (large-sized key regime), the criterion for achieving covertness is always satisfied since $\sigma = \omega(1)$ is larger than any constant. Therefore, the optimal throughput $t(q,\epsilon_d)I_B(p,q)$ is achievable for any values of $(p,q)$ such that $p < q$. \subsection{Definitions of typical sets and type classes}\label{sec:def} The proofs of Lemmas~\ref{lemma:error1} and~\ref{lemma:error2} rely critically on the {\it type class decompositions}, hence we first define the concepts of typical sets and type classes as follows. The fractional Hamming weights of $\mathbf{x}$ and $\mathbf{z}$, and the fraction of pairs $(u,v)$ in $(\mathbf{x}, \mathbf{z})$ (where $u,v \in \{0,1\}$), are respectively denoted by \begin{align} f^x_1(\mathbf{x}) \triangleq \frac{\text{wt}_H(\mathbf{x})}{n}, \ \ f^z_1(\mathbf{z}) \triangleq \frac{\text{wt}_H(\mathbf{z})}{n}, \ \ f^{xz}_{uv}(\mathbf{x},\mathbf{z}) \triangleq \frac{\left|\left\{i \in \{1,\ldots,n\}:(x_i,z_i)=(u,v)\right\}\right|}{n}. \end{align} The $n${\it -letter typical sets} of $\X$ and $\mathbf{Z}$ are respectively defined as \begin{align} &\mathcal{A}_{\X} \triangleq \left\{ \mathbf{x} \in \{0,1\}^n: f^x_1(\mathbf{x}) \le 2 \rho \right\}, \label{eq:typical_x}\\ &\mathcal{A}_{\Z} \triangleq \left\{ \mathbf{z} \in \{0,1\}^n: (\rho * q) (1 - n^{-1/4}) \le f^z_1(\mathbf{z}) \le (\rho * q) (1 + n^{-1/4}) \right\}. 
\label{eq:typical_z} \end{align} Given a fixed $\mathbf{z}$, the $n${\it -letter conditional type class of $\X$} (with type $(f^{xz}_{10}, f^{xz}_{11})$) is defined as \begin{align} \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \triangleq \left\{ \mathbf{x}\in \{0,1\}^{n}: \begin{array}{ll} \big|\{i:(x_i,z_i)=(1,0)\}\big| = n f^{xz}_{10} \\ \big|\{i:(x_i,z_i)=(1,1)\}\big| = n f^{xz}_{11} \end{array} \right\}, \end{align} and the $n${\it -letter conditionally typical set of $\X$} is defined as \begin{align} \mathcal{A}_{\X|\z} \triangleq \left\{ \mathbf{x} \in \{0,1\}^{n}: \begin{array}{ll} \rho q(1 - n^{-1/8}) \le f^{xz}_{10}(\mathbf{x},\mathbf{z}) \le \rho q(1 + n^{-1/8}) \\ \rho (1-q)(1 - n^{-1/8}) \le f^{xz}_{11}(\mathbf{x},\mathbf{z}) \le \rho (1-q)(1 + n^{-1/8}) \end{array} \right\}. \label{eq:typical_xz} \end{align} Note that the conditionally typical set can be represented as the union of typical conditional type classes, i.e., \begin{align} \mathcal{A}_{\X|\z} = \bigcup_{(f^{xz}_{10}, f^{xz}_{11})\in \mathcal{F}^{xz}_{n}} \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}), \end{align} where $\mathcal{F}^{xz}_{n}$ is the set of typical fractional Hamming weights, and is given by \begin{align} \mathcal{F}^{xz}_{n} \triangleq \left\{ (f^{xz}_{10}, f^{xz}_{11}) : \begin{array}{ll} \rho q(1 - n^{-1/8}) \le f^{xz}_{10} \le \rho q(1 + n^{-1/8}) \\ \rho (1-q)(1 - n^{-1/8}) \le f^{xz}_{11} \le \rho (1-q)(1 + n^{-1/8}) \\ nf^{xz}_{10} \in \mathbb{Z}^{\ast}, \ nf^{xz}_{11} \in \mathbb{Z}^{\ast} \end{array} \right\}. \end{align} \noindent{\bf Oracle argument:} Before stating the formal proof, we first introduce the {\it oracle argument} that is frequently used in the myopic adversarial setting. When Alice transmits a codeword $\mathbf{x}_{ij}$ and James receives a vector $\mathbf{z}$, the only knowledge that James has is the received vector $\mathbf{z}$ and the public code $\C$. 
We now assume that there is an oracle which helps James by revealing the type class $\tau = \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ that the transmitted codeword $\mathbf{x}_{ij}$ lies in. Note that this extra information $\tau$ strengthens James in the sense that it reduces his uncertainty about which codeword was transmitted by Alice (since only the codewords in $\tau$ could have been transmitted). If our coding scheme is proven to be reliable against this stronger adversary, it will also succeed against the original adversary. With the extra information $\tau$, James' jamming strategy may depend on the received vector $\mathbf{z}$, the public code $\C$, as well as $\tau$. Hence, in the following analysis, we denote James' jamming function by the $n$-letter conditional distribution $W_{\mathbf{S}|\mathbf{Z},\C,\tau}$, instead of $W_{\mathbf{S}|\mathbf{Z},\C}$ defined in Section~\ref{sec:model}. The main purpose of introducing an oracle is to simplify the analysis, and such an oracle is by no means necessary for the analysis. Note that James has the flexibility to optimize his jamming function $W_{\mathbf{S}|\mathbf{Z},\C,\tau}$; however, our proofs show that with high probability, a randomly generated code guarantees a small probability of error regardless of James' choice of $W_{\mathbf{S}|\mathbf{Z},\C,\tau}$. \subsection{Proof of Lemma~\ref{lemma:error1}}\label{sec:reliability1} Recall that the error event $\mathcal{E}_\text{list}^{(1)}$ occurs if the transmitted codeword $\mathbf{x}_{ij}$ does not belong to $\mathcal{L}(\mathbf{y})$. 
For a fixed code $\C$, the probability of $\mathcal{E}_\text{list}^{(1)}$ is given by \begin{align} \mathbb{P}(\mathcal{E}_\text{list}^{(1)}) &= \max_{W_{\mathbf{S}|\mathbf{Z},\C,\tau}}\left\{ \sum_{i=1}^N \sum_{j=1}^L \frac{1}{NL} \sum_{\mathbf{z}}W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C, \tau}(\mathbf{s}|\mathbf{z},\C,\tau)\cdot \mathbbm{1}\{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\} \right\} \notag \\ &\le \max_{W_{\mathbf{S}|\mathbf{Z},\C,\tau}}\left\{\frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j): \mathbf{x}_{ij} \in \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau) \cdot \mathbbm{1}\{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\}\right\} \notag \\ & + \max_{W_{\mathbf{S}|\mathbf{Z},\C,\tau}}\left\{ \frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j): \mathbf{x}_{ij} \notin \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)\cdot \mathbbm{1}\{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\} \right\} \notag \\ &+\max_{W_{\mathbf{S}|\mathbf{Z},\C,\tau}}\left\{ \frac{1}{NL} \sum_{i=1}^N \sum_{j=1}^L \sum_{\mathbf{z} \notin \mathcal{A}_{\Z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}}W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)\cdot \mathbbm{1}\{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\} \right\}. \label{eq:xi1} \end{align} In~\eqref{eq:xi1}, we partition James' received vector $\mathbf{z}$ into typical $\mathbf{z}$ and atypical $\mathbf{z}$; for any typical $\mathbf{z}$, we further partition all the codewords into the conditionally typical codewords and conditionally atypical codewords. 
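For concreteness, the pair-fraction statistics $f^{xz}_{uv}$ of Sub-section~\ref{sec:def}, and the conditional-typicality test~\eqref{eq:typical_xz} built from them, can be sketched in Python as follows. This is a toy illustration with hypothetical vectors and deliberately unrealistic parameter values (in the scheme, $\rho = \Theta(1/\sqrt{n})$), and is not part of the proof.

```python
def pair_fractions(x, z):
    """f^{xz}_{uv}: fraction of positions i with (x_i, z_i) = (u, v)."""
    n = len(x)
    counts = {(u, v): 0 for u in (0, 1) for v in (0, 1)}
    for xi, zi in zip(x, z):
        counts[(xi, zi)] += 1
    return {uv: c / n for uv, c in counts.items()}

def in_conditionally_typical_set(x, z, rho, q):
    """Membership test for the conditionally typical set A_{X|z}."""
    n = len(x)
    f = pair_fractions(x, z)
    slack = n ** (-1 / 8)
    lo10, hi10 = rho * q * (1 - slack), rho * q * (1 + slack)
    lo11, hi11 = rho * (1 - q) * (1 - slack), rho * (1 - q) * (1 + slack)
    return lo10 <= f[(1, 0)] <= hi10 and lo11 <= f[(1, 1)] <= hi11

# Toy vectors with n = 16: x has 8 ones; among those, 4 see z = 0 and 4 see z = 1.
x = [1] * 8 + [0] * 8
z = [0] * 4 + [1] * 4 + [0] * 8
f = pair_fractions(x, z)
print(f[(1, 0)], f[(1, 1)])                          # 0.25 0.25
print(in_conditionally_typical_set(x, z, 0.5, 0.5))  # True
```

With $\rho = q = 0.5$ both target fractions equal $0.25$, which the toy vectors hit exactly; replacing $\mathbf{z}$ by the all-zero vector drives $f^{xz}_{11}$ to zero and the test fails.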
By using the fact that the indicator function $\mathbbm{1}(\cdot)$ is always bounded from above by one, the two atypical terms in~\eqref{eq:xi1} can be respectively upper bounded as \begin{align} \frac{1}{N L}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j): \mathbf{x}_{ij} \notin \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) + \frac{1}{N L} \sum_{i=1}^{N} \sum_{j=1}^L \sum_{\mathbf{z} \notin \mathcal{A}_{\Z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}). \label{eq:vanish2} \end{align} The following two claims state that the probabilities of error caused by the two atypical events are vanishing, and the detailed proofs are deferred to Appendix~\ref{appendix:reliability}. \begin{restatable}{claim}{claimfour} \label{claim:atypical1} With probability at least $1 - \exp(-\mathcal{O}(n^{1/4}))$ over the code design, $$\frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j): \mathbf{x}_{ij} \notin \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) < \exp(-n^{1/8}).$$ \end{restatable} \begin{restatable}{claim}{claimfive} \label{claim:atypical2} With probability at least $1 - \exp(-\mathcal{O}(\sqrt{n}))$ over the code design, $$\frac{1}{NL} \sum_{i=1}^N \sum_{j=1}^L \sum_{\mathbf{z} \notin \mathcal{A}_{\Z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) <\exp(-n^{1/4}).$$ \end{restatable} From now on we consider the typical event in~\eqref{eq:xi1} --- a typical $\mathbf{z}$ is received and a conditionally typical codeword is transmitted. One critical step in our proof is to decompose the conditionally typical set $\mathcal{A}_{\X|\z}$ into the conditionally typical type classes $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ (where $(f^{xz}_{10}, f^{xz}_{11})\in \mathcal{F}^{xz}_{n}$) that comprise it. Let \begin{align} c \triangleq r-t(q,\epsilon_d)\cdot I_J(q) > 0. 
\end{align} Claim~\ref{claim:ratio1} below shows that for any typical $\mathbf{z}$ and conditionally typical type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, with probability super-exponentially\footnote{Note that this super-exponential concentration result is critical since we need to take a union bound over exponentially many typical $\mathbf{z}$ and type classes $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$.} close to one (over the code design), the number of codewords falling into $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ is tightly concentrated around $2^{c\sqrt{n}}$. \begin{claim} \label{claim:ratio1} For any typical $\mathbf{z}$ and any conditionally typical type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, the expected number of codewords falling into $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ is super-polynomially large, i.e., $$\mathbb{E}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\{\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \} \right) = 2^{c\sqrt{n}}.$$ Further, with probability at least $1-\exp(-2^{\mathcal{O}(\sqrt{n})})$ over the code design, a randomly chosen code $\C$ satisfies \begin{align*} &\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\{\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \} > \left(1-\exp(-n^{\frac{1}{4}})\right) \cdot 2^{c\sqrt{n}}. \notag \end{align*} \end{claim} The proof of Claim~\ref{claim:ratio1} follows from the Chernoff bound, and can be found in Appendix~\ref{appendix:ratio1}. It is worth noting that the number of codewords in $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ essentially reflects James' uncertainty about the transmitted codeword. 
Given the observation of $\mathbf{z}$ and the oracle-revealed information $\tau =\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, James knows that the transmitted codeword must belong to $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, but each codeword in $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ is equally likely from his perspective. \begin{definition}\label{def:kill} A codeword $\mathbf{x}$ is killed by a jamming vector $\mathbf{s}$ if $\mathbf{x}$ is pushed out of the list decoder by $\mathbf{s}$, i.e., $\mathbf{x} \notin \mathcal{L}(\mathbf{x}+\mathbf{s})$. \end{definition} If James were magically able to find a jamming vector $\mathbf{s}$ such that every codeword in $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ is killed by $\mathbf{s}$, then the jamming vector would necessarily result in a decoding error, since the true transmitted codeword $\mathbf{x}$ belongs to $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ and is killed by $\mathbf{s}$. Fortunately, Claim~\ref{claim:ratio2} below shows that for typical $\mathbf{z}$ and $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, with probability super-exponentially close to one (over the code design), no matter which $\mathbf{s} \in \{0,1\}^n$ James chooses, only a decaying fraction of codewords in $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ are killed by $\mathbf{s}$ (as illustrated in Fig.~\ref{fig:kill}). The proof of Claim~\ref{claim:ratio2} is deferred to Appendix~\ref{appendix:ratio2}. 
\begin{claim}[{\bf Myopic list-decoding lemma}] \label{claim:ratio2} For any typical $\mathbf{z}$ and any conditionally typical type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, with probability at least $1 - \exp\left(-2^{\mathcal{O}(\sqrt{n})}\right)$ over the code design, $$\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\left[\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\right] \right\} < \exp(-n^{1/4}) \cdot 2^{c\sqrt{n}}, \ \ \forall \mathbf{s} \in \{0,1\}^n.$$ \end{claim} Based on Claims~\ref{claim:ratio1} and~\ref{claim:ratio2}, Claim~\ref{claim:ratio3} below shows that for typical $\mathbf{z}$ and $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, a decaying fraction of codewords $\mathbf{x} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ being killed (regardless of $\mathbf{s}$) implies a vanishing probability of error. Finally, we also need to take a union bound over all typical $\mathbf{z}$ and conditionally typical type classes $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$. \begin{figure} \begin{center} \includegraphics[scale=0.55]{kill.pdf} \caption{We consider a typical $\mathbf{z}$ and a conditionally typical type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ with respect to $\mathbf{z}$. 
We prove that the number of codewords falling into $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ is super-polynomially large, and no matter which jamming vector $\mathbf{s}$ is chosen, only a small fraction of the codewords that belong to $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ are killed.} \label{fig:kill} \end{center} \end{figure} \begin{claim}[First term in~\eqref{eq:xi1}] \label{claim:ratio3} With probability at least $1-\exp(-2^{\mathcal{O}(\sqrt{n})})$ over the code design, a randomly chosen code $\C$ satisfies \begin{align*} &\max_{W_{\mathbf{S}|\mathbf{Z},\C,\tau}}\left\{ \frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j): \mathbf{x}_{ij} \in \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)\cdot \mathbbm{1}\{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\}\right\} \le \exp(-n^{\frac{1}{4}}+1). \end{align*} \end{claim} \noindent{{\it Proof:}} For any typical $\mathbf{z}$ and conditionally typical type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, Claims~\ref{claim:ratio1} and~\ref{claim:ratio2} together guarantee that a randomly chosen code $\C$ satisfies \begin{align} \frac{\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\left[\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\right] \right\}}{\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\{\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \}} &< \frac{\exp(-n^{1/4})\cdot 2^{c\sqrt{n}}}{(1-\exp(-n^{1/4})) \cdot 2^{c\sqrt{n}}} \label{eq:live}\\ &\le \exp(-n^{1/4}+1), \ \ \forall \mathbf{s} \in \{0,1\}^n, \end{align} with probability at least $1-\exp(-2^{\mathcal{O}(\sqrt{n})})$ over the code design. We now turn to analyze the first term in~\eqref{eq:xi1}. 
For any jamming strategy $W_{\mathbf{S}|\mathbf{Z},\C,\tau}$, we have \begin{align} &\frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j): \mathbf{x}_{ij} \in \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)\cdot \mathbbm{1}\{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\} \label{eq:start}\\ &=\frac{1}{NL} \sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(f^{xz}_{10}, f^{xz}_{11}) \in \mathcal{F}_n^{xz}} \ \sum_{(i,j): \mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)\cdot \mathbbm{1}\{\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\} \label{eq:start2} \\ &= \frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(f^{xz}_{10}, f^{xz}_{11}) \in \mathcal{F}_n^{xz}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau) \notag \\ & \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\left[\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}+\mathbf{s})\right] \right\} \\ &\overset{\text{w.h.p.}}\le{} \frac{\exp(-n^{\frac{1}{4}}+1)}{NL} \sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(f^{xz}_{10}, f^{xz}_{11}) \in \mathcal{F}_n^{xz}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})) \sum_{\mathbf{s}} W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau) \notag \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \right\} \label{eq:tutorial} \\ &= \exp(-n^{\frac{1}{4}}+1) \cdot \frac{1}{NL} \sum_{\mathbf{z} \in 
\mathcal{A}_{\Z}} \sum_{(f^{xz}_{10}, f^{xz}_{11}) \in \mathcal{F}_n^{xz}} \ \sum_{(i,j):\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \label{eq:tutorial2} \\ &\le \exp(-n^{\frac{1}{4}}+1) \cdot \frac{1}{NL} \sum_{\mathbf{z}} \sum_{i=1}^N \sum_{j=1}^L W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \label{eq:tutorial3} \\ &= \exp(-n^{\frac{1}{4}}+1). \label{eq:error1-1} \end{align} In~\eqref{eq:start2}, we decompose the conditionally typical set $\mathcal{A}_{\X|\z}$ into the union of all conditionally typical type classes $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$. Inequality~\eqref{eq:tutorial} follows from~\eqref{eq:live}, and holds with probability at least $1-\exp(-2^{\mathcal{O}(\sqrt{n})})$ over the code design. Note that in~\eqref{eq:tutorial} we need to take a union bound over exponentially many $\mathbf{z}, \mathbf{s}$ and $(f^{xz}_{10}, f^{xz}_{11})$, which is valid since each individual failure probability $\exp(-2^{\mathcal{O}(\sqrt{n})})$ is super-exponentially small. Equation~\eqref{eq:tutorial2} follows since $\sum_{s}W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau) = 1$, and inequality~\eqref{eq:tutorial3} is obtained by relaxing the constraints on $\mathbf{z}$. Note that equations~\eqref{eq:start}-\eqref{eq:error1-1} hold for an arbitrary jamming strategy $W_{\mathbf{S}|\mathbf{Z},\C,\tau}$, hence Claim~\ref{claim:ratio3} is proved. \qed By combining Claims~\ref{claim:atypical1},~\ref{claim:atypical2}, and~\ref{claim:ratio3}, we finally prove that with probability at least $1 - \exp(-\mathcal{O}(n^{1/4}))$ over the code design, a randomly chosen code $\C$ ensures the probability of the error event $\mathcal{E}_\text{list}^{(1)}$ is bounded from above as \begin{align} \mathbb{P}(\mathcal{E}_\text{list}^{(1)}) \le \exp(-n^{1/8}) + \exp(-n^{1/4}) + \exp(-n^{\frac{1}{4}}+1) \le 3\exp(-n^{1/8}). \end{align} This completes the proof of Lemma~\ref{lemma:error1}.
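The closing inequality $\exp(-n^{1/8}) + \exp(-n^{1/4}) + \exp(-n^{1/4}+1) \le 3\exp(-n^{1/8})$ can also be checked numerically; writing $u = n^{1/8}$, it reduces to $u^2 - u \ge \log((1+e)/2)$, which holds for every $n \ge 18$. An illustrative check, not part of the proof:

```python
import math

def total_vs_bound(n):
    """Return (sum of the three error terms, the bound 3*exp(-n^{1/8}))."""
    total = math.exp(-n**0.125) + math.exp(-n**0.25) + math.exp(-n**0.25 + 1)
    return total, 3 * math.exp(-n**0.125)

# With u = n^{1/8}, the inequality reduces to u^2 - u >= log((1+e)/2),
# which is satisfied for all n >= 18.
assert all(t <= b for t, b in (total_vs_bound(n) for n in range(18, 5000)))
```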
\subsection{Proof of Lemma~\ref{lemma:error2}} \label{sec:reliability2} Recall that the error event $\mathcal{E}_\text{list}^{(2)}$ occurs if more than $n^2$ codewords (other than the transmitted codeword) fall into the list $\mathcal{L}(\mathbf{y})$. \begin{claim} \label{claim:n2} Fix a typical transmitted codeword $\mathbf{x}_{ij}$ and a jamming vector $\mathbf{s}$ satisfying $\text{wt}_H(\mathbf{s}) \le pn$. With probability at least $1- \exp(-\mathcal{O}(n^{5/2}))$ over the code design, the number of codewords $\mathbf{x}_{i'j'}$ (where $(i',j') \ne (i,j)$) falling into the list $\mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})$ is bounded from above by $n^2$. \end{claim} \noindent{\it Proof:} The Hamming weight of Bob's received vector $\mathbf{y} = \mathbf{x}_{ij} + \mathbf{s}$ satisfies \begin{align} \text{wt}_H(\mathbf{y}) = \text{wt}_H\left(\mathbf{x}_{ij} + \mathbf{s}\right) \le \text{wt}_H(\mathbf{x}_{ij}) + \text{wt}_H(\mathbf{s}) \le 2\rho n + pn, \end{align} since $\text{wt}_H(\mathbf{s}) \le pn$, $\text{wt}_H(\mathbf{x}_{ij}) \le 2\rho n$ for typical $\mathbf{x}_{ij}$, and the size of the intersection of the supports of $\mathbf{x}_{ij}$ and $\mathbf{s}$ is nonnegative. For any codeword $\mathbf{x}_{i'j'}$ such that $(i',j') \ne (i,j)$, $\mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y})$ if and only if \begin{align} \begin{cases} nf^{xy}_{10}(\mathbf{x}_{i'j'}, \mathbf{y}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\varepsilon_1),\\ nf^{xy}_{11}(\mathbf{x}_{i'j'},\mathbf{y}) > \rho n\left(1-\frac{p(1-q)}{q}\right)(1-\varepsilon_2). \end{cases} \end{align} Note that the complement of the support of $\mathbf{y}$ has size greater than $(1-p)n - 2\rho n$, hence we have $$\mathbb{E}\left(nf^{xy}_{10}(\X_{i'j'}, \mathbf{y})\right) \ge \rho \left((1-p)n - 2\rho n\right),$$ since each bit of $\X_{i'j'}$ is generated i.i.d. according to Bernoulli$(\rho)$. Let $\kappa_1 = 1 - \frac{(p-pq)(1+\varepsilon_1)}{q-pq+2\rho q}$.
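The two threshold conditions defining list membership transcribe directly into code. A minimal sketch of the membership test (the parameter values below are placeholders chosen only for illustration):

```python
def in_list(x, y, n, rho, p, q, eps1, eps2):
    """List-decoding membership test: x is kept in L(y) iff the joint
    counts n*f10 (positions with x_i=1, y_i=0) and n*f11 (positions
    with x_i=1, y_i=1) satisfy the two thresholds."""
    nf10 = sum(1 for a, b in zip(x, y) if a == 1 and b == 0)
    nf11 = sum(1 for a, b in zip(x, y) if a == 1 and b == 1)
    thr = rho * n * p * (1 - q) / q
    # Second threshold: rho*n*(1 - p(1-q)/q)*(1-eps2) = (rho*n - thr)*(1-eps2).
    return nf10 < thr * (1 + eps1) and nf11 > (rho * n - thr) * (1 - eps2)

n, rho, p, q, e1, e2 = 100, 0.25, 0.1, 0.4, 0.1, 0.1  # illustrative values
x = [1] * 25 + [0] * 75
assert in_list(x, x, n, rho, p, q, e1, e2)            # clean word survives
assert not in_list(x, [0] * n, n, rho, p, q, e1, e2)  # fully erased word is killed
```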
By the {\it Chernoff–Hoeffding Theorem}~\cite{hoeffding1994probability}, we have \begin{align} &\mathbb{P} \left(nf^{xy}_{10}(\X_{i'j'}, \mathbf{y}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\varepsilon_1) \right) \\ &= \mathbb{P} \left(nf^{xy}_{10}(\X_{i'j'}, \mathbf{y}) < (1-\kappa_1)\mathbb{E}_{\C}\left(nf^{xy}_{10}(\X_{i'j'}, \mathbf{y})\right) \right) \\ &\le 2^{-\mathbb{D}(\rho(1-\kappa_1) \parallel \rho)((1-p)n-2\rho n) \log e}. \label{eq:kappa1} \end{align} Similarly, since $\text{wt}_H(\mathbf{y}) \le 2\rho n + pn$, we have $$\mathbb{E}\left(nf^{xy}_{11}(\X_{i'j'}, \mathbf{y})\right) \le \rho \left(2\rho n + pn\right).$$ Let $\kappa_2 = \frac{(q-p+pq)(1-\varepsilon_2)}{q(2\rho + p)} - 1$. By the Chernoff–Hoeffding Theorem, we have \begin{align} &\mathbb{P} \left(nf^{xy}_{11}(\X_{i'j'}, \mathbf{y}) > \rho n\left(1 - \frac{p(1-q)}{q}\right)(1-\varepsilon_2) \right) \\ &= \mathbb{P} \left(nf^{xy}_{11}(\X_{i'j'}, \mathbf{y}) > (1+\kappa_2)\mathbb{E}_{\C}\left(nf^{xy}_{11}(\X_{i'j'}, \mathbf{y})\right) \right) \\ &\le 2^{-\mathbb{D}(\rho(1+\kappa_2) \parallel \rho)(pn+2\rho n) \log e}. \label{eq:kappa2} \end{align} Combining inequalities~\eqref{eq:kappa1} and~\eqref{eq:kappa2}, we have \begin{align} &\mathbb{P} \left(\X_{i'j'} \in \mathcal{L}(\mathbf{y}) \right) \notag \\ &= \mathbb{P} \left(nf^{xy}_{10}(\X_{i'j'}, \mathbf{y}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\varepsilon_1) \right) \cdot \mathbb{P}_{\X_{i'j'}} \left(nf^{xy}_{11}(\X_{i'j'}, \mathbf{y}) > \rho n\left(1 - \frac{p(1-q)}{q}\right)(1-\varepsilon_2) \right) \notag \\ &\le 2^{-\mathbb{D}(\rho(1-\kappa_1) \parallel \rho)((1-p)n-2\rho n) \log e} \cdot 2^{-\mathbb{D}(\rho(1+\kappa_2) \parallel \rho)(pn+2\rho n) \log e} \\ & \overset{\text{$n \to \infty$}}={} 2^{-t(q,\epsilon_d) I_B(p,q)\sqrt{n}}, \end{align} since the two events (concerning the numbers of ones of $\X_{i'j'}$ inside and outside the support of $\mathbf{y}$, respectively) are independent.
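The Chernoff–Hoeffding bound used in~\eqref{eq:kappa1} and~\eqref{eq:kappa2} states that for $m$ i.i.d.\ Bernoulli($\rho$) bits, $\mathbb{P}(\mathrm{Bin}(m,\rho) \le (1-\kappa)\rho m) \le e^{-m\,\mathbb{D}(\rho(1-\kappa)\parallel\rho)}$ (divergence in nats). A sketch comparing the bound against the exact binomial tail, with illustrative parameter values:

```python
import math
from math import comb

def kl(a, b):
    """Binary KL divergence D(a || b) in nats."""
    out = 0.0
    if a > 0:
        out += a * math.log(a / b)
    if a < 1:
        out += (1 - a) * math.log((1 - a) / (1 - b))
    return out

def binom_lower_tail(m, rho, thresh):
    """Exact P(Bin(m, rho) <= thresh)."""
    return sum(comb(m, k) * rho**k * (1 - rho)**(m - k)
               for k in range(math.floor(thresh) + 1))

m, rho, kappa = 400, 0.3, 0.25   # illustrative values
exact = binom_lower_tail(m, rho, (1 - kappa) * rho * m)
chernoff = math.exp(-m * kl(rho * (1 - kappa), rho))
assert exact <= chernoff < 1     # the exact tail never exceeds the bound
```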
In expectation, the total number of codewords (other than the transmitted codeword $\mathbf{x}_{ij}$) falling into the list $\mathcal{L}(\mathbf{y})$ is given by \begin{align} \mathbb{E}\left(\sum_{(i'j')\ne(i,j)} \mathbbm{1}\left\{\X_{i'j'} \in \mathcal{L}(\mathbf{y})\right\} \right) \le 2^{r\sqrt{n} + 3\log n} \cdot 2^{-t(q,\epsilon_d)I_B(p,q)\sqrt{n}} = 2^{(r-t(q,\epsilon_d) I_B(p,q))\sqrt{n} + 3\log n}, \end{align} which is super-polynomially small since $r < t(q,\epsilon_d) I_B(p,q)$. Therefore, we use a counting argument to characterize the probability that more than $n^2$ codewords fall into the list $\mathcal{L}(\mathbf{y})$. As long as $r < t(q,\epsilon_d) I_B(p,q)$, we have \begin{align} &\mathbb{P}_{\C \setminus \mathbf{x}_{ij}} \left(\sum_{(i'j')\ne(i,j)} \mathbbm{1}\left\{\X_{i'j'} \in \mathcal{L}(\mathbf{y})\right\} \ge n^2 \right) \\ & = \sum_{\theta=n^2}^{2^{r\sqrt{n}}} \mathbb{P}_{\C \setminus \mathbf{x}_{ij}} \left(\sum_{(i'j')\ne(i,j)} \mathbbm{1}\left\{\X_{i'j'} \in \mathcal{L}(\mathbf{y})\right\} = \theta \right) \label{eq:eye1}\\ & = \sum_{\theta=n^2}^{2^{r\sqrt{n}}} \binom{2^{r\sqrt{n}}}{\theta}\left(2^{-t(q,\epsilon_d) I_B(p,q)\sqrt{n}}\right)^\theta \left(1- 2^{-t(q,\epsilon_d) I_B(p,q)\sqrt{n}}\right)^{(2^{r\sqrt{n}}-\theta)} \\ &\le 2^{r\sqrt{n}} \binom{2^{r\sqrt{n}}}{n^2}\left(2^{-t(q,\epsilon_d) I_B(p,q)\sqrt{n}}\right)^{n^2} \label{eq:eye2} \\ & \le 2^{r\sqrt{n}} \left(\frac{e\cdot 2^{r\sqrt{n}}}{n^2}\right)^{n^2}\left(2^{-t(q,\epsilon_d) I_B(p,q)\sqrt{n}}\right)^{n^2} \label{eq:eye3}\\ & = 2^{r\sqrt{n}}\left(\frac{e\cdot 2^{(r-t(q,\epsilon_d) I_B(p,q))\sqrt{n}}}{n^2}\right)^{n^2} \\ &= \exp(-\mathcal{O}(n^{5/2})). \label{eq:eye4} \end{align} Inequality~\eqref{eq:eye2} follows since $\theta=n^2$ maximizes the probability in~\eqref{eq:eye1}, and we bound the number of summands from above by $2^{r\sqrt{n}}$. Inequality~\eqref{eq:eye3} follows from the inequality \begin{align} \binom{n}{k} \le \left(\frac{en}{k}\right)^k.
\end{align} Finally, we obtain~\eqref{eq:eye4} by using the fact that $r < t(q,\epsilon_d) I_B(p,q)$. \qed In the following we also consider the atypical events, and prove that with high probability over the code design, a randomly chosen code $\C$ ensures that the probability of the error event $\mathcal{E}_\text{list}^{(2)}$ goes to zero. Note that \begin{align} \mathbb{P}(\mathcal{E}_\text{list}^{(2)}) = \max_{W_{\mathbf{S}|\mathbf{Z},\C,\tau}}\left\{ \frac{1}{NL}\sum_{i=1}^N \sum_{j=1}^L \sum_{\mathbf{z}}W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}}W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z}, \C,\tau) \cdot \mathbbm{1}\bigg\{\sum_{(i',j')\ne(i,j)}\mathbbm{1}\left\{\mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y}) \right\} \ge n^2\bigg\} \right\} \notag, \end{align} and regardless of James' jamming strategy $W_{\mathbf{S}|\mathbf{Z},\C,\tau}$, \begin{align} &\mathbb{E}_{\C}\left[ \frac{1}{NL}\sum_{i=1}^N \sum_{j=1}^L \sum_{\mathbf{z}}W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \sum_{\mathbf{s}}W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z}, \C,\tau) \cdot \mathbbm{1}\bigg\{\sum_{(i',j')\ne(i,j)}\mathbbm{1}\left\{\mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y}) \right\} \ge n^2\bigg\} \right] \\ &=\mathbb{E}_{\C}\left[ \sum_{\mathbf{z}}W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{11}) \sum_{\mathbf{s}}W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z}, \C,\tau) \cdot \mathbbm{1}\bigg\{\sum_{(i',j')\ne(1,1)}\mathbbm{1}\left\{\mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y}) \right\} \ge n^2\bigg\} \right] \label{eq:atl1} \\ &= \sum_{\mathbf{x}_{11} \in \{0,1\}^n} P_{\X}(\mathbf{x}_{11}) \sum_{\C \setminus \mathbf{x}_{11}} P_{\C \setminus \X_{11}}(\C \setminus \mathbf{x}_{11}) \sum_{\mathbf{z}}W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{11}) \notag \\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \sum_{\mathbf{s}}W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z}, \C,\tau) \cdot
\mathbbm{1}\bigg\{\sum_{(i',j')\ne(1,1)}\mathbbm{1}\left\{\mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y}) \right\} \ge n^2\bigg\}\label{eq:jide1} \\ & \le \sum_{\mathbf{x}_{11} \in \mathcal{A}_{\X}} P_{\X}(\mathbf{x}_{11}) \sum_{\mathbf{s}} \sum_{\C \setminus \mathbf{x}_{11}}P_{\C \setminus \X_{11}}(\C \setminus \mathbf{x}_{11}) \cdot \mathbbm{1}\bigg\{\sum_{(i',j')\ne(1,1)}\mathbbm{1}\left\{\mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y}) \right\} \ge n^2\bigg\} + \sum_{\mathbf{x}_{11} \notin \mathcal{A}_{\X}} P_{\X}(\mathbf{x}_{11}) \label{eq:jide3}\\ & \le \sum_{\mathbf{x}_{11} \in \mathcal{A}_{\X}} P_{\X}(\mathbf{x}_{11}) \sum_{\mathbf{s}} P_{\C \setminus \mathbf{x}_{11}} \left(\sum_{(i'j')\ne(1,1)} \mathbbm{1}\left\{\X_{i'j'} \in \mathcal{L}(\mathbf{y})\right\} \ge n^2 \right) + \sum_{\mathbf{x}_{11} \notin \mathcal{A}_{\X}} P_{\X}(\mathbf{x}_{11})\\ & \le \left(\sum_{\mathbf{s}} \exp(-\mathcal{O}(n^{5/2}))\right) + \exp \left(-\frac{1}{3} t(q,\epsilon_d) \sqrt{n}\right) \label{eq:jide4} \\ & = \exp(-\mathcal{O}(\sqrt{n})). \label{eq:atl2} \end{align} Equation~\eqref{eq:atl1} is obtained by noting that for each codeword $\mathbf{x}_{ij}$, the averaged probability of error (over the code design) is the same. Hence, without loss of generality, we consider the average probability of error corresponding to the codeword $\mathbf{x}_{11}$ being transmitted. The notation $P_{\C \setminus \X_{11}}(\C \setminus \mathbf{x}_{11})$ in~\eqref{eq:jide1} represents the probability of generating a code $\C$ excluding the transmitted codeword $\mathbf{x}_{11}$. In~\eqref{eq:jide3}, we again consider the transmitted codeword $\mathbf{x}_{11}$ to be either typical or atypical, and \begin{itemize} \item When $\mathbf{x}_{11}$ is atypical, we simply bound the indicator function $\mathbbm{1}\{\sum_{(i',j')\ne(1,1)}\mathbbm{1}\{\mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y})\} \ge n^2 \}$ from above by one. 
\item When $\mathbf{x}_{11}$ is typical, we bound the probability $W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)$ from above by one, and then interchange the order of summations. Note that if we keep the term $W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)$, the order of summations cannot be changed since James' jamming strategy $W_{\mathbf{S}|\mathbf{Z},\C,\tau}(\mathbf{s}|\mathbf{z},\C,\tau)$ depends on the realization of the code $\C$. \end{itemize} Inequality~\eqref{eq:jide4} follows from Claim~\ref{claim:n2} (which is valid for all typical transmitted codewords) and the Chernoff bound. Finally, by noting that~\eqref{eq:atl2} holds for every possible jamming strategy $W_{\mathbf{S}|\mathbf{Z},\C,\tau}$, Markov's inequality yields \begin{align} \mathbb{P}_{\C}\left( \mathbb{P}(\mathcal{E}_\text{list}^{(2)}) \ge \exp(-n^{1/4})\right) \le \exp(-\mathcal{O}(\sqrt{n})). \end{align} This completes the proof of Lemma~\ref{lemma:error2}. \qed \subsection{Proof of Lemma~\ref{lemma:error3}} \label{sec:reliability3} If $\mathcal{E}_\text{list}$ does not occur, the transmitted codeword $\mathbf{x}_{ij}$ belongs to the list $\mathcal{L}(\mathbf{y})$, and the number of codewords (other than $\mathbf{x}_{ij}$) falling into $\mathcal{L}(\mathbf{y})$ is bounded from above by $n^2$. Let \begin{align} V \triangleq \{\mathbf{x}_{i',j'}: (i',j')\ne (i,j) \text{ and } \mathbf{x}_{i'j'} \in \mathcal{L}(\mathbf{y}) \} \end{align} be the set of codewords that belong to $\mathcal{L}(\mathbf{y})$, where $|V| \le n^2$. In the following, we show that with high probability (over the shared key $K$), none of the codewords in $V$ is consistent with $K$. Recall that the polynomial hash function, first defined in~\eqref{eq:hash}, is given by \begin{align} G = G_K(M) \triangleq K_2 + \sum_{u=1}^l K_1^{u} M_{u}, \end{align} where the additions and multiplications are over $\mathbb{F}_{n^3}$, and $l = \frac{r\sqrt{n}}{3\log(n)}$.
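The polynomial hash and its collision behavior are easy to simulate. A minimal sketch, with a prime field $\mathbb{Z}_p$ standing in for $\mathbb{F}_{n^3}$ (an assumption made purely for simplicity of implementation; the paper works over an extension field):

```python
import random

def poly_hash(msg, k1, k2, p):
    """G = K2 + sum_u K1^u * M_u, computed over Z_p (a prime-field
    stand-in for F_{n^3})."""
    acc, power = k2 % p, 1
    for m_u in msg:
        power = (power * k1) % p
        acc = (acc + power * m_u) % p
    return acc

# Schwartz-Zippel: two distinct messages collide only when K1 is a root
# of a nonzero polynomial of degree <= l, i.e., for at most l values of K1.
p, l = 10007, 5
rng = random.Random(1)
msg1 = [rng.randrange(p) for _ in range(l)]
msg2 = [rng.randrange(p) for _ in range(l)]
k2 = rng.randrange(p)
collisions = sum(poly_hash(msg1, k1, k2, p) == poly_hash(msg2, k1, k2, p)
                 for k1 in range(p))
assert msg1 != msg2 and collisions <= l
```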
Though the shared key $K$ is {\it a priori} uniformly distributed, it may not necessarily be uniform from James' perspective, since his observations $\mathbf{z}$ may reveal some information about $K$. Nevertheless, we argue that the first part of the key, $K_1$, is still uniformly distributed from James' perspective, even if James knows $\mathbf{z}$ as well as the message-hash pair $(i,j)$ transmitted by Alice. Note that the above argument also holds without the extra assumption that the transmitted message-hash pair is revealed, since this assumption only strengthens James, and the purpose of introducing it is merely to simplify the analysis. As James knows $M = i$ ($M_u = i_u, \forall u \in \{1,2,\ldots,l\}$) and $G = j$, he certainly knows that the shared key $K = (K_1,K_2)$ satisfies \begin{align} j = K_2 + \sum_{u=1}^l K_1^{u} i_{u}. \label{eq:85} \end{align} For each value of $K_1 \in \mathbb{F}_{n^3}$, there exists a unique $K_2$ such that the $(K_1,K_2)$ pair satisfies equation~\eqref{eq:85}. Put differently, the total number of $(K_1,K_2)$ pairs satisfying equation~\eqref{eq:85} is $n^3$, and each pair contains a distinct $K_1$. Thus, from James' perspective, $K_1$ is uniformly distributed, while $K_2$ may or may not be uniformly distributed. For any $(i', j') \ne (i,j)$, by the Schwartz–Zippel lemma and the uniformity of $K_1$, the probability that $(i', j')$ is consistent with $K$ is given by \begin{align} &\mathbb{P}_{K} \left( j' = K_2 + \sum_{u=1}^{l}K_1^{u}i'_u \Big| j = K_2 + \sum_{u=1}^{l}K_1^{u}i_u\right) = \mathbb{P}_{K} \left( j'-j= \sum_{u=1}^{l}K_1^{u}(i'_u-i_u) \right) \le \frac{l}{n^3} = \frac{rn^{-5/2}}{3\log(n)}.
\end{align} By taking a union bound over all the codewords in $V$ (where $|V| \le n^2$), one can prove that with probability at least \begin{align} 1 - n^2 \cdot \frac{rn^{-5/2}}{3\log(n)} = 1- \mathcal{O}\left(\frac{1}{\sqrt{n}\log(n)}\right) \end{align} over $K$, none of the codewords in $V$ is consistent with $K$.\qed \section{Computationally Efficient Achievability Scheme } \label{sec:efficient} The achievability scheme in Section~\ref{sec:achievability} relies on a random coding argument, hence the decoding complexity grows super-polynomially with the blocklength $n$. In this section we develop a computationally efficient coding scheme in which the encoding and decoding can be implemented in polynomial time. \subsection{Permutation-based coding scheme} \label{sec:efficient1} Recall that covert communication requires the average Hamming weights of codewords to scale as $\mathcal{O}(\sqrt{n})$. Instead of generating a low-weight (or low-rate) code directly, we now generate a code $\widetilde{\C}$ of length $d\sqrt{n}$ and rate $R$, where $d$ and $R$ scale as constants, and the fraction of ones in the code also scales as a constant. The code $\widetilde{\C}$ consists of $2^{dR\sqrt{n}}$ codewords $\widetilde{\X}$. To satisfy the covertness constraint, Alice uses a uniformly distributed shared key $\Pi_1$ to select $d\sqrt{n}$ locations (out of $n$ locations) for transmission, and inserts $\widetilde{\X}$ (of length $d\sqrt{n}$ bits) into these locations. The key $\Pi_1$ is of size $\log \left(\binom{n}{d\sqrt{n}}\right)= \mathcal{O}(\sqrt{n}\log(n))$, and is only known to Alice and Bob. All the locations except for the $d\sqrt{n}$ locations selected by $\Pi_1$ are filled with zeros. Further, we introduce another $\log\left((d\sqrt{n})!\right) = \mathcal{O}(\sqrt{n}\log(n))$ bits of shared key $\Pi_2$, which is used to permute the $d\sqrt{n}$ bits (inside the codewords $\widetilde{\X}$).
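The insertion-and-permutation step can be sketched as follows; a seeded `random.Random` object stands in for the shared keys $\Pi_1$ and $\Pi_2$ (an illustrative simplification, since in the scheme the keys are uniform shared strings):

```python
import math
import random

def embed(codeword, n, key_rng):
    """Permute the short codeword (role of Pi_2), then place its bits
    into secretly chosen locations (role of Pi_1) of an all-zero
    length-n string. Illustrative sketch only."""
    short = list(codeword)
    key_rng.shuffle(short)                                    # Pi_2
    locations = sorted(key_rng.sample(range(n), len(short)))  # Pi_1
    x_hat = [0] * n
    for loc, bit in zip(locations, short):
        x_hat[loc] = bit
    return x_hat

n, d = 1024, 2                      # illustrative values
rng = random.Random(7)
codeword = [rng.randint(0, 1) for _ in range(int(d * math.sqrt(n)))]
x_hat = embed(codeword, n, rng)
# Insertion preserves Hamming weight, so the covertness budget of the
# short code carries over to the length-n sequence.
assert len(x_hat) == n and sum(x_hat) == sum(codeword)
```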
A length-$n$ sequence $\widehat{\X} = \Pi_2(\Pi_1(\widetilde{\X}))$ is obtained after insertion and permutation. Note that to guarantee covertness, the code length parameter $d$ cannot be too large (a larger $d$ implies a larger average Hamming weight of $\widehat{\X})$, and we specify the value of $d$ later. Now we focus on the selected $d\sqrt{n}$ locations corresponding to $\widetilde{\X}$. Since $\Pi_1$ and $\Pi_2$ are uniformly distributed from James' perspective, he is unlikely to be able to estimate the $d\sqrt{n}$ locations selected by $\Pi_1$. Nevertheless, James can perform the BAC-type jamming on the length-$n$ sequence $\widehat{\X}$ --- flipping $\widehat{X}_i$ with probability approximately $p/q$ if $Z_i=1$, and leaving $\widehat{X}_i$ unchanged if $Z_i = 0$ (as described in Section~\ref{sec:upperbound}). Effectively, this attack roughly flips a fraction $p$ of the zeros and a fraction $(1-q)p/q$ of the ones of $\widetilde{\X}$, irrespective of the permutations $\Pi_1$ and $\Pi_2$ that Alice and Bob use. One may notice that even if James does not know any information about $\Pi_1$ {\it a priori}, he is still able to learn something non-trivial from $\mathbf{z}$. For example, if the channel from Alice to James is a noiseless channel, James will know exactly that the locations corresponding to the support of $\mathbf{z}$ must be selected by $\Pi_1$ (though he is unable to infer all the $d\sqrt{n}$ locations selected by $\Pi_1$). If the channel to James is a BSC$(q)$, he can still learn some information about $\Pi_1$, though not as much as in the noiseless case. However, a careful analysis (based on an oracle argument similar to that described in Section~\ref{sec:def}) shows that there are super-polynomially many ``plausible'' permutations $\Pi_1 \times \Pi_2$ from James' perspective, each with equal probability.
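The BAC-type jamming described above is straightforward to simulate, and the effective flip rates $p$ (on zeros) and $(1-q)p/q$ (on ones) can be checked empirically; the parameter values below are illustrative:

```python
import random

def bac_attack(x, q, p, rng):
    """James observes z = BSC_q(x), then sets s_i = 1 with probability
    p/q wherever z_i = 1 (and s_i = 0 wherever z_i = 0). Sketch only."""
    z = [b ^ (rng.random() < q) for b in x]
    return [int(zi == 1 and rng.random() < p / q) for zi in z]

rng = random.Random(3)
n, q, p, rho = 500_000, 0.4, 0.1, 0.25   # illustrative values
x = [int(rng.random() < rho) for _ in range(n)]
s = bac_attack(x, q, p, rng)
zeros = x.count(0)
flip0 = sum(si for xi, si in zip(x, s) if xi == 0) / zeros
flip1 = sum(si for xi, si in zip(x, s) if xi == 1) / (n - zeros)
# P(s_i=1 | x_i=0) = q * (p/q) = p; P(s_i=1 | x_i=1) = (1-q) * (p/q).
assert abs(flip0 - p) < 0.01
assert abs(flip1 - (1 - q) * p / q) < 0.01
```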
Under specific permutations $\pi_1$ and $\pi_2$, a jamming vector $\mathbf{s}$ is said to outperform the BAC-type jamming if $\mathbf{s}$ flips more than a fraction $p(1+\varepsilon)$ of the zeros or more than a fraction $(1-q)p(1+\varepsilon)/q$ of the ones in $\widetilde{\X}$. We are able to show that no matter which jamming vector $\mathbf{s}$ James chooses, it outperforms the BAC-type jamming only under a vanishing fraction of the plausible permutations. This implies that with high probability over the permutations, the channel between Alice and Bob is not worse than a binary asymmetric channel with $\wy(1|0) = p(1+\varepsilon)$ and $\wy(0|1) = (1-q)p(1+\varepsilon)/q$. Therefore, the length-$d\sqrt{n}$ code $\widetilde{\C}$ should satisfy the following properties: \begin{enumerate} \item $\widetilde{\C}$ is robust to the binary asymmetric channel described above, and also achieves the channel capacity $C_{\text{BAC}}(p,q)$; \item The encoding and decoding of $\widetilde{\C}$ are computationally efficient. \end{enumerate} The existence of such $\widetilde{\C}$ is guaranteed by Forney's concatenated codes~\cite{forney1965concatenated}, which provide a generic computationally efficient code construction for arbitrary discrete memoryless channels. To guarantee $(1-\epsilon_d)$-covertness, the average Hamming weight of the length-$n$ codewords $\widehat{\X}$, which equals $d\sqrt{n}\cdot \rho^{\ast}$ (since Bernoulli($\rho^{\ast}$) is the capacity-achieving distribution for the BAC), should be at most $t(q,\epsilon_d)\sqrt{n}$. Hence, $d$ should be bounded from above by $\frac{t(q,\epsilon_d)}{\rho^\ast}$, and the optimal relative throughput of the permutation-based coding scheme equals $d\cdot R = \frac{t(q,\epsilon_d)}{\rho^\ast}\cdot C_{\text{BAC}}(p,q)$. \subsection{Computational assumptions on the adversary} As is usual in the standard cryptographic setting, we assume the computational power of James is restricted to be polynomial in the blocklength $n$.
By introducing a {\it pseudorandom generator (PRG)} and reusing the coding scheme in Sub-section~\ref{sec:efficient1}, we develop a computationally efficient coding scheme with a much smaller amount of shared key. Roughly speaking, a length-$l$ shared key can be used in conjunction with a PRG to generate a length-$\text{poly}(l)$ pseudorandom shared key, without being detected by any polynomial time algorithms~\cite[Theorem 6.20]{katz2014introduction}. Specifically, suppose the amount of shared key needed in Sub-section~\ref{sec:efficient1} is $\Delta(n) = c\sqrt{n}\log(n)$ for some constant $c > 0$, and for any $\xi > 0$, there exists an efficiently computable function $g: \{0,1\}^{n^{\xi}}\rightarrow\{0,1\}^{c\sqrt{n}\log(n)}$ such that if ${U}\sim \operatorname{Unif}(\{0,1\}^{n^\xi})$ and ${K}\sim \operatorname{Unif}(\{0,1\}^{c\sqrt{n}\log(n)})$, then for all polynomial time computable functions $D:\{0,1\}^{c\sqrt{n}\log(n)}\to\{0,1\}$, \begin{align} \left|\mathbb{P}_{{U}}(D(g({U}))=1)-\mathbb{P}_{{K}}(D({K})=1)\right| \le \nu_n, \label{eq:video} \end{align} for some $\nu_n$ such that $\lim_{n \to \infty}\nu_n = 0$. We argue that if James can distinguish $T=0$ from $T=1$ in polynomial time, he should also be able to distinguish a truly random $K$ from a pseudorandom $K$ in polynomial time. We say $K = \mathcal{P}$ if the $\Delta(n)$ bits of shared key come from the output of a PRG with a length-$n^{\xi}$ seed, and $K = \mathcal{R}$ if the $\Delta(n)$ bits of shared key are truly uniformly distributed. Let $K = \mathcal{P}$. According to Definition~\ref{def:covert} and the computational assumptions, $(1-\epsilon_d)$-covertness requires that \begin{align} \forall \text{ poly-time }\Phi, \ \lim_{n \to \infty} \mathbb{P}(\widehat{T}=1|T=0)+\mathbb{P}(\widehat{T}=0|T=1, K=\mathcal{P}) \ge 1 - \epsilon_d.
\label{eq:phi} \end{align} By noting that \begin{align*} &\mathbb{P}(\widehat{T}=1|T=0)+\mathbb{P}(\widehat{T}=0|T=1, K=\mathcal{P}) \\ & = \mathbb{P}(\widehat{T}=1|T=0)+\mathbb{P}(\widehat{T}=0|T=1, K=\mathcal{R}) + \mathbb{P}(\widehat{T}=1|T=1, K=\mathcal{R}) - \mathbb{P}(\widehat{T}=1|T=1, K=\mathcal{P}), \end{align*} it suffices to show that for some $\nu'_n > \nu_n$ such that $\lim_{n \to \infty} \nu'_n = 0$, \begin{align} \begin{cases} \forall \text{ poly-time }\Phi, \ \mathbb{P}(\widehat{T}=1|T=1,K=\mathcal{P})-\mathbb{P}(\widehat{T}=1|T=1, K=\mathcal{R}) \le \nu'_n, \\ \forall \text{ poly-time }\Phi, \ \lim_{n \to \infty} \mathbb{P}(\widehat{T}=1|T=0)+\mathbb{P}(\widehat{T}=0|T=1, K=\mathcal{R}) \ge 1 - \epsilon_d. \label{eq:pseudo} \end{cases} \end{align} Note that the second condition is satisfied if the coding scheme in Sub-section~\ref{sec:efficient1} is covert. Hence it suffices to establish the first inequality in equation~\eqref{eq:pseudo}. \begin{figure} \begin{center} \includegraphics[scale=0.66]{pseudo_covert.pdf} \caption{A polynomial time algorithm $D_1$ that can distinguish $K = \mathcal{P}$ and $K = \mathcal{R}$ based on the assumptions on the estimator.} \label{fig:pseudo_covert} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[scale=0.65]{pseudo_decoding.pdf} \caption{A polynomial time algorithm $D_2$ that can distinguish $K = \mathcal{P}$ and $K = \mathcal{R}$ based on the assumptions on the decoder.} \label{fig:pseudo_decoding} \end{center} \end{figure} Suppose there is a polynomial time estimator $\Phi$ such that $\mathbb{P}(\widehat{T}=1|T=1,K=\mathcal{P})-\mathbb{P}(\widehat{T}=1|T=1, K=\mathcal{R}) > \nu'_n$. James is then able to use the estimator $\Phi$ to design a polynomial time algorithm $D_1$ by generating an artificial system that contains the message, encoder, channel, and estimator $\Phi$ (as shown in Fig.~\ref{fig:pseudo_covert}).
The algorithm $D_1$ takes $K$ as input, and outputs $D_1(K) = \widehat{T}$. Note that $D_1$ runs in polynomial time (since both the encoder and the estimator run in polynomial time), and can distinguish $\mathcal{P}$ from $\mathcal{R}$ with at least $\nu'_n$ advantage, i.e., $$\mathbb{P}(D_1(K)=1|T=1,K=\mathcal{P})-\mathbb{P}(D_1(K)=1|T=1, K=\mathcal{R}) > \nu'_n.$$ The existence of such $D_1$ contradicts equation~\eqref{eq:video} since $\nu'_n > \nu_n$, hence the first inequality in equation~\eqref{eq:pseudo} holds. On the other hand, we also need to show that using a PRG does not significantly increase the probability of error. Suppose there is a polynomial time decoder satisfying $$\mathbb{P}(M \ne \widehat{M}|K=\mathcal{P})-\mathbb{P}(M \ne \widehat{M}|K=\mathcal{R}) > \nu''_n,$$ where $\nu''_n > \nu_n$ and $\lim_{n \to \infty} \nu''_n = 0$. Bob is then able to use this decoder to design a polynomial time algorithm $D_2$ by generating an artificial system as illustrated in Fig.~\ref{fig:pseudo_decoding}. It outputs $1$ if $\widehat{M} \ne M$, and outputs $0$ if $\widehat{M} = M$. Note that the algorithm $D_2$ runs in polynomial time (since both the encoder and the decoder run in polynomial time), and can distinguish $\mathcal{P}$ from $\mathcal{R}$ with at least $\nu''_n$ advantage, i.e., $$\mathbb{P}(D_2(K)=1|K=\mathcal{P})-\mathbb{P}(D_2(K)=1|K=\mathcal{R}) > \nu''_n.$$ This contradicts equation~\eqref{eq:video} since $\nu''_n > \nu_n$.
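A toy instantiation of the key-stretching function $g$ from~\eqref{eq:video}, using SHA-256 in counter mode as the expander (an illustrative stand-in only; the argument merely requires {\it some} PRG secure against polynomial-time distinguishers, and the parameter values below are placeholders):

```python
import hashlib
import math

def g(seed: bytes, out_bits: int) -> str:
    """Stretch a short seed into out_bits bits by hashing seed||counter.
    SHA-256 in counter mode is used purely as a toy PRG for illustration."""
    blocks, counter = [], 0
    while 256 * len(blocks) < out_bits:
        digest = hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        blocks.append("".join(f"{byte:08b}" for byte in digest))
        counter += 1
    return "".join(blocks)[:out_bits]

n, c, xi = 4096, 2, 0.25                          # illustrative values
seed_bits = int(n ** xi)                          # |seed| = n^xi bits
key_bits = int(c * math.sqrt(n) * math.log2(n))   # Delta(n) output bits
seed = b"\x07" * ((seed_bits + 7) // 8)
key = g(seed, key_bits)
assert len(key) == key_bits and set(key) <= {"0", "1"}
```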
Therefore, we conclude that for any polynomial time decoder, \begin{align} \mathbb{P}(M \ne \widehat{M}|K=\mathcal{P})-\mathbb{P}(M \ne \widehat{M}|K=\mathcal{R}) \le \nu''_n, \notag \end{align} which implies that as $n$ goes to infinity, the probability of error under the pseudorandom key satisfies $$\lim_{n \to \infty} \mathbb{P}(M \ne \widehat{M}|K=\mathcal{P}) \le \lim_{n \to \infty} \mathbb{P}(M \ne \widehat{M}|K=\mathcal{R}) + \lim_{n \to \infty} \nu''_n = 0.$$ \begin{appendices} \section{Proofs of Claims~\ref{claim:atypical1} and~\ref{claim:atypical2}} \label{appendix:reliability} \claimfour* \noindent{\it Proof:} First note that \begin{align} &\mathbb{E}_{\C}\left(\frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j):\mathbf{x}_{ij} \notin \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij})\right) \\ & = \frac{1}{NL} \sum_{i=1}^N \sum_{j=1}^L \mathbb{E}_{\C}\left(\sum_{\mathbf{z}} W_{\mathbf{Z}|\mathbf{X}}(\mathbf{z}|\mathbf{x}_{ij}) \mathbbm{1}\Big\{(\mathbf{z} \in \mathcal{A}_{\Z}) \cap (\mathbf{x}_{ij} \notin \mathcal{A}_{\X|\z}) \Big\}\right) \\ & = \mathbb{E}_{\C}\left(\sum_{\mathbf{z}} W_{\mathbf{Z}|\mathbf{X}}(\mathbf{z}|\mathbf{x}) \mathbbm{1}\Big\{(\mathbf{z} \in \mathcal{A}_{\Z}) \cap (\mathbf{x} \notin \mathcal{A}_{\X|\z}) \Big\}\right) \label{eq:ball1}\\ &= \sum_{\mathbf{x} \in\{0,1\}^n} \sum_{\mathbf{z}} P_{\X}(\mathbf{x}) W_{\mathbf{Z}|\mathbf{X}}(\mathbf{z}|\mathbf{x}) \mathbbm{1}\Big\{(\mathbf{z} \in \mathcal{A}_{\Z}) \cap (\mathbf{x} \notin \mathcal{A}_{\X|\z}) \Big\} \label{eq:cui}\\ &= \mathbb{P}_{\X\mathbf{Z}}\left(\mathbf{Z} \in \mathcal{A}_{\Z} \cap \X \notin \mathcal{A}_{\X|\z} \right) \\ &\le \mathbb{P}_{\X\mathbf{Z}}\left(\X \notin \mathcal{A}_{\X|\z} \right) \\ &= \mathbb{P}_{\X\mathbf{Z}}\left( f_{10}^{xz}(\X,\mathbf{Z}) \notin \rho q(1\pm n^{-\frac{1}{8}}) \text{ or } f_{11}^{xz}(\X,\mathbf{Z}) \notin \rho (1-q)(1\pm n^{-\frac{1}{8}})\right) \label{eq:ball2}\\ & \le \exp\left(-\frac{q\cdot t(q,\epsilon_d)}{3}
n^{\frac{1}{4}}\right) + \exp\left(-\frac{(1-q)\cdot t(q,\epsilon_d)}{3} n^{\frac{1}{4}}\right).\label{eq:ball3} \end{align} We simplify the notation in~\eqref{eq:ball1} since the expectation $\mathbb{E}_{\C}(.)$ is exactly the same for each codeword $\mathbf{x}_{ij}$. Equation~\eqref{eq:ball2} follows from the definition of the conditionally typical set $\mathcal{A}_{\X|\z}$, while~\eqref{eq:ball3} is due to the Chernoff bound. Finally, by applying Markov's inequality, we have \begin{align} \mathbb{P}_{\C}\left(\frac{1}{NL}\sum_{\mathbf{z} \in \mathcal{A}_{\Z}} \sum_{(i,j): \mathbf{x}_{ij} \notin \mathcal{A}_{\X|\z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \ge \exp(-n^{1/8})\right) \le \exp(-\mathcal{O}(n^{1/4})). \label{eq:error1-2} \end{align} \qed \claimfive* \noindent{\it Proof:} Note that \begin{align} \mathbb{E}_{\C}\left(\frac{1}{NL}\sum_{i=1}^N \sum_{j=1}^L\sum_{\mathbf{z} \notin \mathcal{A}_{\Z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij})\right) &= \mathbb{E}_{\C}\left(\sum_{\mathbf{z} \notin \mathcal{A}_{\Z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x})\right) \\ &= \sum_{\mathbf{z} \notin \mathcal{A}_{\Z}} \sum_{\mathbf{x} \in \{0,1\}^n} P_{\X}(\mathbf{x}) W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}) \\ &= \mathbb{P}\left(f_1^z(\mathbf{Z}) \notin (\rho *q)\cdot (1\pm n^{-\frac{1}{4}}) \right) \\ &\le \exp\left(-\frac{1}{3}(\rho * q) \sqrt{n}\right), \end{align} where the last step is due to the Chernoff bound. By Markov's inequality, we have \begin{align} \mathbb{P}_{\C}\left(\frac{1}{NL}\sum_{i=1}^N \sum_{j=1}^L\sum_{\mathbf{z} \notin \mathcal{A}_{\Z}} W_{\mathbf{Z}|\X}(\mathbf{z}|\mathbf{x}_{ij}) \ge \exp(-n^{1/4}) \right) \le \exp(-\mathcal{O}(\sqrt{n})).
\label{eq:error1-3} \end{align} \qed \section{Proof of Claim~\ref{claim:ratio1}} \label{appendix:ratio1} As first shown in~\cite{CheBJ:13}, the expected number of codewords falling into a given type class is given by \begin{align} &\mathbb{E}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\{\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \} \right) \\ &= \sum_{i=1}^N \sum_{j=1}^L \mathbb{P}_{\C}\left(\X_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \right) \label{eq:temp1} \\ &= \sum_{i=1}^N \sum_{j=1}^L \binom{n(f^{xz}_{01}+f^{xz}_{11})}{nf^{xz}_{11}} \rho^{nf^{xz}_{11}} (1-\rho)^{nf^{xz}_{01}} \cdot \binom{n(f^{xz}_{00}+f^{xz}_{10})}{nf^{xz}_{10}} \rho^{nf^{xz}_{10}} (1-\rho)^{nf^{xz}_{00}} \\ &=2^{r\sqrt{n} + 3 \log n} \cdot 2^{-n[\mathbb{I}(\mathbf{x};\mathbf{z})+ \mathbb{D}(\mathbf{x} \parallel \rho)] - 2\log \left(n+1\right)}, \end{align} and for any typical $\mathbf{z}$ and any conditionally typical $\mathbf{x}$, i.e., $(f^{xz}_{10}, f^{xz}_{11}) \in \mathcal{F}_n^{xz}$, \begin{align} \mathbb{I}(\mathbf{x};\mathbf{z}) = \rho (1-2q) \log \left(\frac{1-q}{q}\right) + \mathcal{O}(n^{-3/4}), \ \ \mathbb{D}(\mathbf{x} \parallel \rho) = \mathcal{O}(1). \end{align} Hence, for sufficiently large $n$, we have \begin{align} \mathbb{E}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\{\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \} \right) &\ge 2^{r\sqrt{n}+3\log n} \cdot 2^{-t(q,\epsilon_d)(1-2q)\log((1-q)/q) \sqrt{n}+o(\sqrt{n})} \\ &\overset{n \to \infty}{=} 2^{(r-t(q,\epsilon_d)\cdot I_J(q))\sqrt{n}} \label{eq:temp2} \\ & = 2^{c\sqrt{n}}. \end{align} Finally, the Chernoff bound ensures that with probability at least $1-\exp(-2^{\mathcal{O}(\sqrt{n})})$ over the code design, a randomly chosen code $\C$ satisfies \begin{align} &\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\{\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \} > \left(1-\exp(-n^{\frac{1}{4}})\right) \cdot 2^{c\sqrt{n}}.
\notag \end{align} \section{Proof of Claim~\ref{claim:ratio2}} \label{appendix:ratio2} The key step is to calculate the probability that a randomly generated codeword $\X$ falls into the type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ and is simultaneously killed by a jamming vector $\mathbf{s}$. The probability is maximized when the support of $\mathbf{s}$ is entirely inside the support of $\mathbf{z}$. We now fix a typical $\mathbf{z}$ and a worst-case jamming vector $\mathbf{s}$ satisfying $\big| \text{supp}(\mathbf{z}) \cap \text{supp}(\mathbf{s})\big| = pn$. By the list decoding rule, a codeword $\mathbf{x}$ is included in the list $\mathcal{L}(\mathbf{y})$ (or $\mathcal{L}(\mathbf{x} + \mathbf{s})$) if \begin{align} \begin{cases} nf^{xy}_{10}(\mathbf{x},\mathbf{y}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\varepsilon_1), \\ nf^{xy}_{11}(\mathbf{x},\mathbf{y}) > \rho n\left(1-\frac{p(1-q)}{q}\right)(1-\varepsilon_2). \end{cases} \label{eq:constraint1} \end{align} Note that $f^{xy}_{10}(\mathbf{x},\mathbf{y}) = f^{xs}_{11}(\mathbf{x},\mathbf{s})$ and $f^{xy}_{11}(\mathbf{x},\mathbf{y}) = f^{xs}_{10}(\mathbf{x},\mathbf{s})$ (as illustrated in Fig.~\ref{fig:fraction}), hence the constraint in~\eqref{eq:constraint1} is equivalent to \begin{align} \begin{cases} nf^{xs}_{11}(\mathbf{x},\mathbf{s}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\varepsilon_1), \\ nf^{xs}_{10}(\mathbf{x},\mathbf{s}) > \rho n\left(1-\frac{p(1-q)}{q}\right)(1-\varepsilon_2). \end{cases} \label{eq:constraint2} \end{align} \begin{figure} \begin{center} \includegraphics[scale=0.5]{fraction.pdf} \caption{The black region represents ones in the vector while the white region represents zeros in the vector.
We denote the joint types between $\X$ and $\mathbf{Z}$, $\mathbf{S}$, $\Y$ respectively by $f_{ij}^{xz}$, $f_{ij}^{xs}$, $f_{ij}^{xy}$, for $(i,j) \in \{0,1\} \times \{0,1\}$.} \label{fig:fraction} \end{center} \end{figure} \noindent{We further notice that $f^{xs}_{10}(\mathbf{x},\mathbf{s}) = f^{xz}_{11}(\mathbf{x},\mathbf{z}) - f^{xs}_{11}(\mathbf{x},\mathbf{s}) + f^{xz}_{10}(\mathbf{x},\mathbf{z})$, and $f^{xz}_{10}(\mathbf{x},\mathbf{z})$, $f^{xz}_{11}(\mathbf{x},\mathbf{z})$ are tightly concentrated since $\mathbf{z} \in \mathcal{A}_{\Z}$ and $\mathbf{x} \in \mathcal{A}_{\X|\z}$. By setting $\varepsilon_1 = \frac{1}{\log(n)}$ and $\varepsilon_2 =\frac{p-pq}{(q-p+pq)\log(n)}$, the constraints in~\eqref{eq:constraint2} are also equivalent to} \begin{align} \begin{cases} nf^{xs}_{11}(\mathbf{x},\mathbf{s}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\frac{1}{\log(n)}), \\ nf^{xs}_{11}(\mathbf{x},\mathbf{s}) < \rho n\left(\frac{p(1-q)}{q}\right)(1+\frac{1}{\log(n)}) + \mathcal{O}(n^{-1/8}). \end{cases} \label{eq:constraint3} \end{align} Without loss of correctness, we ignore the lower order term $\mathcal{O}(n^{-1/8})$ in~\eqref{eq:constraint3} in the following analysis. Let $i_0 = \rho n \left(\frac{p(1-q)}{q}\right)(1+ \frac{1}{\log(n)})$ be the minimum number of overlapping ones between $\mathbf{x}$ and $\mathbf{s}$ such that $\mathbf{x}$ is killed by $\mathbf{s}$. A codeword $\mathbf{x}$ does not fall into the list $\mathcal{L}(\mathbf{x}+\mathbf{s})$ if $i_0 \le nf^{xs}_{11}(\mathbf{x},\mathbf{s}) \le nf^{xz}_{11}$.
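The identities $f^{xy}_{10}(\mathbf{x},\mathbf{y}) = f^{xs}_{11}(\mathbf{x},\mathbf{s})$ and $f^{xy}_{11}(\mathbf{x},\mathbf{y}) = f^{xs}_{10}(\mathbf{x},\mathbf{s})$ used above are immediate once $+$ is read as modulo-2 addition: $y_i=0$ with $x_i=1$ forces $s_i=1$, and $y_i=1$ with $x_i=1$ forces $s_i=0$. A minimal sketch (our own illustration, exhaustively checking short vectors) confirms this:

```python
from itertools import product

def joint_type(u, v, a, b):
    # empirical joint type: fraction of positions where u_i = a and v_i = b
    return sum(1 for ui, vi in zip(u, v) if (ui, vi) == (a, b)) / len(u)

n = 6
for x in product([0, 1], repeat=n):
    for s in product([0, 1], repeat=n):
        y = tuple(xi ^ si for xi, si in zip(x, s))  # y = x + s (mod 2)
        # x=1, y=0 happens exactly where x=1, s=1; x=1, y=1 where x=1, s=0
        assert joint_type(x, y, 1, 0) == joint_type(x, s, 1, 1)
        assert joint_type(x, y, 1, 1) == joint_type(x, s, 1, 0)
print("checked", 4 ** n, "pairs")
```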
The probability that a randomly generated codeword $\X$ falls into the type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ and does not fall into the list $\mathcal{L}(\X+\mathbf{s})$ is bounded from above as \begin{align} &\mathbb{P}_{\X}\left(\left[\X \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\X \notin \mathcal{L}(\X+\mathbf{s})\right] \right) \\ &= \mathbb{P}_{\X}\left(\X \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \right) \cdot \mathbb{P}_{\X} \left(\X \notin \mathcal{L}(\X+\mathbf{s}) \big| \X \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \right) \\ & = 2^{-t(q,\epsilon_d) \cdot I_J(q) \sqrt{n}+\mathcal{O}(n^{1/4})} \cdot \mathbb{P}_{\X} \left(\X \notin \mathcal{L}(\X+\mathbf{s}) \big| \X \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11}) \right) \label{eq:movie1}\\ & = 2^{-t(q,\epsilon_d) \cdot I_J(q) \sqrt{n}+\mathcal{O}(n^{1/4})} \cdot \frac{\sum_{i=i_0}^{nf^{xz}_{11}} \binom{pn}{i}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-i}\binom{n\left(f_{00}^{xz}+f_{10}^{xz}\right)}{nf_{10}^{xz}}}{\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)}{nf_{11}^{xz}} \binom{n\left(f_{00}^{xz}+f_{10}^{xz}\right)}{nf_{10}^{xz}}} \label{eq:movie2}\\ &=2^{-t(q,\epsilon_d) \cdot I_J(q) \sqrt{n}+\mathcal{O}(n^{1/4})} \cdot \frac{\sum_{i=i_0}^{nf^{xz}_{11}} \binom{pn}{i}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-i}}{\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)}{nf_{11}^{xz}} } \\ &= 2^{-t(q,\epsilon_d) \cdot I_J(q) \sqrt{n}+\mathcal{O}(n^{1/4})} \cdot \frac{\sum_{i=i_0}^{nf^{xz}_{11}} \binom{pn}{i}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-i}}{\sum_{j=0}^{nf^{xz}_{11}} \binom{pn}{j}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-j}} \label{eq:g(i)}. \end{align} The calculation in~\eqref{eq:movie1} follows from equations~\eqref{eq:temp1}-\ref{eq:temp2}. 
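The rewriting of the denominator in~\eqref{eq:g(i)} rests on Vandermonde's identity $\binom{N}{m}=\sum_j \binom{pn}{j}\binom{N-pn}{m-j}$ with $N=n(f^{xz}_{01}+f^{xz}_{11})$ and $m=nf^{xz}_{11}$: splitting the $m$ ones of $\mathbf{x}$ inside $\mathrm{supp}(\mathbf{z})$ according to how many land in $\mathrm{supp}(\mathbf{s})$. A quick numerical check with toy sizes (illustrative stand-ins for $pn$, $N$, $m$):

```python
from math import comb

# toy stand-ins for pn, n*(f01+f11), n*f11; the sizes are illustrative only
P, N, m = 20, 100, 30
# Vandermonde: choosing m out of N = choosing j out of the first P and m-j out of the rest
assert comb(N, m) == sum(comb(P, j) * comb(N - P, m - j) for j in range(m + 1))
print(comb(N, m))
```

Terms with $j>pn$ vanish automatically, since `math.comb(n, k)` returns zero when $k>n$.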
In equation~\eqref{eq:movie2}, the denominator is the total number of $(\mathbf{x},\mathbf{z})$ pairs that belong to $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, while the numerator is the number of $(\mathbf{x},\mathbf{z})$ pairs that belong to $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ and are simultaneously killed by $\mathbf{s}$. In equation~\eqref{eq:g(i)}, we reformulate the denominator so that it has a structure similar to the numerator. We define the auxiliary function $g(i)$ as \begin{align} g(i) = \binom{pn}{i}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-i}. \end{align} To find the maximum value of $g(i)$ when $0 \le i \le nf^{xz}_{11}$, we calculate the ratio between two successive terms: \begin{align} \frac{g(i+1)}{g(i)} &= \frac{\binom{pn}{i+1}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-i-1}}{\binom{pn}{i}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-i}} = \frac{(pn-i)(nf^{xz}_{11}-i)}{(i+1)(nf^{xz}_{01}-pn+i+1)}. \end{align} Let $\phi \triangleq \frac{pf^{xz}_{11}n^2-nf^{xz}_{01}+pn-1}{nf^{xz}_{01}+nf^{xz}_{11}+2}$. It turns out that $g(i+1)/g(i) > 1$ when $i < \phi$, and $g(i+1)/g(i) < 1$ when $i > \phi$, which means the function $g(i)$ achieves its maximum when $i = \left \lceil{\phi}\right \rceil $. Note that the parameter $\phi$ itself depends on $f^{xz}_{01}$ and $f^{xz}_{11}$, i.e., the particular type class. One can prove that for typical $\mathbf{z}$ and conditionally typical type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$, the maximum value of $\phi$ is always bounded from above as \begin{align} \phi \le \frac{\rho n p(1-q)}{q}\left(1+n^{-1/8}\right) \triangleq \phi_{\max}. \end{align} Note that as $n$ grows without bound, $i_0$ is larger than $\phi_{\max}$, hence $g(i_0)$ is always smaller than $g(\phi_{\max})$. On the other hand, $g({i_0})$ is always greater than $g(\tilde{i})$, for any $\tilde{i} > i_0$.
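The unimodality of $g(i)$ and the location of its maximum at $\lceil\phi\rceil$ can be checked numerically; the sketch below (our own illustration) uses toy integers standing in for $pn$, $nf^{xz}_{01}$ and $nf^{xz}_{11}$:

```python
from math import comb, ceil

P, a, b = 20, 70, 30  # toy stand-ins for pn, n*f01, n*f11
g = [comb(P, i) * comb(a + b - P, b - i) for i in range(min(P, b) + 1)]
# the crossing point of g(i+1)/g(i) = 1, in these variables
phi = (P * b - a + P - 1) / (a + b + 2)
i_star = max(range(len(g)), key=g.__getitem__)
assert i_star == ceil(phi)  # the maximum of g sits at ceil(phi)
# g(i+1)/g(i) is strictly decreasing, so g is unimodal
ratios = [g[i + 1] / g[i] for i in range(len(g) - 1)]
assert all(r2 < r1 for r1, r2 in zip(ratios, ratios[1:]))
print(i_star, round(phi, 2))
```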
We now bound the second term in~\eqref{eq:g(i)} as \begin{align} \frac{\sum_{i=i_0}^{nf^{xz}_{11}} \binom{pn}{i}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-i}}{\sum_{j=0}^{nf^{xz}_{11}} \binom{pn}{j}\binom{n\left(f_{01}^{xz}+f_{11}^{xz}\right)-pn}{nf_{11}^{xz}-j}} \le \frac{\sum_{i=i_0}^{nf^{xz}_{11}}g(i)}{g(\phi)} \le \frac{\sum_{i=i_0}^{nf^{xz}_{11}}g(i)}{g(\phi_{\max})} \le \frac{g(i_0)\cdot \log(n)}{g(\phi_{\max})}. \label{eq:shu} \end{align} The last step follows from the geometric-series bound \begin{align} \sum_{i=i_0}^{nf^{xz}_{11}}g(i) = \sum_{i=i_0}^{\infty}g(i) & = g(i_0) + g(i_0) \frac{g(i_0+1)}{g(i_0)} + g(i_0) \frac{g(i_0+1)}{g(i_0)}\frac{g(i_0+2)}{g(i_0+1)} + \cdots \\ & \le g(i_0) + g(i_0) \frac{g(i_0+1)}{g(i_0)} + g(i_0) \left(\frac{g(i_0+1)}{g(i_0)}\right)^2 + \cdots \label{eq:jun1} \\ &= g(i_0) \cdot \frac{1}{1-\frac{g(i_0+1)}{g(i_0)}} \\ &\le g(i_0)\cdot \log(n), \label{eq:jun2} \end{align} where inequality~\eqref{eq:jun1} holds since $g(i+1)/g(i)$ is monotonically decreasing, and inequality~\eqref{eq:jun2} follows from the fact that $g(i_0+1)/g(i_0) \le 1 - 1/\log(n)$. To calculate the ratio between $g(i_0)$ and $g(\phi_{\max})$, we introduce an interpolation point $\phi' \triangleq \frac{\rho n p(1-q)}{q}\left(1 + \frac{1}{(\log(n))^2}\right)$. Note that $g(\phi_{\max}) \ge g(\phi')$ since $\phi' \ge \phi_{\max}$ and $g(i)$ is monotonically decreasing when $i \ge \phi_{\max}$.
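The geometric-domination step can be sanity-checked numerically: past the mode of $g$, the tail sum is at most $g(i_0)$ times $1/(1-g(i_0+1)/g(i_0))$, because every later ratio is smaller than the first one. A toy-parameter sketch (same illustrative stand-ins as before):

```python
from math import comb

P, a, b = 20, 70, 30  # toy stand-ins for pn, n*f01, n*f11
g = [comb(P, i) * comb(a + b - P, b - i) for i in range(min(P, b) + 1)]
i0 = 10                         # some index past the mode of g
tail = sum(g[i0:])
r0 = g[i0 + 1] / g[i0]          # first ratio; later ratios are smaller still
assert r0 < 1
assert tail <= g[i0] / (1 - r0)  # geometric domination of the tail
print(tail, g[i0] / (1 - r0))
```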
Now we consider the ratio between $g(i_0)$ and $g(\phi_{\max})$ as follows: \begin{align} \frac{g(i_0)}{g(\phi_{\max})} \le \frac{g(i_0)}{g(\phi')} &= \frac{g(\phi'+1)}{g(\phi')} \frac{g(\phi'+2)}{g(\phi'+1)} \frac{g(\phi'+3)}{g(\phi'+2)} \cdots \frac{g(i_0)}{g(i_0-1)} \\ &\le \left(\frac{g(\phi'+1)}{g(\phi')}\right)^{i_0 - \phi'} \\ &\le \left(1 - \frac{1}{(\log(n))^2}\right)^{\frac{t(q,\epsilon_d) p(1-q)\sqrt{n}}{q}\left(\frac{1}{\log(n)} - \frac{1}{(\log(n))^2}\right)} \label{eq:anjing}\\ & \le \left(1 - \frac{1}{(\log(n))^2}\right)^{c_1\sqrt{n}/\log(n)}, \end{align} for some constant $c_1 > 0$. Inequality~\eqref{eq:anjing} follows since $g(\phi'+1)/g(\phi') \le 1 - 1/(\log(n))^2$. Using the inequality $1-x \le e^{-x}$ for $x \in [0,1]$, as $n$ grows without bound, we obtain \begin{align} \frac{g(i_0)}{g(\phi_{\max})} \le e^{-c_1\sqrt{n}/(\log(n))^3}. \label{eq:shu2} \end{align} By combining~\eqref{eq:g(i)},~\eqref{eq:shu} and~\eqref{eq:shu2}, we finally show that \begin{align} \mathbb{P}_{\X}\left(\left[\X \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\X \notin \mathcal{L}(\X+\mathbf{s})\right] \right) \le 2^{-(t(q,\epsilon_d) \cdot I_J(q)+c_2/(\log(n))^3)\sqrt{n} + \mathcal{O}(n^{1/4})}, \end{align} where $c_2 = c_1 \ln 2$. Without loss of correctness, we ignore the lower order terms to simplify the following analysis.
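The telescoping bound $g(i_0)/g(\phi') \le \bigl(g(\phi'+1)/g(\phi')\bigr)^{i_0-\phi'}$ relies only on the ratios $g(i+1)/g(i)$ being decreasing, so each factor in the product is at most the first one. A toy check (our own illustration, with integer stand-ins for $\phi'$ and $i_0$ past the mode):

```python
from math import comb

P, a, b = 20, 70, 30  # toy stand-ins for pn, n*f01, n*f11
g = [comb(P, i) * comb(a + b - P, b - i) for i in range(min(P, b) + 1)]
phi_p, i0 = 8, 14     # toy stand-ins for phi' and i_0, both past the mode of g
# g(i0)/g(phi') is a product of i0 - phi' ratios, each at most the first one
assert g[i0] / g[phi_p] <= (g[phi_p + 1] / g[phi_p]) ** (i0 - phi_p)
print(g[i0] / g[phi_p])
```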
The expected number of codewords that fall into the type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ and are simultaneously killed by $\mathbf{s}$ equals \begin{align} \mu_2 &\triangleq \mathbb{E}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\left[\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\right] \right\}\right) \\ &= 2^{r\sqrt{n}}\cdot \mathbb{P}_{\X}\left(\left[\X \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\X \notin \mathcal{L}(\X+\mathbf{s})\right] \right) \\ &\le 2^{(c-c_2/(\log(n))^3)\sqrt{n}}, \end{align} where $c = r - t(q,\epsilon_d)\cdot I_J(q) > 0$. The probability that more than $\epsilon_1 \cdot 2^{c\sqrt{n}}$ codewords fall into the type class $\mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})$ and are simultaneously killed by $\mathbf{s}$ is bounded from above as \begin{align} &\mathbb{P}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\left[\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\right] \right\} \ge \epsilon_1 \cdot 2^{c\sqrt{n}}\right) \\ &=\mathbb{P}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\left[\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\right] \right\} \ge \left(1+\frac{\epsilon_1 \cdot 2^{c\sqrt{n}}}{\mu_2}-1\right)\mu_2\right) \\ &\le \exp\left(-\frac{1}{3}\left(\frac{\epsilon_1 \cdot 2^{c\sqrt{n}}}{\mu_2}-1\right)\mu_2 \right) \\ &\le \exp\left(-\frac{2^{c\sqrt{n}}}{3}\left(\epsilon_1 - 2^{-c_2\sqrt{n}/(\log(n))^3}\right) \right).
\end{align} By setting $\epsilon_1 = \exp(-n^{1/4})$, we have \begin{align*} &\mathbb{P}_{\C}\left(\sum_{i=1}^N \sum_{j=1}^L \mathbbm{1}\left\{\left[\mathbf{x}_{ij} \in \mathcal{T}_{\X|\z}(f^{xz}_{10}, f^{xz}_{11})\right] \cap \left[\mathbf{x}_{ij} \notin \mathcal{L}(\mathbf{x}_{ij}+\mathbf{s})\right] \right\} < \exp(-n^{1/4}) \cdot 2^{c\sqrt{n}}\right) \ge 1 - \exp\left(-2^{\mathcal{O}(\sqrt{n})}\right). \end{align*} \qed \section*{Acknowledgement} The authors would like to thank Pascal O. Vontobel for his valuable comments. \end{appendices} \bibliographystyle{IEEEtran}
\section{Calculation of the scattering matrix of the graphene superlattice} \label{app_Smatrix} The calculation of the scattering matrix of the graphene superlattice region $0<x<L$, sandwiched between heavily doped pristine-graphene contacts, proceeds as follows. (See Refs.\ \onlinecite{S_Two06,S_Sny08} for similar calculations in graphene.) We start from the Dirac Hamiltonian $H(p_x,p_y)$, given by Eq.\ \eqref{HDiracPauli}. We consider solutions of the Dirac equation $H\Psi=E\Psi$ at energy $E$ that are plane waves in the $y$-direction, $\Psi(x,y)=\Psi_q(x)e^{iqy}$. The four-component spinor $\Psi_q(x)$ in the region $0<x<L$ is a solution of \begin{equation} \frac{\partial}{\partial x}\Psi_q(x)=\Xi(q)\Psi_q(x),\;\; \Xi(q)=i{v}_{\rm F}^{-1}\sigma_x[E- H(0,q)],\label{dpsidx} \end{equation} resulting in the transfer matrix \begin{equation} \Psi_q(L)={\cal T}(q)\Psi_q(0),\;\;{\cal T}(q)=e^{\Xi(q)L}.\label{transfermatrix} \end{equation} The next step is to transform to a basis of right-moving and left-moving modes in the contact regions $x<0$, $x>L$. The Dirac Hamiltonian in those regions is \begin{equation} H_{\rm contact}=v_{\rm F}(p_x\sigma_x+p_y\sigma_y)-V_{\rm doping}.\label{Hcontact} \end{equation} [We use the same valley-isotropic basis $\Psi=(\psi_{{\rm K}{\rm A}},\psi_{{\rm K}{\rm B}},-\psi_{{\rm K}'{\rm B}},\psi_{{\rm K}'{\rm A}})$ as in Eq.\ \eqref{HDiracPauli}.]
In the limit $V_{\rm doping}\rightarrow\infty$ of infinitely doped contacts the right-moving modes $\Psi_+(x,y)$ and left-moving modes $\Psi_-(x,y)$ are given for $x<0$ by \begin{equation} \begin{split} &\Psi_+=c_{\rm K}^+ e^{ikx+iqy}\begin{psmallmatrix} 1\\ 1\\ 0 \\ 0 \end{psmallmatrix}+c_{{\rm K}'}^+e^{ikx+iqy}\begin{psmallmatrix} 0\\ 0\\ 1 \\ 1 \end{psmallmatrix},\\ &\Psi_-=c_{\rm K}^-e^{-ikx+iqy}\begin{psmallmatrix} 1\\ -1\\ 0 \\ 0 \end{psmallmatrix}+c_{{\rm K}'}^-e^{-ikx+iqy}\begin{psmallmatrix} 0\\ 0\\ 1 \\ -1 \end{psmallmatrix}, \end{split} \end{equation} with $v_{\rm F}k=V_{\rm doping}\rightarrow\infty$. The same expression with $x\mapsto x-L$ applies for $x>L$. The transfer matrix in the basis $(c_{\rm K}^+,c_{\rm K}^-, c_{{\rm K}'}^+,c_{{\rm K}'}^-)$ is \begin{equation} \tilde{\cal T}(q)={\cal H}{\cal T}(q){\cal H},\;\;{\cal H}=\frac{1}{\sqrt{2}}\begin{psmallmatrix} 1&1&0&0\\ 1&-1&0&0\\ 0&0&1&1\\ 0&0&1&-1 \end{psmallmatrix}.\label{Hadamard} \end{equation} After this ``Hadamard transform''\cite{S_Sny08} we can directly read off the elements of the reflection matrix $\bm{r}$ from the $x=0$ interface, \begin{subequations} \begin{align} &{\bm r}=\begin{pmatrix} r_{\rm{KK}}&r_{\rm{KK}'}\\ r_{\rm{K}'{\rm K}}&r_{\rm{K}'\rm{K'}} \end{pmatrix}=-(\tilde{\cal T}_{--})^{-1}\cdot\tilde{\cal T}_{-+},\\ &\tilde{\cal T}_{--}=\begin{pmatrix} \tilde{\cal T}_{22}&\tilde{\cal T}_{24}\\ \tilde{\cal T}_{42}&\tilde{\cal T}_{44} \end{pmatrix},\;\;\tilde{\cal T}_{-+}=\begin{pmatrix} \tilde{\cal T}_{21}&\tilde{\cal T}_{23}\\ \tilde{\cal T}_{41}&\tilde{\cal T}_{43} \end{pmatrix}. \label{rresult} \end{align} \end{subequations} The final results are lengthy and not recorded here, but they are easily derived using a computer algebra system.
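The recipe above can be illustrated numerically. The sketch below is a single-valley toy version (our own illustration, not the superlattice calculation of this appendix): it assumes a uniform region with $H(0,q)=v_{\rm F}q\sigma_y+V$ in units $\hbar=v_{\rm F}=1$, so that $\Xi(q)=i(E-V)\sigma_x+q\sigma_z$, builds ${\cal T}=e^{\Xi L}$ in closed form, applies the Hadamard transform, extracts the reflection and transmission amplitudes, and checks flux conservation $|r|^2+|t|^2=1$; all parameter values are arbitrary.

```python
import cmath
from math import sqrt

def mat_mul(A, B):
    # product of 2x2 complex matrices stored as nested lists
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def exp_traceless(M):
    # e^M for a traceless 2x2 matrix M, using M^2 = delta * identity
    delta = M[0][0] * M[0][0] + M[0][1] * M[1][0]
    l = cmath.sqrt(delta)
    s = cmath.sinh(l) / l if abs(l) > 1e-12 else 1.0
    c = cmath.cosh(l)
    return [[c + s * M[0][0], s * M[0][1]],
            [s * M[1][0], c + s * M[1][1]]]

# arbitrary illustrative parameters; Xi = i(E-V)*sigma_x + q*sigma_z
E, q, V, L = 0.3, 0.7, 1.1, 2.0
Xi = [[q, 1j * (E - V)], [1j * (E - V), -q]]
T = exp_traceless([[Xi[i][j] * L for j in range(2)] for i in range(2)])

# Hadamard transform to the basis of right-/left-moving contact modes
H = [[1 / sqrt(2), 1 / sqrt(2)], [1 / sqrt(2), -1 / sqrt(2)]]
Tt = mat_mul(H, mat_mul(T, H))

r = -Tt[1][0] / Tt[1][1]                       # reflection amplitude
t = Tt[0][0] - Tt[0][1] * Tt[1][0] / Tt[1][1]  # transmission amplitude
assert abs(abs(r) ** 2 + abs(t) ** 2 - 1) < 1e-9  # flux conservation
print(abs(t) ** 2)
```

Flux conservation holds here because $\Xi^\dagger\sigma_x+\sigma_x\Xi=0$, so ${\cal T}^\dagger\sigma_x{\cal T}=\sigma_x$ and the resulting scattering matrix is unitary.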
\section{Introduction} Let $(R, {\frak m})$ be a Noetherian local ring with the maximal ideal ${\frak m}$ of dimension $d>0$. The associated Buchsbaum-Rim multiplicities of an $R$-module $C$ of finite length, denoted by $\{ e^j(C) \}_{0 \leq j \leq d+r-1}$, form a sequence of integers. These are invariants of $C$ introduced by Kleiman-Thorup \cite{KT2} and Kirby-Rees \cite{KR2} independently. For an $R$-module $C$ of finite length with a minimal free presentation $R^n \stackrel{\varphi}{\to} R^r \to C \to 0$, the multiplicities are defined by the so-called Buchsbaum-Rim function of two variables $$\Lambda(p, q):={\ell}_R(S_{p+q}/M^{p}S_{q}), $$ where $S_p$ (resp. $M^p$) is the homogeneous component of degree $p$ of $S=\Sym_R(F)$ with $F=R^r$ (resp. of $R[M]=\Im \Sym_R(\varphi)$). The function $\Lambda(p, q)$ is eventually a polynomial of total degree $d+r-1$, and then the associated Buchsbaum-Rim multiplicities are defined, for $j=0, 1, \dots , d+r-1$, by $$e^j(C):=(\mbox{the coefficient of} \ p^{d+r-1-j}q^j \ \mbox{in the polynomial})\times (d+r-1-j)!j!. $$ These form a descending sequence of non-negative integers with $e^{r-1}(C)$ positive, and $e^j(C)=0$ for $j \geq r$. This was proved by Kleiman-Thorup \cite{KT2} and Kirby-Rees \cite{KR2} independently. Moreover, they proved that the first multiplicity $e^0(C)$ coincides with the ordinary Buchsbaum-Rim multiplicity $e(C)$ of $C$ introduced in \cite{BR2}, which is the normalized leading coefficient of the polynomial function $\lambda(p)=\Lambda(p, 0)=\ell_R(S_p/M^p)$ of degree $d+r-1$ for $p \gg 0$. Namely, $$e(C) = e^0(C) \geq e^1(C) \geq \dots \geq e^{r-1}(C)>e^r(C)= \dots = e^{d+r-1}(C)=0. $$ Note that the ordinary Buchsbaum-Rim multiplicity $e(R/I)$ of a cyclic module defined by an ${\frak m}$-primary ideal $I$ coincides with the classical Hilbert-Samuel multiplicity $e(I)$ of $I$.
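This classical multiplicity can be computed by brute force in a toy monomial example (our own illustration, not part of the argument): for $I=(x^2,y^3)$ in $R=k[[x,y]]$ ($d=2$), one has $e(I)=\ell_R(R/I)=6$ since $I$ is generated by a regular sequence, and the sketch below recovers this from the leading term of $\ell_R(R/I^p)$, counting standard monomials.

```python
def length_mod_power(p):
    # number of monomials x^a y^b outside I^p, where I = (x^2, y^3) in k[x, y]:
    # x^a y^b lies in I^p iff a >= 2i and b >= 3(p - i) for some 0 <= i <= p
    count = 0
    for a_ in range(2 * p):      # a >= 2p already lies in I^p (take i = p)
        for b_ in range(3 * p):  # b >= 3p already lies in I^p (take i = 0)
            if not any(a_ >= 2 * i and b_ >= 3 * (p - i) for i in range(p + 1)):
                count += 1
    return count

p = 60
# l(R/I^p) = e(I) p^2 / 2! + (lower order); here e(I) = l(R/I) = 6
assert round(2 * length_mod_power(p) / p ** 2) == 6
print(length_mod_power(p))
```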
Thus, the ordinary Buchsbaum-Rim multiplicity $e^0(C)=e(C)$ and the associated one $e^j(C)$ are a generalization of the classical Hilbert-Samuel multiplicity. However, as compared to the classical Hilbert-Samuel multiplicity, the Buchsbaum-Rim multiplicities are not well-understood. There are some cases where the computation of the ordinary Buchsbaum-Rim multiplicity is possible (see \cite{Bi, CLU, J, KR1, KR2} for instance). In particular, in the case where $C$ is a direct sum of cyclic modules, there is an interesting relation between the ordinary Buchsbaum-Rim multiplicity and the mixed multiplicities of ideals. Let $I_1, \dots , I_r$ be ${\frak m}$-primary ideals in $R$. Then Kirby and Rees proved that $$e(R/I_1 \oplus \dots \oplus R/I_r)=\sum_{\stackrel{i_1, \dots , i_r \geq 0}{i_1+\dots +i_r=d}}e_{i_1 \cdots i_r}(I_1, \dots , I_r), $$ where $e_{i_1 \cdots i_r}(I_1, \dots , I_r)$ is the mixed multiplicity of $I_1, \dots , I_r$ of type $(i_1, \dots , i_r)$ (see \cite{KR1, KR2} and also \cite{Bi}). Then we are interested in the other associated Buchsbaum-Rim multiplicities in this case. The starting point of this research is the following interesting formula which was also discovered by Kirby-Rees \cite{KR1, KR2}. Suppose that $I_1 \subset I_2 \subset \dots \subset I_r$. Then they proved that for any $j=1, \dots , r-1$, the $j$th Buchsbaum-Rim multiplicity can be expressed as the ordinary Buchsbaum-Rim multiplicity of a direct sum of $(r-j)$ cyclic modules defined by the last $(r-j)$ ideals: $$e^j(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_{j+1} \oplus \dots \oplus R/I_r). $$ In particular, the last positive one $e^{r-1}$ can be expressed as the classical Hilbert-Samuel multiplicity $e(I_r)$ of the largest ideal: $$e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_r). $$ Then it is natural to ask the formula for $e^j(R/I_1 \oplus \dots \oplus R/I_r)$ without the assumption $I_1 \subset I_2 \subset \dots \subset I_r$. 
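The Kirby-Rees sum formula above can be checked in a toy case (our own illustration): take $I_1={\frak m}$, $I_2={\frak m}^2$ in $R=k[[x,y]]$, so $d=r=2$ and $e_{20}=1$, $e_{11}=2$, $e_{02}=4$, giving the predicted value $e(R/I_1\oplus R/I_2)=7$. The sketch computes $\lambda(p)=\sum_{i+j=p}\ell_R(R/I_1^iI_2^j)$ and extracts the normalized leading coefficient:

```python
from math import comb

def length(k):
    # l(R / m^k) in k[[x, y]]
    return comb(k + 1, 2)

def bl(p):
    # Buchsbaum-Rim function of C = R/m + R/m^2:
    # lambda(p) = sum over i+j=p of l(R / m^i (m^2)^j) = sum_j l(R / m^(p+j))
    return sum(length(p + j) for j in range(p + 1))

p = 2000
# lambda(p) ~ e(C) p^3 / 3!, and Kirby-Rees predicts e(C) = 1 + 2 + 4 = 7
assert round(6 * bl(p) / p ** 3) == 7
print(bl(p))
```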
However, as compared to the special case considered in \cite{KR1, KR2}, it seems that the problem is more complicated, and we need a different approach to obtain the formula in general. Recently, we computed the function $\Lambda(p, q)$ directly and obtained the formula for $e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)$ without the assumption $I_1 \subset \dots \subset I_r$. Indeed, we proved in our previous work \cite[Theorem 1.3]{Ha2} that for any ${\frak m}$-primary ideals $I_1, \dots , I_r$, the last positive Buchsbaum-Rim multiplicity can be expressed as the classical Hilbert-Samuel multiplicity $e(I_1+\dots +I_r)$ of the sum of all ideals: $$e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/[I_1 + \dots + I_r]). $$ The present purpose is to improve the method of computation given in \cite{Ha2} towards a formula for not only the last positive Buchsbaum-Rim multiplicity $e^{r-1}(R/I_1 \oplus \dots \oplus R/I_r)$ but also the next one $e^{r-2}(R/I_1 \oplus \dots \oplus R/I_r)$ in terms of the ordinary Buchsbaum-Rim and Hilbert-Samuel multiplicities. Here is the main result. \begin{Theorem}\label{main} Let $I_1, \dots , I_r$ be arbitrary ${\frak m}$-primary ideals in $R$. Then we have a formula $$e^{r-2}(R/I_1 \oplus \dots \oplus R/I_r)=E_{r-1}(I_1, \dots , I_r)-(d+1)(r-1)e(R/[I_1 + \dots + I_r]), $$ where $E_{r-1}(I_1, \dots , I_r)$ is a sum of the ordinary Buchsbaum-Rim multiplicities of two cyclic modules defined by the ideals $I_1+\dots +\hat{I_j}+\dots +I_r$ and $I_1+\dots+ I_r: $ $$E_{r-1}(I_1, \dots , I_r):=\sum_{j=1}^r e(R/[I_1+\dots +\hat{I_j}+\dots +I_r] \oplus R/[I_1+\dots+ I_r]). $$ \end{Theorem} Let me illustrate the formula when $r=3$. Let $C=R/I_1\oplus R/I_2 \oplus R/I_3$. It is known that $e^0(C)$ coincides with the ordinary Buchsbaum-Rim multiplicity by \cite{KR2, KT2}, and $e^2(C)$ can be expressed as the ordinary Hilbert-Samuel multiplicity of the sum of all ideals by \cite{Ha2}.
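The formula $e^{r-1}=e(R/[I_1+\dots+I_r])$ and the Kirby-Rees value of $e^0$ can be verified by brute force in the simplest case $d=1$, $r=2$ with monomial ideals $I_1=(x^a)$, $I_2=(x^b)$ in $k[[x]]$ (our own illustration): there $\Lambda(p,q)=\sum_{n_1+n_2=p+q}\ell_R(R/J_{p,q}(\boldsymbol n))$ reduces to minimizing $ai_1+bi_2$ over $|\boldsymbol i|=p$, $\boldsymbol i\le\boldsymbol n$, and finite differences extract the coefficients, predicting $e^1=\min(a,b)$ and $e^0=a+b$.

```python
def Lambda(p, q, a, b):
    # Lambda(p, q) = sum over n1+n2 = p+q of l(R/J), where
    # J = sum of (x^(a*i1 + b*i2)) over i1+i2 = p, i1 <= n1, i2 <= n2,
    # and l(R/(x^m)) = m in k[[x]]
    total = 0
    for n1 in range(p + q + 1):
        n2 = p + q - n1
        total += min(a * i1 + b * (p - i1)
                     for i1 in range(max(0, p - n2), min(p, n1) + 1))
    return total

a, b = 2, 3
p, q = 10, 40  # q >= (p+1)r, well inside the polynomial range
# mixed second difference picks out the p*q coefficient, i.e. e^1;
# second difference in p picks out twice the p^2 coefficient, i.e. e^0
e1 = (Lambda(p + 1, q + 1, a, b) - Lambda(p + 1, q, a, b)
      - Lambda(p, q + 1, a, b) + Lambda(p, q, a, b))
e0 = Lambda(p + 2, q, a, b) - 2 * Lambda(p + 1, q, a, b) + Lambda(p, q, a, b)
assert e1 == min(a, b)  # e^1 = e(R/(I1 + I2)), the last positive multiplicity
assert e0 == a + b      # e^0 = e(I1) + e(I2), the Kirby-Rees sum
print(e0, e1)
```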
Theorem \ref{main} tells us that there is a similar expression for the remaining multiplicity $e^1(C)$. Namely, if we put $I_{123}:=I_1+I_2+I_3$ and $I_{ij}:=I_i+I_j$ for $1 \leq i < j \leq 3$, then we can write all the multiplicities in terms of ordinary Buchsbaum-Rim multiplicities and hence mixed multiplicities. \begin{align*} e^0(C) &= e(R/I_1\oplus R/I_2 \oplus R/I_3), \\ e^1(C) &=e(R/I_{23} \oplus R/I_{123}) +e(R/I_{13} \oplus R/I_{123}) +e(R/I_{12} \oplus R/I_{123}) -2(d+1) e(R/I_{123}), \\ e^2(C) &= e(R/I_{123}). \end{align*} Our formula can be viewed as a natural generalization of the above-mentioned Kirby-Rees formula for $e^{r-2}(R/I_1 \oplus \dots \oplus R/I_r)$ in a special case where $I_1 \subset I_2 \subset \dots \subset I_r$. Indeed, as an immediate consequence of Theorem \ref{main}, we get the following. \setcounter{section}{4} \setcounter{Theorem}{1} \begin{Corollary}\label{cor} Let $I_1, \dots , I_r$ be ${\frak m}$-primary ideals in $R$ and assume that $I_1, \dots , I_{r-1} \subset I_r$, that is, the ideal $I_r$ is the largest ideal. Then we have a formula $$e^{r-2}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/[I_1+\dots +I_{r-1}] \oplus R/I_r). $$ In particular, if $I_1 \subset I_2 \subset \dots \subset I_r$, then $$e^{r-2}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_{r-1} \oplus R/I_r). $$ \end{Corollary} \setcounter{section}{1} The contents of the article are organized as follows. In Section 2, we will recall some necessary notation and results from our previous work \cite{Ha2}. In Section 3, we will compute the Buchsbaum-Rim function of two variables by improving the method in \cite{Ha2}. In Section 4, we will give a proof of Theorem \ref{main} and its consequence, Corollary \ref{cor}. We will also discuss the remaining multiplicities $e^j(C)$ for $j=1, \dots , r-3$. Throughout this article, we will work in the same manner as in our previous work \cite{Ha2}.
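To make explicit why Corollary~\ref{cor} is immediate from Theorem~\ref{main} in the fully nested case $I_1 \subset \dots \subset I_r$ (a short verification of our own): every sum $I_1+\dots+\hat{I_j}+\dots+I_r$ with $j\le r-1$ equals $I_r$, the $j=r$ term gives $I_{r-1}$, and $e(R/I \oplus R/I)=(d+1)e(R/I)$ by the elementary formula recalled in Section 2, so

```latex
\begin{align*}
E_{r-1}(I_1,\dots,I_r)
  &= (r-1)\,e(R/I_r \oplus R/I_r) + e(R/I_{r-1} \oplus R/I_r)\\
  &= (r-1)(d+1)\,e(R/I_r) + e(R/I_{r-1} \oplus R/I_r),
\end{align*}
```

and the correction term $(d+1)(r-1)e(R/[I_1+\dots+I_r])=(d+1)(r-1)e(R/I_r)$ in Theorem~\ref{main} cancels the first summand, leaving $e^{r-2}(R/I_1\oplus\dots\oplus R/I_r)=e(R/I_{r-1}\oplus R/I_r)$.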
Let $(R, {\frak m})$ be a Noetherian local ring with the maximal ideal ${\frak m}$ of dimension $d>0$. Let $r>0$ be a fixed positive integer and let $[r]=\{1, \dots , r\}$. For a finite set $A$, ${}^{\sharp} A$ denotes the number of elements of $A$. Vectors are always written in bold-faced letters, e.g., $\boldsymbol i =(i_1, \dots , i_r)$. We work in the usual multi-index notation. Let $I_1, \dots , I_r$ be ideals in $R$ and let $t_1, \dots , t_r$ be indeterminates. Then for a vector $\boldsymbol i =(i_1, \dots , i_r) \in \mathbb Z_{\geq 0}^r$, we denote $\boldsymbol I^{\boldsymbol i}=I_1^{i_1} \cdots I_r^{i_r}, \boldsymbol t^{\boldsymbol i}=t_1^{i_1} \cdots t_r^{i_r}$ and $| \boldsymbol i | =i_1+ \dots + i_r$. For vectors $\boldsymbol a, \boldsymbol b \in \mathbb Z^r$, $\boldsymbol a \geq \boldsymbol b \stackrel{{\rm def}}{\Leftrightarrow} a_i \geq b_i \ \mbox{for all} \ i=1, \dots , r.$ Let $\boldsymbol 0=(0, \dots , 0)$ be the zero vector in $\mathbb Z_{\geq 0}^r$. By convention, an empty sum is defined to be zero. \section{Preliminaries} In this section, we give a few elementary facts needed to compute the associated Buchsbaum-Rim multiplicities. See also \cite[section 2]{Ha2} for the related facts and the details. In what follows, let $I_1, \dots , I_r$ be ${\frak m}$-primary ideals in $R$ and let $C=R/I_1 \oplus \dots \oplus R/I_r$. Let $S=R[t_1, \dots , t_r]$ be a polynomial ring over $R$ and let $R[M]=R[I_1t_1, \dots , I_rt_r]$ be the multi-Rees algebra of $I_1, \dots , I_r$. Let $S_p$ (resp. $M^p$) be the homogeneous component of degree $p$ of $S$ (resp. $R[M]$).
Then it is easy to see that the function $\Lambda(p, q)$ can be expressed as $${\displaystyle \Lambda(p, q) = \sum_{\boldsymbol n \in H_{p,q}} \ell_R(R/J_{p, q}({\boldsymbol n})) }$$ where $H_{p, q}:=\{ \boldsymbol n \in \mathbb Z_{\geq 0}^r \mid |\boldsymbol n |=p+q \}$ and ${\displaystyle J_{p,q}({\boldsymbol n}):=\sum_ {\substack{| \boldsymbol i|=p \\ \boldsymbol 0 \leq \boldsymbol i \leq \boldsymbol n}} \boldsymbol I^ {\boldsymbol i} }$ for $\boldsymbol n \in H_{p, q}$. For a subset $\Delta \subset H_{p, q}$, we set $$\Lambda_{\Delta}(p, q):=\sum_{\boldsymbol n \in \Delta} \ell_R(R/J_{p, q}({\boldsymbol n})). $$ As in \cite{Ha2}, we consider the following special subsets of $H_{p, q}$, which will play a basic role in our computation of $\Lambda(p, q)$. For $p, q>0$ and $k=1, \dots , r$, let $$\Delta_{p, q}^{(k)}:=\{\boldsymbol n \in H_{p, q} \mid n_1, \dots , n_k>p, n_{k+1}+ \dots + n_r \leq p \}. $$ Then the function $\Lambda_{\Delta_{p, q}^{(k)}}(p, q)$ can be described explicitly as follows. \ \begin{Proposition}\label{2.1} $($\cite[Proposition 2.3]{Ha2}$)$ Let $p, q>0$ with $q \geq (p+1)r$ and let $k=1, \dots , r$. Then $$\Lambda_{\Delta_{p, q}^{(k)}}(p, q)=\sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}} {q-(k-1)p-1-(n_{k+1}+\dots +n_r) \choose k-1} \ell_R(R/{\frak a}), $$ where ${\frak a}$ is an ideal depending on $n_{k+1}, \dots , n_r: $ $$\displaystyle{ {\frak a}:=(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j}}. $$ \end{Proposition} Here we make a slightly different description of the above mentioned basic functions $\Lambda_{\Delta_{p, q}^{(k)}}(p, q)$. To state it, we first recall some elementary facts about the ordinary Buchsbaum-Rim functions and multiplicities of a direct sum of cyclic modules. 
The ordinary Buchsbaum-Rim function $\lambda(p)$ of $C=R/I_1 \oplus \dots \oplus R/I_r$ (we will often denote it $\lambda_C(p)$ to emphasize the defining module $C$) can be expressed as follows: \begin{eqnarray*} \lambda(p)&=&\ell_R(S_p/M^p)\\ &=&\sum_ {\substack{\boldsymbol i \geq \boldsymbol 0 \\ | \boldsymbol i|=p}} \ell_R(R/\boldsymbol I^ {\boldsymbol i}) \\ &=&\sum_ {\substack{\boldsymbol i \geq \boldsymbol 0 \\ | \boldsymbol i|=p}} \ell_R(R/I_1^{i_1} \cdots I_r^{i_r}). \end{eqnarray*} In particular, if we consider the case where $I_1=\dots = I_r=:I$, then $$\lambda(p)={p+r-1 \choose r-1}\ell_R(R/I^p). $$ The function $\ell_R(R/I^p)$ is just the Hilbert-Samuel function of $I$ so that it is a polynomial for all large enough $p$, and one can write $$\ell_R(R/I^p)=\frac{e(R/I)}{d!}p^d+(\mbox{lower terms}), $$ where $e(R/I)$ is the usual Hilbert-Samuel multiplicity of $I$. Therefore, the ordinary Buchsbaum-Rim function can be expressed as $$\lambda(p)=\frac{e(R/I)}{d!(r-1)!}p^{d+r-1}+(\mbox{lower terms}). $$ This implies the following elementary formula for the ordinary Buchsbaum-Rim multiplicity: \begin{equation}\label{ordinaryBuchsbaum-Rim} e(C)=e(\underbrace{R/I \oplus \dots \oplus R/I}_{r})={d+r-1 \choose r-1}e(R/I). \end{equation} Now, let me give another description of $\Lambda_{\Delta_{p, q}^{(k)}}(p, q)$. \begin{Proposition}\label{2.2} Let $p, q>0$ with $q \geq (p+1)r$ and let $k=1, \dots , r$. 
Then \begin{multline*} \Lambda_{\Delta_{p, q}^{(k)}}(p, q)= {q-(k-1)p-1 \choose k-1}\lambda_{L_k}(p) \\ -\sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}} \sum_{i=0}^{n_{k+1}+\dots +n_r-1}{q-(k-1)p-2-i \choose k-2} \ell_R(R/{\frak a}), \end{multline*} where $\displaystyle{L_k:=R/[I_1+\dots +I_k] \oplus \bigoplus_{j=k+1}^{r}R/[I_1+\dots +I_k+I_j]}$ is a direct sum of $(r-k+1)$ cyclic modules and $\displaystyle{{\frak a}:=(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j}}$ is an ideal depending on $n_{k+1}, \dots , n_r$. \end{Proposition} \begin{proof} The case where $k=1$ follows from Proposition \ref{2.1}. Indeed, \begin{eqnarray*} \Lambda_{\Delta_{p, q}^{(1)}}(p, q) &=& \sum_{\stackrel{n_{2}, \dots , n_r \geq 0}{n_{2}+ \dots +n_r \leq p}} \ell_R\Big( R \big/I_1^{p-(n_2+\dots +n_r)} \prod_{j=2}^r (I_1+I_j)^{n_j} \Big) \\ &=& \sum_{\substack{\boldsymbol i \geq \boldsymbol 0 \\ | \boldsymbol i|=p}} \ell_R \big( R/I_1^{i_1} (I_1+I_2)^{i_2} \cdots (I_1+I_r)^{i_r} \big) \\ &=&\lambda_{L_1}(p). \end{eqnarray*} Suppose that $k \geq 2$. By using an elementary combinatorial formula ${m-\ell \choose n}={m \choose n}-\sum_{i=0}^{\ell-1} {m-\ell+i \choose n-1}$, one can see that \begin{eqnarray*} &&{q-(k-1)p-1-(n_{k+1}+\dots +n_r) \choose k-1}\\ &=&{q-(k-1)p-1 \choose k-1}-\sum_{j=0}^{n_{k+1}+\dots +n_r-1}{q-(k-1)p-1-(n_{k+1}+\dots+ n_r)+j \choose k-2} \\ &=&{q-(k-1)p-1 \choose k-1}-\sum_{j=0}^{n_{k+1}+\dots +n_r-1}{q-(k-1)p-2+j-(n_{k+1}+\dots +n_r-1) \choose k-2} \\ &=&{q-(k-1)p-1 \choose k-1}-\sum_{i=0}^{n_{k+1}+\dots +n_r-1}{q-(k-1)p-2-i \choose k-2}. 
\end{eqnarray*} By Proposition \ref{2.1}, we can write the function $\Lambda_{\Delta_{p, q}^{(k)}}(p, q)$ as follows: \begin{eqnarray*} &&\Lambda_{\Delta_{p, q}^{(k)}}(p, q)\\ &=&\sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}} {q-(k-1)p-1-(n_{k+1}+\dots +n_r) \choose k-1} \ell_R(R/{\frak a})\\ &=&\sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}} \left[ {q-(k-1)p-1 \choose k-1}-\sum_{i=0}^{n_{k+1}+\dots +n_r-1}{q-(k-1)p-2-i \choose k-2} \right] \ell_R(R/{\frak a})\\ \end{eqnarray*} \begin{eqnarray*} &=& {q-(k-1)p-1 \choose k-1} \sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}} \ell_R(R/{\frak a}) \\ &&\hspace{4cm} -\sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}} \sum_{i=0}^{n_{k+1}+\dots +n_r-1}{q-(k-1)p-2-i \choose k-2} \ell_R(R/{\frak a}) \\ &=&{q-(k-1)p-1 \choose k-1} \lambda_{L_k}(p) \\ &&\hspace{4cm} -\sum_{\stackrel{n_{k+1}, \dots , n_r \geq 0}{n_{k+1}+ \dots +n_r \leq p}} \sum_{i=0}^{n_{k+1}+\dots +n_r-1}{q-(k-1)p-2-i \choose k-2} \ell_R(R/{\frak a}), \\ \end{eqnarray*} where $\displaystyle{L_k:=R/[I_1+\dots +I_k] \oplus \bigoplus_{j=k+1}^{r}R/[I_1+\dots +I_k+I_j]}$ is a direct sum of $(r-k+1)$ cyclic modules and $\displaystyle{{\frak a}:=(I_1+\dots +I_k)^{p-(n_{k+1}+\dots +n_r)} \prod_{j=k+1}^r(I_1+\dots +I_k+I_j)^{n_j}}$ is an ideal depending on $n_{k+1}, \dots , n_r$. \end{proof} \section{A computation of the Buchsbaum-Rim functions} In this section, we compute the function $\Lambda(p,q)$ by improving the method in \cite{Ha2} towards a formula for $e^{r-2}(R/I_1\oplus \dots \oplus R/I_r)$. The notation used here is the same as in \cite{Ha2}. See also \cite[Section 3]{Ha2} for more detailed observations. In order to compute the multiplicity defined by the asymptotic function $\Lambda(p, q)$, we may assume that $q \geq (p+1)r \gg 0.$ In what follows, let $p, q$ be fixed integers satisfying $q \geq (p+1)r \gg 0$. We put $H:=H_{p, q}$ for short.
Then the set $H$ can be divided into $r$ regions $$H=\coprod_{k=1}^r H^{(k)}, $$ where $H^{(k)}:=\{ \boldsymbol n \in H \mid {}^{\sharp} \{ i \mid n_i > p \}=k \}. $ Moreover, we divide each $H^{(k)}$ into ${r \choose k}$ regions $$H^{(k)}=\coprod_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} D_A^{(k)}, $$ where $D_A^{(k)}:=\{ {\boldsymbol n} \in H^{(k)} \mid n_i >p \ \mbox{for} \ i \notin A, n_i \leq p \ \mbox{for} \ i \in A \}$ and $D_{\emptyset}^{(r)}=H^{(r)}$. Then $$H=\coprod_{k=1}^r \coprod_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} D_A^{(k)}. $$ Let us illustrate this decomposition when $r=3$. In Figure \ref{pic1} below, $H^{(3)}=D_{\emptyset}^{(3)}$ is the dotted region, $H^{(2)}=D_{\{1\}}^{(2)} \coprod D_{\{2\}}^{(2)} \coprod D_{\{3\}}^{(2)}$ is the unpatterned region, and $H^{(1)}=D_{\{1, 2\}}^{(1)} \coprod D_{\{1, 3\}}^{(1)} \coprod D_{\{2, 3\}}^{(1)}$ is the lined region. \begin{figure}[h] \includegraphics[clip, trim=30 500 0 80]{Fig1.ps} \caption{A decomposition of $H$ when $r=3$}\label{pic1} \end{figure} Therefore, the computation of $\Lambda(p, q)$ reduces to that of each $\Lambda_{D_A^{(k)}} (p, q)$: \begin{eqnarray*} \Lambda(p, q)&=&\sum_{k=1}^r \Lambda_{H^{(k)}}(p, q)\\ &=&\sum_{k=1}^r \sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} \Lambda_{D_A^{(k)}} (p, q). \end{eqnarray*} When $k=r$, $D_{\emptyset}^{(r)}=H^{(r)}=\Delta_{p, q}^{(r)}$, so that we obtain the explicit description of $\Lambda_{H^{(r)}}(p, q)$ by Proposition \ref{2.2}. Similarly, when $k=r-1$, $D_{\{r\}}^{(r-1)}=\Delta_{p, q}^{(r-1)}$, so that we obtain the explicit description of $\Lambda_{D_{\{r\}}^{(r-1)}}(p, q)$ by Proposition \ref{2.2} and hence that of $\Lambda_{H^{(r-1)}}(p, q)$. \begin{Proposition}\label{3.1} We have the following description of $\Lambda_{H^{(k)}}(p, q)$ when $k=r, r-1$.
\begin{enumerate} \item The case where $k=r: $ $$\Lambda_{H^{(r)}} (p, q)={q-(r-1)p-1 \choose r-1} \lambda_{L}(p), $$ where $L:=R/[I_1+\dots +I_r]$ is a cyclic module. \item The case where $k=r-1: $ \begin{multline*} \Lambda_{H^{(r-1)}} (p, q)={q-(r-2)p-1 \choose r-2} \sum_{j=1}^r \lambda_{L_j}(p) \\ -\sum_{j=1}^r \sum_{n=0}^p \sum_{i=0}^{n-1} {q-(r-2)p-2-i \choose r-3} \ell_R(R/{\frak a}_j(n)) \end{multline*} where $L_j:=R/[I_1+\dots +\hat{I_j}+ \dots +I_r] \oplus R/[I_1+\dots +I_r]$ is a direct sum of two cyclic modules and $\displaystyle{{\frak a}_j(n):=(I_1+\dots +\hat{I_j}+ \dots +I_r)^{p-n} (I_1+\dots +I_r)^n}$ is an ideal depending on $j$ and $n$. \end{enumerate} \end{Proposition} \begin{proof} These follow directly from Proposition \ref{2.2}. \end{proof} We now turn to the remaining functions $\Lambda_{H^{(k)}}(p, q)$ for $k=1, 2, \dots, r-2$. These cases are more involved than the cases $k=r, r-1$. Suppose that $k=1, 2, \dots , r-2$ and let $A$ be a subset of $[r]$ with ${}^{\sharp} A=r-k$. Then we divide the set $D_A^{(k)}$ into two parts as follows: $$D_A^{(k)}=E_{A-}^{(k)} \coprod E_{A+}^{(k)}, $$ where $$E_{A-}^{(k)}:=\{ {\boldsymbol n} \in D_A^{(k)} \mid \sum_{i \in A} n_i \leq p \}, $$ $$E_{A+}^{(k)}:=\{ {\boldsymbol n} \in D_A^{(k)} \mid \sum_{i \in A} n_i > p \}. $$ Let $$H_{-}^{(k)}:=\coprod_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} E_{A-}^{(k)},$$ $$H_{+}^{(k)}:=\coprod_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} E_{A+}^{(k)}. $$ Then $$\Lambda_{H^{(k)}}(p, q)=\Lambda_{H_{-}^{(k)}}(p, q)+\Lambda_{H_{+}^{(k)}}(p, q). $$ Let us illustrate this decomposition when $r=3$. In Figure \ref{pic2} below, $H_{-}^{(1)}=E_{\{1, 2\}-}^{(1)} \coprod E_{\{1, 3\}-}^{(1)} \coprod E_{\{2, 3\}-}^{(1)}$ is the lined region, and $H_{+}^{(1)}=E_{\{1, 2\}+}^{(1)} \coprod E_{\{1, 3\}+}^{(1)} \coprod E_{\{2, 3\}+}^{(1)}$ is the dotted region.
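The disjointness of these pieces does not depend on the precise shape of $H=H_{p,q}$: for any finite set of exponent vectors, sorting each $\boldsymbol n$ first by $k={}^{\sharp}\{i \mid n_i>p\}$ and $A=\{i \mid n_i \leq p\}$, and then by whether $\sum_{i \in A} n_i \leq p$, yields a partition. The following brute-force check is a sketch only; the sample set $S$ is an illustrative stand-in, not the actual $H_{p,q}$ (which is defined earlier in the paper).

```python
from itertools import product

r, p = 3, 2

# Illustrative stand-in for H_{p,q}: vectors in a small box having at
# least one coordinate n_i > p (every element of H lies in some H^{(k)}).
S = [n for n in product(range(2 * p + 2), repeat=r)
     if any(ni > p for ni in n)]

def piece(n):
    """Label (k, A, sign) of the part E_{A+-}^{(k)} containing n."""
    A = tuple(i for i in range(r) if n[i] <= p)
    k = r - len(A)
    sign = '-' if sum(n[i] for i in A) <= p else '+'
    return k, A, sign

labels = [piece(n) for n in S]
# Each vector receives exactly one label, so the pieces partition S,
# with k running over 1..r and #A = r - k throughout.
assert len(labels) == len(S)
assert all(1 <= k <= r and len(A) == r - k for k, A, _ in labels)
```

(The $\pm$ split is applied uniformly here for simplicity; in the text it is only needed for $k \leq r-2$, and for $k=r$ the sum over $A=\emptyset$ is empty, so the sign is trivially $-$.)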
\ \begin{figure}[h] \includegraphics[clip, trim=30 500 0 80]{Fig2.ps} \caption{A decomposition of $H^{(1)}$ when $r=3$}\label{pic2} \end{figure} Here we note that $E_{\{k+1, \dots , r\}-}^{(k)}=\Delta_{p, q}^{(k)}$ for any $k=1, 2, \dots , r-2$. Thus, the function $\Lambda_{H_{-}^{(k)}}(p, q)$ can be expressed explicitly, just as $\Lambda_{H^{(r)}}(p, q)$ and $\Lambda_{H^{(r-1)}}(p, q)$ were. \begin{Proposition}\label{3.2} For any $k=1, 2, \dots, r-2$, we have the following description. \begin{multline*} \Lambda_{H_{-}^{(k)}} (p, q) ={q-(k-1)p-1 \choose k-1} \sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} \lambda_{L_A}(p) \\ -\sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} \sum_{\stackrel{n_j \geq 0 (j \in A)}{(\sum_{j \in A} n_j) \leq p}} \sum_{i=0}^{(\sum_{j \in A} n_j)-1} {q-(k-1)p-2-i \choose k-2} \ell_R(R/{\frak a}), \end{multline*} where $ \displaystyle{ L_A:=\bigg( R\Big/ \Big[\sum_{s \in [r] \setminus A} I_s \Big] \bigg) \oplus \bigoplus_{j \in A} \bigg( R \Big/ \Big[ \sum_{s \in [r] \setminus A} I_s +I_j \Big] \bigg) } $ is a direct sum of $(r-k+1)$ cyclic modules and $\displaystyle{{\frak a}:=\Big(\sum_{s \in [r] \setminus A} I_s \Big)^{p-(\sum_{j \in A} n_j)} \prod_{j \in A}\Big(\sum_{s \in [r] \setminus A} I_s+I_j \Big)^{n_j}}$ is an ideal depending on $A$ and $n_j$ $(j \in A)$. \end{Proposition} \begin{proof} This follows directly from Proposition \ref{2.2}. \end{proof} On the other hand, the function $\Lambda_{H_{+}^{(k)}}(p, q)$ is more complicated than $\Lambda_{H_{-}^{(k)}}(p, q)$. We do not obtain an explicit description, but we do have the following inequality. \begin{Proposition}\label{3.3} For any $k=1, 2, \dots , r-2$, there exists a polynomial $g^{\circ}_{k}(X) \in \mathbb Q[X]$ of degree $d+r-k$ such that $$\Lambda_{H_{+}^{(k)}}(p, q) \leq {q-(k-1)p-1 \choose k-1}g^{\circ}_{k}(p). $$ \end{Proposition} \begin{proof} This follows from \cite[Lemma 3.5]{Ha2}.
\end{proof} Here we consider the functions $g_k(p)$ and $h_k(p, q)$ which appeared in Propositions \ref{3.1} and \ref{3.2}; they will be used in the next section. For any $k=1, \dots , r-1$, we define \begin{align} g_k(p)&:=\sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} \lambda_{L_A}(p) \label{polyg} \\ h_k(p, q)&:=\sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=r-k}} \sum_{\stackrel{n_j \geq 0 (j \in A)}{(\sum_{j \in A} n_j) \leq p}} \sum_{i=0}^{(\sum_{j \in A} n_j)-1} {q-(k-1)p-2-i \choose k-2} \ell_R(R/{\frak a}) \end{align} where $ \displaystyle{ L_A:=\bigg( R\Big/ \Big[\sum_{s \in [r] \setminus A} I_s \Big] \bigg) \oplus \bigoplus_{j \in A} \bigg( R \Big/ \Big[ \sum_{s \in [r] \setminus A} I_s+I_j \Big] \bigg) } $ is a direct sum of $(r-k+1)$ cyclic modules and $\displaystyle{{\frak a}:=\Big(\sum_{s \in [r] \setminus A} I_s \Big)^{p-(\sum_{j \in A} n_j)} \prod_{j \in A}\Big(\sum_{s \in [r] \setminus A} I_s+I_j \Big)^{n_j}}$. When $k=r$, we set $g_r(p)=\lambda_{R/[I_1+\dots +I_r]}(p)$ and $h_r(p, q)=0$. Note that for $p, q \gg 0$, $g_k(p)$ is a polynomial function of degree $d+r-k$, and $h_k(p, q)$ is a non-negative integer-valued function. Propositions \ref{3.2} and \ref{3.3} then imply the following. \begin{Corollary}\label{3.4} For any $k=1, 2, \dots , r-2$, there exists a polynomial $f_{k}(X) \in \mathbb Q[X]$ of degree $d+r-k$ such that $$\Lambda_{H^{(k)}}(p, q) \leq {q-(k-1)p-1 \choose k-1}f_{k}(p). $$ \end{Corollary} \begin{proof} By Propositions \ref{3.2} and \ref{3.3}, \begin{eqnarray*} \Lambda_{H^{(k)}}(p, q)&=&\Lambda_{H_{-}^{(k)}}(p, q)+\Lambda_{H_{+}^{(k)}}(p, q) \\ &\leq& {q-(k-1)p-1 \choose k-1} g_k(p)-h_k(p, q)+{q-(k-1)p-1 \choose k-1} g_k^{\circ}(p) \\ &=&{q-(k-1)p-1 \choose k-1} \big(g_k(p)+g^{\circ}_k(p)\big)-h_k(p, q)\\ &\leq &{q-(k-1)p-1 \choose k-1} (g_k(p)+g^{\circ}_k(p)). \end{eqnarray*} Thus, $f_k(X):=g_k(X)+g_k^{\circ}(X)$ is our desired polynomial.
\end{proof} \section{Proof of Theorem \ref{main}} We now give the proof of Theorem \ref{main}. In this section, we work in the same situation and under the same notation as in the previous sections. For $k=1, 2, \dots , r$, we consider the following function: $$F_k(p, q):=\Lambda(p, q)-\sum_{i=1}^k {q-(r-i)p-1 \choose r-i} g_{r-i+1}(p), $$ which is a polynomial function for $p, q \gg 0$ whose total degree is at most $d+r-1$. We begin with the following. \begin{Proposition}\label{limit} Suppose that $p$ is a large enough fixed integer. Then $$\lim_{q \to \infty} \frac{1}{q^{r-2}} F_2(p, q)=0. $$ \end{Proposition} \begin{proof} Fix $p \gg 0$. By Proposition \ref{3.1} and Corollary \ref{3.4}, we have the following equalities and inequality. \begin{eqnarray*} F_2(p, q)+h_{r-1}(p, q)&=&\Lambda(p, q)-\Lambda_{H^{(r)}}(p, q)-\Lambda_{H^{(r-1)}}(p, q)\\ &=&\sum_{k=1}^{r-2}\Lambda_{H^{(k)}}(p, q)\\ &\leq & \sum_{k=1}^{r-2} {q-(k-1)p-1 \choose k-1} f_k(p). \end{eqnarray*} Hence, we have $$-h_{r-1}(p, q) \leq F_2(p, q) \leq \sum_{k=1}^{r-2} {q-(k-1)p-1 \choose k-1} f_k(p). $$ Therefore, it is enough to show that \begin{align} & \lim_{q \to \infty} \frac{1}{q^{r-2}} \sum_{k=1}^{r-2} {q-(k-1)p-1 \choose k-1} f_k(p)=0, \ \mbox{and} \label{lim1}\\ & \lim_{q \to \infty} \frac{1}{q^{r-2}} h_{r-1}(p, q)=0.\label{lim2} \end{align} The first assertion (\ref{lim1}) is clear because the degree of the polynomial function $$\sum_{k=1}^{r-2} {q-(k-1)p-1 \choose k-1} f_k(p)$$ with respect to $q$ is at most $(r-2)-1=r-3$. We now show the second assertion (\ref{lim2}).
Then one can see that \begin{eqnarray*} h_{r-1}(p, q) &=& \sum_{j=1}^r \sum_{n=0}^p \sum_{i=0}^{n-1} {q-(r-2)p-2-i \choose r-3} \ell_R(R/{\frak a}_j(n)) \\ &\leq & \sum_{j=1}^r \sum_{n=0}^p n {q-(r-2)p-2 \choose r-3} \ell_R(R/{\frak a}_j(n)) \\ &\leq & \sum_{j=1}^r \sum_{n=0}^p p {q-(r-2)p-2 \choose r-3} \ell_R(R/{\frak a}_j(n)) \\ &= & p {q-(r-2)p-2 \choose r-3} \sum_{j=1}^r \sum_{n=0}^p \ell_R(R/{\frak a}_j(n)), \\ \end{eqnarray*} where $\displaystyle{{\frak a}_j(n):=(I_1+\dots +\hat{I_j}+ \dots +I_r)^{p-n} (I_1+\dots+ I_r)^n}$. Note that $$\sum_{j=1}^r \sum_{n=0}^p \ell_R(R/{\frak a}_j(n))=\sum_{j=1}^r \lambda_{L_j}(p)$$ is a sum of the ordinary Buchsbaum-Rim functions of two cyclic modules, where $$L_j=R/[I_1+\dots +\hat{I_j}+\dots +I_r] \oplus R/[I_1+\dots +I_r]. $$ Hence, noting that $h_{r-1}(p, q) \geq 0$, we have that $$0 \leq h_{r-1}(p, q) \leq {q-(r-2)p-2 \choose r-3} u(p)$$ for some polynomial function $u(p)$ of degree $(d+1)+1=d+2$. Therefore, $$\lim_{q \to \infty} \frac{1}{q^{r-2}} {q-(r-2)p-2 \choose r-3} u(p)=0$$ so that $\lim_{q \to \infty} \frac{1}{q^{r-2}} h_{r-1}(p, q)=0. $ \end{proof} We are now ready to prove Theorem \ref{main}. \begin{proof}[Proof of Theorem \ref{main}] The degree of $\Lambda(p, q)$ with respect to $q$ is at most $r-1$ so that one can write $$\Lambda(p, q)=\sum_{i=0}^{r-1} a_i q^i $$ where each $a_i$ is a polynomial function of $p$ with degree at most $d+r-1-i$. Similarly, we can write \begin{align} {q-(r-1)p-1 \choose r-1} g_r(p)&=\sum_{j=0}^{r-1} b_j q^j \notag \\ {q-(r-2)p-1 \choose r-2} g_{r-1}(p)&=\sum_{k=0}^{r-2} c_k q^k \notag \end{align} where each $b_j$ (resp. $c_k$) is a polynomial function of $p$ with degree at most $d+r-1-j$ (resp. $d+r-1-k$). Then $$F_2(p, q)=(a_{r-1}-b_{r-1})q^{r-1}+(a_{r-2}-b_{r-2}-c_{r-2})q^{r-2}+ (\mbox{lower terms in} \ q). $$ By Proposition \ref{limit}, we have the equalities as polynomials of $p$, \begin{align} a_{r-1}&=b_{r-1}, \ \mbox{and} \label{eq1} \\ a_{r-2}&=b_{r-2}+c_{r-2}. 
\label{eq2} \end{align} Note that the first equality (\ref{eq1}) implies the formula $e^{r-1}(C)=e(R/[I_1+\dots +I_r])$, which is our previous result in \cite{Ha2}. We then look at the second equality (\ref{eq2}). Since the total degree of $\Lambda(p, q)$ is $d+r-1$ and its coefficient of $p^{d+1}q^{r-2}$, which equals $\frac{e^{r-2}(C)}{(d+1)!(r-2)!}$, is non-zero, the polynomial $a_{r-2}$ is of the form: $$a_{r-2}=\frac{e^{r-2}(C)}{(d+1)!(r-2)!}p^{d+1}+(\mbox{lower terms in } p). $$ Since $g_r(p)=\lambda_{R/[I_1+\dots +I_r]}(p)$ is the Hilbert-Samuel function of $I_1+\dots +I_r$, \begin{eqnarray*} &&{q-(r-1)p-1 \choose r-1} g_r(p) \\ &=&{q-(r-1)p-1 \choose r-1} \left( \frac{e(R/[I_1+\dots +I_r])}{d!}p^d+(\mbox{lower terms in } p )\right) \\ &=&\frac{(q-(r-1)p)^{r-1}}{(r-1)!} \cdot \frac{e(R/[I_1+\dots +I_r])}{d!}p^d+(\mbox{lower terms}) \\ &=&\frac{e(R/[I_1+\dots +I_r])}{d!(r-1)!} p^dq^{r-1} -\frac{(r-1)e(R/[I_1+\dots +I_r])}{d!(r-2)!}p^{d+1}q^{r-2}+(\mbox{lower terms in $q$}) \end{eqnarray*} so that $$b_{r-2}=-\frac{(r-1)e(R/[I_1+\dots +I_r])}{d!(r-2)!}p^{d+1}. $$ Similarly, since $g_{r-1}(p)=\sum_{j=1}^r \lambda_{L_j}(p)$ and its normalized leading coefficient is $$E_{r-1}:=E_{r-1}(I_1, \dots , I_r):=\sum_{j=1}^r e(L_j), $$ where $$L_j= R/[I_1+\dots +\hat{I_j}+\dots +I_r] \oplus R/[I_1+\dots +I_r], $$ we have \begin{eqnarray*} {q-(r-2)p-1 \choose r-2} g_{r-1}(p) &=&{q-(r-2)p-1 \choose r-2} \left( \frac{E_{r-1}}{(d+1)!}p^{d+1}+(\mbox{lower terms in $p$}) \right) \\ &=&\frac{(q-(r-2)p)^{r-2}}{(r-2)!} \cdot \frac{E_{r-1}}{(d+1)!}p^{d+1}+(\mbox{lower terms}) \\ &=&\frac{E_{r-1}}{(d+1)!(r-2)!} p^{d+1}q^{r-2}+(\mbox{lower terms in $q$}). \end{eqnarray*} Therefore, we get $$c_{r-2}=\frac{E_{r-1}}{(d+1)!(r-2)!} p^{d+1}. $$ By comparing the coefficients of $p^{d+1}$ in equation (\ref{eq2}), we obtain the equality $$\frac{e^{r-2}(C)}{(d+1)!(r-2)!}=-\frac{(r-1)e(R/[I_1+\dots +I_r])}{d!(r-2)!}+\frac{E_{r-1}(I_1, \dots , I_r)}{(d+1)!(r-2)!}.
$$ Multiplying the above equation by $(d+1)!(r-2)!$, we obtain the desired formula. \end{proof} As noted in the proof, the argument for Theorem \ref{main} recovers our previous result in \cite{Ha2}. Moreover, the formula obtained for $e^{r-2}(C)$ can be viewed as a natural generalization of the Kirby-Rees formula given in \cite{KR2}. \begin{Corollary}\label{cor} Let $I_1, \dots , I_r$ be ${\frak m}$-primary ideals in $R$ and assume that $I_1, \dots , I_{r-1} \subset I_r$, that is, $I_r$ is the largest ideal. Then we have the formula $$e^{r-2}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/[I_1+\dots +I_{r-1}] \oplus R/I_r). $$ In particular, if $I_1 \subset I_2 \subset \dots \subset I_r$, then $$e^{r-2}(R/I_1 \oplus \dots \oplus R/I_r)=e(R/I_{r-1} \oplus R/I_r). $$ \end{Corollary} \begin{proof} Suppose that $I_1, \dots , I_{r-1} \subset I_r$. Then by Theorem \ref{main}, \begin{eqnarray*} e^{r-2}(C) &=& \sum_{j=1}^r e( R/[I_1+\dots +\hat{I_j}+\dots +I_r] \oplus R/[I_1+\dots +I_r] ) \\ && \hspace{5.5cm} -(d+1)(r-1)e(R/[I_1+\dots +I_r]) \\ &=& e(R/[I_1+ \dots +I_{r-1}] \oplus R/I_r)+(r-1)e(R/I_r \oplus R/I_r) \\ && \hspace{7.5cm} -(d+1)(r-1)e(R/I_r) \\ &=& e(R/[I_1+\dots +I_{r-1}] \oplus R/I_r)+(r-1)(d+1)e(R/I_r) \\ && \hspace{7.5cm} -(d+1)(r-1)e(R/I_r) \\ &=& e(R/[I_1+\dots +I_{r-1}] \oplus R/I_r). \end{eqnarray*} Here the third equality follows from the elementary formula (\ref{ordinaryBuchsbaum-Rim}). \end{proof} Before closing this article, we would like to record a few observations on the remaining multiplicities. We first recall the polynomial function $F_k(p, q)$ defined at the beginning of this section: $$F_k(p, q):=\Lambda(p, q)-\sum_{i=1}^k {q-(r-i)p-1 \choose r-i} g_{r-i+1}(p). $$ The key to our proof of Theorem \ref{main} is the fact that $\deg_q F_2(p, q) \leq r-3$ (Proposition \ref{limit}). It would be interesting to know for which $k$ this kind of property holds. \begin{Question}\label{question} Let $p$ be a fixed large enough integer.
Then for which $k=1, 2, \dots , r-1$ does the following hold? $$\lim_{q \to \infty} \frac{1}{q^{r-k}} F_k(p, q)=0. $$ In other words, is the degree of $F_k(p, q)$ with respect to $q$ at most $r-k-1$? \end{Question} This holds when $k=2$ (and also $k=1$) by Proposition \ref{limit}. We are interested in the remaining cases. Suppose that $k \geq 3$. An affirmative answer to Question \ref{question} would tell us that for any $1 \leq j \leq k$, the $(r-j)$th associated Buchsbaum-Rim multiplicity $e^{r-j}(C)$ is determined by the polynomial \begin{equation} \sum_{i=1}^{k} {q-(r-i)p-1 \choose r-i} g_{r-i+1}(p). \label{expectedpoly} \end{equation} We would then be able to describe the multiplicity $e^{r-j}(C)$ as a sum of the ordinary Buchsbaum-Rim multiplicities of a direct sum of at most $(r-j)$ cyclic modules in the same manner. Here we record the expected formula. Note that the polynomial $g_{r-i+1}(p)$ defined in (\ref{polyg}) is of the form $$g_{r-i+1}(p)=\frac{1}{(d+i-1)!} \sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=i-1}} e(L_A) \cdot p^{d+i-1}+\mbox{(lower terms)}$$ where $ \displaystyle{ L_A:=\bigg( R\Big/ \Big[\sum_{s \in [r] \setminus A} I_s \Big] \bigg) \oplus \bigoplus_{j \in A} \bigg( R \Big/ \Big[ \sum_{s \in [r] \setminus A} I_s+I_j \Big] \bigg) } $. We put $$E_{r-i+1}:=E_{r-i+1}(I_1, \dots , I_r):=\sum_{\stackrel{A \subset [r]}{{}^{\sharp}A=i-1}} e(L_A).$$ Then for any $1 \leq j \leq k$, the coefficient of $p^{d+j-1}q^{r-j}$ in the polynomial (\ref{expectedpoly}) is $$\sum_{i=1}^j \frac{E_{r-i+1}}{(d+i-1)!(r-i)!} {r-i \choose r-j} \big( -(r-i) \big)^{j-i}. $$ If Question \ref{question} is answered affirmatively, then this coefficient coincides with $$\frac{e^{r-j}(C)}{(d+j-1)!(r-j)!} $$ so that we obtain a formula for $e^{r-j}(C)$. We can therefore ask the following.
\begin{Question}\label{conj} Under the same notation as above, does the formula $$e^{r-j}(R/I_1 \oplus \dots \oplus R/I_r)=\sum_{i=1}^j {d+j-1 \choose j-i} \big( -(r-i) \big)^{j-i} E_{r-i+1}(I_1, \dots , I_r)$$ hold? \end{Question} This is affirmative when $j=1$ (\cite[Theorem 1.3]{Ha}) and $j=2$ (Theorem \ref{main}). Note that an affirmative answer to Question \ref{question} for some $k$ implies an affirmative answer to Question \ref{conj} for every $1 \leq j \leq k$.
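The passage from the displayed coefficient of $p^{d+j-1}q^{r-j}$ to the closed formula in Question \ref{conj} involves only binomial identities, and it can be sanity-checked numerically (a sketch with arbitrary stand-in values for the $E_{r-i+1}$; note that for $j=2$ the closed form reduces to $E_{r-1}-(d+1)(r-1)E_r$, matching Theorem \ref{main}):

```python
from math import comb, factorial

def coeff_sum(d, r, j, E):
    """Coefficient of p^{d+j-1} q^{r-j} in (expectedpoly), as displayed
    in the text.  E[i] holds an arbitrary stand-in value of E_{r-i+1}."""
    return sum(E[i] * comb(r - i, r - j) * (-(r - i)) ** (j - i)
               / (factorial(d + i - 1) * factorial(r - i))
               for i in range(1, j + 1))

def closed_form(d, r, j, E):
    """The candidate formula for e^{r-j}(C) in Question 2."""
    return sum(comb(d + j - 1, j - i) * (-(r - i)) ** (j - i) * E[i]
               for i in range(1, j + 1))

d, r = 3, 5
E = {i: 10 * i + 7 for i in range(1, r)}   # arbitrary sample multiplicities
for j in range(1, r - 1):
    # Multiplying the coefficient by (d+j-1)!(r-j)! yields the closed form.
    lhs = coeff_sum(d, r, j, E) * factorial(d + j - 1) * factorial(r - j)
    assert abs(lhs - closed_form(d, r, j, E)) < 1e-6
# j = 2 recovers Theorem main: E_{r-1} - (d+1)(r-1) E_r  (E[2], E[1] here)
assert closed_form(d, r, 2, E) == E[2] - (d + 1) * (r - 1) * E[1]
```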
\section{Introduction} Recent years have witnessed a rapid increase in the demand for real-time video streaming~\cite{cisco}. Live video streams are published and watched by different applications (e.g., Twitch, Kwai, Douyu) at any time, from anywhere, and under any network environment. Due to the complexity and stochasticity of network conditions, transmitting a video stream with high bitrate and low latency has become the fundamental challenge in the real-time video streaming scenario. Many rate control approaches have been proposed to tackle this problem, such as loss-based approaches (TFRC~\cite{handley2002tcp}, RAP~\cite{752152}), delay-based approaches (Vegas~\cite{brakmo1995tcp}, LEDBAT (over UDP)~\cite{rossi2010ledbat}), and model-based approaches (Google Congestion Control (GCC)~\cite{carlucci2016analysis}, Rebera~\cite{kurdoglu2016real}). They all share the same strategy: select the highest bitrate that the network condition permits. However, because a high bitrate does not always translate into high video quality, this strategy may waste considerable bandwidth. For example, if a video consists of dark scenes with few objects, a low bitrate may still provide satisfactory perceptual video quality while saving substantial bandwidth, as illustrated in Figure~\ref{fig:vmaf}(a). In this paper, we propose QARC (video Quality Awareness Rate Control), a novel deep-learning-based rate control algorithm aiming to obtain high video quality and low latency. Because fixed rules fail to handle effectively the complicated scenarios created by varying network conditions and diverse video content, we leverage a DRL-based method to select the future video bitrate, which adapts automatically to the variety of its inputs.
In detail, QARC uses a DRL method to train a neural network that selects the bitrate for future video frames based on previously observed network status and historical video frames. However, if we feed raw pictures directly into the state, the state space suffers from ``state explosion''~\cite{Clarke2012}. To overcome this, we carefully divide this complex RL model into two feasible and useful models: one is the Video Quality Prediction Network (VQPN), which predicts future video quality from previous video frames; the other is Video Quality Reinforcement Learning (VQRL). VQRL uses A3C~\cite{mnih2016asynchronous}, a DRL method, to train the neural network. The inputs of VQRL are the previously observed network status and the future video quality predicted by VQPN, and the output is the bitrate for the next time interval that achieves high video quality and low latency. We design training methodologies for these two neural networks respectively. To train VQPN, in addition to some general test video clips, we build a dataset consisting of various types of videos, including movies, live-cast shows, and music videos. For training VQRL, we propose an offline network simulator that accelerates training by emulating real-world network environments with a trace-driven dataset. We then collect a corpus of network traces for the simulator from both packet-level traces and chunk-level public traces. After fixing the architectures of the two neural networks, we compare QARC with existing approaches; trace-driven emulation shows that QARC outperforms them, with improvements in average video quality of 18\% - 25\% and decreases in average queuing delay of 23\% - 45\%.
Besides that, by comparing the performance of QARC with a baseline representing the offline optimum for high bitrate and low latency over different network conditions and videos, we find that in all considered scenarios, despite a decrease in average video quality of only 4\% - 9\%, QARC reduces the sending rate by 46\% to 60\% and the average queuing delay by 40\% to 50\%. In summary, our contributions are as follows. \begin{itemize} \item Unlike previous work, we propose a novel perspective for evaluating QoE: optimizing video quality rather than video bitrate during the entire video session. \item To the best of our knowledge, we are the first to establish a deep reinforcement learning (DRL) model that selects the sending bitrate for future video frames by jointly considering perceptual video quality and observed network status in the real-time video streaming scenario. \item Due to the complexity of the input state, we divide the neural network into two parts: the first is a neural network that precisely predicts future video quality from previous video frames; the second is an RL model that determines the proper bitrate based on the output of the first model. By using the predicted video quality from the first part instead of raw video frames, the state space of the RL model is reduced efficiently.
\end{itemize} \begin{figure} \centering \begin{minipage}{0.5\linewidth} \centering \subfigure[A sample video clip with a static video background~\cite{beyourself}]{\includegraphics[width=0.5\textwidth]{figs/0_txt} \includegraphics[width=0.5\textwidth]{figs/1_0}} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \subfigure[A sample video clip with a dynamic video scene\cite{KiboutekiRefrain}]{\includegraphics[width=0.5\textwidth]{figs/1_txt} \includegraphics[width=0.5\textwidth]{figs/0_0}} \end{minipage} \begin{minipage}{0.5\linewidth} \centering \subfigure[A sample video clip with both static and dynamic video scenes\cite{ILoveIEmbrace}]{\includegraphics[width=0.5\textwidth]{figs/2_txt} \includegraphics[width=0.5\textwidth]{figs/2_0}} \end{minipage} \caption{These figures illustrate our motivation: in the real-time live streaming scenario, high video bitrate is commonly equated with high video quality; however, in some circumstances high video quality requires only a low bitrate.} \label{fig:vmaf} \end{figure} \section{Motivation} In this section, we start by designing an experiment to answer a fundamental question: as video encoding technology improves, how does the correlation between video quality and video bitrate change? \label{sec:qualityandbitrate} To answer this, we establish a testbed to assess the video quality score of selected videos at given encoding bitrates. The selected videos consist of three video clips, representing, respectively, a video with a static scene (live-cast), a video with a dynamic scene (live concert), and a video with a hybrid of static and dynamic scenes (music video). In our experiment, we use Video Multi-Method Assessment Fusion (VMAF), a perceptual video quality assessment algorithm based on a support vector machine (SVM)~\cite{rassool2017vmaf}. We compare the video quality score of each video across different encoders.
In detail, we use three video encoders in our experiments: x264~\cite{x264}, x265~\cite{x265}, and AV1~\cite{av1}. The first two encoders are in popular use today, and the last is the state-of-the-art video encoder proposed by Google. As illustrated in Figure~\ref{fig:vmaf}, comparing VMAF scores across encoders, videos, and encoding bitrates shows that as the encoding bitrate increases, the rate of increase in video quality score decreases. Moreover, improvements in encoder technology do not eliminate this phenomenon. As a result, in the real-time live streaming scenario, blindly selecting a high bitrate greatly increases the burden on network transmission while yielding little improvement in video quality. Inspired by this, we propose a novel perspective that aims to optimize perceptual video quality rather than video bitrate during the entire video session. \begin{figure} \centerline{\includegraphics[width=0.5\linewidth]{figs/overview}} \caption{QARC's System Architecture} \label{fig:overview} \vspace{-10pt} \end{figure} \section{System Architecture} We start by introducing the conventional end-to-end transmission process for real-time video streaming. The system contains a sender and a receiver, and its transport protocol mainly consists of two channels: the streaming channel and the feedback message channel. At the beginning, the sender deploys a UDP socket channel to send the real-time video streaming packets $P = \{p_0,p_1,\cdots,p_k\}$, denoted as a packet train~\cite{sato2017experimental}, to the receiver through the streaming channel. The receiver then feeds the observed network status back to the sender through the feedback channel. Based on this information, the sender selects the bitrate for the next time period. As shown in Figure~\ref{fig:overview}, on top of the conventional real-time video streaming system architecture, we propose QARC, which is placed on the sender side.
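The sender-receiver loop just described can be sketched as a toy simulation (everything here — the link model, the feedback fields, and the simple increase/decrease rule — is an illustrative assumption, not the QARC policy):

```python
import random

random.seed(0)

def link(send_rate, capacity):
    """Toy bottleneck: report goodput and loss for one feedback interval."""
    recv_rate = min(send_rate, capacity)
    loss = 0.0 if send_rate <= capacity else (send_rate - capacity) / send_rate
    return recv_rate, loss

# Streaming channel: the sender pushes packets at `bitrate` (Mbps); the
# feedback channel returns (recv_rate, loss) each interval.
bitrate = 1.0
capacities = [random.uniform(0.5, 2.0) for _ in range(50)]
for cap in capacities:
    recv, loss = link(bitrate, cap)        # receiver-side feedback message
    # Illustrative sender rule: back off on loss, probe upward otherwise.
    bitrate = max(0.1, bitrate * 0.85) if loss > 0 else min(3.0, bitrate * 1.05)
assert 0.1 <= bitrate <= 3.0
```

QARC replaces the hand-tuned rule in the last line with a learned policy, as described next.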
Motivated by the unbalanced growth of video quality and video bitrate described in Section~\ref{sec:qualityandbitrate}, we design an RL model to ``learn'' the correlation among the previous video frames, the network status, and the best future bitrate. However, using raw pictures directly as inputs causes ``state explosion''~\cite{Clarke2012}; moreover, such a model would be hard to train and validate in an acceptable time. To overcome this, we carefully divide the complex RL model into two feasible and useful models: \textbf{Video Quality Prediction Network (VQPN)}, an end-to-end deep learning model that predicts future video quality metrics based on historical video frames; and \textbf{Video Quality Reinforcement Learning (VQRL)}, which uses A3C, an effective actor-critic method that trains two neural networks to select bitrates for future video frames based on network status observations and the future video quality metrics predicted by VQPN. \begin{figure} \centerline{\includegraphics[width=0.5\linewidth]{figs/vqpn}} \caption{VQPN Architecture Overview} \label{fig:VQP} \end{figure} \subsection{Video Quality Prediction Network (VQPN)} To help the RL model select a proper encoding bitrate for the next frame, we first need to let the model ``know'' the relationship between the bitrate and the corresponding video quality. However, this form of prediction is quite challenging, because the perceptual video quality is closely related to the video itself. As shown in Figure~\ref{fig:vmaf}, the video type, brightness, and number of objects all have a great impact on the correlation between bitrate and VMAF. Motivated by the effectiveness of neural networks in predicting time-series data, we design the video quality prediction network (VQPN), which helps the RL model predict the perceptual video quality of future frames.
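The temporal-prediction idea behind VQPN can be sketched in a few lines of NumPy. This is a toy stand-in, not the VQPN implementation: random weights, a single tanh recurrence in place of the double-layered recurrent unit, and random vectors in place of the CNN's per-frame image features.

```python
import numpy as np

rng = np.random.default_rng(0)

def rnn_step(h, x, W, U, b):
    """One simple recurrent update: h' = tanh(W x + U h + b)."""
    return np.tanh(W @ x + U @ h + b)

k, feat_dim, hid = 5, 8, 16            # history length, feature and hidden sizes
W = rng.normal(0, 0.1, (hid, feat_dim))
U = rng.normal(0, 0.1, (hid, hid))
b = np.zeros(hid)
V = rng.normal(0, 0.1, (4, hid))       # one output per candidate bitrate (4 here)

frames = rng.normal(size=(k, feat_dim))  # stand-in per-frame feature vectors
h = np.zeros(hid)
for x in frames:                         # fold the k past frames in order
    h = rnn_step(h, x, W, U, b)

# Predicted quality per candidate bitrate, squashed into [0, 1] as with
# the normalized VMAF score.
pred = 1 / (1 + np.exp(-(V @ h)))
assert pred.shape == (4,)
```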
Figure~\ref{fig:VQP} describes VQPN's neural network architecture, which mainly consists of a layer that extracts image features through a convolutional neural network (CNN) and a layer that captures temporal features via a recurrent neural network (RNN). Details are as follows. \textbf{Video quality metric:} We use a mean video quality metric to describe the quality of the video over a period. For each raw video frame $f_i$ in time-slot $t$, the video quality score $V_{f_i,bitrate}$ is computed from the raw video frame and the bitrate at which it will be encoded; the mean score $V_{t,bitrate}$ is then defined as the average value of $V_{f,bitrate}$. In our study, we use the mean VMAF score, a score specifically formulated by Netflix to correlate strongly with subjective MOS scores, to describe the video quality of video frames. In particular, we normalize the score into the range $[0,1]$. \textbf{Input:}~VQPN takes state inputs $F_i = [f_{i-k}, f_{i-k+1},\cdots,$ $f_{i}]$ to its neural network, in which $f_i$ denotes the $i$-th sampled video frame. \textbf{Extracting image features:} VQPN uses CNN layers to extract frame features, which capture the spatial information of each video frame in the input $F_i$. \textbf{Capturing temporal features:} Upon extracting frame features, VQPN uses a double-layered recurrent layer~\cite{chung2014empirical} to further extract temporal characteristics of the video frames $F_i$ over the past $k$ sequences. \textbf{Output:} The outputs of VQPN are predictions of the video quality in the next time slot $t+1$ for each candidate bitrate, denoted as $V_{t+1}$. \textbf{Loss function:} We use the mean squared error (MSE) as the loss function; in addition, we add a regularization term to reduce the risk of overfitting on the training set.
Let $\hat{V_{t}}$ denote the real vector of video quality scores of the video at time $t$. The loss function can then be written as (Eq.~\ref{eq:loss}), where $\lambda$ is the regularization coefficient. \begin{align} L_t(V;\theta) = \frac{1}{N}\sum|V_{t} - \hat{V_{t}}|^{2} + \lambda||\theta||^{2} \label{eq:loss} \end{align} \subsection{Video Quality Reinforcement Learning (VQRL)} In our study, we aim to let the neural network ``learn'' a video bitrate selection policy from observations instead of using preset rules in the form of fine-tuned heuristics. Specifically, our approach is based on RL. The sender, serving as the agent in the RL problem, observes a set of metrics including the future video quality and the previous network status as the state. The neural network then selects an action as the output, which denotes the video bitrate of the next time-slot. The goal is to find the policy that maximizes the quality of experience (QoE) perceived by the user. In our scheme, QoE is influenced by video quality, latency, and smoothness. As shown in Figure~\ref{fig:VQRL-INTRO}, we formulate the ``video quality first'' real-time video streaming problem within the A3C framework, naming it video quality reinforcement learning (VQRL). Its components are as follows: \begin{figure} \centerline{\includegraphics[width=0.4\linewidth]{figs/a3c}} \caption{The Actor-Critic algorithm that VQRL uses to generate sending bitrate selection policies} \label{fig:VQRL-INTRO} \vspace{-15pt} \end{figure} \textbf{State:} We consider metrics that can be obtained by both the sender and the receiver from the feedback message session.
VQRL's learning agent pushes the input state of time-slot $t$, $s_t = \{p,v,s,r,d,l\}$, into the neural network, where $p$ denotes the past sending video quality; $v$ represents the future video quality predicted by VQPN; $s$ is the video sending rate of the past $k$ sequences, which is equal to the throughput measured on the uplink of the sender; $r$ represents the receiving bitrate of the past $k$ sequences measured by the receiver; $d$ is the delay gradient measured between the sender and receiver over the recent $k$ sequences; and $l$ is the packet loss ratio of the previous $k$ sequences. To better estimate the network condition in our scenario, we need to precisely measure the queuing delay of each packet. However, because the clocks on the two sides are unsynchronized, such measurements are unreliable. Motivated by \cite{carlucci2016analysis}, we instead use the delay gradient to solve this problem. More details can be found in \citep{carlucci2016analysis,HuangZZS18}. In addition, we treat the receiving bitrate as a signal. The Fast Fourier Transform (FFT) can then be used to decompose the signal into a complex-valued function of frequency, whose absolute value represents the amount of that frequency present in the original function and whose complex argument is the phase offset of the basic sinusoid at that frequency~\cite{frigo1999a}. As a result, we add additional input features obtained by decomposing the receive-rate sequence through the FFT. The results validating this improvement are discussed in Section~\ref{sec:VQRL_exp}; \textbf{Action:} Upon receiving the state, the agent takes an action; the policy is the guide that tells the agent which action to select in the RL problem. In general, the action space is discrete, and the output of the policy network is defined as a probability distribution $f(s_t, a_t)$, the probability of selecting action $a_t$ in state $s_t$.
In this paper, the action space contains the candidate sending bitrates for the next time-slot $t$. In traditional RL problems, the state space is small and can be represented in tabular form, and there are many effective algorithms for solving such problems, such as Q-learning and SARSA~\cite{sutton1998reinforcement}. In our problem, however, the state space is fairly large, e.g., the loss rate and the receiving bitrate are continuous numbers, so it is impossible to store the state in tabular form. To overcome this barrier, we use a neural network~\cite{hagan1996neural} to represent the policy; the weights of the neural network, denoted $\theta$ in this paper, are called the policy parameters. In recent research, the technique of combining neural networks and RL has been widely used to solve large-state-space RL problems~\citep{silver2016mastering,mao2017neural} and has shown exceptional power; \textbf{Reward:}~Our reward~(QoE) is described in Section~\ref{sec:qoe}; \textbf{Training:} In an RL problem, after taking a specific action in state $s_t$, the agent receives a corresponding reward $r_t$, and its goal is to find the action in each state that maximizes the accumulated reward; accordingly, the policy should be changed in the direction of achieving this goal. In this paper, we use A3C~\cite{mnih2016asynchronous}, a state-of-the-art actor-critic RL algorithm, as the fundamental algorithm of our system; in this algorithm, policy training is performed with the policy gradient algorithm. The key idea of the policy gradient algorithm is to change the parameters in the direction of the gradient, i.e., the direction in which the accumulated reward increases.
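A discrete policy of this form can be sketched as a softmax over the candidate bitrates. The logits below are hypothetical placeholders for the policy network's raw output, and the greedy selection is for illustration only (a trained agent samples from the distribution):

```python
import numpy as np

BITRATES = [300, 500, 800, 1100, 1400]  # candidate sending bitrates (Kbps)

def policy(logits):
    """Softmax over the action space: f(s_t, a_t) gives the
    probability of selecting each candidate bitrate in state s_t."""
    z = np.exp(logits - np.max(logits))   # subtract max for numerical stability
    return z / z.sum()

logits = np.array([0.1, 0.4, 2.0, 0.3, 0.2])  # hypothetical network output
probs = policy(logits)
action = BITRATES[int(np.argmax(probs))]      # greedy pick, for illustration
```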
The gradient of the accumulated reward with respect to the policy parameters $\theta$ can be written as: \begin{align} \nabla_{\theta} E_{\pi_{\theta}}\Bigl[\sum_{t=0}^{\infty} \gamma^t r_t \Bigr]=E_{\pi_{\theta}}\bigl[\nabla_{\theta}\log\pi_{\theta}(s,a)A^{\pi_{\theta}}(s,a)\bigr] \end{align} We can use $\nabla_\theta \log\pi_\theta(s,a)A^{\pi_\theta}(s,a)$ as an unbiased estimate of this gradient, where $A(s_t,a_t)$ is called the advantage of action $a_t$ in state $s_t$ and satisfies $A(a_t, s_t)=Q(a_t,s_t)-V(s_t)$. Here $V(s_t)$ is the estimate of the value function of state $s_t$, and $Q(a_t, s_t)$ is the value of taking action $a_t$ in state $s_t$, which can also be written as: \begin{align} Q(a_t,s_t)=r_t+\gamma V(s_{t+1}|\theta_{v}) \end{align} \noindent Thus, the policy parameters are updated as: \begin{align} \theta \gets \theta + \alpha \sum_{t}\nabla_\theta \log\pi_{\theta}(s_t,a_t)A(s_t,a_t) \end{align} \noindent in which the parameter $\alpha$ represents the learning rate. To calculate $A(s_t, a_t)$, we first need $V(s_t)$, which we estimate with the value network. The value network aims to give a reasonable estimate of the expected accumulated reward from state $s_t$, written as $V(s_t|\theta_v)$. Following the same line of thought, the value network also uses a neural network to represent the large state space.
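The one-step estimate of $Q$ and the advantage defined above can be computed directly; the reward, discount factor, and value estimates below are hypothetical values, not numbers from the paper:

```python
GAMMA = 0.99  # discount factor (hypothetical value)

def advantage(r_t, v_s, v_s_next, gamma=GAMMA):
    """A(a_t, s_t) = Q(a_t, s_t) - V(s_t), using the one-step
    estimate Q(a_t, s_t) = r_t + gamma * V(s_{t+1})."""
    q = r_t + gamma * v_s_next
    return q - v_s

# hypothetical reward and value-network outputs
a = advantage(r_t=1.0, v_s=5.0, v_s_next=5.5)
```

A positive advantage means the sampled action did better than the value network expected, so the policy-gradient update increases its probability; a negative advantage decreases it.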
In this paper, we use n-step Q-learning to update the value network parameters \cite{mnih2016asynchronous}. At each step, the error between the estimate and the true value can be represented as $Err_t=(r_t+\gamma V(s_{t+1}|\theta_v)-V(s_t|\theta_v))^2$, where $V(s_t|\theta_v)$ is the estimate of $V(s_t)$. To reduce $Err_t$, the parameters $\theta_v$ are changed in the direction of its negative gradient; in A3C, the gradients are summed over $t$, so the value network is updated as: \begin{align} \theta_v \gets \theta_v - \alpha' \sum_t \nabla_{\theta_v} Err_t \end{align} \noindent where $\alpha'$ is the learning rate. Inspired by~\cite{mao2017neural,mnih2016asynchronous}, we also add the entropy of the policy to the objective of the policy network, which effectively discourages convergence to suboptimal policies; see~\cite{mnih2016asynchronous} for details. The update of $\theta$ is therefore rewritten as: \begin{align} \theta \gets \theta + \alpha \sum_t \nabla_{\theta} \log\pi_\theta(s_t,a_t)A(s_t,a_t)+\beta \nabla_\theta H(\pi_\theta(\cdot|s_t)) \end{align} \noindent where $\beta$ is a hyper-parameter and $H(\cdot)$ is the entropy of the policy. After convergence, the value network is discarded, and we use only the policy network to make decisions; \textbf{Multi-agent training:} To accelerate the training process, as suggested by \cite{mnih2016asynchronous}, we extend VQRL's single-agent training to multi-agent training. Multi-agent training consists of two parts: a central agent and a group of forward-propagation agents. Each forward-propagation agent runs only the forward pass of the policy and critic on its state inputs, using the neural network model received from the central agent at each step, and then sends an $n$-dimensional vector containing $\{state, action, reward\}$ to the central agent. The central agent uses the actor-critic algorithm to compute gradients and then updates its neural network model.
Finally, the central agent pushes the newest model to each forward-propagation agent. Note that this happens asynchronously among all agents; for instance, there is no locking between agents. By default, VQRL uses 8 forward-propagation agents and 1 central agent; \textbf{Training with a network simulator:} \label{sec:simulator} To train VQRL, we first considered training our neural network model under real-world network conditions, e.g., by deploying the model on an edge server: with an increasing number of sessions, the model would eventually converge. However, training the model online is hard to make converge, because RL training should encounter nearly all network states. We therefore decided to train the model on simulated offline networks. Hence, we face a new challenge: how to design a fast-forward network simulator which can precisely compute the latency given a saturated trace and a sending rate? \begin{figure} \centerline{\includegraphics[scale=0.4]{figs/queue}} \caption{The working principle of the network simulator.} \label{fig:queue} \vspace{-20pt} \end{figure} To train our model, our training data should contain the queuing delay rather than the one-way delay. Our simulator should therefore simulate the process of packets arriving and leaving under different network conditions and keep track of the timestamps, from which we can obtain the corresponding queuing delay. Inspired by~\cite{winstein2013stochastic} and~\cite{netravali2015mahimahi:}, we use saturated network traces to generate queuing delay data. As seen in Figure~\ref{fig:queue}, assuming that packet arrivals and departures closely follow a Poisson process~\cite{winstein2013stochastic}, we use the sending bitrate and the bandwidth in the saturated network traces as the arrival rate $\lambda$ and departure rate $\mu$, respectively.
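A minimal sketch of such a simulator is shown below. It is our simplification: a single-server FIFO queue with deterministic example arrivals (Poisson inter-arrival times could be drawn with `random.expovariate(lam)`), where the service rate stands in for the trace bandwidth $\mu$:

```python
def simulate_queue(arrival_times, service_rate):
    """Single-server FIFO queue: each packet's queuing delay is the
    time it waits before service begins. service_rate (packets/s)
    plays the role of the departure rate mu from the saturated trace."""
    service_time = 1.0 / service_rate
    free_at = 0.0                        # time at which the link becomes idle
    delays = []
    for t in arrival_times:
        start = max(t, free_at)          # service starts when link is free
        delays.append(start - t)         # queuing delay for this packet
        free_at = start + service_time
    return delays

# hypothetical arrival timestamps (s); link serves 2 packets/s
delays = simulate_queue([0.0, 0.1, 0.2, 2.0], service_rate=2.0)
```

Packets arriving in a burst (at 0.1s and 0.2s) queue behind earlier ones, while the packet arriving at 2.0s finds the link idle and sees zero queuing delay, which is exactly the per-packet signal the training data needs.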
\begin{figure*} \centering \begin{minipage}{0.30\linewidth} \centering \subfigure[The curves of average reward under different neural network models, including CNN, FNN, and GRU.]{\includegraphics[width=0.9\textwidth]{figs/0}} \end{minipage} \begin{minipage}{0.30\linewidth} \centering \subfigure[Comparing VQRL using the FFT feature with the variant without it.]{\includegraphics[width=0.9\textwidth]{figs/2}} \end{minipage} \begin{minipage}{0.30\linewidth} \centering \subfigure[Sweeping the sequence length and number of filters in VQRL's neural network architecture.]{\includegraphics[width=0.9\textwidth]{figs/1}} \end{minipage} \caption{VQRL's implementation} \label{fig:VQRL} \end{figure*} \section{Evaluation} \subsection{Datasets and Metrics} \textbf{Video dataset:} We train and test VQPN on two video datasets: VideoSet, a large-scale compressed video quality dataset based on JND measurement, and a self-collected video quality dataset consisting of live-casts, music videos, and some short movies. For each video in the datasets, we measure its VMAF at bitrates from 300Kbps to 1400Kbps, with the reference resolution configured as $800\times480$, the default resolution observed by the receiver during real-time live streaming. We generate the VMAF video datasets using both the x264 and x265 encoders. \textbf{Network traces:} \label{sec:networkdataset} To train and evaluate VQRL, we first have to generate saturated network trace datasets. However, such network traces are hard to record, and even public datasets are extremely limited. For example, Cellsim~\cite{winstein2013stochastic} provides only a small number of saturated network traces, which describe cellular network conditions rather than all network environments; this is hardly enough to make our neural network converge.
Thus, we collect datasets in three ways: \begin{itemize} \item \textbf{Packet-level network traces:} We use a proprietary dataset of packet-level live-cast session status from all platform apps of Kwai, collected in January 2018.\footnote{Kwai is a leading platform in China with over 700 million users worldwide; millions of original videos are published on it every day.} Motivated by the one-way-delay estimation method in Ledbat~\cite{rossi2010ledbat}, we generate 2,300 real network traces from the packet-train datasets. \item \textbf{Chunk-level network traces:} We also collect a hybrid network trace dataset which consists of different network datasets, such as FCC~\cite{bworld} and Norway~\cite{riiser2013commute}. The FCC dataset is a broadband dataset, and the Norway dataset was mainly collected in a 3G/HSDPA environment. In short, we generate 1,000 network traces from these datasets. \item \textbf{Synthetic network traces:} We generate a synthetic dataset using a Markovian model in which each state represents an average throughput in the aforementioned range~\cite{mao2017neural}. In this way, we create a dataset of over 500 traces covering a broad set of network conditions. \end{itemize} \textbf{QoE metrics:} We design our Quality of Experience (QoE) metric based on previous schemes. In recent research~\cite{mao2017neural}, QoE metrics are evaluated with four essential factors: received bitrate, loss ratio, latency, and delay gradient, without considering a video quality metric. In this paper, after rethinking the correspondence between video quality and video bitrate, we redefine the QoE metric as Eq.~\ref{eq:qoe} \label{sec:qoe} \begin{align} \texttt{QoE} = \sum_{n=1}^{N}{(V_n -\alpha B_n - \beta D_n)} - \gamma \sum_{n=1}^{N-1}{|V_n - V_{n-1}|} \label{eq:qoe} \end{align} \noindent for a live video with $N$ time-slots.
Here $V_n$ denotes the video quality at time $n$, $B_n$ is the video bitrate the sender selects, and $D_n$ is the delay gradient measured by the receiver. The final term captures the smoothness of the video quality. The coefficients $\alpha$, $\beta$, and $\gamma$ are weights describing their aggressiveness. \begin{table} \begin{center} \begin{tabular}{cc|p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}} \toprule \multirow{2}{*}{\textbf{filter number}} & \multirow{2}{*}{\textbf{hidden units}} & \multicolumn{4}{ c }{\textbf{Learning Rate}} \\ \cline{3-6} & & 1e-3 & \textbf{1e-4} & 1e-5 & 6e-6\\ \midrule 32 & 32 & 4.88 & 5.20 & 4.42 & 4.24 \\ 32 & 128 & 4.40 & 4.28 & 4.24 & 4.13 \\ 64 & 64 & 3.94 & 3.93 & 4.22 & 4.31 \\ 64 & 128 & 4.92 & 4.17 & 4.16 & 4.17 \\ \textbf{128} & \textbf{64} & 4.20 & \textbf{3.80} & 4.17 & 4.23 \\ 128 & 128 & 4.52 & 3.86 & 4.15 & 3.99 \\ \bottomrule \end{tabular} \end{center} \caption{Comparing performance (SMAPE\%) of VQPN with different filter numbers and hidden units. Results are collected under learning rates of 1e-3, 1e-4, 1e-5, and 6e-6 respectively.} \label{table:vqpn} \vspace{-30pt} \end{table} \subsection{Implementation} We now describe the implementation of QARC. In this section, we choose the best hyper-parameters and explain the implementation of VQPN and VQRL respectively. \textbf{Time-slot t:} In this paper, we set the time-slot $t$ to 1s. \textbf{VQPN:} The introduced VQPN helps VQRL predict future video quality, but we have not yet discussed how to set its hyper-parameters. Table~\ref{table:vqpn} shows our results with different settings of the filter number, hidden units, and learning rate. Results are summarized with the symmetric mean absolute percentage error (SMAPE) metric, computed as Eq.~\ref{eq:smape}: \begin{align} \begin{aligned} {\text{SMAPE}}={\frac {100\%}{n}}\sum _{t=1}^{n}{\frac {\left|F_{t}-A_{t}\right|}{(|A_{t}|+|F_{t}|)/2}}. \label{eq:smape} \end{aligned} \end{align} Here $A_t$ is the actual value and $F_t$ is the forecast value.
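As a hedged illustration, the QoE metric of Eq.~\ref{eq:qoe} and the SMAPE error can be computed as in the plain-Python sketch below; the function names and all numeric values are ours, not the paper's:

```python
def qoe(V, B, D, alpha=0.2, beta=1.0, gamma=1.0):
    """QoE = sum(V_n - alpha*B_n - beta*D_n) - gamma * sum|V_n - V_{n-1}|,
    i.e. per-slot quality minus bitrate and delay penalties, minus a
    smoothness penalty on quality changes between consecutive slots."""
    base = sum(v - alpha * b - beta * d for v, b, d in zip(V, B, D))
    smooth = gamma * sum(abs(V[n] - V[n - 1]) for n in range(1, len(V)))
    return base - smooth

def smape(actual, forecast):
    """Symmetric mean absolute percentage error, in percent."""
    terms = [abs(f - a) / ((abs(a) + abs(f)) / 2)
             for a, f in zip(actual, forecast)]
    return 100.0 * sum(terms) / len(terms)
```

For example, `qoe([0.8, 0.9], [0.5, 0.5], [0.01, 0.02])` rewards the quality gain of the second slot but charges for its bitrate, delay gradient, and the quality jump from the first slot.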
Empirically, a filter number of 128, 64 hidden units, and a learning rate of 1e-4 yield the best performance. To sum up, VQPN takes the past $t=5$ time-slots of video and samples 5 frames for each, i.e., $k=25$ previous frames in total, as input to the neural network architecture; each frame has size [64,36] with 3 channels. Features are extracted from each input frame into a 128-dimensional vector by a feature extraction layer. The feature extraction layer is constructed from 5 layers: a conv layer with 64 filters, each of size 5 with stride 1; an average pooling layer with filter size $3\times3$; another conv layer with 64 filters, each of size 3 with stride 1; and a max pooling layer with filter size $2\times2$. Finally, the feature extraction layer passes the features into a hidden layer with 64 neurons. Treating the frame sequence as time series data, a recurrent network is designed to estimate future video quality. VQPN passes the $k = 25$ feature maps to a gated recurrent unit (GRU) layer with 64 hidden units, and the states of that layer are passed to another GRU layer with the same number of hidden units. A hidden layer is then connected to the hidden output of the last GRU layer. The final output of VQPN is a 5-dimensional vector in which each value represents the video quality score at video bitrate $\{300, 500, 800, 1100, 1400\}$ Kbps. During training, we use the Adam gradient optimizer with learning rate $\alpha$ = $10^{-4}$. In this work, we use TensorFlow~\cite{abadi2016tensorflow} to implement this architecture; in particular, we leveraged the TFLearn deep learning library's TensorFlow API to declare VQPN. \textbf{VQRL:} In this section, we describe how we choose the best neural network model for VQRL. First, we design three different models based on FNN (Feedforward Neural Network), CNN, and LSTM (Long Short-Term Memory) respectively.
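As a sanity check on the feature-extraction stack described above, the spatial dimensions can be traced through the layers. We assume valid (no-padding) convolutions and non-overlapping pooling, which the paper does not state explicitly, so the exact sizes are illustrative:

```python
def conv_out(size, kernel, stride=1):
    """Output size along one axis of a valid (no-padding) convolution."""
    return (size - kernel) // stride + 1

def pool_out(size, kernel):
    """Non-overlapping pooling: stride assumed equal to kernel size."""
    return size // kernel

h, w = 64, 36                              # input frame size, 3 channels
h, w = conv_out(h, 5), conv_out(w, 5)      # conv: 64 filters, 5x5, stride 1
h, w = pool_out(h, 3), pool_out(w, 3)      # 3x3 average pooling
h, w = conv_out(h, 3), conv_out(w, 3)      # conv: 64 filters, 3x3, stride 1
h, w = pool_out(h, 2), pool_out(w, 2)      # 2x2 max pooling
```

Under these assumptions the final feature map per frame is $9\times4$ with 64 channels, which the fully connected hidden layer then compresses before the GRU stage.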
We set the sequence length $k = 5$ and use the QoE metric with $\alpha = 0.2$, $\beta = 1.0$, and $\gamma = 1.0$ as the baseline reward. As illustrated in Figure~\ref{fig:VQRL}(a), the CNN model increases the average QoE by about 39\% compared with the LSTM model and about 83\% compared with the FNN model. \begin{figure*} \centering \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/0_1_flv} \end{minipage} \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/4_1_flv} \end{minipage} \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/7_1_flv} \end{minipage} \vspace{-10pt} \caption{Comparing QARC with previously proposed approaches in 4G network environments: the QoE of QARC uses $\alpha=0.2$, $\beta=10.0$, and $\gamma=1.0$. After testing three video clips, results are shown as average queuing delay, average sending rate, and average video quality.} \label{fig:exp3} \centering \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/0_flv} \end{minipage} \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/4_flv} \end{minipage} \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/7_flv} \end{minipage} \vspace{-10pt} \caption{Comparing QARC with different QoE settings against a baseline computed as an offline optimum based on high video bitrate. We evaluate several QARC variants and the baseline in \textbf{broadband network environments}.
Following the process of Figure~\ref{fig:exp3}, after testing three video clips, results are shown as average queuing delay, average sending rate, and average video quality, measured against the performance of the baseline value.} \label{fig:exp1} \centering \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/0_0_flv} \end{minipage} \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/4_0_flv} \end{minipage} \begin{minipage}{0.3\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/7_0_flv} \end{minipage} \vspace{-10pt} \caption{Following the process of Figure~\ref{fig:exp1}, comparing QARC with different QoE settings against the baseline, computed as an offline optimum based on high video bitrate. We evaluate several QARC variants and the baseline in \textbf{4G network environments}.} \label{fig:exp2} \end{figure*} \label{sec:VQRL_exp} Next, we validate the importance of adding the FFT feature to the inputs. We set up two CNN models, one of which includes the FFT feature. We set the sequence length $k=20$ in the same environment as the first experiment. Results are shown in Figure~\ref{fig:VQRL}(b): the CNN model with the FFT feature provides a reward about 29\% higher than the CNN model without it. Finally, we investigate how the CNN parameters affect the output results. In our experiment, the parameter settings are \{$k=5,c=64$\}, \{$k=10,c=64$\}, and \{$k=20,c=128$\}, where $k$ is the input sequence length and $c$ is the CNN channel size. As shown in Figure~\ref{fig:VQRL}(c), the performance increases with $k$ and $c$. However, with \{$k=20,c=128$\}, the average QoE increases by only 1\% compared with \{$k=10,c=128$\}, so in consideration of computational complexity, we finally choose \{$k=10,c=64$\}.
Additionally, the action space size is configured as 5, the same as the output of VQPN. During training, we use the Adam gradient optimizer, with learning rates of $10^{-4}$ and $10^{-3}$ for the actor and critic, respectively. \textbf{Training time:} To measure the cost of predicting future video quality, we profile VQPN's training process. To determine when the network converges, we use the early stopping method. In total, training VQPN takes approximately an hour on a single GTX-1080Ti GPU. To measure the overhead of VQRL's neural network, we also profile its training process. We use 8 agents to update the parameters of the central agent in parallel. The neural network converges in 22 hours, or in less than 5 hours using 20 agents.\footnote{This experiment was run on an AWS instance with 20 CPUs and 140GB of RAM.} \subsection{Experiments and Results} In this section, we set up a real-time video streaming system to experimentally evaluate QARC, using Mahimahi~\cite{netravali2015mahimahi:}, a trace-driven emulator, to simulate various network environments. Our results answer the following questions: \begin{enumerate} \item Compared with previously proposed approaches on different video clips, is QARC the best approach? \item Compared with the baseline algorithm based on high video bitrate and low latency, how much improvement does QARC achieve? \item How do the coefficients $\alpha$, $\beta$, and $\gamma$ affect the outcome of QARC? \end{enumerate} \textbf{QARC vs. existing approaches:} In this experiment, we evaluate QARC against existing heuristic methods on several network traces representing various network conditions, using trace-driven emulation. After running each trace for each approach, we collect the average queuing delay, average video quality, and average sending rate at the receiver.
We compare their performance on different video clips. In this experiment, QARC is compared with Google Hangouts, a well-known video conferencing app, Compound TCP~\cite{ha2008cubic}, and Vegas~\cite{brakmo1995tcp}. As illustrated in Figure~\ref{fig:exp3}, the results show that QARC outperforms the existing approaches, with improvements in average video quality of 18\% - 25\% and decreases in average queuing delay of 23\% - 45\%. Notably, we observe that QARC also saves sending rate while performing well. \textbf{Video quality first vs. bitrate first:} In this experiment, we evaluate QARC with different QoE parameters against the baseline algorithm, whose policy is based on high video bitrate. Specifically, we compare QARC with the baseline algorithm in terms of queuing delay, sending rate, and video quality over the entire video session. As shown in Figure~\ref{fig:exp1} and Figure~\ref{fig:exp2}, QARC outperforms the greedy baseline in both broadband and 4G network environments. In the broadband network environment, despite a reduction in average video quality of 4\% - 9\%, QARC decreases the sending rate by 46\% to 60\% and reduces the average queuing delay\footnote{In this paper, queuing delay is regarded as self-inflicted delay, which is a lower bound on the 95th-percentile end-to-end delay that must be experienced between a sender and receiver, given observed network behavior~\cite{winstein2013stochastic}.} from $0.5s$ to $0.04s$. It is noteworthy that if the footage of the video does not change drastically (Figure~\ref{fig:exp1}(b)), for instance in a video conference scenario, the sending bitrate decreases by 51\% to 62\% while the video quality drops by less than 5\%. We find similar results in 4G network environments; details can be seen in Figure~\ref{fig:exp1}.
\textbf{Influence of $\alpha$, $\beta$ and $\gamma$:} Figure~\ref{fig:exp1} and Figure~\ref{fig:exp2} show the results of QARC with different initial QoE reward parameters. Unsurprisingly, initializing the QoE reward with a small coefficient $\alpha$ yields a larger performance improvement than a bigger $\alpha$ in wired network conditions; in 4G network environments, however, the behavior is quite different. In conclusion, no single parameter setting fits all network conditions. \section{Related Work} \subsection{Real-time Rate Control Methods} Traditional real-time rate control methods have been proposed and applied for about two decades. These schemes are mainly classified into three types: loss-based, delay-based, and model-based bitrate approaches. \textbf{Loss-based:} Loss-based approaches such as TFRC~\cite{handley2002tcp} and the rate adaptation protocol (RAP)~\cite{752152} have been widely used in TCP congestion control. These methods increase the bitrate until packet loss occurs, which means that the actions are always late: by the time packet loss occurs, latency has already increased. Furthermore, using packet loss events as the control signal may make throughput unstable, especially in error-prone environments~\cite{geng2015delay}. \textbf{Delay-based:} Delay-based approaches, which try to adjust the sending rate to control the transmission delay, can be divided into end-to-end delay (RTT) approaches, e.g., TCP Vegas~\cite{brakmo1995tcp}; one-way delay approaches, such as LEDBAT (over UDP) and TCP-LP~\cite{rossi2010ledbat,kuzmanovic2006tcp}; and delay gradient approaches~\cite{carlucci2016analysis}.
\textbf{Model-based:} Model-based bitrate control methods, such as Rebera~\cite{kurdoglu2016real} and GCC~\cite{carlucci2016analysis}, control the sending bitrate based on previously observed network status, including the end-to-end latency and receiving rate measured by the receiver, and the past sending bitrate and loss ratio measured by the sender. \subsection{Video Quality Metrics} \label{sec:videoquality} Video quality is a characteristic that measures the perceived video degradation introduced by a video transmission system. The most commonly used video quality metrics are as follows. \textbf{PSNR:} A traditional signal quality metric~\cite{hore2010image}, directly derived from the mean square error (MSE) or its square root (RMSE). Due to the simplicity and low complexity of its calculation, PSNR remains the most popular evaluation of video quality. However, its result cannot precisely reflect the visual quality perceived by human eyes. \textbf{SSIM:} An image quality metric proposed in 2004 by Wang et al.~\cite{1284395}. Unlike previously proposed video quality evaluation criteria, SSIM uses structural distortion measurement instead of the mean square error. By considering the whole picture, SSIM gives a more appropriate evaluation of the video quality experienced by users. However, SSIM is not a specialized tool for video quality assessment. \textbf{VMAF:} Video Multi-method Assessment Fusion (VMAF)~\cite{rassool2017vmaf} is an objective full-reference video quality metric formulated by Netflix to estimate subjective video quality from a reference and a distorted video sequence. Using machine learning techniques, VMAF provides a single output score in the range $[0,100]$ per video frame. In general, this metric focuses on describing the quality degradation due to compression and rescaling, and it is closer to users' real experience of video quality than previous schemes.
\vspace{-5pt} \subsection{Deep Reinforcement Learning Approaches} Deep reinforcement learning aims to maximize the $reward$ of each $action$ taken by the agent in given $states$ at each step. In recent years, several approaches (e.g.,~\citep{winstein2013stochastic,mao2017neural,DDASH}) have been made to optimize network control algorithms. \textbf{Remy:} Remy~\cite{winstein2013tcp} decides with ``a tabular method'', and it collects experience from a network simulator under network assumptions; however, like all TCP variants, when the real network deviates from Remy's input assumptions, performance degrades. \textbf{Pensieve:} \citeauthor{mao2017neural}~\cite{mao2017neural} develop a system that uses deep reinforcement learning to select bitrates for future video chunks. Unlike most adaptive bitrate (ABR) algorithms, Pensieve does not need any predefined rules or assumptions to make decisions, and it can automatically adjust itself to changes in network conditions. In comparisons with existing ABR algorithms, Pensieve performs very well. \vspace{-5pt} \section{Conclusion} In this paper, we propose QARC, a deep-learning-based rate control algorithm for the real-time video streaming scenario. Unlike previously proposed approaches, we aim for higher video quality at a possibly lower sending rate. Because fixed rules cannot effectively handle the complicated scenarios caused by perplexing network conditions and varied video content, we use deep reinforcement learning to select the future video bitrate, which can adjust itself automatically to changes in its inputs. To reduce the state space of the reinforcement learning model, we split the neural network into two parts and train them respectively. After training on a broad set of network data, we explore the performance of QARC over several network conditions and QoE metrics. We find that QARC outperforms existing rate control algorithms.
\section*{Acknowledgement} We thank the anonymous reviewers for their valuable feedback. The work is supported by the National Natural Science Foundation of China under Grant No. 61472204 and 61521002, Beijing Key Laboratory of Networked Multimedia No. Z161100005016051, and Key Research and Development Project under Grant No. 2018YFB1003703. \section{Introduction} Recent years have witnessed a rapid increase in the requirements of real-time video streaming~\cite{cisco}. Live video streams are being published and watched through different applications (e.g., Twitch, Kwai, Douyu) at any time, from anywhere, and under any network environment. Due to the complicated environment and stochastic properties of various network conditions, transmitting video streams with high video bitrate and low latency has become the fundamental challenge in the real-time video streaming scenario. Many rate control approaches have been proposed to tackle this problem, such as loss-based approaches (TFRC~\cite{handley2002tcp}, RAP~\cite{752152}), delay-based approaches (Vegas~\cite{brakmo1995tcp}, LEDBAT (over UDP)~\cite{rossi2010ledbat}), and model-based approaches~(Google Congestion Control (GCC)~\cite{carlucci2016analysis}, Rebera~\cite{kurdoglu2016real}). They share the same strategy: select as high a bitrate as the network condition permits. However, because high bitrate does not equal high video quality, this strategy may waste considerable bandwidth. For example, if the video footage consists of darkness and few objects, a low bitrate may still provide a satisfactory perceptual video quality while saving substantial bandwidth; an example is shown in Figure~\ref{fig:vmaf}(a). In this paper, we propose QARC (video Quality Aware Rate Control), a novel deep-learning-based rate control algorithm aiming to obtain high video quality and low latency.
Because fixed rules fail to effectively handle the complicated scenarios caused by perplexing network conditions and varied video content, we leverage a DRL-based method to select the future video bitrate, which can adjust itself automatically to the variety of its inputs. In detail, QARC uses a DRL method to train a neural network to select the bitrate for future video frames based on the past observed network status and historical video frames. However, if we directly use raw pictures as state inputs, the state space suffers from ``state explosion''~\cite{Clarke2012}. To overcome this, we carefully divide this complex RL model into two feasible and useful models: one is the Video Quality Prediction Network (VQPN), which predicts future video quality from previous video frames; the other is Video Quality Reinforcement Learning (VQRL). VQRL uses A3C~\cite{mnih2016asynchronous}, a DRL method, to train the neural network. The inputs of VQRL are the past observed network status and the future video quality predicted by VQPN, and the output is a bitrate for the next time-slot that achieves high video quality and low latency. We design training methodologies for the two neural networks respectively. To train VQPN, in addition to some general test video clips, we build a dataset consisting of various types of videos, including movies, live-cast shows, and music videos. To train VQRL, we propose an offline accelerated network simulator that emulates real-world network environments with a trace-driven dataset. We then collect a corpus of network traces for the simulator from both packet-level traces and chunk-level public traces. After settling the architecture of the two neural networks, we compare QARC with existing approaches; results of trace-driven emulation show that QARC outperforms them, with improvements in average video quality of 18\% - 25\% and decreases in average queuing delay of 23\% - 45\%.
Moreover, by comparing the performance of QARC with the baseline, which represents the offline optimum based on high bitrate and low latency, over different network conditions and videos, we find that in all considered scenarios, despite a decrease in average video quality of only 4\% - 9\%, QARC reduces the sending rate by 46\% to 60\% and the average queuing delay by 40\% to 50\%. Our contributions are as follows. \begin{itemize} \item Unlike previous work, we propose a novel perspective for evaluating QoE: optimizing video quality rather than video bitrate over the entire video session. \item To the best of our knowledge, we are the first to establish a deep reinforcement learning (DRL) model that selects the sending bitrate for future video frames by jointly considering perceptual video quality and observed network status in the real-time video streaming scenario. \item Due to the complexity of the input state, we split the neural network into two parts: the first part is a neural network that precisely predicts future video quality from previous video frames; the second part is an RL model that determines the proper bitrate from the output of the first model. By using the video quality output of the first part instead of raw video frames, the state space of the RL model is reduced efficiently.
\end{itemize} \begin{figure} \centering \begin{minipage}{1.0\linewidth} \centering \subfigure[A sample video clip with a static video background~\cite{beyourself}]{\includegraphics[width=0.5\textwidth]{figs/0_txt} \includegraphics[width=0.5\textwidth]{figs/1_0}} \end{minipage} \begin{minipage}{1.0\linewidth} \centering \subfigure[A sample video clip with a dynamic video scene~\cite{KiboutekiRefrain}]{\includegraphics[width=0.5\textwidth]{figs/1_txt} \includegraphics[width=0.5\textwidth]{figs/0_0}} \end{minipage} \begin{minipage}{1.0\linewidth} \centering \subfigure[A sample video clip with both static and dynamic video scenes~\cite{ILoveIEmbrace}]{\includegraphics[width=0.5\textwidth]{figs/2_txt} \includegraphics[width=0.5\textwidth]{figs/2_0}} \end{minipage} \caption{This group of figures shows our motivation: in the real-time live streaming scenario, a high video bitrate is commonly equated with high video quality; however, in some circumstances high video quality requires only a low bitrate.} \label{fig:vmaf} \end{figure} \section{Motivation} In this section, we design an experiment to answer two fundamental questions: \begin{itemize} \item As video encoding technology improves, how does the correlation between video quality and video bitrate change? \item Although neural networks achieve high precision on time-series data, can they also precisely predict network fluctuations, especially without knowing the saturated bandwidth of the entire video session? \end{itemize} \subsection{High Video Quality or High Video Bitrate?} \label{sec:qualityandbitrate} To answer the first question, we establish a testbed to assess the video quality score of selected videos at given encoding bitrates. The selected videos consist of three clips, representing respectively a video with a static scene (live-cast), a video with a dynamic scene (live concert), and a video with a hybrid of static and dynamic scenes (MV).
In our experiment, we use Video Multi-Method Assessment Fusion (VMAF), a perceptual video quality assessment algorithm based on a support vector machine (SVM)~\cite{rassool2017vmaf}. We compare the video quality score of each video under different encoders. In detail, we use three video encoders: x264~\cite{x264}, x265~\cite{x265}, and AV1~\cite{av1}. The first two are popularly used today, and the last is the state-of-the-art video encoder proposed by Google. As illustrated in Figure~\ref{fig:vmaf}, comparing VMAF scores of the different encoders across videos and encoding bitrates, the results show that as the encoding bitrate increases, the rate of increase in video quality score decreases; moreover, refinements in encoder technology do not eliminate this phenomenon. As a result, in the real-time live streaming scenario, blindly selecting a high bitrate greatly increases the burden on network transmission with little enhancement of video quality. Inspired by this, we propose a novel perspective that aims to optimize perceptual video quality rather than video bitrate over the entire video session. \begin{figure}[ht] \centerline{\includegraphics[width=1.0\linewidth]{figs/2018-04-08-04-58-45}} \vspace{-10pt} \caption{The Results of Online-learning} \label{fig:motivation2} \end{figure} \subsection{Estimating Future Network Status using a Neural Network} The second question concerns conventional network congestion control. In the real-time network scenario, accurately estimating the future saturated bandwidth from past network status observations with a neural network remains a challenge. In this paper, we take a machine learning approach to the problem: in detail, we use online learning to train a neural network model to predict the future network status.
Considering the past $k$ time-slots, we define $I_t = \{s,r,d\}$ as the input of the neural network, where $s$ is the sending rate of the past $k$ time-slots measured by the sender, $r$ is the receiving throughput of the past $k$ time-slots collected by the receiver, and $d$ is the delay gradient computed by the receiver at each time-slot. In our experiment, we set $k = 5$. The output is a scalar value describing the throughput of the next time-slot $t+1$; in our problem, this value equals the available bandwidth. The model is mainly constructed as a 1D convolutional network (1D-CNN). To train this model, we propose a network simulator that uses saturated traces to generate network status data; more details are given in Section~\ref{sec:simulator}. In particular, the sending rate is constrained to the range $[0.01,1.8]$ Mbps, which cannot reach the maximum available bandwidth. Figure~\ref{fig:motivation2} illustrates our results on real-world network datasets (Section~\ref{sec:networkdataset}). As shown, the model trained on the synthetic dataset is able to generalize across network conditions, achieving a SMAPE (Eq.~\ref{eq:smape}) score within 11.1\% of the model trained directly on real-world networks, including wired and 4G networks. These results suggest that, in practice, a 1D-CNN neural network can estimate future network status without measuring the available bandwidth. \section{System Architecture} We start by introducing the conventional end-to-end transmission process for real-time video streaming. The system contains a sender and a receiver, and its transport protocol mainly consists of two channels: the streaming channel and the feedback message channel.
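As a concrete illustration of the predictor's input and first layer, the sketch below stacks the past $k=5$ observations of sending rate, receiving throughput, and delay gradient into a $(3,k)$ state tensor and runs a hand-rolled 1-D convolution over it. This is a minimal numpy sketch with toy shapes and random weights, not the TensorFlow implementation used in our system.

```python
import numpy as np

def build_state(send_rates, recv_rates, delay_grads, k=5):
    """Stack the last k observations of sending rate, receiving
    throughput and delay gradient into a (3, k) input tensor I_t."""
    return np.stack([send_rates[-k:], recv_rates[-k:], delay_grads[-k:]])

def conv1d_forward(state, kernels):
    """Valid 1-D convolution of each kernel over the (channels, k) state,
    followed by ReLU; a stand-in for the predictor's first layer."""
    c, k = state.shape
    n_f, _, w = kernels.shape  # (filters, channels, width)
    out = np.zeros((n_f, k - w + 1))
    for f in range(n_f):
        for t in range(k - w + 1):
            out[f, t] = np.sum(kernels[f] * state[:, t:t + w])
    return np.maximum(out, 0.0)  # ReLU activation

# Toy example: k = 5 time-slots, 4 filters of width 3.
rng = np.random.default_rng(0)
state = build_state(rng.random(8), rng.random(8), rng.random(8), k=5)
features = conv1d_forward(state, rng.standard_normal((4, 3, 3)))
print(features.shape)  # (4, 3)
```

In the real model these convolutional features would feed further layers ending in the scalar throughput prediction for time-slot $t+1$.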
At the beginning, the sender deploys a UDP socket channel to send the instant real-time video streaming packets $P = \{p_0,p_1,\cdots,p_k\}$, denoted as a packet train~\cite{sato2017experimental}, to the receiver through the streaming channel. The receiver then feeds the observed network status back to the sender through the feedback channel. Based on this information, the sender selects the bitrate for the next time period. \begin{figure} \centerline{\includegraphics[width=1.1\linewidth]{figs/overview}} \caption{QARC's System Architecture} \label{fig:overview} \end{figure} As shown in Figure~\ref{fig:overview}, on top of the conventional real-time video streaming system architecture, we propose QARC, which is placed on the sender side. Motivated by the unbalanced growth of video quality and video bitrate described in Section~\ref{sec:qualityandbitrate}, we design an RL model to ``learn'' the correlation among the previous video frames, the network status, and the best future bitrate. However, using raw pictures directly as inputs causes ``state explosion''~\cite{Clarke2012}; moreover, such a model would be hard to train and validate in an acceptable time. To overcome this, we carefully divide the complex RL model into two feasible and useful models: the \textbf{Video Quality Prediction Network (VQPN)}, an end-to-end deep learning model that predicts future video quality metrics from historical video frames; and \textbf{Video Quality Reinforcement Learning (VQRL)}, which uses A3C, an effective actor-critic method that trains two neural networks to select bitrates for future video frames based on network status observations and the future video quality metrics predicted by VQPN.
\begin{figure}[ht] \centerline{\includegraphics[width=1.0\linewidth]{figs/vqpn}} \caption{VQPN Architecture Overview} \label{fig:VQP} \end{figure} \subsection{Video Quality Prediction Network (VQPN)} To help the RL model select a proper encoding bitrate for the next frame, we first need to let the model ``know'' the relationship between the bitrate and the corresponding video quality. However, this form of prediction is quite challenging, because perceptual video quality is closely related to the video itself. As shown in Figure~\ref{fig:vmaf}, the video type, brightness, and number of objects all strongly influence the correlation between bitrate and VMAF. Motivated by the effectiveness of neural networks in predicting time-series data, we design the video quality prediction network (VQPN) to help the RL model predict the perceptual video quality of future frames. Figure~\ref{fig:VQP} describes VQPN's neural network architecture, which is mainly composed of a layer that extracts image features through a Convolutional Neural Network (CNN) and another layer that captures temporal features via a Recurrent Neural Network (RNN). Details are as follows. \textbf{Video Quality Metric:} We use the mean video quality metric to describe the quality of the video over a period. For each raw video frame $f_i$ in time-slot $t$, the video quality score $V_{f_i,bitrate}$ is computed from the raw video frame and the bitrate at which it will be encoded; the mean score $V_{t,bitrate}$ is then defined as the average of $V_{f,bitrate}$. In our study, we use the mean VMAF score, a metric formulated by Netflix to correlate strongly with subjective MOS scores, to describe the video quality of video frames. In particular, we normalize the score into the range $[0,1]$.
\textbf{Input:}~VQPN takes state inputs $F_i = [f_{i-k}, f_{i-k+1},\cdots,$ $f_{i}]$ to its neural network, in which $f_i$ denotes the $i$-th sampled video frame. \textbf{Extract image features:} VQPN uses CNN layers to extract frame features, obtaining the spatial information of each video frame in the inputs $F_i$. \textbf{Capture temporal features:} After extracting frame features, VQPN uses a double-layered recurrent layer~\cite{chung2014empirical} to further extract the temporal characteristics of the video frames $F_i$ over the past $k$ sequences. \textbf{Output:} The outputs of VQPN are the predicted video quality assessments in the next time-slot $t+1$ for the candidate bitrates, denoted as $V_{t+1}$. \textbf{Loss function:} We use the mean square error (MSE) as the loss function; in addition, we add a regularization term to reduce the probability of overfitting on the training set. Let $\hat{V_{t}}$ denote the true vector of video quality scores of the video at time $t$. The loss function can then be written as (Eq.~\ref{eq:loss}), where $\lambda$ is the regularization coefficient. \begin{align} L_t(V;\theta) = \frac{1}{N}\sum|V_{t} - \hat{V_{t}}|^{2} + \lambda||\theta||^{2} \label{eq:loss} \end{align} \subsection{Video Quality Reinforcement Learning (VQRL)} In our study, we aim to let the neural network ``learn'' a video bitrate selection policy from observations instead of using preset rules in the form of fine-tuned heuristics. Specifically, our approach is based on RL. The sender, serving as the agent in the RL problem, observes a set of metrics including the future video quality and the previous network status as the state. The neural network then selects an action as the output, which denotes the video bitrate of the next time-slot. The goal is to find the policy that maximizes the quality of experience (QoE) perceived by the user. In our scheme, QoE is influenced by video quality, latency, and smoothness.
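The VQPN training objective of Eq.~\ref{eq:loss} can be sketched numerically as below; the quality vectors and weight vector are illustrative values, not measurements from our datasets.

```python
import numpy as np

def vqpn_loss(v_pred, v_true, theta, lam=1e-3):
    """MSE between predicted and true per-bitrate quality scores,
    plus an L2 penalty lam * ||theta||^2 on the weights (Eq. for L_t)."""
    mse = np.mean((v_pred - v_true) ** 2)
    return mse + lam * np.sum(theta ** 2)

# One 5-dimensional quality vector (one entry per candidate bitrate).
v_pred = np.array([0.60, 0.72, 0.81, 0.88, 0.92])
v_true = np.array([0.58, 0.75, 0.80, 0.90, 0.91])
theta = np.array([0.5, -0.3, 0.1])  # stand-in for network weights
print(round(vqpn_loss(v_pred, v_true, theta, lam=0.0), 6))  # 0.00038
```

With $\lambda>0$ the same call adds the weight penalty, which is what discourages overfitting during training.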
As shown in Figure~\ref{fig:VQRL-INTRO}, we formulate the ``video quality first'' real-time video streaming problem within the A3C framework, named video quality reinforcement learning (VQRL). Its components are as follows. \begin{figure} \centerline{\includegraphics[width=0.7\linewidth]{figs/a3c}} \caption{The Actor-Critic algorithm that VQRL uses to generate sending bitrate selection policies} \label{fig:VQRL-INTRO} \vspace{-15pt} \end{figure} \textbf{State:} We consider the metrics that can be obtained by both sender and receiver from the feedback message session. VQRL's learning agent pushes the input state of time-slot $t$, $s_t = \{p,v,s,r,d,l\}$, into the neural network, where $p$ is the past sending video quality; $v$ is the future video quality predicted by VQPN; $s$ is the video sending rate of the past $k$ sequences, which equals the throughput measured on the uplink of the sender; $r$ is the receiving bitrate of the past $k$ sequences measured by the receiver; $d$ is the delay gradient measured between sender and receiver over the recent $k$ sequences; and $l$ is the packet loss ratio of the previous $k$ sequences. To better estimate the network condition in our scenario, we need to precisely measure the queuing delay of each packet. However, because the clocks on the two sides are unsynchronized, the raw measurements are unreliable. Motivated by \cite{carlucci2016analysis}, we use the delay gradient to solve this problem; more details can be found in \citep{carlucci2016analysis,HuangZZS18}. Besides that, we treat the receiving bitrate as a signal.
Then the Fast Fourier Transform (FFT) can be used to decompose the signal into a complex-valued function of frequency, whose absolute value represents the amount of that frequency present in the original function and whose complex argument is the phase offset of the basic sinusoid at that frequency~\cite{frigo1999a}. As a result, we add additional features to the input, obtained by decomposing the receive-rate sequence through the FFT. The results validating this improvement are discussed in Section~\ref{sec:VQRL_exp}; \textbf{Action:} The agent takes an action upon receiving the state, and the policy guides which action the agent selects in the RL problem. In general, the action space is discrete, and the output of the policy network is defined as a probability distribution $f(s_t, a_t)$: the probability of selecting action $a_t$ in state $s_t$. In this paper, the action space contains the candidate sending bitrates for the next time-slot $t$. In traditional RL problems, the state space is small and can be represented in tabular form, and many effective algorithms exist for this kind of problem, such as Q-learning and SARSA~\cite{sutton1998reinforcement}. In our problem, however, the state space is fairly large, e.g., the loss rate and received bitrate are continuous, so it is impossible to store the state in tabular form. To tackle this barrier, we use a neural network~\cite{hagan1996neural} to represent the policy; the weights of the neural network, denoted $\theta$ in this paper, are called the policy parameters.
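Two of the state features above, the delay gradient and the FFT decomposition of the receive rate, can be sketched as follows. This is a minimal numpy illustration with invented timestamps and rates; the $+1000$ clock offset demonstrates why differencing one-way delays sidesteps unsynchronized clocks.

```python
import numpy as np

def delay_gradients(send_ts, recv_ts):
    """One-way delays contain an unknown clock offset between sender and
    receiver; differencing consecutive delays cancels the offset, which
    is why the delay gradient is usable without synchronized clocks."""
    owd = np.asarray(recv_ts) - np.asarray(send_ts)
    return np.round(np.diff(owd), 6)

def fft_features(recv_rates):
    """Magnitude spectrum of the receive-rate sequence: the absolute
    value of each FFT bin gives the strength of that frequency."""
    return np.abs(np.fft.rfft(recv_rates))

# A receiver clock offset of +1000 units leaves the gradients unchanged.
send = [0.0, 1.0, 2.0, 3.0]
recv = [1000.05, 1001.06, 1002.09, 1003.08]  # offset arrival times
print(delay_gradients(send, recv).tolist())  # [0.01, 0.03, -0.01]

# A receive rate oscillating with period 4 concentrates energy in bin 4.
t = np.arange(16)
mags = fft_features(1.0 + 0.5 * np.sin(2 * np.pi * t / 4))
print(int(np.argmax(mags[1:]) + 1))  # 4
```

In VQRL the FFT magnitudes are appended to the state vector alongside the raw rate history.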
In recent research, the technique of combining neural networks and RL has been widely used to solve large-state-space RL problems~\citep{silver2016mastering,mao2017neural} and has shown exceptional power; \textbf{Reward:}~Our reward~(QoE) is described in Section~\ref{sec:qoe}; \textbf{Training:} In the RL problem, after taking a specific action in state $s_t$, the agent receives a corresponding reward $r_t$, and the goal of the RL agent is to find the action in each state that maximizes the accumulated reward; as a result, the policy should be changed in the direction that achieves this goal. In this paper, we use A3C~\cite{mnih2016asynchronous}, a state-of-the-art actor-critic RL algorithm, as the fundamental algorithm of our system; in this algorithm, policy training is performed with the policy gradient algorithm. The key idea of the policy gradient algorithm is to change the parameters in the direction that increases the accumulated reward, since the gradient points in the direction in which a function increases.
The gradient of the accumulated reward with respect to the policy parameter $\theta$ can be written as: \begin{align} \nabla_{\theta} E_{\pi_{\theta}}\Bigl[\sum_{t=0}^{\infty} \gamma^t r_t \Bigr]=E_{\pi_{\theta}}[\nabla_{\theta}\log \pi_{\theta}(s,a)A^{\pi_{\theta}}(s,a)] \end{align} In practice, the empirical value $\nabla_\theta \log{\pi_\theta(s,a)}A^{\pi_\theta}(s,a)$ is used as an unbiased estimate of this gradient, where $A(s_t,a_t)$ is called the advantage of action $a_t$ in state $s_t$ and satisfies the following equality: \begin{align} A(a_t, s_t)=Q(a_t,s_t)-V(s_t), \end{align} where $V(s_t)$ is the estimate of the value function of state $s_t$ and $Q(a_t, s_t)$ is the value of taking action $a_t$ in state $s_t$, which can also be written as: \begin{align} Q(a_t,s_t)=r_t+\gamma V(s_{t+1}|\theta_v). \end{align} \noindent Thus, the policy parameters are updated as: \begin{align} \theta \gets \theta + \alpha \sum_{t}\nabla_\theta \log\pi_{\theta}(s_t,a_t)A(s_t,a_t), \end{align} \noindent in which the parameter $\alpha$ is the learning rate. To calculate $A(s_t, a_t)$, we first need $V(s_t)$, which we estimate with the value network. The value network aims to give a reasonable estimate of the expected accumulated reward from state $s_t$, written as $V(s_t|\theta_v)$. Following the same line of thought, the value network also uses a neural network to represent the large state space.
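The advantage computation and the resulting policy-gradient step can be sketched numerically as below, with invented reward and value estimates; this is a toy illustration of the update rule, not our A3C training loop.

```python
import numpy as np

def advantage(r_t, v_next, v_curr, gamma=0.99):
    """A(a_t, s_t) = Q(a_t, s_t) - V(s_t),
    with Q(a_t, s_t) = r_t + gamma * V(s_{t+1})."""
    return r_t + gamma * v_next - v_curr

def policy_step(theta, grad_log_pi, adv, alpha=0.01):
    """One policy-gradient update: theta <- theta + alpha * grad(log pi) * A."""
    return theta + alpha * grad_log_pi * adv

# Toy numbers: a positive advantage pushes theta along grad(log pi).
adv = advantage(r_t=1.0, v_next=2.0, v_curr=2.5, gamma=0.9)
theta = policy_step(np.zeros(3), np.array([0.1, -0.2, 0.3]), adv)
print(round(adv, 3))  # 0.3
print(np.round(theta, 6).tolist())  # [0.0003, -0.0006, 0.0009]
```

A negative advantage would move $\theta$ the opposite way, making the sampled action less likely.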
In this paper, we use $n$-step Q-learning to update the network parameters \cite{mnih2016asynchronous}. At each step, the error between the estimate and the true value can be represented as $Err_t=(r_t+\gamma V(s_{t+1}|\theta_v)-V(s_t|\theta_v))^2$, where $V(s_t|\theta_v)$ is the estimate of $V(s_t)$. To reduce $Err_t$, the parameter $\theta_v$ is moved along its negative gradient; in A3C the gradients are summed over $t$, so the value network is updated as: \begin{align} \theta_v \gets \theta_v - \alpha' \sum_t \nabla_{\theta_v} Err_t \end{align} \noindent where $\alpha'$ is the learning rate of the value network. Inspired by~\cite{mao2017neural,mnih2016asynchronous}, we also add the entropy of the policy to the objective of the policy network, which effectively discourages convergence to suboptimal policies; see~\cite{mnih2016asynchronous} for more details. The update of $\theta$ is thus rewritten as: \begin{align} \theta \gets \theta + \alpha \sum_t \bigl(\nabla_\theta \log\pi_{\theta}(s_t,a_t)A(s_t,a_t)+\beta \nabla_\theta H(\pi_\theta(\cdot|s_t))\bigr) \end{align} \noindent where $\beta$ is also a hyper-parameter and $H(\cdot)$ is the entropy of the policy. After convergence, the value network is discarded, and only the policy network is used to make decisions; \textbf{Multi-agent training:} To accelerate the training process, as suggested by \cite{mnih2016asynchronous}, we modify VQRL's single-agent training into multi-agent training. Multi-agent training consists of two parts: a central agent and a group of forward propagation agents. Each forward propagation agent only performs forward passes of the policy and the critic on its state inputs, using the neural network model received from the central agent at each step; it then sends the $n$-dimensional vector containing $\{state, action, reward\}$ to the central agent. The central agent uses the actor-critic algorithm to compute gradients and update its neural network model.
Finally, the central agent pushes the newest model to each forward propagation agent. Note that this happens asynchronously among all agents; for instance, there is no locking between agents. By default, VQRL uses 8 forward propagation agents and 1 central agent; \textbf{Training with a network simulator:} \label{sec:simulator} To train VQRL, we first considered training our neural network model under real-world network conditions, e.g., deploying the model on an edge server; with an increasing number of sessions, the model would eventually converge. However, training the model online is hard to make converge, because RL training should encounter almost all network states. We therefore decided to train the model on simulated offline networks. Hence, we face a new challenge: how to design a fast-forward network simulator that can precisely compute the latency given a saturated trace and a sending rate? \begin{figure} \centerline{\includegraphics[scale=0.4]{figs/queue}} \caption{The working principle of the network simulator.} \label{fig:queue} \vspace{-20pt} \end{figure} To train our model, the training data should consist of the queuing delay rather than the one-way delay. Our simulator should therefore simulate the process of packets arriving and leaving under different network conditions and keep track of the timestamps, from which we obtain the corresponding queuing delay. Inspired by~\cite{winstein2013stochastic} and~\cite{netravali2015mahimahi:}, we use saturated network traces to generate queuing delay data. As seen in Figure~\ref{fig:queue}, assuming the distribution of packet arrivals and departures closely follows a Poisson process~\cite{winstein2013stochastic}, we use the sending bitrate and the bandwidth in the saturated network traces as the arrival rate $\lambda$ and the departure rate $\mu$, respectively.
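The simulator's core idea, packets arriving at rate $\lambda$ and being drained at rate $\mu$, can be sketched as a single-server queue that records the per-packet queuing delay; the rates below are illustrative, not taken from our traces.

```python
import random

def simulate_queue(lmbda, mu, n_packets, seed=42):
    """Single-server queue with Poisson arrivals (rate lmbda, the sending
    rate) and exponential service (rate mu, the trace bandwidth); returns
    the queuing delay of each packet."""
    rng = random.Random(seed)
    t, depart, delays = 0.0, 0.0, []
    for _ in range(n_packets):
        t += rng.expovariate(lmbda)           # arrival time
        start = max(t, depart)                 # wait while server is busy
        delays.append(start - t)               # queuing delay
        depart = start + rng.expovariate(mu)   # departure time
    return delays

# Heavier load (lambda closer to mu) means longer average queuing delay.
light = simulate_queue(lmbda=5.0, mu=10.0, n_packets=5000)
heavy = simulate_queue(lmbda=9.0, mu=10.0, n_packets=5000)
print(sum(light) / len(light) < sum(heavy) / len(heavy))  # True
```

Replaying a saturated trace amounts to varying $\mu$ over time according to the recorded bandwidth while $\lambda$ tracks the candidate sending rate.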
\begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centering \subfigure[Average reward curves for different neural network models, including CNN, FNN, and GRU.]{\includegraphics[width=0.9\textwidth]{figs/0}} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \subfigure[Comparing VQRL with and without the FFT feature.]{\includegraphics[width=0.9\textwidth]{figs/2}} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \subfigure[Sweeping the sequence length and number of filters in VQRL's neural network architecture.]{\includegraphics[width=0.9\textwidth]{figs/1}} \end{minipage} \caption{VQRL's implementation} \label{fig:VQRL} \end{figure*} \section{Evaluation} \subsection{Datasets and Metrics} \textbf{Video dataset:} We train and test VQPN on two video datasets: VideoSet, a large-scale compressed video quality dataset based on JND measurement, and a self-collected video dataset involving live-casts, music videos, and some short movies. For each video in the datasets, we measure its VMAF at bitrates from 300 Kbps to 1400 Kbps, with the reference resolution configured as $800\times480$, the same as the default resolution observed by the receiver during real-time live streaming. We generate the VMAF video datasets using both the x264 and x265 encoders. \textbf{Network traces:} \label{sec:networkdataset} To train and evaluate VQRL, we first have to generate saturated network trace datasets. However, such network traces are hard to record, and even public datasets are extremely limited. For example, Cellsim~\cite{winstein2013stochastic} provides only a small number of saturated network traces describing cellular network conditions rather than all network environments, which can hardly make our neural network converge.
Thus, we collect datasets in two additional ways: \begin{itemize} \item \textbf{Packet-level network traces:} We use a proprietary dataset of packet-level live-cast session status collected in January 2018 from the apps of Kwai on all platforms.\footnote{Kwai is a leading platform in China with over 700 million users worldwide; millions of original videos are published on it every day.} The dataset, recorded as packet trains, consists of over 14 million sessions from 47,000 users covering 50 thousand unique sessions over three days in January 2018. Each session consists of the packet size, packet send time, and packet receive time. Based on the raw data collected, we measure the available bandwidth {ABW/n} of the whole link, where each available-bandwidth sample is obtained from the packet train received on the receiving side in period n. Motivated by the one-way-delay estimation method in Ledbat~\cite{rossi2010ledbat}, we generate 2,300 real network traces from the packet train datasets. \item \textbf{Chunk-level network traces:} We also collect a hybrid network trace dataset composed of different network datasets, such as FCC~\cite{bworld} and Norway~\cite{riiser2013commute}. The FCC dataset is a broadband dataset, and the Norway dataset was mainly collected in a 3G/HSDPA environment. In short, we generate 1,000 network traces from these datasets. \item \textbf{Synthetic network traces:} We generate a synthetic dataset using a Markovian model in which each state represents an average throughput in the aforementioned range~\cite{mao2017neural}. Thus, we create a dataset of over 500 traces covering a broad set of network conditions. \end{itemize} \textbf{QoE metrics:} For a better result, we design a Quality of Experience (QoE) metric based on previous schemes.
In recent research~\cite{mao2017neural}, QoE metrics are evaluated using four essential factors: bitrate received, loss ratio, latency, and delay gradient, without considering a video quality metric. In this paper, after rethinking the correspondence between video quality and video bitrate, we redefine the QoE metric as (Eq.~\ref{eq:qoe}) \label{sec:qoe} \begin{align} \texttt{QoE} = \sum_{n=1}^{N}{(V_n -\alpha B_n - \beta D_n)} - \gamma \sum_{n=1}^{N-1}{|V_n - V_{n-1}|} \label{eq:qoe} \end{align} \noindent for a live video with $N$ time-slots, where $V_n$ denotes the video quality at time $n$, $B_n$ is the video bitrate that the sender selects, and $D_n$ represents the delay gradient measured by the receiver. The final term penalizes changes in video quality to promote smoothness. The coefficients $\alpha$, $\beta$ and $\gamma$ are weights describing the aggressiveness of each term. \begin{table} \begin{center} \begin{tabular}{cc|p{0.8cm}p{0.8cm}p{0.8cm}p{0.8cm}} \toprule \multirow{2}{*}{\textbf{filter number}} & \multirow{2}{*}{\textbf{hidden units}} & \multicolumn{4}{ c }{\textbf{Learning Rate}} \\ \cline{3-6} & & 1e-3 & \textbf{1e-4} & 1e-5 & 6e-6\\ \midrule 32 & 32 & 4.88 & 5.20 & 4.42 & 4.24 \\ 32 & 128 & 4.40 & 4.28 & 4.24 & 4.13 \\ 64 & 64 & 3.94 & 3.93 & 4.22 & 4.31 \\ 64 & 128 & 4.92 & 4.17 & 4.16 & 4.17 \\ \textbf{128} & \textbf{64} & 4.20 & \textbf{3.80} & 4.17 & 4.23 \\ 128 & 128 & 4.52 & 3.86 & 4.15 & 3.99 \\ \bottomrule \end{tabular} \end{center} \caption{Comparing performance (SMAPE\%) of VQPN with different filter numbers and hidden units. Results are collected under learning rates of 1e-3, 1e-4, 1e-5, and 6e-6 respectively.} \label{table:vqpn} \vspace{-30pt} \end{table} \subsection{Implementation} We now describe the implementation of QARC. In this section, we choose the best hyper-parameters and explain the implementation of VQPN and VQRL respectively. \textbf{Time-slot $t$:} In this paper, we set the time-slot $t$ to 1s.
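Returning to the QoE definition of Eq.~\ref{eq:qoe}, a direct sketch with illustrative per-slot values (not measurements from our experiments) is:

```python
def qoe(V, B, D, alpha=0.2, beta=1.0, gamma=1.0):
    """QoE over N time-slots (Eq. qoe): quality minus bitrate and
    delay-gradient penalties, minus a smoothness penalty on changes
    in quality between consecutive slots."""
    base = sum(v - alpha * b - beta * d for v, b, d in zip(V, B, D))
    smooth = sum(abs(b - a) for a, b in zip(V, V[1:]))
    return base - gamma * smooth

V = [0.8, 0.9, 0.7]      # normalized video quality per slot
B = [0.5, 0.8, 0.5]      # sending bitrate, Mbps
D = [0.01, 0.05, 0.02]   # delay gradient
print(round(qoe(V, B, D), 4))  # 1.66
```

Raising $\gamma$ penalizes the quality swing between slots more heavily, while raising $\alpha$ or $\beta$ penalizes bitrate or delay respectively.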
\textbf{VQPN:} The VQPN introduced above helps VQRL predict future video quality, but we have not yet studied how to set its hyper-parameters. Table~\ref{table:vqpn} shows our results with different settings of filter number, hidden units, and learning rate. Results are summarized with the symmetric mean absolute percentage error (SMAPE) metric, computed as Eq.~\ref{eq:smape}: \begin{align} \begin{aligned} {\text{SMAPE}}={\frac {100\%}{n}}\sum _{t=1}^{n}{\frac {\left|F_{t}-A_{t}\right|}{(|A_{t}|+|F_{t}|)/2}}. \label{eq:smape} \end{aligned} \end{align} Here $A_t$ is the actual value and $F_t$ is the forecast value. Empirically, a filter number of 128, 64 hidden units, and a learning rate of 1e-4 yield the best performance. To sum up, VQPN takes the past $t=5$ time-slots of video and samples 5 frames for each, in total $k=25$ previous frames, as input to the neural network; each frame is sized $[64,36]$ with 3 channels. The input frames are then mapped to 128-dimensional feature vectors by a feature extraction layer. The feature extraction layer is constructed with 5 layers: a conv layer with 64 filters, each of size 5 with stride 1; an average pooling layer with a $3\times3$ filter; another conv layer with 64 filters, each of size 3 with stride 1; and a max pooling layer with a $2\times2$ filter. Finally, the feature extraction layer passes the features into a hidden layer with 64 neurons. Treating the frame sequence as time-series data, a recurrent network is designed to estimate future video quality: VQPN passes the $k=25$ feature maps to a gated recurrent unit (GRU) layer with 64 hidden units, whose states are passed to another GRU layer with the same number of hidden units. A hidden layer is then connected to the output of the last GRU layer.
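The SMAPE metric of Eq.~\ref{eq:smape}, used for all entries in Table~\ref{table:vqpn}, can be sketched as below; the quality values are illustrative, not taken from our experiments.

```python
def smape(actual, forecast):
    """Symmetric mean absolute percentage error (Eq. SMAPE), in percent:
    each error is scaled by the mean magnitude of actual and forecast."""
    terms = [abs(f - a) / ((abs(a) + abs(f)) / 2)
             for a, f in zip(actual, forecast)]
    return 100.0 * sum(terms) / len(terms)

actual = [0.80, 0.90, 0.70, 0.85]    # true quality scores
forecast = [0.78, 0.94, 0.71, 0.83]  # predicted quality scores
print(round(smape(actual, forecast), 2))  # 2.67
```

Unlike plain MAPE, the symmetric denominator keeps the metric bounded even when the actual value approaches zero.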
Finally, VQPN outputs a 5-dimensional vector in which each value represents the video quality score at a video bitrate of $\{300, 500, 800, 1100, 1400\}$ Kbps. During training, we use the Adam gradient optimizer with learning rate $\alpha = 10^{-4}$. In this work, we use TensorFlow~\cite{abadi2016tensorflow} to implement this architecture; in particular, we leverage the TFLearn deep learning library's TensorFlow API to declare VQPN. \textbf{VQRL:} In this section, we describe how to choose the best neural network model for VQRL. First, we design three different models based on an FNN (feedforward neural network), a CNN, and an LSTM (Long Short-Term Memory) respectively. We set the sequence length $k = 5$ and use the QoE metric with $\alpha = 0.2$, $\beta = 1.0$ and $\gamma = 1.0$ as the baseline reward. As illustrated in Figure~\ref{fig:VQRL}(a), the CNN model increases the average QoE by about 39\% compared with the LSTM model and about 83\% compared with the FNN model. \begin{figure*} \centering \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/0_1_flv} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/4_1_flv} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/7_1_flv} \end{minipage} \vspace{-10pt} \caption{Comparing QARC with previously proposed approaches on the 4G network environments: the QoE of QARC is configured with $\alpha=0.2$, $\beta=10.0$, and $\gamma=1.0$.
After testing three video clips, results are shown as the average queuing delay, average sending rate, and average video quality.} \label{fig:exp3} \centering \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/0_flv} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/4_flv} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/7_flv} \end{minipage} \vspace{-10pt} \caption{Comparing QARC under different QoE settings with a baseline computed as the offline optimum based on high video bitrate. We evaluate several QARC variants and the baseline on the \textbf{broadband network environments}. As in Figure~\ref{fig:exp3}, after testing three video clips, results are shown as the average queuing delay, average sending rate, and average video quality, measured against the performance of the baseline.} \label{fig:exp1} \centering \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/0_0_flv} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/4_0_flv} \end{minipage} \begin{minipage}{0.33\linewidth} \centering \includegraphics[width=1.0\textwidth]{figs/7_0_flv} \end{minipage} \vspace{-10pt} \caption{As in Figure~\ref{fig:exp1}, comparing QARC under different QoE settings with a baseline computed as the offline optimum based on high video bitrate. We evaluate several QARC variants and the baseline on the \textbf{4G network environments}.} \label{fig:exp2} \end{figure*} \label{sec:VQRL_exp} Then, we validate the importance of adding the FFT feature to the inputs. We set up two CNN models, one of which includes the FFT feature. We set the sequence length $k=20$ in the same environment as the first experiment.
Results are shown in Figure~\ref{fig:VQRL}(b), which implies that the CNN model with the FFT feature provides a higher reward, with an improvement of about 29\% over the CNN model without it. Finally, we investigate how the CNN parameters affect the output results. In our experiment, the parameter settings are \{$k=5,c=64$\}, \{$k=10,c=64$\} and \{$k=20,c=128$\}, in which $k$ is the input sequence length and $c$ is the CNN channel size. As shown in Figure~\ref{fig:VQRL}(c), performance increases with $k$ and $c$. However, the setting \{$k=20,c=128$\} increases the average QoE by only 1\% compared with \{$k=10,c=64$\}, so considering the computational complexity, we finally choose \{$k=10,c=64$\}. Additionally, the action space is configured as 5, the same as the output of VQPN. During training, we use the Adam gradient optimizer, with the learning rates for the actor and critic set to $10^{-4}$ and $10^{-3}$, respectively. \textbf{Training time:} To measure the performance limitation of predicting future video quality, we profile VQPN's training process. To detect convergence, we use early stopping when training the neural network. In total, training VQPN requires approximately an hour on a single GTX-1080Ti GPU. To measure the overhead of VQRL's neural network, we also profile its training process. We use 8 agents to update the parameters of the central agent in parallel; the neural network converges in 22 hours, or in less than 5 hours using 20 agents.\footnote{This experiment was run on an AWS instance with 20 CPUs and 140 GB of RAM.} \subsection{Experiments and Results} In this section, we establish a real-time video streaming system to experimentally evaluate QARC, and use Mahimahi~\cite{netravali2015mahimahi:}, a trace-driven emulator, to simulate various network environments.
Our results answer the following questions: \begin{enumerate} \item Compared with previously proposed approaches on different video clips, is QARC the best approach? \item Compared with a baseline algorithm based on high video bitrate and low latency, how much improvement does QARC achieve? \item How do the coefficients $\alpha$, $\beta$, and $\gamma$ affect the outcome of QARC? \end{enumerate} \textbf{QARC vs. Existing approaches} In this experiment, we evaluate QARC against existing heuristic methods on several network traces representing various network conditions, using trace-driven emulation. After running each trace for each approach, we collect the average queuing delay, average video quality and average sending rate at the receiver, and compare their performance on different video clips. QARC is compared with Google Hangout, a well-known video conferencing app, Compound TCP\cite{ha2008cubic}, and Vegas~\cite{brakmo1995tcp}. As illustrated in Figure~\ref{fig:exp3}, QARC outperforms the existing approaches, with improvements in average video quality of 18\% - 25\% and decreases in average queuing delay of 23\% - 45\%. Notably, QARC also uses a lower sending rate. \textbf{Video quality first vs. Bitrate first} In this experiment, we evaluate QARC with different QoE parameters against a baseline algorithm whose policy is based on high video bitrate. Specifically, we compare QARC to the baseline in terms of queuing delay, sending rate, and video quality over the entire video session. As shown in Figure~\ref{fig:exp1} and Figure~\ref{fig:exp2}, QARC outperforms the greedy baseline on both broadband and 4G network environments.
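A per-step QoE reward combining the coefficients $\alpha$, $\beta$ and $\gamma$ discussed in question 3 might look as follows. The functional form and default values here are illustrative assumptions only, not QARC's published definition: we assume $\alpha$ penalizes queuing delay (the text calls it the latency coefficient), and we further assume $\beta$ penalizes the sending rate and $\gamma$ penalizes quality jitter between steps.

```python
def qoe_reward(quality, prev_quality, sending_rate, delay,
               alpha=1.0, beta=0.1, gamma=0.5):
    # Hypothetical reward: favor video quality, penalize self-inflicted
    # delay, bandwidth usage, and abrupt quality changes.
    return (quality
            - alpha * delay
            - beta * sending_rate
            - gamma * abs(quality - prev_quality))

r = qoe_reward(quality=0.8, prev_quality=0.8, sending_rate=1.0, delay=0.05)
```

Under such a form, shrinking $\alpha$ makes the agent tolerate more queuing delay in exchange for quality, which is consistent with the wired-vs-4G trade-off reported below.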
In the broadband network environment, despite a 4\% - 9\% reduction in average video quality, QARC decreases the sending rate by 46\% to 60\% and reduces the average queuing delay \footnote{In this paper, queuing delay is regarded as self-inflicted delay, which is a lower bound on the 95\% end-to-end delay that must be experienced between a sender and receiver, given observed network behavior.~\cite{winstein2013stochastic}} from $0.5s$ to $0.04s$. It is noteworthy that if the footage does not change abruptly (Figure~\ref{fig:exp1}(b)), for instance in a video conferencing scenario, the sending bitrate decreases by 51\% to 62\% while the video quality drops by less than 5\%. We find similar results in the 4G network environments; details can be seen in Figure~\ref{fig:exp1}. \textbf{Influence of $\alpha$,$\beta$ and $\gamma$:} Figure~\ref{fig:exp1} and Figure~\ref{fig:exp2} show the results of QARC with different initial QoE reward parameters. Unsurprisingly, initializing the QoE reward with a small latency coefficient $\alpha$ yields a larger performance improvement than a bigger $\alpha$ in wired network conditions; in 4G network environments, however, the behavior differs markedly. In conclusion, no single pair of coefficients fits all network conditions. \section{Related Work} \subsection{Real-time Rate Control Methods} Traditional real-time rate control methods have been proposed and applied for about two decades. These schemes are mainly classified into three types: loss-based, delay-based and model-based approaches. \textbf{Loss-based: } Loss-based approaches such as TFRC~\cite{handley2002tcp} and the rate adaptation protocol (RAP)~\cite{752152} have been widely used in TCP congestion control. These methods increase the bitrate until packet loss occurs, so their reactions are always late: by the time loss occurs, latency has already increased.
Furthermore, using packet loss as the control signal may cause throughput to be unstable, especially in error-prone environments~\cite{geng2015delay}. \textbf{Delay-based: } Delay-based approaches, which adjust the sending rate to control transmission delay, can be divided into end-to-end delay (RTT) approaches, for example TCP Vegas~\cite{brakmo1995tcp}; one-way delay approaches, such as LEDBAT (over UDP) and TCP-LP~\cite{rossi2010ledbat,kuzmanovic2006tcp}; and delay-gradient approaches~\cite{carlucci2016analysis}. \textbf{Model-based:} Model-based bitrate control methods, such as Rebera~\cite{kurdoglu2016real} and GCC~\cite{carlucci2016analysis}, control the sending bitrate based on previously observed network status, including the end-to-end latency and receiving rate measured by the receiver, and the past sending bitrate and loss ratio measured by the sender. \subsection{Video Quality Metrics} \label{sec:videoquality} Video quality characterizes the perceived degradation of a video as it passes through a transmission system. The most commonly used video quality metrics are as follows. \textbf{PSNR:} A traditional signal quality metric~\cite{hore2010image}, directly derived from the mean square error (MSE) or its square root (RMSE). Owing to the simplicity and low complexity of its calculation, PSNR remains the most popular video quality evaluation. However, its result cannot precisely reflect the visual quality perceived by human eyes. \textbf{SSIM:} An image quality metric proposed in 2004 by Wang et al.~\cite{1284395}. Unlike previous video quality criteria, SSIM uses a structural distortion measurement instead of the mean square error. Because it considers the whole picture, SSIM gives a better evaluation of the video quality experienced by users. However, SSIM is not a dedicated video quality assessment tool.
\textbf{VMAF:} Video Multi-method Assessment Fusion (VMAF)~\cite{rassool2017vmaf} is an objective full-reference video quality metric formulated by Netflix to estimate subjective video quality from a reference and a distorted video sequence. Using machine learning techniques, VMAF provides a single output score in the range $[0,100]$ per video frame. This metric focuses on the quality degradation due to compression and rescaling, and it is closer to users' real experience of video quality than previous schemes. \vspace{-5pt} \subsection{Deep Reinforcement Learning Approaches} Deep reinforcement learning aims to maximize the $reward$ of each $action$ taken by the agent in the given $state$ at each step. In recent years, several approaches (e.g.~\citep{winstein2013stochastic,mao2017neural,DDASH}) have been proposed to optimize network control algorithms. \textbf{Remy:} Remy~\cite{winstein2013tcp} decides with ``a tabular method'', collecting experience from a network simulator under explicit network assumptions; however, like all TCP variants, when the real network deviates from Remy's input assumptions, performance degrades. \textbf{Pensieve:} ~\citeauthor{mao2017neural}\cite{mao2017neural} develop a system that uses deep reinforcement learning to select bitrates for future video chunks. Unlike most adaptive bitrate (ABR) algorithms, Pensieve does not need any predefined rules or assumptions to make decisions, and it automatically adjusts itself to changing network conditions. In comparisons with existing ABR algorithms, Pensieve performs very well. \vspace{-5pt} \section{Conclusion} In this paper, we propose QARC, a deep-learning-based rate control algorithm for the real-time video streaming scenario. Unlike previously proposed approaches, we aim for higher video quality at a possibly lower sending rate.
Because fixed rules cannot effectively handle the complicated scenarios created by perplexing network conditions and varied video content, we use deep reinforcement learning to select the future video bitrate, allowing the system to adjust itself automatically to changes in its inputs. To reduce the state space of the reinforcement learning model, we split the neural network into two parts and train them separately. After training on a broad set of network data, we explore the performance of QARC over several network conditions and QoE metrics, and find that QARC outperforms existing rate control algorithms. \section*{Acknowledgement} We thank the anonymous reviewers for their valuable feedback. The work is supported by the National Natural Science Foundation of China under Grant No. 61472204 and 61521002, Beijing Key Laboratory of Networked Multimedia No. Z161100005016051, and Key Research and Development Project under Grant No. 2018YFB1003703.
\section{Introduction} Various kinds of jet-like structures (i.e., spicules, chromospheric anemone jets, macrospicules, surges, X-ray/UV/EUV jets, etc.) are episodically present in the solar atmosphere. The study of these jet-like structures, through observations and numerical simulations, is an important area of solar physics research, and our knowledge about them (e.g., formation, evolution, plasma properties, etc.) is continuously improving (e.g., \citealt{Wilhelm2000,DePon2004,DePon2007,Murawski2010,Per2014, Shibata2007,Nisi2008,Bohlin1975, Wilhelm2000, Kamio2007,Murawski2011,Kayshap2013, Yokoyama1995,Sch1995, Yokoyama1996,Canfield1996,Chae1999,Cirtain2007,Abhi2011,Mortan2012,Kayshap2013a,Mark2015,Mulay2016a,Mulay2016b,Rao17}). \textbf{It is to be noted that the jet acceleration mechanisms strongly depend on the height of the drivers (e.g. \citealt{Shibata2007,Takasao2013})}. Recently, \cite{Rao2016} reviewed various aspects of solar coronal jets in observations, theory and numerical modeling.\\ Recently, one more class of jet-like structures was discovered using IRIS coronal hole (CH) observations (network jets; \citealt{Tian2014}). The network jets are typically transition region (TR) phenomena. The TR, the interface between the relatively cool chromosphere ($\sim$ 6$\times$10$^{3}$ K) and the hot corona ($\sim$ 10$^{6}$ K), is a very complex and dynamic layer (\citealt{Kay15}). The TR has not been studied in fine detail owing to the unavailability of high-resolution observations. Now, the IRIS mission (i.e., slit-jaw images (SJIs) $\&$ spectral observations) is dedicated in particular to the TR (\citealt{DePon2014}). The network jets are well observed in the TR filters (e.g., IRIS/SJI: C~{\sc ii} 1330~{\AA} and Si~{\sc iv} 1400~{\AA}) as they are among the most prominent features of the TR. The network jets have apparent speeds of 80-250 km s$^{-1}$, lifetimes of 20-80 s and lengths of 4-10 Mm (\citealt{Tian2014}).
The width of these network jets is $\leq$300 km as reported by \cite{Tian2014}. In addition to these properties, magnetic reconnection, as inferred from the very high speeds and associated footpoint brightenings, is reported as the triggering mechanism for these jets (\citealt{Tian2014}). In another work, it was reported that network jets also occur in the quiet Sun (QS), and not only in coronal holes (CHs) (\citealt{Narang2016}). Interestingly, that work also reported that the network jets are faster and longer in CHs than in the QS. This can be directly attributed to differences in the magnetic field configuration between QS and CH regions, as well as to the height of the TR.\\ Rotating motion is a well-known property of jet-like structures. A specific Doppler-shift pattern in jet-like structures (i.e., blue on one edge and red on the other) indicates the presence of rotating motion (e.g., \citealt{Pike1998,Curdt2011,Mark2015}). Magnetic reconnection between an emerging magnetic bipole and the pre-existing magnetic field produces rotating jets (e.g., \citealt{Fang2014,Mark2015,Lee2015}). In addition, photospheric horizontal motions can add twist to the magnetic field, which finally produces a helical jet through magnetic reconnection (e.g., \citealt{Par2009,Pariat2010}). Recently, \citealt{DePon2014} have reported the prevalence of small-scale twist in the solar chromosphere and TR, which is very important from the perspective of the heating of the lower atmosphere. \\ In the present work, we use IRIS spectroscopic/imaging observations for a statistical analysis of network jets in the context of rotating motion and other associated properties (e.g., speed, length, lifetime). The observations and data analysis are presented in Sect.~\ref{section:obs}. Section~\ref{section:results} describes the observational results. Discussion and conclusions are given in the last section.
\section{Observations and Data-Analysis} \label{section:obs} The IRIS mission provides high-resolution imaging and spectroscopic observations from the photosphere up to the corona (\citealt{DePon2014}). The C~{\sc ii} 1330~{\AA} and Si~{\sc iv} 1400~{\AA} imaging filters (slit-jaw images; SJI) capture emission from the transition region (TR). The network jets are best seen in the TR, and therefore imaging observations from these two filters (i.e., C~{\sc ii} $\&$ Si~{\sc iv}) capture the dynamics of the recently discovered network jets (\citealt{Tian2014}). We have used three different observations to study the dynamics of network jets. The details of the observations used are given in Table~\ref{table}, which lists the date $\&$ time, field-of-view (FOV) and exposure times of the SJI and raster for each observation.\\ \begin{table*}[ht] \centering \caption{The table shows the date $\&$ time, field of view (FOV) and exposure time (in seconds) for SJI and raster for all three sets of the observations. \label{table_obs}} \begin{tabular}{|c|c|c|c|} \hline Observation & Date-Time & FOV & Exposure Time(SJI/Raster; Seconds) \\ \hline Obs$\_$A & 14.12.2014 (15:38-16:35) & 134$"$$\times$119$"$ & 20.0/4.0 \\ \hline Obs$\_$B & 24.09.2014 (18:09-20:17) & 119$"$$\times$119$"$ & 38.0/8.0 \\ \hline Obs$\_$C & 23.09.2014 (07:59-10:56) & 60$"$$\times$65$"$ & 11.0/4.0 \\ \hline \end{tabular} \label{table} \end{table*} The Si~{\sc iv} as well as the C~{\sc ii} spectral lines originate from the TR. The C~{\sc ii} resonance lines (i.e., 1334.53~{\AA} and 1335.71~{\AA}) are optically thick, whereas the Si~{\sc iv} 1393.75~{\AA} line is optically thin under normal conditions. Therefore, we have used the Si~{\sc iv} 1393.75~{\AA} line to infer the Doppler velocities of these network jets. The level-2 data files from IRIS are the standard scientific products (\citealt{DePon2014}), and these are used in the present analysis.
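The Doppler velocity inference mentioned above reduces to the standard non-relativistic Doppler formula applied to the fitted line centroid; a minimal sketch follows. Note that in this work the rest wavelength is measured from averaged quiet-Sun spectra, so the default value below (the nominal Si~IV 1393.75~\AA\ wavelength) is for illustration only.

```python
C_KMS = 299792.458  # speed of light in km/s

def doppler_velocity(lambda_obs, lambda_rest=1393.75):
    # Line-of-sight velocity in km/s from the observed line centroid (A).
    # Positive -> red shift (plasma receding), negative -> blue shift.
    return C_KMS * (lambda_obs - lambda_rest) / lambda_rest

v = doppler_velocity(1393.75 + 0.2)  # a 0.2 A red shift, ~43 km/s
```

At these TR wavelengths, a shift of a few tenths of an angstrom already corresponds to several tens of km s$^{-1}$, comparable to the $\Delta$V values reported later.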
The imaging and spectral data are already co-aligned in these level-2 files; nevertheless, we have checked this alignment using the fiducial marks and found that the data are well aligned. In addition, we have used observations from AIA (\citealt{Lemen2012}). The IRIS/SJI C~{\sc ii} 1330~{\AA} filter captures significant continuum emission; therefore, we used the cross-correlation between IRIS/SJI C~{\sc ii} 1330~{\AA} and AIA 1600~{\AA} images to align the IRIS and AIA observations. The IRIS/SJI Si~{\sc iv} 1400~{\AA} observations are used to derive the various properties (e.g., speed, lifetime and length) of these network jets, employing the space-time technique.\\ A single Gaussian can normally characterize the Si~{\sc iv} 1393.75~{\AA} line owing to its optically thin nature. However, almost all Si~{\sc iv} 1393.75~{\AA} spectral profiles are double-peaked or asymmetric in the vicinity of the network jets, and are therefore not well fitted by a single Gaussian. A double Gaussian provides a more reliable fit to the observed spectral profiles. The estimation of the rest wavelength is another crucial issue, as it directly affects the estimated Doppler velocity. To estimate the rest wavelength, a very quiet area, free from any kind of dynamics, is selected from each raster; the centroid of its averaged spectrum is taken as the rest wavelength of Si~{\sc iv} 1393.75~{\AA}. \section{Observational Results}\label{section:results} IRIS high-resolution spectral/imaging observations are well suited to diagnosing the very dynamic network jets of the TR. We have visually identified a total of 51 network jets (19-Obs$\_$A; 19-Obs$\_$B; 12-Obs$\_$C) from the three observations. The jets are selected such that each is well isolated from the others.
We further require that each jet be visible in at least three image frames; this criterion avoids possible errors in jet identification. In light of these criteria, we selected 51 well-isolated network jets for this work. As stated earlier, the IRIS/SJI Si~{\sc iv} 1400~{\AA} images $\&$ the Si~{\sc iv} 1393.75~{\AA} spectral line are used to diagnose the Doppler velocity and other associated properties (e.g., lifetime, speed and length) of these network jets. \subsection{Temporal Evolution and Kinematics of the Network Jets} \subsubsection{Temporal Evolution of Network Jets}\label{sect:evolution} In Fig.~\ref{fig:jet_evol}, we show the evolution of three different network jets in IRIS/SJI Si~{\sc iv} 1400~{\AA}. \begin{figure*} \centering \mbox{ \includegraphics[trim = 1.5cm 2.0cm 4.5cm 3.5cm, scale=1.1]{ref_figure_1a.eps} } \mbox{ \includegraphics[trim = 1.5cm 2.0cm 4.5cm 5.5cm, scale=1.1]{ref_figure_1b.eps} } \mbox{ \includegraphics[trim = 1.5cm 3.0cm 4.5cm 5.5cm, scale=1.1]{ref_figure_1c.eps} } \caption{\small The top row shows the evolution of a network jet (panels a-d) taken from the first observation (Obs$\_$A). The jet originates at 16:22:18~UT on 14 December 2014 from the edge of the magnetic network (panel a), with its maximum phase around 16:23:41~UT (panel c; outlined by the red rectangular box). The jet fades from view in the last phase (panel d). The middle row shows the evolution of another network jet, taken from the second observation (Obs$\_$B); the jet is outlined by the red rectangular box. Similarly, the bottom row depicts the evolution of one more network jet, taken from the third observation (Obs$\_$C).} \label{fig:jet_evol} \end{figure*} The top row (i.e., panels a to d) of Fig.~\ref{fig:jet_evol} shows the evolution of a network jet taken from the first observation (i.e., Obs$\_$A; Table~\ref{table}).
The network jets originate from the bright patches (magnetic network; \citealt{Tian2014,Narang2016}). An extended bright patch is visible in the vicinity of this network jet in IRIS/SJI Si~{\sc iv} 1400~{\AA} (cf., panels a to d). The jet started around 16:22:18~UT from one edge of the bright patch (panel a). The jet then grows and reaches its maximum phase around 16:23:41~UT (panel c); the red rectangular box outlines the network jet. At later times, the jet fades from view, as seen in the last panel of the top row. The network jet reaches up to 2.8 Mm from its base within its lifetime of approximately 244.0 seconds.\\ In the middle row, we show the evolution of another network jet (panels e to h), taken from the second observation (i.e., Obs$\_$B; Table~\ref{table}). A bright patch is visible in the vicinity of this network jet, similar to the first jet. The jet starts to form around 18:22:08~UT on 24 September 2014 from one edge of the magnetic network. The jet then grows obliquely from its initiation site, and is outlined by the red rectangular box (panel f). The black vertical line shows the slit position used to take the spectra. The jet attains its maximum phase after around 136 seconds (18:23:24; panel g). At later times, the network jet fades from view (panel h). This network jet reaches up to 5.7 Mm within its total lifetime of 197.0 seconds.\\ Finally, in the bottom row, we show the temporal evolution of one more network jet (panels i to l), taken from the third observation (i.e., Obs$\_$C; Table~\ref{table}). A bright patch is visible in the vicinity of the jet's base, confirming that the base of the jet is located in the magnetic network. The jet starts around 08:03:32~UT from the brightened area (panel i) and then evolves nearly vertically. The red rectangular box outlines the jet and the black vertical line marks the slit position (panel k).
The maximum phase of this particular jet occurs around 08:04:35~UT (panel k). The jet attains its maximum length of 3.7 Mm within its total lifetime of around 95 seconds, and fades from view in the decay phase (panel l). Thus, the network jets follow a typical evolution, fading from view in the decay phase. Most importantly, these network jets always have brightened bases and originate from the magnetic network. \begin{figure*} \centering \mbox{ \includegraphics[trim = 2.0cm 1.5cm 4.5cm 1.5cm, scale=1.0]{ref_ht.eps} } \caption{\small Left panel: SJI in the Si~{\sc iv} 1400~{\AA} passband of IRIS during the maximum phase of the jet, with the selected path over-plotted (red plus signs), which is used to produce the height-time diagram. Right panel: height-time diagram of the network jet, which shows that the speed and lifetime of the network jet are $\sim$ 64.57 km s$^{-1}$ and 244.0 seconds, respectively.} \label{fig:ht_map} \end{figure*} \subsubsection{Kinematics of Network Jets} The space-time technique is used to evaluate the kinematics (e.g., speed, height and lifetime) of these network jets. Fig.~\ref{fig:ht_map} shows the height-time diagram for the jet from Obs$\_$A (first row; Fig.~\ref{fig:jet_evol}). The selected path is over-plotted on the IRIS/SJI 1400~{\AA} intensity image by red plus signs, as shown in the left panel of Fig.~\ref{fig:ht_map}. Along this path, we used 3 pixels in the transverse direction to create the space-time diagram of this network jet (right panel of Fig.~\ref{fig:ht_map}). We have drawn a path (white dashed line) on the space-time diagram to measure the speed of this jet, which is about 64.57 km s$^{-1}$. The lifetime of the jet is 244 seconds, and it reaches a height of up to 4 Mm.
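The space-time speed measurement described above amounts to fitting a straight line to the jet front's height-time track and reading off the slope; a minimal sketch, with a synthetic track (not the actual IRIS measurements):

```python
import numpy as np

def jet_speed(times_s, heights_mm):
    # Apparent plane-of-sky speed from a height-time track:
    # slope of a least-squares line, heights in Mm, times in seconds.
    slope = np.polyfit(times_s, heights_mm, 1)[0]  # Mm per second
    return slope * 1e3                             # convert to km/s

t = np.array([0.0, 20.0, 40.0, 62.0])   # seconds
h = np.array([0.0, 1.3, 2.6, 4.0])      # Mm
speed = jet_speed(t, h)  # roughly 64.5 km/s for this synthetic track
```

A jet rising about 4 Mm in about a minute thus gives a speed of order 65 km s$^{-1}$, consistent with the value measured from the white dashed line in Fig.~\ref{fig:ht_map}.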
Another network jet also appears from the same site, which indicates recurrent energy release at the origin site.\\ \begin{figure*} \centering \mbox{ \includegraphics[trim = 2.0cm 0.0cm 4.5cm 0.0cm, scale=1.0]{histograms_kinematics.eps} } \caption{\small The figure shows the distribution of apparent speed (panel a), lifetime (panel b) and length (panel c) of the observed network jets. The mean speed of the network jets is 140.16 km s$^{-1}$, the mean lifetime is 105.49 seconds, and the mean length is 3.16 Mm. Panel d shows the correlation between speed and length, which are positively correlated \textbf{as revealed by the high Pearson coefficient (R=0.644)}. } \label{fig:hist_kin} \end{figure*} The parameters (e.g., lifetime, speed and height) of all 51 network jets are estimated using the space-time technique. We have produced histograms of the apparent speed (panel a; Fig.~\ref{fig:hist_kin}), lifetime (panel b; Fig.~\ref{fig:hist_kin}) and length (panel c; Fig.~\ref{fig:hist_kin}). The mean speed is 140.16 km s$^{-1}$ with a standard deviation of 39.41 km s$^{-1}$; the apparent speed can vary from 50.0 up to 200 km s$^{-1}$. The lifetimes range from 40.0 up to 250.0 seconds, with a mean of 105.49 seconds and a standard deviation of 51.75 seconds. The lengths of these network jets have a mean of 3.16 Mm with a standard deviation of 1.18 Mm, and can vary from 1.2 up to 5.8 Mm. In addition, we have investigated the correlation between the speed and length of these jets, shown in panel (d): they are clearly positively correlated. A similar positive correlation was also reported by \cite{Narang2016}.
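The speed-length correlation in panel (d) is quantified with the Pearson coefficient (R = 0.644 for our 51 jets); a minimal sketch of the computation:

```python
import numpy as np

def pearson_r(x, y):
    # Pearson linear correlation coefficient between two samples,
    # e.g. the apparent speeds and lengths of the network jets.
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.corrcoef(x, y)[0, 1])
```

For perfectly linearly related samples the coefficient is 1; a value of 0.644 over 51 jets indicates a clear but imperfect linear relation between speed and length.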
\subsubsection{Hot Counterparts of Network Jets} Using IRIS/SJI Si~{\sc iv} 1400~{\AA} observations, Fig.~\ref{fig:jet_evol} shows the temporal evolution of three network jets (cf., section~\ref{sect:evolution}). Various filters from the AIA observations are used to investigate possible hot counterparts of these network jets, although previous works have already reported that the network jets are strictly TR phenomena, without hot counterparts in the solar atmosphere (e.g., \citealt{Tian2014,Narang2016}). \begin{figure*} \centering \mbox{ \includegraphics[trim = 1.5cm 3.0cm 4.5cm 5.5cm, scale=1.1]{fig_2a.eps} } \mbox{ \includegraphics[trim = 1.5cm 3.0cm 4.5cm 5.5cm, scale=1.1]{fig_2b.eps} } \mbox{ \includegraphics[trim = 1.5cm 3.0cm 4.5cm 5.5cm, scale=1.1]{fig_2c.eps} } \caption{\small The top row shows the network jet from Obs$\_$A (first jet; Fig.~\ref{fig:jet_evol}) in IRIS/SJI Si~{\sc iv} 1400~{\AA}, AIA 304~{\AA}, AIA 171~{\AA}, AIA 211~{\AA} and AIA 193~{\AA}, respectively. The middle and bottom rows show the network jets from Obs$\_$B (second jet; Fig.~\ref{fig:jet_evol}) and from Obs$\_$C (third jet; Fig.~\ref{fig:jet_evol}) in the same IRIS and AIA filters. It is clearly evident that the network jets do not possess hot counterparts.} \label{fig:jet_aia} \end{figure*} We have investigated AIA 304~{\AA} (log T/K = ), AIA 171~{\AA} (log T/K = ), AIA 211~{\AA} (log T/K = ) and AIA 193~{\AA} (log T/K = ) to check for hot counterparts of the observed network jets. The top row of Fig.~\ref{fig:jet_aia} shows the IRIS/SJI Si~{\sc iv} 1400~{\AA}, AIA 304~{\AA}, 171~{\AA}, 211~{\AA} and 193~{\AA} images for a network jet taken from Obs$\_$A. It is clearly visible that the jet is not observed in the AIA filters (see the red rectangular area). However, some traces of the network jet are visible in AIA~304~{\AA}; the temperature response of the AIA~304~{\AA} filter is very wide, and it can also sample some low-temperature plasma.
The middle row shows the second jet (Obs$\_$B) in the IRIS and various AIA filters, which likewise reveals no hot counterpart of this network jet. Similarly, the bottom row shows the third network jet in the different filters (i.e., AIA and IRIS), and again no hot counterpart is visible. Therefore, the network jets are clearly visible in IRIS/SJI Si~{\sc iv} 1400~{\AA} (first panel in each row; Fig.~\ref{fig:jet_aia}), but we see no signature of them in the hot-temperature filters. We have inspected the hot-temperature filters for all 51 network jets and find no signature of the jets in these AIA filters. These observations therefore indicate that network jets are typically cool TR features, consistent with previously reported results (\citealt{Tian2014,Narang2016}). \subsection{Rotational Nature of the Network Jets} Rotational motion is an important property of jet-like structures in the solar atmosphere. On the basis of Doppler velocity/Dopplergram analyses, it has been reported that typical coronal/chromospheric jets show blue shifts at one edge while the plasma at the other edge shows red shifts \cite[e.g.,][and references cited therein]{Pike1998,Curdt2011,Mark2015}. This spatial pattern of the Doppler velocity/Dopplergram reveals the rotating nature of the jet plasma column. In addition, the variation of Doppler velocity across a jet is also a signature of the rotating motion of the jet plasma column (e.g., \citealt{Young2014,Pariat2010}).\\ \begin{figure*} \centering \mbox{ \includegraphics[trim = 2.0cm 1.0cm 4.5cm 0.0cm, scale=1.0]{spectra_obsa.eps} } \caption{\small The figure shows some sample spectral profiles and their fits from Obs$\_$A (left column), Obs$\_$B (middle column) and Obs$\_$C (right column). The black diamonds show the observed profiles while the solid red line represents the corresponding fit.
In addition, the red dashed line shows the main Gaussian while the blue dashed line shows the secondary Gaussian. The double-Gaussian fitting provides a much more reliable fit to the observed profiles.} \label{fig:sample_spectra} \end{figure*} We utilize the optically thin TR line Si~{\sc iv} 1393.75~{\AA} to understand the Doppler velocity pattern of these network jets. For each jet, we selected Si~{\sc iv} 1393.75~{\AA} spectral profiles across the jet and applied a running average over 2 consecutive profiles to increase the signal-to-noise ratio. We find that the spectral profiles are significantly asymmetric in the vicinity of the network jets, and in some jets the profile is double-peaked (bottom-most panel; Fig.~\ref{fig:sample_spectra}). We therefore used double-Gaussian (i.e., main plus weak) fitting to characterize the observed line. A few sample profiles, along with their double-Gaussian fits, are shown in Fig.~\ref{fig:sample_spectra} from Obs$\_$A (left column), Obs$\_$B (middle column) and Obs$\_$C (right column). \cite{Young2014} reported the occurrence of a weak Gaussian (towards the high-velocity wing) along with the main Gaussian in polar jets, due to their very high speed; the secondary Gaussian essentially accounts for the asymmetry of the line. The network jets are also very high-speed plasma structures within the TR, which leads to a secondary Gaussian alongside the main peak of Si~{\sc iv} 1393.75~{\AA}. The high-velocity component of this TR line is directly attributable to the very high speed of the network jets. Therefore, the secondary Gaussian of the Si~{\sc iv} 1393.75~{\AA} line gives the true line-of-sight (LOS) Doppler velocity of these network jets. We find that the double-Gaussian fitting provides a very reliable fit to the observed profiles (cf. Fig.~\ref{fig:sample_spectra}).
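The double-Gaussian fitting described above can be sketched with a standard non-linear least-squares fit; the profile below is synthetic (not IRIS data), with the weaker component displaced redward to mimic the high-velocity asymmetry:

```python
import numpy as np
from scipy.optimize import curve_fit

def double_gaussian(x, a1, mu1, s1, a2, mu2, s2):
    # Main Gaussian (a1, mu1, s1) plus secondary Gaussian (a2, mu2, s2);
    # the secondary component carries the high-velocity wing asymmetry.
    return (a1 * np.exp(-0.5 * ((x - mu1) / s1) ** 2)
            + a2 * np.exp(-0.5 * ((x - mu2) / s2) ** 2))

x = np.linspace(1393.3, 1394.2, 120)  # wavelength grid in angstroms
y = double_gaussian(x, 100.0, 1393.75, 0.05, 30.0, 1393.95, 0.08)
p0 = [90.0, 1393.7, 0.06, 20.0, 1393.9, 0.1]  # initial parameter guess
popt, _ = curve_fit(double_gaussian, x, y, p0=p0)  # recovered parameters
```

The fitted centroid of the secondary component (`popt[4]`), converted through the Doppler formula, then gives the LOS velocity of the jet material.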
\begin{figure*} \centering \mbox{ \includegraphics[trim = 5.0cm 1.0cm 3.5cm 0.0cm, scale=0.45]{fig5.eps} \includegraphics[trim = -2.0cm 1.0cm 4.5cm 0.0cm, scale=0.45]{dp_hist.eps} } \mbox{ } \caption{\small The left panel shows the variation of Doppler velocity across the first jet (Obs$\_$A; black line), second jet (Obs$\_$B; blue line) and third jet (Obs$\_$C; red line). The distribution of $\Delta$V is shown in the right panel. The mean $\Delta$V is 49.56 km s$^{-1}$ with a standard deviation of 27.99 km s$^{-1}$.} \label{fig:rot_mot} \end{figure*} In Fig.~\ref{fig:sample_spectra}, the black diamonds show the observed profiles along with their errors, the over-plotted solid red line represents the fitted profile, the red dashed line shows the main Gaussian, and the blue dashed line shows the secondary Gaussian. All of the displayed spectral profiles confirm that the double-Gaussian fitting provides a reliable fit.\\ The variation of Doppler velocity across a jet is extremely important in the context of its rotational motion. In the left panel of Fig.~\ref{fig:rot_mot}, the variations of Doppler velocity across the first jet (Obs$\_$A-jet@1; black solid line), second jet (Obs$\_$B-jet@2; blue line) and third jet (Obs$\_$C-jet@3; red line) are shown. The first and second jets (jet@1 and jet@2) show that the red shift reverses into a blue shift from one edge of the jet to the other, a typical signature of rotational motion (e.g., \citealt{Pike1998,Curdt2011,Mark2015}). In contrast, jet@3 shows an increase in the Doppler velocity from one edge to the other. An increase or decrease of the Doppler velocity from one edge to the other also signifies the presence of rotational motion within the jet body (\citealt{Young2014}). Using this procedure, we have investigated the Doppler velocity across all of these jets.
It is found that in most of the network jets the typical spatial pattern of Doppler velocity (i.e., blue shifts on one edge and red shifts on the other) \textbf{emerges}, while the others show significant variations of the Doppler velocity from one edge to the other. Therefore, these results indicate the omnipresence of rotational motion within these network jets.\\ To quantify the rotational motion of these network jets, we took the difference of the Doppler velocity ($\Delta$V) between the edges of each jet. \cite{Young2014} demonstrated that $\Delta$V reflects the amount of rotation inherent in any particular jet. We estimated $\Delta$V for each network jet to investigate its distribution. Finally, we produced the histogram of $\Delta$V (right panel of Fig.~\ref{fig:rot_mot}), which shows a mean $\Delta$V of 49.56 km s$^{-1}$ with a standard deviation of 27.99 km s$^{-1}$; individual values of $\Delta$V range from 20.0 to 100.0 km s$^{-1}$. The angle between the jet's axis and the observer (LOS direction) must be known to estimate the angular speed, but this angle cannot be determined from the observational data used here. It should be noted that the angular speed is directly proportional to $\Delta$V; therefore, $\Delta$V reflects the amount of rotational motion inherent in these network jets. \section{Discussion $\&$ Conclusions}\label{section:dis_con} The high resolution imaging observations of the TR from IRIS reveal the ubiquitous presence of network jets. We have used three different IRIS observations of the QS, all located near the disk center. On the basis of careful inspection, 51 network jets were identified from the three QS observations and used for further analysis. These 51 network jets are very well resolved and not affected by the dynamics of other jets.
The study is focused on the rotating motion of network jets along with the estimation of their other properties (e.g., speed, height and lifetime). The mean speed, as given by the statistical distribution of the speeds, is 140.16 km s$^{-1}$ with a standard deviation of 39.41 km s$^{-1}$, very similar to values reported in previous works (e.g., \citealt{Tian2014,Narang2016}). In the case of the lifetime, however, we find a mean (105.49 s) almost double the previously reported mean lifetime of network jets (49.6 s; \citealt{Tian2014}). As stated earlier, we selected only those network jets that are very well resolved in both space and time, a criterion that excludes short-lifetime network jets; therefore, our statistical distribution of the lifetime yields a higher mean. The mean length of the network jets is 3.16 Mm with a standard deviation of 1.18 Mm. For CH network jets, \cite{Tian2014} reported that most network jets have lengths from 4.0 to 10.0 Mm, whereas the mean length for QS network jets is smaller (3.53 Mm; \citealt{Narang2016}). The mean length for QS network jets from the present work is thus in good agreement with \cite{Narang2016}. In addition, the apparent speed and length of these network jets are positively correlated, as already reported in previous work (\citealt{Narang2016}). In summary, the estimated properties show that these network jets are very dynamic features of the solar TR.\\ The spectral profiles of the TR have been investigated extensively using space-based observations. There are some noticeable features in TR spectral profiles, e.g., two distinct satellites to the blue and red indicating bi-directional flows during explosive events (\citealt{Dere1989}), and enhanced emission in the wings above the networks (\citealt{Peter2000}).
In addition, \cite{Peter2000} demonstrated that double Gaussian fitting yields reliable fits to TR spectral profiles and that the secondary component is much more informative regarding the ongoing physical processes. More recently, \cite{Peter2010} reported asymmetry in coronal extreme-ultraviolet lines in the vicinity of an active region. The Si~{\sc iv} 1393.75~{\AA} spectral profiles are significantly asymmetric within the observed network jets. The presence of a high-speed plasma flow in a jet produces a secondary component alongside the main component of the line profile, which leads to the asymmetry of the spectral line (\citealt{Peter2010}). Therefore, the secondary component of the Si~{\sc iv} 1393.75~{\AA} line, as observed in the 51 jets of the present work, is most likely the result of high-speed plasma flows (i.e., the network jets), and its LOS Doppler velocity represents the real LOS Doppler velocity of these network jets. The occurrence of a secondary component within network jets is reported for the first time in the present work. Our study shows that most of the network jets have opposite Doppler shifts on their two edges, which is a typical signature of rotating motion of the jet plasma column (e.g., \citealt{Pike1998,Curdt2011,Mark2015}). In addition, a higher LOS Doppler velocity on one side than on the other also indicates rotational motion (e.g., \citealt{Pariat2010,Young2014}); we find that some network jets show this pattern, which likewise supports their rotational motion. It is therefore clear that all the observed network jets exhibit rotating motion. The statistical analysis gives a mean rotational motion (i.e., $\Delta$V) of 49.56 km s$^{-1}$ with a standard deviation of 28.78 km s$^{-1}$.
In the case of a polar jet, \cite{Young2014} reported $\Delta$V $\approx$ 60.0 km s$^{-1}$ (similar to the network jets) with a width of 4.5 arcsec. In the present analysis, however, the statistical analysis of the widths gives a mean width of 0.62 arcsec, almost seven times smaller than the width of that polar jet. Qualitatively, we can therefore say that the angular speed of these network jets is higher than that of typical solar jets (e.g., \citealt{Shen2011,Chen2012,Young2014}), i.e., these network jets carry more strongly twisted (helical) magnetic fields than other solar jets.\\ In addition, the observed properties of these network jets also allow us to speculate on their triggering mechanism. Magnetic reconnection between a twisted magnetic field and a pre-existing magnetic field may trigger the rotating motion (e.g., \citealt{Par2009, Fang2014,Pariat2016}). In the numerical model of \cite{Fang2014}, magnetic reconnection between twisted and pre-existing magnetic fields sheds the twist onto the newly reconnected field lines, and plasma flows along these twisted field lines. Another numerical model (\cite{Par2009}) is based on photospheric motions, which can impart twist onto the magnetic field lines and finally produce a helical jet after magnetic reconnection triggered by loss of equilibrium of the flux system. The present analysis indicates the ubiquitous presence of twist within these network jets. We have also found that the network jets have a recurrent nature (i.e., many jets are triggered from the same location), which may be the result of oscillatory magnetic reconnection as proposed by \cite{Murray2009} and \cite{McL2012}. In addition, \cite{Goodman2014} reported that Lorentz-force (magnetically accelerated) driven jets can have speeds from 66 to 397 km s$^{-1}$, whereas pressure-driven jets can only achieve a maximum speed of $\approx$ 60 km s$^{-1}$ (e.g., \citealt{Mart2011}).
\textbf{We tend to believe that a pressure-driven jet may be able to account for the speed, but not for the rotational motion}. Therefore, the rotating motion, recurrent nature and high apparent speed of the observed network jets (140.16 km s$^{-1}$) suggest that magnetic reconnection is the most likely triggering mechanism in the present case. Similar aspects of the triggering mechanism (i.e., recurrent nature and high apparent speed) of network jets have also been reported in previous studies, supporting magnetic reconnection as the driver of network jet formation (e.g., \citealt{Tian2014,Narang2016}). However, the few network jets with lower velocities (i.e., less than 60.0 km s$^{-1}$) may be formed by gas pressure acceleration (\citealt{Shibata1982}).\\ We conclude that the spectral analysis indicates the omnipresence of rotational motion in the network jets, which is reported for the first time for this class of jet-like structures. The helicity (amount of rotation) of the observed network jets is high compared to other typical solar jets. In addition, magnetic reconnection/acceleration is the most likely cause behind the formation of these network jets. \begin{acknowledgements} PK's and KM's work was done in the framework of the project from the National Science Centre, Poland (NCN), Grant No. 2014/15/B/ST9/00106. IRIS is a NASA small explorer mission developed and operated by LMSAL with mission operations executed at NASA Ames Research Center and major contributions to downlink communications funded by ESA and the Norwegian Space Centre. We also acknowledge the use of SDO/AIA observations for this study. The data provided are courtesy of NASA/SDO, LMSAL, and the AIA, EVE, and HMI science teams. AKS and BND acknowledge the RESPOND-ISRO project, while AKS acknowledges the SERB-DST young scientist project grant. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} Fix an integer $d\ge2$ and consider the hypergeometric series $$F(z)=\sum_{n=0}^{\infty} \left({(1/2)_n\over n!}\right)^d z^n,$$ where $(x)_n$ denotes the product $x(x+1)(x+2)\cdots(x+n-1)$, known as the Pochhammer symbol. Let $p$ be a fixed odd prime. For every integer $s\ge0$ we define the truncated series $$ F_{p^s}(z)=\sum_{n=0}^{p^s-1} \left({(1/2)_n\over n!}\right)^d z^n. $$ In particular $F_1(z)=1$. Let $z_0$ be a $p$-adic unit and suppose that $F_p(z_0)$ is also a $p$-adic unit. Then, by a result of Dwork \cite{DworkIV}, we have for all $s\ge1$ that $F_{p^s}(z_0)$ is a $p$-adic unit together with the congruence \begin{equation}\label{cauchy} {F_{p^{s+1}}(z_0)\over F_{p^s}(z_0)}\is {F_{p^s}(z_0)\over F_{p^{s-1}}(z_0)}\mod{p^s}. \end{equation} So the sequence of quotients is a $p$-adic Cauchy sequence. We define the limit $$f(z_0)=\lim_{s\to\infty}{F_{p^s}(z_0)\over F_{p^{s-1}}(z_0)}.$$ The number $f(z_0)$ is referred to as the {\it unit root part} of the Frobenius action on a suitable $p$-adic cohomology. We shall make this a bit more explicit in Section \ref{motive}. From (\ref{cauchy}) it follows that $f(z_0)\is F_p(z_0)\mod{p}$. But it turns out that for some values of $z_0$ one has stronger congruences, a remarkable phenomenon called {\it supercongruences}. In this paper we prove the following theorem. \begin{theorem}\label{main} Let $\epsilon_p=(-1)^{d(p-1)/2}$ and suppose that $F_p(\epsilon_p)$ is a $p$-adic unit. Then $$F_p(\epsilon_p)\is f(\epsilon_p)\mod{p^2}.$$ \end{theorem} This proves part of the following conjecture, which we would like to propose here. \begin{conjecture}\label{conj1} With the notations as above let $\epsilon=\pm1$ and suppose that $F_p(\epsilon)$ is a $p$-adic unit. Then $F_p(\epsilon)\is f_p(\epsilon)\mod{p^2}$. \end{conjecture} It should be remarked that if $p\is3\mod{4}$ then $F_p(-\epsilon_p)\is0\mod{p}$ by Corollary \ref{Fzero}.
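Both the truncated sums and Dwork's congruence (\ref{cauchy}) are easy to experiment with numerically, using the identity $(1/2)_n/n!=\binom{2n}{n}/4^n$, which is $p$-integral for odd $p$. The helper below is a brute-force sketch, not an efficient implementation.

```python
from math import comb

def trunc_F(d, p, s, z, prec):
    """F_{p^s}(z) mod p^prec, using (1/2)_n / n! = C(2n, n) / 4^n."""
    mod = p ** prec
    inv4 = pow(4, -1, mod)            # 4 is a unit mod p^prec for odd p
    total = 0
    for n in range(p ** s):
        alpha = comb(2 * n, n) * pow(inv4, n, mod) % mod
        total = (total + pow(alpha, d, mod) * pow(z % mod, n, mod)) % mod
    return total

# Dwork's congruence in cross-multiplied form:
#   F_{p^{s+1}}(z0) * F_{p^{s-1}}(z0) == F_{p^s}(z0)^2  (mod p^s),
# here for d = 2, z0 = 1, p = 3 and s = 1 (recall F_1 = 1)
p, d = 3, 2
lhs = trunc_F(d, p, 2, 1, 1) * trunc_F(d, p, 0, 1, 1)
rhs = trunc_F(d, p, 1, 1, 1) ** 2
```

Since $f(z_0)$ agrees with $F_{p^{s+1}}(z_0)/F_{p^s}(z_0)$ modulo $p^s$, the theorem can likewise be probed by checking $F_p(\epsilon_p)F_{p^2}(\epsilon_p)\is F_{p^3}(\epsilon_p)\bmod p^2$ for small $p$.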
So the only values $F_p(\epsilon)$ which are still conjectural are $F_p(-1)$ with $p\is1\mod{4}$. For some choices of $d,\epsilon$ we conjecture stronger congruences. \begin{conjecture}\label{conj2} Suppose that $F_p(\epsilon)$ is a $p$-adic unit. Then we have $F_p(\epsilon) \is f_p(\epsilon)\mod{p^3}$ in the following cases: $d=3$ and $\epsilon=\pm1$, $d=4$ and $\epsilon=1$, $d=5$ and $\epsilon=1$, $d=6$ and $\epsilon=1$. Moreover, in the latter case we expect $F_p(1)\is f_p(1)\mod{p^5}$. \end{conjecture} There are a number of results which go in this direction, although their formulation does not involve the unit root $f_p(\epsilon)$ but an integer, usually the $p$-th coefficient of an $L$-series occurring in number theory. For example, when $d=2$ Mortenson \cite{Mor1} showed that $F_p(1)\is \leg{-4}{p}\mod{p^2}$. Presumably we have $f_p(1)=\leg{-4}{p}$. In general we expect that $f_p(\epsilon)$ is a zero of the $p$-th factor of the $L$-series associated to the underlying hypergeometric motive. We explain this in more detail in Section \ref{motive}. In the case $d=3$ several authors (Ishikawa, Van Hamme, Ahlgren) independently proved that $$F_p(1)\is c_p\mod{p^2}$$ where $c_p$ is the $p$-th coefficient of $\eta(4\tau)^6\in S_3(16,\chi(-4))$, see \cite[p.~322]{Mor2} and the references therein. The notation $S_k(N,\chi)$ stands for the modular cusp forms of weight $k$ with group $\Gamma_0(N)$ and character $\chi$. In particular $\chi(a)$ stands for the Legendre symbol $\leg{a}{.}$. It is a CM form given by $c_p=2(a^2-b^2)$ where $p=a^2+b^2$ with $a$ odd. For a proof we refer to \cite[Thm 4]{Mor2}. Numerical experiments show that these congruences do not hold modulo $p^3$. Surprisingly enough, these experiments also suggest that $F_p(1)\is f_p(1)\mod{p^3}$. Presumably $f_p(1)$ is the unit root of $x^2-c_px+p^2$ corresponding to the local Euler factor of the $L$-series of the modular form.
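The $d=3$ congruence can be checked for small $p\is1\mod4$ directly from the CM description $c_p=2(a^2-b^2)$, $p=a^2+b^2$ with $a$ odd. The sketch below does exactly this (brute force, for illustration only).

```python
from math import comb, isqrt

def trunc_Fp(d, p, prec):
    """F_p(1) mod p^prec, via (1/2)_n / n! = C(2n, n) / 4^n."""
    mod = p ** prec
    inv4 = pow(4, -1, mod)
    return sum(pow(comb(2 * n, n) * pow(inv4, n, mod) % mod, d, mod)
               for n in range(p)) % mod

def cm_coeff(p):
    """c_p = 2(a^2 - b^2) with p = a^2 + b^2, a odd (needs p = 1 mod 4)."""
    for a in range(1, isqrt(p) + 1, 2):
        b = isqrt(p - a * a)
        if a * a + b * b == p:
            return 2 * (a * a - b * b)
    raise ValueError("p is not a sum of two squares")

# e.g. p = 5 = 1 + 4 gives c_5 = 2(1 - 4) = -6, and F_5(1) mod 25 is 19 = -6 mod 25
```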
Kilbourn \cite{Kil} has shown that when $d=4$ we have $$F_p(1)\is a_p\mod{p^3},$$ where $a_p$ is the $p$-th coefficient of the modular form $\eta(2\tau)^4\eta(4\tau)^4=\sum_{n\ge1}a_nq^n,\ q=e^{2\pi i\tau}$ in $S_4(8,\chi_0)$. By $\chi_0$ we denote the trivial character. Presumably $f_p(1)$ is the $p$-adic unit root of $x^2-a_px+p^3$ corresponding to the local Euler factor at $p$ of the $L$-series of the modular form. We cannot prove this, but if true it implies that $f_p(1)\is a_p\mod{p^3}$. Recently Osburn, Straub and Zudilin \cite{OSZ} proved that $F_p(1)\is b_p\mod{p^3}$, where $b_p$ is the $p$-th coefficient of the unique newform in $S_6(8,\chi_0)$. It is conjectured that this congruence holds modulo $p^5$ for all odd $p$. We believe that $f_p(1)$ is the $p$-adic unit zero of $x^2-b_px+p^5$. Similarly as before this would imply that $f_p(1)\is b_p\mod{p^5}$. Besides these results we would like to record the following conjecture. \begin{conjecture}\label{conj3} We make the implicit assumption that $F_p(-1)$ is a $p$-adic unit. When $d=3$ we expect $F_p(-1)\is c_p\mod{p^2}$ where $c_p$ is the $p$-th coefficient of $\eta(\tau)^2\eta(2\tau)\eta(4\tau)\eta(8\tau)^2\in S_3(8,\chi(-8))$. It is a CM form with coefficients given by $2(2b^2-a^2)$ where $p=a^2+2b^2$ in case $p\is1,3 \mod{8}$. As in Conjecture \ref{conj2} we also expect that $F_p(-1)\is f_p(-1)\mod{p^3}$. When $d=5$ we expect $F_p(-1)\is d_p\mod{p^2}$ where $d_p=\left({-8\over p}\right) (\delta_p^2-2p^2)$ and $\delta_p$ is the $p$-th coefficient of the form $g\in S_3(256,\chi(-4))$ whose expansion starts with \begin{eqnarray*} g(\tau)&=&q-2\sqrt{-2}q^3+4q^5+8\sqrt{-2}q^7+q^9+10\sqrt{-2}q^{11}+20q^{13}-8\sqrt{-2}q^{15}\\ &&-10q^{17}-10\sqrt{-2}q^{19}+32q^{21}-8\sqrt{-2}q^{23}+9q^{25}-20\sqrt{-2}q^{27}+20q^{29}+\cdots \end{eqnarray*} Since $f_p(-1)$ is (presumably) a zero of $x^2-d_px+p^4$ we should have $f_p(-1)\is d_p\mod{p^4}$. However, experiment shows that $F_p(-1)\is f_p(-1)$ only holds modulo $p^2$.
We are indebted to Wadim Zudilin and Dave Roberts for the (conjectural) identification of the coefficients $d_p$. \end{conjecture} A natural, and often asked, question is what happens with the values of $F_{p^s}(\epsilon)$ with $\epsilon=\pm1$. Numerical experiments suggest that the following generalization of Theorem \ref{main} might be true. \begin{conjecture} Let $\epsilon=\pm1$ and suppose that $F_p(\epsilon)$ is a $p$-adic unit. Then we have $$F_{p^s}(\epsilon)\is f_p(\epsilon)F_{p^{s-1}}(\epsilon)\mod{p^{2s}}$$ for all integers $s\ge 1$. \end{conjecture} Besides supercongruences for hypergeometric sums with parameters $1/2$ and $1$ there exist several other types for other parameter choices. We refer to \cite{LTYZ} for a proof of Rodriguez-Villegas's mod $p^3$ conjecture for the 14 truncated hypergeometric sums of order $4$ corresponding to Calabi-Yau varieties. The key to the proof of Theorem \ref{main} is the special symmetry of the hypergeometric differential equation for $F(z)$. It reads $\theta^dF=z(\theta+1/2)^dF$, where $\theta$ is the derivation $z{d\over dz}$. A simple verification shows that if $F(z)$ is any solution of this differential equation then so is $z^{-1/2}F(1/z)$. The actual proof of Theorem \ref{main} is completely elementary, but at the end of the proof we sketch the role of this symmetry in the background. \section{Proofs} We start with a few well-known elementary congruences. \begin{lemma}\label{babbage} For any odd prime $p$ and any integers $0<b\le a$ we have $${ap\choose bp}\is{a\choose b}\mod{p^2}.$$ \end{lemma} The congruence was proven by Babbage in 1819, \cite{babbage}. In 1862 Wolstenholme \cite{wolstenholme} showed that it holds modulo $p^3$ for all primes $p\ge5$. \begin{proof} Observe that $${ap\choose bp}=\prod_{k=1}^{(a-b)p}{k+bp\over k}.$$ Split the product into factors with $p|k$ (writing $k=lp$) and factors where $k$ is not divisible by $p$.
We get $${ap\choose bp}=\prod_{l=1}^{a-b}{l+b\over l}\prod_{k=1\atop (k,p)=1}^{(a-b)p} \left(1+{bp\over k}\right),$$ where the second product is restricted to $k\not\is0\mod{p}$. The first factor equals ${a\choose b}$, while the second is modulo $p^2$ equal to $$1+\sum_{k=1\atop(k,p)=1}^{(a-b)p}{bp\over k}.$$ The well-known fact that $\sum_{k=1}^{p-1}1/k\is0\mod{p}$ implies that the second product is $1\mod{p^2}$. This proves our assertion. \end{proof} \begin{lemma}\label{eisenstein} Let $\gamma=(4^{p-1}-1)/p$. Then $$\sum_{j=1}^{p-1}{(-1)^{j-1}\over j}\is \gamma\mod{p}.$$ \end{lemma} This lemma occurs in the work of Eisenstein \cite{Eisenstein}. \begin{proof} First notice that $${4^{p-1}-1\over p}={1\over 4p}(4^p-4)={2^p-2\over p}{2^p+2\over 4}.$$ By Fermat's little theorem the last factor is $1\mod{p}$ and we get that $${4^{p-1}-1\over p}\is{2^p-2\over p}\mod{p}.$$ We compute the latter modulo $p$: $${1\over p}(2^p-2)={1\over p}\sum_{k=1}^{p-1}{p\choose k} =\sum_{k=1}^{p-1}{1\over k}{p-1\choose k-1}.$$ The number ${p-1\choose k-1}$ is the coefficient of $x^{k-1}$ in $$(1+x)^{p-1}\is {x^p+1\over x+1}\is 1-x+x^2-x^3+\cdots+x^{p-1}\mod{p}.$$ Hence ${p-1\choose k-1}\is (-1)^{k-1}\mod{p}$ and our congruence follows. \end{proof} \begin{lemma}\label{symmetry} Define $\alpha_r={(1/2)_r\over r!}$. Then for any odd prime $p$ and any integer $0\le r<p/2$ we have $$\alpha_{{p-1\over2}-r}\is(-1)^{p-1\over2}\alpha_r\mod{p}.$$ \end{lemma} \begin{proof} Notice that $$\alpha_r\is{(1/2)_r\over r!}\is {(1/2-p/2)_r\over r!}\is(-1)^r{(p-1)/2\choose r} \mod{p}.$$ The symmetry is now immediate from the last expression. \end{proof} A direct corollary is the following. \begin{corollary}\label{Fzero} Suppose $p\is3\mod{4}$. Then $F_p(-\epsilon_p)\is0\mod{p}$.
\end{corollary} \begin{proof} Notice that \begin{eqnarray*} F_p(-\epsilon_p)&=&\sum_{r=0}^{(p-1)/2}\alpha_r^d(-\epsilon_p)^r\\ &\is& (-1)^{d(p-1)/2}\sum_{r=0}^{(p-1)/2}\alpha_{{p-1\over2}-r}^d(-\epsilon_p)^r\mod{p}\\ &\is& (-1)^{d(p-1)/2}(-\epsilon_p)^{{p-1\over2}} \sum_{r=0}^{(p-1)/2}\alpha_{r}^d(-\epsilon_p)^r\mod{p}\\ &\is& -F_p(-\epsilon_p)\mod{p}, \end{eqnarray*} which implies our assertion. \end{proof} \begin{lemma}\label{split} Let $p$ be an odd prime and $r,r',t$ integers $\ge0$ with $r=pr'+t$ and $t<p$. Let $\alpha_r$ be as in the previous lemma and $\gamma=(4^{p-1}-1)/p$. If $p/2<t$, then $p$ divides $\alpha_r$, and if $t<p/2$ we have $$\alpha_r\is \alpha_{r'}\alpha_t\left(1-\gamma pr'+ 2pr'\sum_{j=1}^{2t}{(-1)^{j-1}\over j}\right)\mod{p^2}.$$ \end{lemma} Modulo $p$ the congruence reads $\alpha_r\is\alpha_{r'}\alpha_t\mod{p}$. This is known as the Lucas property for $\alpha_r$. \begin{proof} Instead of $\alpha_r$ we start with ${2r\choose r}$. Notice that $${2r\choose r}={2pr'\choose pr'}{\prod_{k=1}^{2t}(k+2pr')\over \prod_{k=1}^t(k+pr')^2}.$$ Note that if $t>p/2$ the product in the numerator contains the factor $p+2pr'$ and is therefore divisible by $p$. Suppose from now on that $t<p/2$ and consider the equation modulo $p^2$. We apply Lemma \ref{babbage} to the binomial coefficient and get ${2r'\choose r'}$. The product over $k$ becomes ${2t\choose t}$ times $$1+2pr'\left(\sum_{k=1}^{2t}{1\over k}-\sum_{k=1}^t{1\over k}\right)\mod{p^2}.$$ Notice also that $$\sum_{k=1}^{2t}{1\over k}-\sum_{k=1}^t{1\over k}= \sum_{k=1}^{2t}{(-1)^{k-1}\over k}.$$ Finally use the relation ${2r\choose r}=4^r\alpha_r$. Putting everything together we find that $$\alpha_r\is\alpha_{r'}\alpha_t 4^{r'(1-p)}\left( 1+2pr'\sum_{k=1}^{2t}{(-1)^{k-1}\over k}\right)\mod{p^2}.$$ Using $4^{r'(1-p)}\is1-pr'\gamma\mod{p^2}$ yields our assertion. \end{proof} {\it Proof} of Theorem \ref{main}.
In view of congruence (\ref{cauchy}) it suffices to prove that $F_{p^s}(\epsilon_p)\is F_{p}(\epsilon_p)F_{p^{s-1}}(\epsilon_p)\mod{p^2}$ for $s=2$, but we will do it for all $s\ge2$. Use the notation $\alpha_r ={(1/2)_r\over r!}$ and Lemma~\ref{split} to find $$ F_{p^s}(z)\is\sum_{r'=0}^{p^{s-1}-1}\sum_{t=0}^{(p-1)/2} (\alpha_{r'}\alpha_t)^dz^{pr'+t} \left(1-\gamma dpr'+2dpr'\sum_{k=1}^{2t}{(-1)^{k-1}\over k}\right)\mod{p^2}. $$ The terms with $t>p/2$ do not occur since $\alpha_r^d\is0\mod{p^2}$ whenever $t>p/2$. This gives $$F_{p^s}(z)\is F_p(z)F_{p^{s-1}}(z^p)+pd\left(G_1(z)-\gamma F_p(z)\right) \sum_{r'=0}^{p^{s-1}-1}r'z^{pr'} \alpha_{r'}^d\mod{p^2}$$ where $$G_1(z)= 2\sum_{t=0}^{(p-1)/2}\left(\sum_{k=1}^{2t}{(-1)^{k-1}\over k}\right) \alpha_t^d z^t.$$ In order to arrive at our result we set $z=\epsilon_p$ and show that $G_1(\epsilon_p)\is\gamma F_p(\epsilon_p)\mod{p}$. Consider $G_1(\epsilon_p)=2\Sigma=\Sigma+\Sigma$ as a sum of two (equal) sums over $t$. In one of these we replace $t$ by $(p-1)/2-t$ and obtain $$\sum_{t=0}^{(p-1)/2}\left(\sum_{k=1}^{p-1-2t}{(-1)^{k-1}\over k}\right) \alpha_{(p-1)/2-t}^d\epsilon_p^{(p-1)/2-t}.$$ Apply Lemma \ref{symmetry} and replace $k$ in the inner summation by $p-k$. We get $$\sum_{t=0}^{(p-1)/2}\left(\sum_{k=2t+1}^{p-1}{(-1)^{-p+k-1}\over p-k}\right) \alpha_{t}^d\epsilon_p^{t}\mod{p}.$$ This equals $$\sum_{t=0}^{(p-1)/2}\left(\sum_{k=2t+1}^{p-1}{(-1)^{k-1}\over k}\right) \alpha_{t}^d\epsilon_p^{t}\mod{p}.$$ Thus we obtain after addition of $\Sigma$, $$G_1(\epsilon_p)\is\sum_{t=0}^{(p-1)/2}\left(\sum_{k=1}^{p-1}{(-1)^{k-1}\over k}\right) \alpha_{t}^d\epsilon_p^{t}\is \left(\sum_{k=1}^{p-1}{(-1)^{k-1}\over k}\right) F_p(\epsilon_p)\mod{p}.$$ Application of Lemma \ref{eisenstein} yields the desired result. \medskip \hfill$\Box$\medskip \section{The underlying mechanism} The proof of our main result uses a symmetry of the polynomials $F_p(z),G_1(z)$ modulo $p$.
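The elementary lemmas and the key identity $G_1(\epsilon_p)\is\gamma F_p(\epsilon_p)\bmod p$ used above can all be spot-checked numerically. The following sketch does so for small primes (brute force, for illustration only).

```python
from math import comb

def inv(a, p):
    return pow(a, -1, p)

def alpha(n, p):
    """(1/2)_n / n! = C(2n, n) / 4^n mod p."""
    return comb(2 * n, n) * pow(inv(4, p), n, p) % p

def babbage_exponent(a, b, p, cap=3):
    """Largest e <= cap with C(ap, bp) == C(a, b) mod p^e."""
    diff = comb(a * p, b * p) - comb(a, b)
    e = 0
    while e < cap and diff % p ** (e + 1) == 0:
        e += 1
    return e

def eisenstein_holds(p):
    """gamma = (4^{p-1} - 1)/p equals the alternating harmonic sum mod p."""
    gamma = (4 ** (p - 1) - 1) // p
    alt = sum((-1) ** (j - 1) * inv(j, p) for j in range(1, p))
    return (gamma - alt) % p == 0

def key_identity_holds(p, d):
    """G_1(eps_p) == gamma * F_p(eps_p) mod p, with eps_p = (-1)^{d(p-1)/2}."""
    gamma = (4 ** (p - 1) - 1) // p % p
    eps = ((-1) ** (d * (p - 1) // 2)) % p
    rng = range((p - 1) // 2 + 1)          # alpha_t = 0 mod p beyond (p-1)/2
    F = sum(pow(alpha(t, p), d, p) * pow(eps, t, p) for t in rng) % p
    inner = lambda t: sum((-1) ** (k - 1) * inv(k, p) for k in range(1, 2 * t + 1))
    G1 = 2 * sum(inner(t) * pow(alpha(t, p), d, p) * pow(eps, t, p) for t in rng) % p
    return (G1 - gamma * F) % p == 0
```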
We show here how this is forced by the symmetry of the hypergeometric equation. One easily sees that $F_p(z)\mod{p}$ is the unique polynomial of degree $<p/2$ which satisfies our hypergeometric differential equation modulo $p$ and which has constant term $1$. Furthermore, $F_p(z)\log z+G_p(z)$ is another solution modulo $p$. By the symmetry of our equation $z^{(p-1)/2}F_p(1/z)$ is also a polynomial solution modulo $p$. Hence, by uniqueness of $F_p$, $z^{(p-1)/2}F_p(1/z)\is\lambda F_p(z)\mod{p}$ for some $\lambda$. To determine $\lambda$ we set $z=\epsilon_p$. Then $\epsilon_pF_p(\epsilon_p)=\lambda F_p(\epsilon_p)$. Since $F_p(\epsilon_p)$ is a $p$-adic unit by assumption we conclude that $\lambda=\epsilon_p$. Hence $F_p(z)$ is a reciprocal or anti-reciprocal polynomial. We observe that $z^{(p-1)/2}F_p(1/z)\log(1/z)+z^{(p-1)/2}G_p(1/z)$ is also a mod $p$ solution. Multiply by $\epsilon_p$ and add $F_p(z)\log z+G_p(z)$. We find the new solution $G_p(z)+\epsilon_pz^{(p-1)/2}G_p(1/z)$, which is a polynomial solution. Hence it equals $\mu F_p(z)$ for some $\mu$. To find the value of $\mu$ we set $z=0$. The constant term of $G_p(z)$ is $0$ and the constant term of $\epsilon_pz^{(p-1)/2}G_p(1/z)$ is the leading coefficient of $\epsilon_pG_p(z)$, which is $2\sum_{j=1}^{p-1}{(-1)^{j-1}\over j}$, hence $2\gamma$ by Lemma \ref{eisenstein}. Using $F_p(0)=1$ we conclude that $\mu=2\gamma$. Now set $z=\epsilon_p$ in $$\epsilon_pz^{(p-1)/2}G_p(1/z)+G_p(z)\is 2\gamma F_p(z)\mod{p}$$ and we obtain $G_p(\epsilon_p)\is\gamma F_p(\epsilon_p)\mod{p}$, the key step in the proof of our theorem. \section{Hypergeometric motives}\label{motive} In this section we explain the nature of the unit root $f_p(z_0)$ via finite hypergeometric sums and their $\zeta$-functions. For any $q=p^k$ we consider a generator $\omega$ of the group of multiplicative characters on $\bbbf_q^{\times}$.
Then we define the Gauss sum $$g_q(\omega^k)=\sum_{x\in\bbbf_q^{\times}}\omega(x)^k\zeta_p^{\tr(x)},$$ where $\tr:\bbbf_q\to\bbbf_p=\bbbz/p\bbbz$ is the trace map and $\zeta_p$ is a primitive $p$-th root of unity. Let $\phi$ be the unique character of order $2$. Let $t\in\bbbf_q^{\times}$ and define $$H_q(t)={(-1)^d\over1-q}\sum_{m=0}^{q-2}\left({g_q(\phi\omega^m)g_q(\omega^{-m}) \over g_q(\phi)}\right)^d\omega((-1)^dt)^m.$$ It turns out that the values are rational integers which are independent of the choice of $\omega$ and $\zeta_p$. Such functions were introduced by John Greene and, independently, Nick Katz at the end of the 1980s. According to Katz these sums are traces of the Frobenius operator on $l$-adic cohomology associated to the hypergeometric differential equation. More concretely, hypergeometric sums show up in point counting results on algebraic varieties over finite fields. The relevant example for us is the following. \begin{theorem} Let $q$ be an odd prime power, $t\in\bbbf_q^{\times}$ and $d\ge2$ an integer. Then the number of points with coordinates in $\bbbf_q^{\times}$ on the hypersurface $$X_t:\ \prod_{i=1}^d(x_i+2+x_i^{-1})=4^dt^{-1}$$ is given by $$ {(q-2)^d-(-1)^d\over q-1}-(-1)^d H_q(t). $$ \end{theorem} \begin{proof} This is a consequence of \cite[Thm 6.1]{BCM}. Since our hypergeometric parameters are just $1/2$ and $1$ we are in a special situation where the parameters $a_i$ from \cite[Thm 6.1]{BCM} read $(2,\ldots,2,-1,\ldots,-1)$ with $d$ repetitions of $2$ and $2d$ repetitions of $-1$. The corresponding variety is given by the intersection of the following varieties in $(\bbbp^2)^d$, $$u_1+v_1+w_1=u_2+v_2+w_2=\cdots =u_d+v_d+w_d=0,\quad \lambda \prod_{i=1}^d u_i^2=\prod_{i=1}^dv_iw_i.$$ Elimination of the $u_i$ gives us $\lambda\prod_{i=1}^d(v_i+w_i)^2= \prod_{i=1}^dv_iw_i$. Then set $x_i=v_i/w_i$ and $\lambda=t/4^d$ to get the equation of our assertion.
\cite[Thm 6.1]{BCM} gives the point count with invertible coordinates in $\bbbf_q$ as $${(q-2)^d\over q-1}+{1\over q^d(q-1)}\sum_{m=1}^{q-2}g_q(\omega^{2m})^d g_q(\omega^{-m})^{2d} \omega(\lambda)^m.$$ Use the Hasse-Davenport relation $g_q(\omega^{2m})=\omega(4)^m g_q(\omega^m)g_q(\phi\omega^m)/g_q(\phi)$ and $g_q(\omega^m)g_q(\omega^{-m})=(-1)^mq$ to get \begin{eqnarray*} &&{(q-2)^d\over q-1}+{1\over q-1}\sum_{m=1}^{q-2}\left({g_q(\phi\omega^m)g_q(\omega^{-m}) \over g_q(\phi)}\right)^d \omega((-4)^d\lambda)^m\\ &=&{(q-2)^d-(-1)^d\over q-1}-(-1)^d H_q(4^d\lambda) \end{eqnarray*} We find our desired point count after replacing $\lambda$ by $t/4^d$. \end{proof} We now compute the $\zeta$-function associated to the values of $H_q(t)$ (with $t\in\bbbf_p^{\times}$) in the usual way, $$Z_p(t,T)=\exp\left(\sum_{s\ge1}{H_{p^s}(t)\over s}T^s\right),$$ which turns out to be a polynomial in $\bbbz[T]$ of degree $d$ when $t\ne1$. When $t=1$ and $d$ is odd the degree is $d-1$; when $t=1$ and $d$ is even $Z_p(1,T)$ is a polynomial of degree $d-2$ divided by a factor $1-p^{-1+d/2}T$. We shall simply take the degree-$(d-2)$ polynomial for $Z_p(1,T)$ in this case. Here we are not able to prove all this, but we simply mention some folklore results and conjectures which make up a large body of a project on hypergeometric motives by F.~Rodriguez-Villegas, D.~Roberts and M.~Watkins. The latter has implemented the computations in Magma. This is now an impressive library to compute the polynomials $Z_p(T)$, and also to manipulate the global $L$-series that contain the $Z_p(p^{-s})$ as local Euler factors. In addition K.~Kedlaya has recently announced a Sage implementation (largely a port of the Magma implementation) which also calculates the $Z_p(T)$ for us. We use some of these calculations to illustrate the background to the supercongruences and the origin of the unit root $f_p(z_0)$.
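The Gauss-sum definition of $H_q(t)$ and the point count of the theorem above can be cross-checked numerically for small primes (here $q=p$, so the trace is the identity). The sketch below uses complex floating-point Gauss sums and is purely illustrative; for $d=2$ the count formula reduces to $N=(p-3)-H_p(t)$.

```python
import cmath

def finite_hyp_sum(p, d, t, g):
    """H_p(t) from the Gauss-sum definition; g must be a primitive root mod p."""
    zeta = cmath.exp(2j * cmath.pi / p)
    dlog, x = {}, 1
    for j in range(p - 1):                 # dlog[g^j mod p] = j
        dlog[x] = j
        x = x * g % p
    omega = lambda y, k: cmath.exp(2j * cmath.pi * dlog[y % p] * k / (p - 1))
    gauss = lambda k: sum(omega(y, k) * zeta ** y for y in range(1, p))
    half = (p - 1) // 2                    # phi = omega^{(p-1)/2}
    u = ((-1) ** d * t) % p
    total = sum((gauss(half + m) * gauss(-m) / gauss(half)) ** d * omega(u, m)
                for m in range(p - 1))
    return (-1) ** d / (1 - p) * total

def count_points(p, t):
    """Points of X_t with coordinates in F_p^* for d = 2:
    (x1 + 2 + 1/x1)(x2 + 2 + 1/x2) = 16/t."""
    rhs = 16 * pow(t, -1, p) % p
    vals = [(x + 2 + pow(x, -1, p)) % p for x in range(1, p)]
    return sum(1 for a in vals for b in vals if a * b % p == rhs)
```

Rounding the real part of the floating-point sum recovers the integer value of $H_p(t)$, which then matches the direct point count.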
The polynomial $Z_p(t,T)$ can be factored as $\prod_i(1-\mu_iT)$ where the $\mu_i$ are algebraic and all have the same absolute value $p^{(d-1)/2}$ according to the Weil conjectures. The exponent $d-1$ is called the weight of the $\zeta$-factor $Z_p(t,T)$. By abuse of language we shall call the $\mu_i$ the zeros of $Z_p(t,T)$. The idea is now that if $f_p(z_0)$ is a $p$-adic unit, the polynomial $Z_p(z_0,T)$ has a unique $p$-adic zero which is a unit, namely $f_p(z_0)$. Here are some examples. \medskip When $d=4$ and $z_0=1$ we get $Z_p(1,T)=1-a_pT+p^3T^2$ where $a_p$ is the $p$-th coefficient of $\eta(2\tau)^4\eta(4\tau)^4$. It is clear that when this polynomial has a unit root $f_p(1)$, the Newton polygon has $p$-adic slopes $0,3$. Hence $f_p(1)\is a_p\mod{p^3}$. The missing slopes $1,2$ may account for the occurrence of a supercongruence mod $p^3$. \medskip When $d=6$ and $z_0=1$ we get $Z_p(1,T)=(1-pa_pT+p^5T^2)(1-b_pT+p^5T^2)$, where $a_p$ is as above and $b_p$ is the $p$-th coefficient of the newform in $S_6(8,\chi_0)$. The Newton slopes of the first factor are $1,4$ (if $a_p$ is a unit) and those of the second are $0,5$ (if $b_p$ is a unit). This shows that $f_p(1)\is b_p\mod{p^5}$ and one might also consider this as an explanation for the conjectural supercongruence modulo $p^5$. \medskip In general, when $d$ is even and $z_0=1$, we expect a factorization $Z_p(1,T)=U_p(T)V_p(T)$ into two factors in $\bbbz[T]$. The degrees of $U_p,V_p$ are $-1+d/2,-1+d/2$ when $d\is2\mod{4}$ and $-2+d/2,d/2$ if $d\is0\mod{4}$. The factor $U_p$ has one Newton slope equal to $1$ and the others higher. The factor $V_p$, when $f_p(1)$ is a unit, has Newton slopes $0,2$ and higher. So, in a way, the factorization of $Z_p(1,T)$ separates the slope $1$ from the slopes $0,2,\ldots$. Naturally $f_p(1)$ is the unit root zero of $V_p$. The separation of the slopes may be seen as an explanation of the supercongruences from Theorem \ref{main}.
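The $d=4$ case just discussed can be probed concretely: the $q$-expansion of $\eta(2\tau)^4\eta(4\tau)^4$ is generated directly from the product formula, and Kilbourn's congruence $F_p(1)\is a_p\bmod p^3$ can then be verified for small $p$ (a brute-force sketch).

```python
from math import comb

def eta_coeffs(N):
    """Coefficients of eta(2t)^4 eta(4t)^4 = q * prod_n (1-q^{2n})^4 (1-q^{4n})^4, up to q^N."""
    c = [0] * (N + 1)
    c[1] = 1                               # the leading factor q
    for step in (2, 4):
        for n in range(step, N + 1, step):
            for _ in range(4):             # multiply by (1 - q^n) four times
                for k in range(N, n - 1, -1):
                    c[k] -= c[k - n]
    return c

def F_p_d4(p):
    """Truncated sum F_p(1) mod p^3 for d = 4, via (1/2)_n/n! = C(2n,n)/4^n."""
    mod = p ** 3
    inv4 = pow(4, -1, mod)
    return sum(pow(comb(2 * n, n) * pow(inv4, n, mod) % mod, 4, mod)
               for n in range(p)) % mod

a = eta_coeffs(7)     # a[3] = -4, a[5] = -2, a[7] = 24
```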
Speculations of this type were first made by Dave Roberts and Fernando Rodriguez-Villegas in their preprint \cite{RobVillegas}. Instead of speaking about Newton slopes they consider Hodge levels in the cohomology of a hypergeometric motive. Finally we record a few factorizations of $Z_p(-1,T)$ when $d$ is odd. This is a case where factorizations are abundant. \medskip When $d=3$ we get $$Z_p(-1,T)=(1-pT)(1-c_pT+\chi(-8)p^2T^2).$$ Here $c_p$ is the $p$-th coefficient of the modular form $\eta(\tau)^2\eta(2\tau)\eta(4\tau)\eta(8\tau)^2$ and is related to the case $d=3$ in Conjecture \ref{conj3}. \medskip When $d=5$ we get $$Z_p(-1,T)=(1-\gamma_p p^2T)(1-pc_pT+p^4T^2)(1-d_pT+p^4T^2),$$ where $d_p$ is the coefficient defined in Conjecture \ref{conj3} and $c_p$ is the $p$-th coefficient of $\eta(4\tau)^6$. The coefficient $\gamma_p$ is $-1$ if $p\is5\mod{8}$ and $1$ otherwise. \medskip When $d=7$ we get $$Z_p(-1,T)=(1-p^3T)(1-pa_pT+p^6T^2)Q_4(T),$$ where $Q_4$ is a factor of degree $4$. Here $a_p=\leg{-4}{p}(\phi_p^2-2p^2)$ where $\phi_p$ is the $p$-th coefficient of the form in $S_3(32,\chi(-4))$ that begins with $$q+4iq^3+2q^5-8iq^7-7q^9-4iq^{11}-14q^{13}+8iq^{15}+18q^{17}-12iq^{19}+32q^{21} +40iq^{23}+\cdots$$ Moreover, when $p\is3,5\mod{8}$ the polynomial $Q_4$ factors into $1-p^6T^2$ times a quadratic factor $1-\gamma_pT+p^6T^2$. However, this does not give us anything stronger than mod $p^2$ congruences. We are indebted to Dave Roberts for the identification of the modular form.
\section{Introduction} Let $Y$ be a smooth rational surface and let $D\subset Y$ be an effective reduced anticanonical divisor. Such pairs $(Y, D)$, called anti-canonical pairs, have a rich geometry. They were first investigated systematically by Looijenga and by Friedman, among others, in the 1980s. Note that $Y-D$ comes with a canonical (up to scaling) nowhere-vanishing 2-form $\Omega$ with simple poles along $D$. When the intersection matrix of $D$ is negative definite, $D$ can be contracted and $Y$ becomes a singular analogue of a K3 surface (a normal complex analytic surface with trivial dualizing sheaf). Motivated by mirror symmetry, Gross, Hacking and Keel introduced important new ideas in a series of papers on log Calabi-Yau varieties, beginning with \cite{GrHaKe11} and \cite{GrHaKe12}. In particular, they proved Torelli type results in \cite{GrHaKe12} conjectured by Friedman. In this regard, it was shown in \cite{Pa13} that the symplectic cohomology of $Y-D$ is canonically isomorphic to the vector space of global sections of the structure sheaf of its mirror. Readers are also referred to \cite{Auroux}, \cite{GHKK}, \cite{GHS} and the references therein for more about this mirror symmetry story. This survey has a more topological flavour: we survey some other aspects of the smooth topology, algebraic geometry, symplectic geometry and contact geometry of anti-canonical pairs in Sections 2, 3, 4, 5 respectively. Let $X$ be a smooth, oriented $4$-dimensional manifold. A topological divisor of $X$ refers to a connected configuration of finitely many closed embedded, oriented, labeled smooth surfaces $D=C_1 \cup \dots \cup C_k$ in $X$ such that each intersection between two surfaces is transversal and positive, no three $C_i$ intersect at a common point, and $D$ has empty intersection with $ \partial X$. A topological divisor $D$ is often described by a plumbing graph with vertices corresponding to the surfaces $C_i$ and edges corresponding to intersection points. 
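To fix ideas, the combinatorial data of such a plumbing graph can be encoded directly. The sketch below is our addition (the helper names are ours, and we restrict to a cycle of $k\geq 3$ spheres, the case most relevant later): it builds the matrix of pairwise intersection numbers from the self-intersection sequence and computes its determinant. The sequence $(3,-2,0)$ reappears in an example in Section 2.

```python
def cycle_intersection_matrix(s):
    """Intersection matrix of a cycle of k >= 3 surfaces with
    self-intersections s[0], ..., s[k-1]; adjacent components meet once."""
    k = len(s)
    assert k >= 3, "for k = 2 the two components meet twice; handle separately"
    Q = [[0] * k for _ in range(k)]
    for i in range(k):
        Q[i][i] = s[i]
        Q[i][(i + 1) % k] = 1
        Q[(i + 1) % k][i] = 1
    return Q

def det(Q):
    """Determinant by cofactor expansion (fine for the small k used here)."""
    k = len(Q)
    if k == 1:
        return Q[0][0]
    total = 0
    for j in range(k):
        if Q[0][j] == 0:
            continue
        minor = [row[:j] + row[j + 1:] for row in Q[1:]]
        total += (-1) ** j * Q[0][j] * det(minor)
    return total

# A cycle with self-intersections (3, -2, 0): nonzero determinant,
# so its intersection form is non-degenerate.
print(det(cycle_intersection_matrix([3, -2, 0])))
```

A nonzero determinant here corresponds to the non-degeneracy condition on the intersection matrix that plays a role throughout the survey.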
Associated to $D$ there are plumbed neighborhoods $N_D$ as well as the boundary plumbed 3-manifold $Y_D$, which are all well-defined up to orientation-preserving diffeomorphisms. Given a topological divisor $D=C_1 \cup \dots \cup C_k$ in $X$, we use $[C_i]$ to denote the homology class of $C_i$ in $H_2(X)$ and $H_2(N_D)$, $r(D)=k$ to denote the length of $D$, and $S(D)=(s_1,\cdots, s_{r(D)})$ to denote the sequence of self-intersection numbers. $H_2(N_D)$ is freely generated by the classes $[C_i]$. The intersection matrix of $D$ is the $k$ by $k$ square matrix $Q_D=(s_{ij}=[C_i]\cdot [C_{j}])$, where $\cdot$ is used for any of the pairings $H_2(X) \times H_2(X), H^2(X) \times H_2(X), H^2(X) \times H^2(X, \partial X)$. Via the Lefschetz duality for $N_D$, the intersection matrix $Q_D$ can be identified with the natural homomorphism $Q_D: H_2(N_D)\to H_2(N_D, Y_D)$. We use homology and cohomology with $\mathbb{Z}$ coefficients unless otherwise specified. For a symplectic 4-manifold $(X, \omega)$ a symplectic divisor is a topological divisor $D$ with each $C_i$ symplectic and positively oriented with respect to $\omega$. Let $K_{\omega}$ be the symplectic canonical class of $(X, \omega)$. \begin{definition} A {symplectic log Calabi-Yau pair} $(X,D,\omega)$ is a closed symplectic 4-manifold $(X,\omega)$ together with a nonempty symplectic divisor $D=\cup C_i$ representing the Poincar\'e dual of $-K_{\omega}$. A symplectic log Calabi-Yau pair is called a {symplectic Looijenga pair} if each $C_i$ is a sphere, and an {elliptic log Calabi-Yau pair} if $D$ is a torus. \end{definition} Here are some quick observations, which have well-known analogues in the holomorphic category. \begin{lemma} For a symplectic log Calabi-Yau pair $(X, D, \omega)$, $\bullet$ $c_1(X-D, \omega)=0$, and $(X-D, \omega)$ is minimal in the sense that it contains no symplectic sphere with self-intersection $-1$. $\bullet $ $D=\cup C_i$ is either a torus or a cycle of spheres. 
$ \bullet$ $(X, \omega)$ is a rational or elliptic ruled symplectic 4-manifold. In particular, $\kappa(X, \omega)=-\infty$. $D$ is a cycle of spheres only when $(X, \omega)$ is rational. $\bullet$ $b^+(Q_D)=0$ or $1$. \end{lemma} \begin{proof} The vanishing of $c_1(X-D)$ follows directly from the definition, and the minimality of $X-D$ follows from the adjunction formula. The 2nd bullet is also proved by the adjunction formula. Let $g_i$ be the genus of $C_i$. Then $$ -[C_i]\cdot [C_i]- \sum_{j\ne i} [C_j]\cdot [C_i]=K_{\omega}\cdot [C_i]=-[C_i]\cdot [C_i]+2g_i-2.$$ So $2g_i-2=- \sum_{j\ne i} [C_j]\cdot [C_i] \leq 0$, namely, $g_i\leq 1$ for each $i$. If $g_i=1$ for some $i$, then $ \sum_{j\ne i} [C_j]\cdot [C_i]=0$, which implies that $C_i$ is the only component. The remaining case is that $g_i=0$ for each $i$. In this case, $ \sum_{j\ne i} [C_j]\cdot [C_i]=2$ for each $i$ and clearly $D$ is a cycle of spheres. Since $D$ is a nonempty symplectic divisor representing $-K_{\omega}$, we have $K_{\omega}\cdot [\omega]< 0$. It follows from \cite{Liu96}, \cite{OhOn96} that $(X, \omega)$ is rational or ruled and admits a genus $0$ Lefschetz fibration over a Riemann surface $\Sigma$. Let $F$ be the fibre class. Since $K_{\omega}\cdot F=-2$ and $D$ represents $-K_{\omega}$, the projection of $D$ to $\Sigma$ has nonzero degree. Since $D=\cup C_i$ is either a torus or a cycle of spheres, the genus of $\Sigma$ is at most $1$. The last bullet follows from the fact that $b^+(X)=1$. \end{proof} Therefore elliptic pairs and Looijenga pairs are exactly the symplectic log Calabi-Yau pairs with length $1$ and length at least $2$, respectively. We remark that symplectic log Calabi-Yau pairs have vanishing relative symplectic Kodaira dimension (cf. \cite{LiZh11}). The following is the main result in \cite{LiMa16-deformation}. 
\begin{theorem}[Symplectic deformation] \label{thm: symplectic deformation class=homology classes} Two symplectic log Calabi-Yau pairs are symplectic deformation equivalent if they are homologically equivalent. In particular, each symplectic deformation class contains a K\"ahler pair. Moreover, two symplectic log Calabi-Yau pairs are strictly symplectic deformation equivalent if they are strictly homologically equivalent. \end{theorem} Let us explain the various equivalence notions in the theorem (see \cite{Sa13} for a thorough discussion of equivalence notions for symplectic manifolds). Let $(X^0, D^0, \omega^0)$ and $(X^1, D^1, \omega^1)$ be two symplectic pairs with $r(D^0)=r(D^1)=k$. They are said to be homologically equivalent if there is an orientation-preserving diffeomorphism $\Phi: X^0 \to X^1$ such that $\Phi_*[C^0_j]=[C^1_j]$ for all $j=1,\dots,k$. The homological equivalence is said to be strict if, in addition, $\Phi^*[\omega^1]=[\omega^0]$. When $X^0=X^1$, they are said to be symplectic homotopic if $(D^0, \omega^0)$ and $(D^1, \omega^1)$ are connected by a family of symplectic divisors $(D^t, \omega^t)$, and they are further said to be symplectic isotopic if $\omega^t$ can be chosen to be a constant family. $(X^0, D^0, \omega^0)$ and $(X^1, D^1, \omega^1)$ are said to be symplectic deformation equivalent if they are symplectic homotopic, up to an orientation-preserving diffeomorphism. They are said to be strictly symplectic deformation equivalent if they are symplectic isotopic, up to an orientation-preserving diffeomorphism. A sequence $(s_i)$ of integers is said to be anti-canonical if it is realized as $S(D)$ for a symplectic log Calabi-Yau pair $(X, D, \omega)$. Combined with Theorem 3.1 in \cite{Fr}, we obtain \begin{cor} \label{cor: finite deformation} Given an anti-canonical sequence $(s_i)$, there are only finitely many symplectic deformation types of symplectic log Calabi-Yau pairs $(X, D, \omega)$ with $S(D)=(s_i)$. 
\end{cor} There is an algorithm to write down the anti-canonical sequences, starting from the list of minimal pairs and reversing the minimal reduction process in \cite{LiMa16-deformation}. It is interesting to compare anti-canonical sequences with spherical circular sequences. A spherical circular sequence is the self-intersection sequence of a cycle of symplectic spheres in a rational surface with minimal complement. An anti-canonical sequence $(s_i)$ is said to be rigid if, for any cycle of symplectic spheres $D\subset (X, \omega)$ with $S(D)=(s_i)$ and $(X-D, \omega)$ minimal, $(X, D, \omega)$ is a symplectic log Calabi-Yau pair. \begin{theorem} [Anti-canonical sequences, \cite{LiMa17-contact}] \label{theorem: anti-canonical} Each spherical circular sequence with $b^+=1$ is anti-canonical, and each anti-canonical sequence with $b^+=1$ is rigid. \end{theorem} From the contact point of view, symplectic log Calabi-Yau pairs are separated into three groups, as stated in the following theorem. Here, $Kod(Y, \xi)$ is the contact Kodaira dimension introduced in \cite{LiMa16}. \begin{theorem} [Contact trichotomy, \cite{LiMa17-contact}] \label{prop: convex-concave} Let $(X,D,\omega)$ be a symplectic log Calabi-Yau pair, $Q_D$ the intersection matrix of $D$ and $(s_i)$ the self-intersection sequence. (i) If $Q_D$ is negative definite, then $D$ admits {\rm convex} neighborhoods inducing the same contact $3$-manifold $(Y_D, \xi_D)$, which only depends on $S(D)$ and has $Kod\leq 0$. (ii) If $b^+(Q_D)= 1$, up to local symplectic deformations, $D$ admits {\rm concave} neighborhoods inducing the same contact $3$-manifold $(Y_D, \xi_D)$, which only depends on $S(D)$ and has $Kod=-\infty$. (iii) If $b^+(Q_D)=0$ but $Q_D$ is not negative definite, then $D$ does not admit a regular neighborhood with contact boundary. 
\end{theorem} Golla and Lisca considered a large family $\cal F$ of torus bundles and showed that these torus bundles are equipped with contact structures arising from Looijenga pairs $(X, D, \omega)$ with $b^+(Q_D)= 1$ (Theorem 2.5 in \cite{GoLi14}). They also showed that, for a subfamily of these torus bundles, such a contact structure is the unique universally tight contact structure with vanishing Giroux torsion (Theorem 1.2 in \cite{GoLi14}). This led them to formulate the following conjecture. \begin{conj} [\cite{GoLi14}] For a concave cycle $D$ of symplectic spheres, the contact structure $\xi_D$ on $Y_D$ is universally tight. \end{conj} Moreover, they investigated Stein (and symplectic) fillings and classified them in many cases up to diffeomorphism (Theorems 3.1, 3.2, 3.5 in \cite{GoLi14}). On the other hand, Ohta and Ono classified symplectic fillings of simple elliptic singularities up to symplectic deformation (Theorems 1, 1', 2 in \cite{OhOn03}). Using these results and Corollary \ref{cor: finite deformation}, we establish the following finiteness result. \begin{cor}[Symplectic fillings, \cite{LiMa17-contact}] \label{cor: finite stein} Suppose $(X, D, \omega)$ is a symplectic log Calabi-Yau pair with $b^+(Q_D)=1$. Then $\bullet$ There are finitely many (at least $1$) Stein fillings of $(Y_D, \xi_D)$ up to symplectic deformation, all having $b^+=0$. Moreover, for a Looijenga pair, all Stein fillings have $c_1=0$. $\bullet$ This is also true for minimal symplectic fillings. \end{cor} We end the survey by discussing the geography of Stein fillings for negative definite $Q_D$. The first author is grateful for the opportunity to speak at the `Perspectives of Mathematics in the 21st Century: Conference in Celebration of the 90th Anniversary of Mathematics Department of Tsinghua University'. The authors are also grateful to Kaoru Ono for his interest and useful discussions. The authors were supported by NSF grants DMS 1065927 and 1207037, and are supported by NSF grant 1611680. 
\section{Topology of cycles of spheres in a rational surface} In this section we review some homological facts about topological divisors, especially cycles of spheres, and we refer to \cite{Ne81}, \cite{GoLi14} and \cite{LiMa17-contact} for details. We first introduce a pair of basic operations for topological divisors. \begin{definition} Toric blow-up is the operation of adding a sphere component with self-intersection $-1$ between an adjacent pair of components $C_i$ and $C_{i+1}$ and reducing the self-intersections of $C_i$ and $C_{i+1}$ by $1$. Toric blow-down is the reverse operation. Notice that there is a natural labeling for these operations. Two pairs $(X, D^0)$ and $(X, D^1)$ are said to be toric equivalent if they are connected by toric blow-ups and toric blow-downs. $D$ is said to be toric minimal if no component is an exceptional sphere. Here, an exceptional sphere is a sphere with self-intersection $-1$. \end{definition} They can be performed in the holomorphic and symplectic categories. In the holomorphic category they are often referred to as corner blow-up/down. \begin{lemma} The following are preserved under a toric blow-up/down: $\bullet$ $D$ being a cycle of spheres, $\bullet$ the non-degeneracy of the intersection matrix $Q_D$, $\bullet$ the oriented diffeomorphism type of the plumbed 3-manifold $Y_D$. \end{lemma} The 1st bullet is obvious, while the 2nd bullet is by a direct computation. The 3rd bullet is part of Proposition 2.1 in \cite{Ne81}. Here is an example to illustrate how a sphere with $s=0$ can be used to `balance' the self-intersections of the spheres on its two sides by performing a toric blow-up and a toric blow-down. 
\begin{example}[Toric move]\label{eg: balancing self-intersection by $0$-sphere} The following three cycles of spheres are toric equivalent: \begin{displaymath} \xymatrix @R=1pc @C=1pc { \bullet ^{3} \ar@{-}[r] \ar@{-}[dr]& \bullet ^{-2} \ar@{-}[d] & \bullet ^{2} \ar@{-}[r] \ar@{-}[d]& \bullet ^{-2} \ar@{-}[d] & \bullet ^{2} \ar@{-}[r] \ar@{-}[d]& \bullet ^{-1} \ar@{-}[dl] \\ & \bullet ^{0} & \bullet ^{-1} \ar@{-}[r] & \bullet ^{-1} & \bullet ^{0} \\ } \end{displaymath} \end{example} From now on $D$ is either a smooth torus or a cycle of smooth spheres. When $D$ is a torus with self-intersection $s$, the boundary 3-manifold is the circle bundle with Euler number $s$. \subsection{The sequence $S(D)$ and the boundary torus bundle} When $D$ is a cycle of spheres the labeling is taken to be cyclic. The orientation of $D$ is a cyclic labeling up to permutation. We will assume now that $D$ is a cycle of spheres with the self-intersection sequence $S(D)=(s_i)$. Let $s(D)=\sum_{i=1}^{r(D)} (s_i +2)$ denote the self-intersection number of $D$. \begin{lemma} [cf. Theorem 2.5 and Theorem 3.1 in \cite{GoLi14}] \label{lem: homology of neighborhood} Let $D$ be a cycle of spheres in $X$ and $V=X-N_D$. $\bullet$ $H_2(N_D)=\mathbb{Z}^{r(D)}=H^2(N_D), H_1(N_D)=H^1(N_D)=\mathbb{Z}, H_3(N_D)=H^3(N_D)=0$. $\bullet$ $H_1(Y_D)\to H_1(N_D)$ is a surjection. If $Q_D$ is non-degenerate, then $b_1(Y_D)=1$ and the map $H_1(Y_D)\to H_1(N_D)$ has a finite kernel, $H_2(Y_D)=H^1(Y_D)=\mathbb{Z}$ and the map $H_2(Y_D)\to H_2(N_D)$ is trivial. $\bullet$ Suppose $Q_D$ is non-degenerate and $b_1(X)=0$, then $b_1(V)=b_3(V)=0$, $b_2(V)=b_2(X)-r(D)-1$ and the map $\mathbb{Z}=H_2(Y_D)\to H_2(V)$ is injective. \end{lemma} \begin{comment} \begin{proof} The homology and cohomology of $N_D$ are straightforward to compute since $N_D$ deformation retracts to $D$. 
The groups $H_i(Y_D)$ and the homomorphisms to $H_i(N_D)$ are computed via the long homology exact sequence of $(N_D, Y_D)$, the Lefschetz duality $H_i(N_D, Y_D)=H^{4-i}(N_D)$, the homology and cohomology of $N_D$ in the 1st bullet, and the interpretation of $Q_D$ as the restriction map $H_2(N_D)\to H_2(N_D, Y_D)$. The homology of $V$ is computed via the Mayer--Vietoris sequence of the pair $(N_D, V)$. The vanishing of $b_1(V)$ follows from the portion $ H_1(Y_D) \to H_1(N_D) \oplus H_1(V) \to H_1(X)$, $b_1(X)=0$ and the surjection $H_1(Y_D)\to H_1(N_D)$. The vanishing of $b_3(V)$ follows from the portion $H_4(X)\cong H_3(Y_D) \to H_3(N_D) \oplus H_3(V) \to H_3(X)$ and $b_3(X)=0$. The formula for $b_2(V)$ and $H_2(Y_D)\to H_2(V)$ being injective follow from the portion $$H_3(X) \to H_2(Y_D)=\mathbb{Z} \to H_2(N_D)\oplus H_2(V)\to H_2(X)\to \hbox{Torsion group} $$ and the triviality of the map $H_2(Y_D)\to H_2(N_D)$. \end{proof} \end{comment} Here are obvious restrictions on homologous components of $D$ from the cycle condition. \begin{lemma} \label{lem: homologous components} For a cycle of spheres $D$, $\bullet$ At most three components are homologous in $X$. There are three homologous components only if $r(D)=3$. $\bullet$ There is a pair of homologous components only if $r(D)\leq 4$. $\bullet$ If $[C_i]=[C_{i+1}]$ for some $i$ then $r(D)=3, s_i=s_{i+1}=1$, or $r(D)=2, s_i=s_{i+1}=2$. \end{lemma} When $b^+(X)=1$ there are various restrictions on components with non-negative self-intersection. Let $r^{\geq 0}(D)$ denote the number of components with self-intersection $\geq 0$. \begin{lemma} \label{lem: non-negative components} Suppose $D$ is a cycle of spheres in $X$ with $b^+(X)=1$. $\bullet$ If $C_i$ and $C_j$ are not adjacent and $s_i\geq 0, s_j\geq 0$, then $[C_i]=[C_j]$ and $s_i=s_j=0$. $\bullet$ $r^{\geq 0}(D)\leq 4$. $\bullet$ $r^{\geq 0}(D)=4$ only if $r(D)=4, s_i=0$ for each $i$ and $[C_1]=[C_3], [C_2]=[C_4]$. $\bullet$ Suppose $r(D)\geq 3$. 
If $s_i\geq0, s_{i+1}\geq 0, s_is_{i+1}\geq 1$ for some $i$, then $[C_i]=[C_{i+1}]$ and $s_i=s_{i+1}=1$. This is only possible when $r(D)=3$. \end{lemma} \begin{comment} \begin{proof} Since $b^+(X)=1$, by the light cone lemma, any two disjoint components with non-negative self-intersection must be homologous and have self-intersection $0$. The 2nd and 3rd bullets follow from the 1st bullet. For the last bullet, we can assume the two spheres are $C_1$ and $C_2$. Since $r(D)\geq 3$ we have $[C_1]\cdot[C_2]=1$. So if $[C_1]\neq[C_2]$ and $s_1s_2>1$, $[C_1]$ and $[C_2]$ form a positive definite two dimensional subspace of $H_2(X)$. Suppose $[C_1]\neq[C_2]$ and $s_1=s_2=1$. By toric blowing up the intersection point between $C_1$ and $C_2$, we get two disjoint non-homologous spheres with self-intersection $0$. Contradiction to the 1st bullet. \end{proof} \end{comment} These constraints follow easily from the $b^+(X)=1$ condition. The following lemma, derived from Lemmas \ref {lem: homologous components} and \ref {lem: non-negative components}, is very useful for Theorems \ref{theorem: anti-canonical}, \ref {prop: convex-concave} and \ref {cor: finite stein}. \begin{lemma} [\cite{LiMa17-contact}] \label{lem: |D| > 4 => D at most two consecutive nonnegative spheres} Suppose $D$ is a cycle of spheres in $X$ with $b^+(X)=1$. Up to cyclic permutation and orientation of $D$, we have $\bullet$ If $r(D)\geq 5$, then $r^{\geq 0}(D)\leq 2$. When $r^{\geq 0}(D)=2$, $s_1\geq 0, s_2=0$. $\bullet$ If $r(D)=4$ and $r^{\geq 0}(D)\geq 3$, then $S(D)=(k\geq 0, 0, l<0, 0), [C_2]=[C_4], l+k\leq 0$. $\bullet$ If $r(D)=4$ and $r^{\geq 0}(D)=2$, then either $S(D)=(0, l_1<0, 0, l_2<0), [C_1]=[C_3]$ or $(s_i)=(k\geq 0, 0, l_1<0, l_2<0), l_1+l_2+k\leq 0$. $\bullet$ If $r(D)=3$ and $r^{\geq 0}(D)=3$, then the only possibilities of $S(D)$ are (i) $(1, 1, 1), [C_1]=[C_2]=[C_3]$, (ii) $(1, 1, 0), [C_1]=[C_2]$, (iii) $(2\geq k\geq 0, 0, 0)$. 
$\bullet$ If $r(D)=3$ and $r^{\geq 0}(D)=2$, then the only possibilities of $S(D)$ are (i) $(1, 1, p<0), [C_1]=[C_2]$, (ii) $(k\geq 0 , 0, p<0), p+k\leq 2$. $\bullet$ If $r(D)=2$ and $r^{\geq 0}(D)=2$, then the only possibilities of $S(D)$ are $(4, 1), (4, 0), (3, 1),\\ (3,0), (2, 2), (2, 1), (2, 0), (1, 1), (1, 0), (0, 0)$. $\bullet$ If $r(D)=2$ and $r^{\geq 0}(D)=1$, then $S(D)=(k\geq 0, p<0)$. $\bullet$ If $r(D)=2$ and $r^{\geq 0}(D)=0$, then $S(D)$ is one of $(-1, -1), (-1, -2), (-1, -3) $. \end{lemma} \begin{comment} \begin{proof} Suppose $r(D)\geq 5$. If $r^{\geq 0}(D)\geq 3$ then two such components are not adjacent. But this is impossible due to the 1st bullet of Lemma \ref {lem: non-negative components} and the 3rd bullet of Lemma \ref {lem: homologous components}. Hence $r^{\geq 0}(D)\leq 2$ in this case. When $r^{\geq 0}(D)=2$, the two components must be adjacent by the same reasoning. The claim that one of them has self-intersection $0$ follows from the 4th bullet of Lemma \ref {lem: non-negative components} and the 3rd bullet of Lemma \ref {lem: homologous components}. The proof is similar when $r(D)=4$ and $r^{\geq 0}(D)\geq 3$. In this case two such components are not adjacent, say $C_2, C_4$. By the 1st bullet of Lemma \ref {lem: non-negative components}, $[C_2]=[C_4]$, $s_2=0=s_4$. Suppose $r(D)=4$ and $r^{\geq 0}(D)=2$. If two such components are not adjacent, we can assume them to be $C_1, C_3$, which satisfy $[C_1]=[C_3]$ and $s_1=s_3=0$ by the 1st bullet of Lemma \ref {lem: non-negative components}. If the two components are adjacent, we can assume them to be $C_1, C_2$. Notice that $[C_1]\ne [C_2]$ due to the 3rd bullet of Lemma \ref {lem: homologous components}. Now it follows from the 4th bullet of Lemma \ref {lem: non-negative components} that either $s_1=0$ or $s_2=0$. Suppose $r(D)=3=r^{\geq 0}(D)$. 
Since $s_i\geq 0$ for any $i$, it is easy to see (i), (ii), (iii) give all the possibilities by the 4th bullet of Lemma \ref {lem: non-negative components}. If $r^{\geq 0}(D)=2$, apply the 4th bullet of Lemma \ref {lem: non-negative components} to the pair of components $C_i, C_j$ with $s_i\geq 0, s_j\geq 0$. Suppose $r(D)=2$. Then we just check that the determinant of $Q_D$ is $s_1 s_2-4 \leq 0$. \end{proof} \end{comment} To describe the plumbed 3-manifold $Y_D$, we introduce the matrix in $SL_2(\mathbb Z)$ for a sequence of integers $(-t_1, \cdots, -t_k)$, \[ A(-t_1,\dots,-t_k)= \begin{pmatrix} -t_k & 1 \\ -1 & 0 \end{pmatrix} \begin{pmatrix} -t_{k-1} & 1 \\ -1 & 0 \end{pmatrix} \dots \begin{pmatrix} -t_1 & 1 \\ -1 & 0 \end{pmatrix}.\] \begin{lemma} [Theorem 6.1 in \cite{Ne81}, Theorem 2.5 in \cite{GoLi14}] \label{lem: property of continuous fraction} For a cycle of spheres $D$ with self-intersection sequence $S(D)=(s_1, ..., s_k)$, the plumbed 3-manifold $Y_D$ is the oriented torus bundle $T_A$ over $S^1$ with monodromy $A=A(-s_1,\dots,-s_k)$. The intersection matrix $Q_D$ is non-degenerate if the trace of $A(-s_1,\dots,-s_k)\ne 2$. \end{lemma} \subsection{Toric minimal pairs} \begin{lemma}\label{lem: not negative semi-definite => at least one non-negative} Any cycle of spheres is toric equivalent to a toric minimal one or one with sequence $(-1, p)$. If $S(D)=(-1,p)$, then $Q_D$ is degenerate only if $p=-4$. Suppose $D$ is a toric minimal cycle of spheres with sequence $S(D)=(s_i)$. Then $\bullet$ $b^+(Q_D)\geq 1$ if and only if $s_i\geq 0$ for some $i$. $\bullet$ $Q_D$ is negative definite if $s_i\leq -2$ for all $i$ and less than $-2$ for some $i$. $Q_D$ is negative semi-definite but not negative definite if $s_i= -2$ for each $i$. $\bullet$ $Q_D$ is non-degenerate if either $s_1\geq 0$ and $s_i\leq -2$ for $i\geq 2$, or $s_1=s_2=0$ and $s_i\leq -2$ for $i\geq 3$. \end{lemma} The first statement holds by definition (notice that we do not allow nodal components). 
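As a numerical illustration of Lemma \ref{lem: property of continuous fraction} (this sketch is our addition, not part of the survey; the helper names are ours), one can multiply out the defining factors of $A(-t_1,\dots,-t_k)$ and compare the traces for the three toric-equivalent cycles of the toric-move example above. Since toric equivalence preserves the oriented diffeomorphism type of $Y_D$, and trace is unchanged under conjugation and inversion in $SL_2(\mathbb{Z})$, the traces must coincide:

```python
def mat_mul(A, B):
    """Product of 2x2 integer matrices."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def monodromy(ts):
    """A(-t_1, ..., -t_k): the product of the factors [[-t, 1], [-1, 0]],
    with the factor for t_1 rightmost, as in the definition above."""
    A = [[1, 0], [0, 1]]
    for t in ts:  # left-multiply, so later t_i end up further to the left
        A = mat_mul([[-t, 1], [-1, 0]], A)
    return A

def trace(A):
    return A[0][0] + A[1][1]

# The three toric-equivalent cycles of the toric-move example: their
# boundary torus bundles agree, so the monodromy traces coincide.
for seq in ([3, -2, 0], [2, -2, -1, -1], [2, -1, 0]):
    print(trace(monodromy(seq)))
```

The common trace here is different from $2$, consistent with the non-degeneracy criterion in the lemma.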
The second statement is obvious. Bullets 1, 2 are well-known (cf. Lemma 8.1 in \cite{Ne81}). To prove the 3rd bullet, by Lemma \ref{lem: property of continuous fraction}, we just need to check that the trace of the monodromy matrix is not equal to $2$, which is a direct calculation using Lemma 5.2 in \cite{Ne81}. \begin{comment} \begin{proof} The first statement holds by definition (notice that we do not allow nodal components). The second statement is obvious. Bullets 1, 2 are well-known (cf. Lemma 8.1 in \cite{Ne81}). To prove the 3rd bullet, by Lemma \ref{lem: property of continuous fraction}, we just need to check that the trace of the monodromy matrix is not equal to $2$. For this purpose, we recall the following observation in Lemma 5.2 in \cite{Ne81}: Suppose $t_i \le -2$ for all $i=1,\dots,k$ and $[-t_1,\dots,-t_j]=\frac{p_j}{q_j}$, where $[-t_1,\dots,-t_j]$ is the continued fraction and $p_j,q_j \in \mathbb{Z}$. Then $A(-t_1,\dots,-t_j)= \begin{pmatrix} p_j & q_j \\ -p_{j-1} & -q_{j-1}\end{pmatrix} $, with $p_j \ge p_{j-1}+1 \ge 0$, $q_j \ge q_{j-1}+1 \ge 0$ and $p_j \ge q_j+1\ge 0$ for all $j$. In \cite{Ne81}, it was not claimed that $p_j \ge q_j+1$ but it is a standard fact and can be verified by induction. We apply this observation to the chain of spheres with negative self-intersection. In the first case, the monodromy matrix is $A(-s_1,-t_1,\dots,-t_{k-1})= A(-t_1, \cdots, -t_{k-1}) \begin{pmatrix} -s_1 &1\\-1 &0\end{pmatrix} =\begin{pmatrix} -s_1p_{k-1}-q_{k-1} & p_{k-1} \\ s_1p_{k-2}+q_{k-2} & -p_{k-2}\end{pmatrix}, $ with $s_1\geq 0$ and $p_k, q_k$ as in the observation above. The trace is negative so not equal to $2$. In the second case, the monodromy matrix is $A(0,0, -t_1, \dots,-t_{k-2})= \begin{pmatrix} -p_{k-2} & -q_{k-2}\\ p_{k-3} & q_{k-3} \end{pmatrix} $. Again the trace is negative. 
\end{proof} \end{comment} Each toric minimal, negative definite cycle $D$ with $s(D)\leq -2$ has a dual cycle $\check D$, with the property that the plumbed manifolds $Y_D$ and $Y_{\check D}$ are orientation reversing diffeomorphic (Theorem 7.1 in \cite{Ne81}). To describe the dual cycle we use the $2$ by $k$ matrix $\begin{pmatrix}a_1 &\dots & a_k\cr b_1 &\dots &b_k \end{pmatrix}$ to represent the sequence $(a_1,-2,\dots,-2,a_2,\dots,a_k),$ where $a_i\leq -3$ and there are $b_i$ many $-2$ between $a_i$ and $a_{i+1}$. For a negative definite toric minimal cycle $D$ with $s(D)\leq -2$, we have either two $a_i$ terms or $a_i\leq -4$ for some $i$. The dual cycle $\check{D}$ is represented by the $2$ by $k$ matrix with entries $\check{a}_i=-b_i-3$ and $\check{b}_i=-a_{i+1}-3$. It is easy to check that $\check D$ is also toric minimal, negative definite and $s(\check D)\leq -2$. A remark is that we can also view the elliptic pairs $(s)$ and $(-s)$ as dual pairs in the sense that the boundary 3-manifolds are orientation reversing diffeomorphic. \section{Algebraic geometry of Looijenga pairs} In this section we very briefly review some basic results of Looijenga pairs $(Y, D)$, which have or might have symplectic analogues. Please consult the survey article \cite{Fr} and \cite{GrHaKe11}. \subsection{Torelli and deformation} There are several versions of the Torelli theorem. The following is Theorem 8.5 in \cite{Fr}. \begin{theorem} [A global Torelli] Given two Looijenga pairs and an isomorphism of lattices $\mu$ compatible with $D$, there is an isomorphism $f$ of Looijenga pairs such that $\mu=f^*$ if and only if $\mu$ preserves the nef cone. \end{theorem} Two anticanonical pairs are said to be (holomorphically) deformation equivalent if they are both isomorphic to fibers of a family of anticanonical pairs over a connected base. The following two statements are given in Theorem 3.1 and Theorem 5.14 in \cite{Fr} respectively. 
\begin{theorem} \label{thm: finite kahler deformation} There are only finitely many deformation types of Looijenga pairs with the same self-intersection sequence. Two Looijenga pairs are deformation equivalent if they are homologically equivalent. \end{theorem} \subsection{Cusp singularities} A cusp singularity is the germ of an isolated, normal surface singularity such that the exceptional divisor of the minimal resolution is a cycle of smooth rational curves $D$ meeting transversely. For normal surface singularities, there is a notion of Kodaira dimension $\kappa^{\delta}$, and Gorenstein surface singularities with $\kappa^{\delta}=0$ are simple elliptic singularities and cusp singularities (cf. \cite{OhOn09} and the references therein). Cusp singularities come in dual pairs, and their minimal resolutions are given as dual cycles. Every pair of dual cycles embeds in a Hirzebruch-Inoue surface as the only curves. A cusp singularity is called rational if its minimal resolution is realized as the anti-canonical divisor of a rational surface. By the Mumford-Grauert criterion, any toric minimal, negative definite Looijenga pair $(Y, D)$ arises as the minimal resolution of a rational cusp singularity. Looijenga proved that a cusp is rational if its dual cusp is smoothable, and he conjectured that the converse is also true. The Looijenga conjecture was proved in \cite{GrHaKe11} via mirror symmetry and later by integral-affine geometry in \cite{En}. \section{Deformation classes of symplectic log CY pairs} In this section we give a brief outline of the proof of Theorem \ref {thm: symplectic deformation class=homology classes} and Theorem \ref{theorem: anti-canonical}. \subsection{Operations and minimal pairs} The proofs involve the operations of non-toric blow-up/down and the notion of minimal models. A {\it non-toric blow-up} of $D$ is the proper transform of $D$ under a symplectic blow-up centered at a smooth point of $D$. 
A non-toric blow-down is the reverse operation which symplectically blows down an exceptional sphere not contained in $D$. These operations preserve the log Calabi-Yau condition and there are analogues in the holomorphic category, sometimes referred to as interior blow-up/blow-down. A symplectic log Calabi-Yau pair $(X,D,\omega)$ is called minimal if $(X, \omega)$ is minimal, or $(X, D, \omega)$ is a symplectic Looijenga pair with $X=\mathbb{C}P^2 \# \overline{\mathbb{C}P^2}$. For any symplectic log Calabi-Yau pair $(X,D,\omega)$, we first apply a maximal sequence of non-toric blow-downs using \cite{McOp13} and then a maximal sequence of toric blow-downs. The resulting toric minimal pair, which is actually minimal due to \cite{Pi08}, is called a minimal model of $(X,D,\omega)$. We enumerate the minimal symplectic log Calabi-Yau pairs (modulo cyclic symmetry), all of them having length less than $5$. $\bullet$ Case $(A)$: The base genus of $X$ is $1$. $D$ is a torus. $\bullet$ Case $(B)$: $X=\mathbb{C}P^2$, $c_1=3h$. $(B1)$ $D$ is a torus, $(B2)$ $D$ consists of an $h$-sphere and a $2h$-sphere, or $(B3)$ $D$ consists of three $h$-spheres. $\bullet$ Case $(C)$: $X=\mathbb{S}^2 \times \mathbb{S}^2$, $c_1=2f_1+2f_2$, where $f_1$ and $f_2$ are the homology classes of the two factors. $(C1)$ $D$ is a torus. $(C2)$ $r(D)=2$ and $[C_1]=bf_1+f_2, [C_2]=(2-b)f_1+f_2$. $(C3)$ $r(D)=3$ and $[C_1]=bf_1+f_2, [C_2]=f_1, [C_3]=(1-b)f_1+f_2$. $(C4)$ $r(D)=4$ and $[C_1]=bf_1+f_2, [C_2]=f_1, [C_3]=-bf_1+f_2, [C_4]=f_1$. 
The graphs in (C1), (C2), (C3) and (C4) are given respectively by $$ \xymatrix{ \bullet ^{8} } \quad \xymatrix{ \bullet ^{2b} \ar@{=}[r] & \bullet ^{4-2b} } \quad \xymatrix{ \bullet ^{2b} \ar@{-}[d] \ar@{-}[r] & \bullet ^{0} \ar@{-}[dl] \\ \bullet ^{2-2b} } \quad \xymatrix{ \bullet ^{2b} \ar@{-}[d] \ar@{-}[r] & \bullet ^{0} \ar@{-}[d] \\ \bullet ^{0} \ar@{-}[r] & \bullet ^{-2b} \\} $$ $\bullet$ Case $(D)$: $X=\mathbb{C}P^2 \# \overline{\mathbb{C}P^2}$, $c_1=f+2s$, where $f$ and $s$ are the fiber class and section class with $f\cdot f=0$, $f\cdot s=1$ and $s\cdot s=1$. $(D2)$ $r(D)=2$, and either $([C_1],[C_2])=(af+s,(1-a)f+s)$ or $([C_1],[C_2])=(2s, f)$. $(D3)$ $r(D)=3$ and $[C_1]=af+s, [C_2]=f, [C_3]=-af+s$. $(D4)$ $r(D)=4$ and $[C_1]=af+s, [C_2]=f, [C_3]=-(a+1)f+s, [C_4]=f$. The graphs in (D2), (D3) and (D4) are given respectively by $$ \xymatrix{ \bullet ^{2a+1} \ar@{=}[r] & \bullet ^{3-2a} } \quad \xymatrix{ \bullet ^{4} \ar@{=}[r] & \bullet ^{0} } \quad \xymatrix{ \bullet ^{2a+1} \ar@{-}[d] \ar@{-}[r] & \bullet ^{0} \ar@{-}[dl] \\ \bullet ^{-2a+1} \\ } \quad \xymatrix{ \bullet ^{2a+1} \ar@{-}[d] \ar@{-}[r] & \bullet ^{0} \ar@{-}[d] \\ \bullet ^{0} \ar@{-}[r] & \bullet ^{-2a-1} \\ } $$ \subsection{Classification by homology equivalence} There are two steps to prove Theorem \ref {thm: symplectic deformation class=homology classes}. One step is to show that each (strict) homology type of minimal pairs contains a unique (strict) deformation class, via a combination of pseudo-holomorphic curve techniques and a Thurston-type symplectic construction for pairs consisting of a symplectic 4-manifold and a smooth symplectic surface. We also introduce marked divisors and establish the invariance of their (strict) deformation class under toric and non-toric blow-up/down operations (cf. also \cite{OhOn03}). This invariance property reduces Theorem \ref{thm: symplectic deformation class=homology classes} to the minimal case. 
The statement that each symplectic deformation class contains a K\"ahler pair is not stated in \cite{LiMa16-deformation}, but it follows from the proof outlined above, since each minimal pair clearly deforms to a K\"ahler pair (cf. Section 3 in \cite{LiMa16-deformation} and Theorem 2.4 in \cite{Fr}) and blow-up/down can be performed in the K\"ahler category. We remark that Theorem \ref{thm: symplectic deformation class=homology classes} should also apply to the cases of irreducible nodal spheres and cuspidal spheres, using \cite{Ba} and \cite{OhOn05-cuspidal} respectively. \begin{proof} [Proof of Corollary \ref{cor: finite deformation}] By Theorem \ref{thm: symplectic deformation class=homology classes}, every symplectic deformation class contains a K\"ahler pair. The finiteness for Looijenga pairs follows directly from Theorem \ref{thm: finite kahler deformation}. For elliptic symplectic log Calabi-Yau pairs, where the sequences are of length $1$, the finiteness is more straightforward: it follows from the finiteness of symplectic deformation types in the case of minimal pairs for each self-intersection $s=0, 8, 9$ (cf. Section 3 in \cite{LiMa16-deformation}), and the fact that there is only one way to (non-toric) blow up, up to deformation. \end{proof} \subsection{Anti-canonical sequences} Thanks to the classification of minimal symplectic log Calabi-Yau pairs, determining the anti-canonical sequences is a combinatorial problem. There are also various conditions on spherical circular sequences with $b^+=1$ in Lemma \ref{lem: not negative semi-definite => at least one non-negative}, Lemma \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres} and Lemma \ref{lem: non-negative components}.
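Since the properties studied here depend only on the toric equivalence class, the combinatorial moves on circular self-intersection sequences are easy to mechanize. A minimal sketch, assuming the standard effect of a toric blow-up on a circular sequence (insert a $-1$-sphere between two consecutive components and lower each of their self-intersections by $1$); the function names are ours:

```python
def toric_blow_up(seq, i):
    # Insert a -1-sphere between components i and i+1 of a circular
    # sequence, lowering both neighbouring self-intersections by 1.
    k = len(seq)
    s = list(seq)
    s[i] -= 1
    s[(i + 1) % k] -= 1
    s.insert(i + 1, -1)
    return s

def toric_blow_down(seq, i):
    # Inverse move: delete a -1-component and raise its two neighbours by 1.
    k = len(seq)
    assert seq[i] == -1 and k >= 3
    s = list(seq)
    s[(i - 1) % k] += 1
    s[(i + 1) % k] += 1
    del s[i]
    return s

seq = [1, 4]                          # the (B2) sequence
up = toric_blow_up(seq, 0)            # [0, -1, 3]
assert toric_blow_down(up, 1) == seq  # the two moves are inverse to each other
print(toric_blow_down([2, -1, 3], 1))  # the pattern (k, -1, l) -> (k+1, l+1)
```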
The first statement of Theorem \ref{theorem: anti-canonical}, that every spherical circular sequence with $b^+=1$ is anti-canonical, is deduced from these lemmas, the list of minimal pairs, the observation that whether a spherical circular sequence is anti-canonical only depends on its toric equivalence class, and \begin{prop} Suppose $D\subset (X, \omega)$ is a cycle of spheres in a rational surface $(X, \omega)$ with minimal complement. Then $s(D)\leq 9$, and $S(D)\ne (5+l, -l)$ with $l\geq 2$. $D$ represents $c_1 (X, \omega)$ if $\bullet$ $s_i\geq -1$ for any $i$, or $\bullet$ $S(D)=(1, -p_1+1, -p_2, \dots, -p_{l-1}, -p_l+1)$ with $p_i\geq 2$, $l\geq 2$. \end{prop} This proposition is proved using Theorem 6.10 in \cite{OhOn03}, Proposition 3.14 in \cite{LiZh11}, Theorem 3.1 in \cite{GoLi14}, and a direct verification to exclude $(5+l, -l)$ with $l\geq 0$. The second statement of Theorem \ref{theorem: anti-canonical}, that any anti-canonical sequence with $b^+=1$ is rigid, follows from the propositions below and the observation that whether an anti-canonical sequence is rigid only depends on its toric equivalence class. \begin{prop}\label{lem: positive parabolic} Suppose $(s_i)$ is an anti-canonical sequence belonging to the following list. $\bullet$ $(1, -p_1+1, -p_2, \dots, -p_{l-1}, -p_l+1)$ with $p_i\geq 2$ and $l\geq 2$, so that $r(D)\geq 3$. $\bullet$ $(0, 0, 0, n)$ with $n\leq 0$. $\bullet$ $(1, 1, p), p\leq 1$. $\bullet$ $(1, p)$ with $p\geq 4$. $\bullet$ $(0, n)$ with $n\leq 4$. $\bullet$ $s_i\geq -1$ for each $i$. $\bullet$ $(-1, -2)$ and $(-1, -3)$. Then $(s_i)$ is rigid. \end{prop} \begin{prop} \label{lem: rigid} Suppose $(X, D, \omega)$ is a symplectic Looijenga pair with $b^+(Q_D)= 1$. Then $S(D)$ is toric equivalent to one in Proposition \ref{lem: positive parabolic}.
\end{prop} Proposition \ref{lem: positive parabolic}, except for the last bullet, is proved using Proposition 7.1 in \cite{OhOn03}, Theorems 3.1, 3.2, 3.5 in \cite{GoLi14} and similar arguments. The cases $(s_i)=(-1, -2)$ and $(-1, -3)$ are more delicate, requiring a blow-up trick. Proposition \ref{lem: rigid} is proved by Lemmas \ref{lem: not negative semi-definite => at least one non-negative}, \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres}, \ref{lem: non-negative components}, the toric move in Example \ref{eg: balancing self-intersection by $0$-sphere} and induction on the length of $D$. \begin{comment} \begin{proof} The last bullet follows from \cite{OhOn03}, Proposition 7.1, by smoothing $D$ to a symplectic torus. Bullet 1 follows from Theorem 3.1 in \cite{GoLi14}, the 5th bullet follows from Theorem 3.5 in \cite{GoLi14}, and the sequence $(-1, -3)$ being Stein rigid anti-canonical follows from Theorem 3.2 in \cite{GoLi14}. The case of $p\leq -1$ of the 4th bullet follows from Theorem 3.1 in \cite{GoLi14}, while the remaining sequences $(1, 0), (1,1), (1, 2), (1, 3)$ in Bullet 3 have $s(D)\geq 0$. We next show that the sequence $(s_i)=(0, 0, 0, n), n\leq 0$, is rigid anti-canonical. The proof is similar to Lemma 3.3 in \cite{GoLi14}. $(X,\omega)$ is rational by McDuff's theorem. Since $X$ is clearly not $\mathbb{C} P^2$, it is of the form $(S^2\times S^2 )\# l\overline{\mathbb{C} P^2}$. We already have two visible fiber classes and two sections in $D$. Suppose $(X, \omega)$ is not minimal. Then we either have $n=-1$, or any exceptional sphere intersects the $n-$sphere or the opposite $0-$sphere once. One can proceed inductively. We next show that $(1, 1, 1)$ is rigid. The classes must be homological by the 4th bullet of Lemma 2.5. We can assume that $[C_1]=[C_2]=[C_3]=h$ and the conclusion follows. For $(1, 1, p\leq 0)$, we reduce to the case $(1, 1, 1)$: $[C_1]=[C_2]=h$ and $[C_3]=h-\sum e_i$, and blowing down the $e_i$ gives the reduction.
\end{proof} \end{comment} \begin{comment} \begin{proof} $\bullet$ Suppose $r(D)=2$. When $r^{\geq 0}(D)=2$, it is rigid by the last bullet. When $r^{\geq 0}(D)=1$, it is rigid in the following cases (with overlap): $(0, n\leq 4)$ (5th bullet), $(1, p\leq -1)$ (4th bullet) and $(k, l), k+l+4>0$ (last bullet). For length $2$ Looijenga pairs, since $r(D)\leq 9$, we have the constraint $k+l\leq 4$, except for $(1, 4)$. In particular, all the cases where $k=0$ or $1$ are rigid. We only need to check the case $(k, l), k\geq 2, k+l+4\leq 0$. Perform toric blow-ups $k-1$ times at $C_1$ to get successively $(k-1, -1, l-1), (k-2, -1, -2, l-1), \dots, (1, -1, -2, \dots, -2, l-1)$. Since the length is $k+1$ and $l-1\leq -1$ this falls into the 1st bullet of Proposition \ref{lem: positive parabolic}. When $r^{\geq 0}(D)=0$, $(-1, -1), (-1, -2)$ are rigid since $s(D)\geq 0$, while $(-1, -3)$ is Stein rigid. {\bf From now on we can assume the initial $D$ has $r(D)\geq 3$, toric minimal and hence $r^{\geq 0}(D)\geq 1$}. $\bullet$ Suppose $r(D)=3$ and toric minimal (if not, reduce to the $r=2$ case). When $r^{\geq 0}(D)=3$, it is rigid by the last bullet of Proposition \ref{lem: positive parabolic}. When $r^{\geq 0}(D)=2$, by the 1st bullet of Lemma 2.6, we can assume that $s_1\geq s_2=0$, or $(s)=(1, 1, p\leq 1)$. The case $(1, 1, p)$ is the 3rd bullet of Proposition \ref{lem: positive parabolic}. Assume $s_1\geq s_2=0$. Apply the toric move in Example 2.8 based at $C_2$ to transform to $\bar D$ with $\bar s_1=\bar s_2=0, \bar s_3=s_3+s_1$. If $\bar s_3\geq -1$, then we are done by the last bullet of Proposition \ref{lem: positive parabolic}. Suppose $\bar s_3\leq -2$. Toric blow up the pair $(\bar C_1, \bar C_2)$ and then contract the proper transform of $C_1, C_2$ to reduce the length to $2$. When $r^{\geq 0}(D)=1$, if $s_1=1$, then it belongs to the 1st bullet in Proposition \ref{lem: positive parabolic}.
If $s_1\geq 2$, toric blow up the pair $C_1, C_2$, and if necessary, apply successive toric blow-ups to the pairs of the proper transforms of $C_1$ and the exceptional spheres to get $\bar D$ with $\bar s_1=1, \bar s_2=-1, \bar s_i\leq -2$ for $i\geq 3$ (so $\bar D$ is not toric minimal). Notice that $r^{\geq 0}(\bar D)=1$ and $r(\bar D)\geq r(D)\geq 3$ so $\bar D$ falls into the 1st bullet in Proposition \ref{lem: positive parabolic}. If $s_1=0$, apply the toric move in Example 2.8 based at $C_1$ to increase $s_2$ to $\bar s_2=0$ (while decreasing $s_r$ to $\bar s_r=s_r+s_2$). Notice that $r^{\geq 0}(\bar D)=2, \bar s_1=\bar s_2=0$, and $\bar D$ toric minimal. We treated this case above. $\bullet$ Suppose $r(D)=4$ and $D$ toric minimal. When $r^{\geq 0}(D)=4$, it is rigid by the last bullet of Proposition \ref{lem: positive parabolic}. When $r^{\geq 0}(D)=3$, by the 2nd bullet of Lemma 2.6, $(s)=(k, 0, l, 0)$ with $k\geq 0, l<0$. Apply the toric move based at $C_4$ to transform to the sequence $(0, 0, l+k, 0)$. Notice that $l+k\leq 0$ by the 3rd bullet of Lemma 2.5, so the sequence $(0, 0, l+k, 0)$ belongs to the 2nd bullet of Proposition \ref{lem: positive parabolic}. When $r^{\geq 0}(D)= 2$, we have $(0, l_1, 0, l_2)$ or $(k\geq 0, 0, l_1, l_2)$. For the case $(k\geq 0, 0, l_1, l_2)$ we apply the toric move in Example 2.8 based at $C_2$ to transform to $\bar D$ with $\bar s_1=\bar s_2=0, \bar s_3=s_3+s_1, \bar s_4=s_4\leq -2$. Notice that $\bar s_3\leq 0$. If $\bar s_3=0$, we are in the case $(0, 0, 0, \bar s_4\leq -2)$, which belongs to the 2nd bullet of Proposition \ref{lem: positive parabolic}. If $\bar s_3= -1$, we toric blow down to reduce the length. If $\bar s_3\leq -2$, toric blow up the pair $(\bar C_1, \bar C_2)$ and then contract the proper transform of $C_1, C_2$ to reduce the length to $3$.
{\bf For the case $(0, l_1, 0, l_2)$}, apply the toric move based at $C_1$ to get $(0, 0, 0, l_1+l_2)$, which belongs to the 2nd bullet of Proposition \ref{lem: positive parabolic}. When $r^{\geq 0}(D)= 1$, if $s_1=1$, then it belongs to the 1st bullet in Proposition \ref{lem: positive parabolic}. If $s_1\geq 2$, toric blow up the pair $C_1, C_2$, and if necessary, apply successive toric blow-ups to the pairs of the proper transforms of $C_1$ and the exceptional spheres to get $\bar D$ with $\bar s_1=1, \bar s_2=-1, \bar s_i\leq -2$ for $i\geq 3$ (so $\bar D$ is not toric minimal). Notice that $r^{\geq 0}(\bar D)=1$ and $r(\bar D)\geq r(D)\geq 4$ so $\bar D$ falls into the 1st bullet in Proposition \ref{lem: positive parabolic}. If $s_1=0$, apply the toric move in Example 2.8 based at $C_1$ to increase $s_2$ to $\bar s_2=0$ (while decreasing $s_r$ to $\bar s_r=s_r+s_2$). Notice that $r^{\geq 0}(\bar D)=2, \bar s_1=\bar s_2=0$, and $\bar D$ toric minimal. We treated this case above. \vskip .1in We finally deal with the general case that $r(D)\geq 5$. $\bullet$ Suppose $r(D)\geq 5$ and $D$ toric minimal. Then $r^{\geq 0}(D)\geq 1$ by the 1st bullet of Lemma 2.9 and we can assume that $s_1\geq 0$. By the 1st bullet of Lemma 2.6, $r^{\geq 0}(D)\leq 2$. When $r^{\geq 0}(D)=2$, by the 1st bullet of Lemma 2.6, we can assume that $s_1\geq s_2=0$. Apply the toric move in Example 2.8 based at $C_2$ to transform to $\bar D$ with $\bar s_1=\bar s_2=0, \bar s_3=s_3+s_1, \bar s_r=s_r\leq -2$. Observe that $\bar s_3\leq -1$ by the 1st bullet of Lemma 2.6. If $\bar s_3 =-1$ (then $s_3+s_1=-1$), toric blow down $\bar C_3$ to reduce the length. Suppose $\bar s_3\leq -2$. Toric blow up the pair $(\bar C_1, \bar C_2)$ and then contract the proper transform of $C_1, C_2$ to get $\bar D'$ with $\bar s_1'=1, \bar s_2'=\bar s_3+1\leq -1, \bar s_{r-1}'=\bar s_r+1\leq -1$ (not necessarily toric minimal).
Notice that $r^{\geq 0}(\bar D')=1$ and $r(\bar D')=r(D)-1\geq 4$, so it belongs to the 1st bullet in Proposition \ref{lem: positive parabolic}. Suppose $r^{\geq 0}(D)=1$. If $s_1=1$, then it belongs to the 1st bullet in Proposition \ref{lem: positive parabolic}. If $s_1\geq 2$, toric blow up the pair $C_1, C_2$, and if necessary, apply successive toric blow-ups to the pairs of the proper transforms of $C_1$ and the exceptional spheres to get $\bar D$ with $\bar s_1=1, \bar s_2=-1, \bar s_i\leq -2$ for $i\geq 3$ (so $\bar D$ is not toric minimal). Notice that $r^{\geq 0}(\bar D)=1$ and $r(\bar D)\geq r(D)\geq 5$ so $\bar D$ falls into the 1st bullet in Proposition \ref{lem: positive parabolic}. If $s_1=0$, apply the toric move in Example 2.8 based at $C_1$ to increase $s_2$ to $\bar s_2=0$ (while decreasing $s_r$ to $\bar s_r=s_r+s_2$). Notice that $r^{\geq 0}(\bar D)=2, \bar s_1=\bar s_2=0$, and $\bar D$ toric minimal. We treated this case above. \end{proof} \end{comment} \begin{comment} \subsection{Operations and minimal models} {Gompf: $D$ can be deformed to be $\omega-$orthogonal. Assume this is the case. $D$ can be made pseudo-holomorphic for a tamed $J$. Let $E$ be the class of a symplectic exceptional sphere. By adjunction, $E\cdot [D]=1$. Suppose $E\cdot [C_i]\geq 0$ for any $i$, then for a generic tamed $J$ making $D$ pseudo-holomorphic, $E$ is represented by an embedded $J-$holomorphic sphere. } {Operations on anti-canonical divisors} Ohta-Ono: the stability of symplectic deformation equivalence under blow-up/down. We provide an approach investigating properties of marked symplectic divisors.
A marked symplectic divisor consists of a five-tuple $$\Theta=(X,D,\{p_j\}_{j=1}^l,\omega,\{I_j\}_{j=1}^l)$$ such that $\bullet$ $D\subset (X, \omega)$ is a symplectic divisor, $\bullet$ $p_j$, called centers of marking, are points on $D$ (intersection points of $D$ allowed), $\bullet$ $I_j: (B(\delta_j),\omega_{std}) \to (X,\omega)$, called coordinates of marking, are symplectic embeddings sending the origin to $p_j$ and with $I_j^{-1}(D)=\{x_1=y_1=0\} \cap B(\delta_j)$ (resp. $I_j^{-1}(D)=(\{x_1=y_1=0\} \cup \{x_2=y_2=0\})\cap B(\delta_j)$) if $p_j$ is a smooth (resp. an intersection) point of $D$. Moreover, we require that the images of $I_j$ are disjoint. {Symp def equivalence for Minimal torus pair} We can fix $X$ since diffeomorphic rational or ruled symplectic 4-manifolds are symplectic deformation equivalent. $\bullet$ When $X=\mathbb{C}P^2$, isotopy due to Sikorav, $\bullet$ When $X=S^2\times S^2$, isotopy due to Siebert-Tian \bigskip $\bullet$ When $X$ is an $S^2-$bundle over $T^2$, it is harder to use $J-$hol isotopy arguments since the tori have trivial pairing with $K_{\omega}$. Usher: $D$ is a double covering of the base $T^2$ and $X-D$ has the structure of a pseudo-holomorphic annulus bundle over $T^2$. Moreover, the bundle structure is determined by the diffeomorphism type of $X$. Observe that the annulus bundle isomorphism can be extended to a sphere bundle isomorphism. To get a symplectic deformation equivalence, apply the following lemma (proved by the Thurston trick): \begin{lemma}\label{lem: Thurston trick} Let $\pi: (X,\omega_i,J_i) \to B$ be a symplectic surface bundle over a surface such that $J_i$ is $\omega_i$-compatible and fibers are $J_i$ holomorphic for both $i=0,1$. Moreover, we assume the orientations of fibers induced by $J_0$ and $J_1$ are the same and the orientations of the total space induced by $\omega_0^2$ and $\omega_1^2$ are the same. Assume $D \subset (X,\omega_i)$ is a $J_i$ holomorphic surface for $i=0,1$.
and $\pi|_D$ is submersive. Then there is a smooth family of (possibly non-homologous) symplectic forms $\omega_t$ on $X$ making $D$ symplectic for all $t \in [0,1]$ joining $\omega_0$ and $\omega_1$. \end{lemma} OO-elliptic {Symplectic isotopy for Minimal Looijenga pair} Symplectic Looijenga pairs: $k-$gons with $k\leq 4$. Again we fix $X$. Consider $D=\cup C_i$ and $\bar D=\cup \bar C_i$. When all the components have self-intersection at least $-1$, the configuration is GW stable, so isotopy of such a configuration follows a standard argument in Gromov-Witten theory. In general, there is at most 1 component with negative self-intersection. Call this component $C_0$ for $D$, $\bar C_0$ for $\bar D$. Notice that $X$ is an $S^2-$bundle over $S^2$. Abreu-McDuff: When $X$ is an $S^2-$bundle over $S^2$, homologous symplectic spheres with negative self-intersection are isotopic to each other. After applying smooth ambient extension, Banyaga extension (using the irreducibility of the fiber class) we can assume the two Looijenga pairs share the same component with self-intersection $\leq -2$ (and have the same symplectic form). Namely, they are of the form $(X, C_0\cup_{i>0} C_i, \omega), \quad (X, C_0\cup_{i >0} \bar C_i, \omega).$ Then we establish relative isotopy for the remaining stable configuration $\cup _{i>0} C_i, \quad \cup_{i>0} \bar C_i$ by analyzing the codimension of the possible degenerations. \end{comment} \section{Contact aspects} Let $(X, D, \omega)$ be a symplectic log Calabi-Yau pair. A neighborhood $N'$ of $D$ is called a concave (resp. convex) neighborhood if $N'$ is a concave (resp. convex) symplectic manifold. $D$ is called concave (resp. convex) if for any neighborhood $N'$ of $D$, there is a concave (resp. convex) plumbing neighborhood $N_D \subset N'$. A necessary condition for $D$ to be either convex or concave is that $\omega$ is exact on the boundary of any plumbing neighborhood. Here is a local criterion.
\begin{lemma} \label{lem: non-degenerate intersection form} $\omega|_{Y_D}$ is exact if and only if there is a solution for $z$ to the equation $Q_Dz=a$, where $a=([\omega]\cdot [C_1],\dots,[\omega]\cdot [C_k])$ is the area vector. In particular, this holds if $Q_D$ is non-degenerate. Moreover, this condition only depends on the toric equivalence class. \end{lemma} The first statement is observed in \cite{LiMa14}. Moreover, toric blow-up/down is a local operation that changes neither the diffeomorphism type of $Y_D$ nor the exactness of $\omega|_{Y_D}$. One can also check that the solvability of $Q_Dz=a$ is stable under toric blow-up/down by simple linear algebra. When $X$ is a closed manifold, we also have the following criterion. \begin{lemma}\label{lem: orthogonal decomposition} Suppose $X$ is a closed manifold with intersection matrix $Q_X$. Let $I_1=\iota_*(H_2(D;\mathbb{R})) \subset H_2(X;\mathbb{R})$ and $I_2 \subset H_2(X;\mathbb{R})$ be $Q_X$-orthogonal to $I_1$ in $H_2(X;\mathbb{R})$. If the span of $I_1 \cup I_2$ is $H_2(X;\mathbb{R})$, then $\omega|_{Y_D}$ is exact. The existence of $I_2$ is preserved under toric blow-up and toric blow-down. \end{lemma} \begin{comment} \begin{proof} Without loss of generality, we assume $I_1 \cap I_2 =0$ and $I_2$ is a vector subspace. Then we have $H_2(X;\mathbb{R})=I_1 \oplus I_2$. Take bases $\beta_1$ and $\beta_2$ for $I_1$ and $I_2$, respectively. Since $X$ is closed, $Q_X$ is non-degenerate. We have a solution $Q_Xz_X=a_X$ for $z_X$, where the matrix and vectors are written with respect to the basis $\beta=\beta_1 \cup \beta_2$ and $a_X$ is the $\omega$-pairing for the basis $\beta$. It is clear that $z_X=z_1+z_2$, where $z_i$ corresponds to a vector $v_i$ in $I_i$ for $i=1,2$. Pick a preimage $\overline{v_1} \in H_2(D)$ such that its image in $H_2(X;\mathbb{R})$ is $v_1$. Since $Q_X$ is a direct sum of two matrices, $\overline{v_1}$ clearly corresponds to a solution $z$ for the equation $Q_Dz=a$.
We now prove that the existence of $I_2$ is preserved under toric blow-up and toric blow-down. Suppose $(\overline{X},\overline{D},\overline{\omega})$ is obtained by toric blow-up of $(X,D,\omega)$. If we canonically identify $H_2(\overline{X};\mathbb{R})=H_2(X;\mathbb{R}) \oplus e\mathbb{R}$ with $e$ being the exceptional class, then $I_2$ is lifted to a subspace $\bar I_2$ in $H_2(\overline{X};\mathbb{R})$, which clearly satisfies the requirement. Similarly, for toric blow-down, we just need to descend $I_2$ to get our desired $I_2$ in the blown-down pair. \end{proof} \end{comment} We also recall two criteria for symplectic divisors to be convex or concave, and the definition of contact Kodaira dimension. \begin{theorem}[\cite{GaSt09}, \cite{McL12}] \label{GS} A negative definite symplectic divisor is convex. \end{theorem} \begin{theorem} [\cite{LiMa14}] \label{concave} Let $D \subset (W,\omega_0)$ be a symplectic divisor. If $Q_D$ is not negative definite and $\omega_0$ restricted to the boundary of a plumbing neighborhood of $D$ is exact, then $\omega_0$ can be locally deformed through a family of symplectic forms $\omega_t$ on $W$ keeping $D$ symplectic and such that $(D,\omega_1)$ is a concave divisor. Moreover, the contact structure $\xi_D$ on $Y_D$ is canonically associated to $D$ in this case and in the negative definite case. \end{theorem} \begin{definition}[\cite{LiMa16}, \cite{LiMaYa14}] Let $(W, \omega)$ be a concave symplectic 4-manifold with contact boundary $(Y, \xi)$. $(W, \omega)$ is called a Calabi-Yau cap of $(Y, \xi)$ if $c_1(W)$ is a torsion class, and it is called a uniruled cap of $(Y, \xi)$ if there is a contact primitive $\beta$ on the boundary such that $c_1(W)\cdot [(\omega, \beta)]>0$. The contact Kodaira dimension of a contact 3-manifold $(Y, \xi)$ is defined in terms of uniruled caps and Calabi-Yau caps.
Precisely, $Kod(Y, \xi)=-\infty$ if $(Y, \xi)$ has a uniruled cap, $Kod(Y, \xi)=0$ if it has a Calabi-Yau cap but no uniruled caps, $Kod(Y, \xi)=1$ if it has no Calabi-Yau caps or uniruled caps. \end{definition} \subsection{Trichotomy} Theorem \ref{prop: convex-concave} is based on the following observation in \cite{LiMa17-contact} (cf. also Theorem 2.5 in \cite{GoLi14}). \begin{prop}\label{prop: exact} For a symplectic log Calabi-Yau pair $(X, D, \omega)$, $\omega$ is exact on $Y_D$ if and only if $Q_D$ is negative definite or $b^+(Q_D)=1$. \end{prop} This result is proved by the local criterion Lemma \ref{lem: non-degenerate intersection form}, Lemma \ref{lem: not negative semi-definite => at least one non-negative}, Lemma \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres}, Lemma \ref{lem: non-negative components}, the toric move in Example \ref{eg: balancing self-intersection by $0$-sphere}, and by applying the $I_2-$criterion Lemma \ref {lem: orthogonal decomposition} to the following list of log Calabi-Yau pairs $(X, D, \omega)$ with $r(D)\leq 4$ and $b^+(Q_D)=1$. \begin{enumerate} \item (B2) in the list of minimal models; $I_2=\emptyset$; $S(D)=(1, 4)$. \item (C2) with $b=1$; $I_2=\emptyset$; $S(D)=(2, 2)$. \item (B3); $I_2=\emptyset$; $S(D)=(1, 1, 1)$. \item Non-toric blow-ups of (B3) on $C_3$ and its proper transforms; $I_2=\{e_j-e_{j+1}, 1\leq j\leq \alpha-1\}$; $S(D)=(1, 1, 1-\alpha)$. \item Non-toric blow-ups of (C3) on $C_3$ and its proper transforms; $I_2=\{e_j-e_{j+1}, 1\leq j\leq \alpha-1\}$; $S(D)=(0, 0, 2-\alpha)$. \item (C4) with $b=0$; $I_2=\emptyset$; $S(D)=(0, 0, 0, 0)$. \item Non-toric blow-ups of (C4) with $b=0$ on $C_4$ and its proper transforms; $I_2=\{e_j-e_{j+1}, 1\leq j\leq \alpha-1\}$; $S(D)=(0, 0, 0, -\alpha)$. \end{enumerate} \begin{comment} \begin{proof} The case that $D$ is a torus is clear. So we assume now that $D$ is a cycle of spheres. 
The case that $D$ is toric equivalent to $D_l$ with sequence $(-1, l)$ is also straightforward since $D_l$ is degenerate only if $l=-4$ and the vector $Q_{D_{-4}}z$ cannot have all entries positive for any $z \in \mathbb{R}^2$. From now on we assume that $D$ is toric equivalent to a toric minimal cycle of spheres. If $Q_D$ is negative definite then $\omega$ is exact on $Y_D$ by Lemma \ref{lem: non-degenerate intersection form}. If $b^+(Q_D)=0$ but $Q_D$ is not negative definite, then by Lemma \ref{lem: not negative semi-definite => at least one non-negative}, $D$ is toric equivalent to a cycle $D'$ of self-intersection $-2$ spheres and it is easy to check that for any $z \in \mathbb{R}^k$, $Q_{D'}z$ cannot have all entries positive. So we assume that $b^+(Q_D)=1$ and $D$ is toric minimal. If $r^{\geq 0}(D)=1$ then we apply the last bullet of Lemma \ref{lem: not negative semi-definite => at least one non-negative}. So we further assume that $r^{\geq 0}(D)\geq 2$. We will use a case-by-case analysis to show that $\omega$ is exact on $Y_D$. We will often apply the toric moves in Example \ref{eg: balancing self-intersection by $0$-sphere}, which preserve the length, combined with toric blow-downs, which reduce the length, in order to reduce to a toric minimal configuration of the same or smaller length. $\bullet$ $r(D)=2=r^{\geq 0}(D)$, and $D$ toric minimal. This case follows from the last bullet of Lemma \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres} and items 1 and 2 in the list. $\bullet$ $r(D)=3, r^{\geq 0}(D)\geq 2$, and $D$ toric minimal. If $r^{\geq 0}(D)=2$ there are two cases by the 3rd bullet of Lemma \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres}. The case that $s_1=s_2=1$ is item 3 in the list. For the case that $s_1=0$ apply the toric move to reduce to item 4 in the list. If $r^{\geq 0}(D)=3$ there are similarly two cases. The case that $s_1=s_2=s_3=1$ is item 5 in the list.
For the case that $s_1=0$ apply the toric move to reduce to item 4 in the list (not necessarily toric minimal). $\bullet$ $r(D)=4, r^{\geq 0}(D)\geq 2$, and $D$ toric minimal. In this case we can assume that $s_1=0$ by the 1st bullet of Lemma \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres}. If $r^{\geq 0}(D)=4$, it is item 6 in the list by the 3rd bullet of Lemma \ref{lem: non-negative components}. If $r^{\geq 0}(D)=3$, then a pair of non-adjacent components have non-negative self-intersection. By the 1st bullet of Lemma \ref{lem: non-negative components}, we can assume that $s_1=0=s_3, s_2\geq 0$. Applying the toric move based at $C_3$, we get item 7 in the list. If $r^{\geq 0}(D)=2$ there are two cases. The case that $s_1=s_3=0, s_2<0$ is again toric equivalent to item 7 in the list. For the case $s_1=0, s_2\geq 0$, it is toric equivalent (move based at $C_1$) to $\bar D$ with $\bar s_1=0=\bar s_2, \bar s_3\leq -2, \bar s_4\leq -1$. If $\bar s_4\leq -2$, then we apply the last bullet of Lemma \ref{lem: not negative semi-definite => at least one non-negative} to the toric minimal $\bar D$. If $\bar s_4=-1$ we toric blow down $\bar C_4$ to reduce to $\tilde D$ with $r(\tilde D)=3$ and notice that $\tilde D$ is toric equivalent to a toric minimal $\tilde D'$ with $r(\tilde D')\leq 3$. Finally, we deal with the general case. $\bullet$ $r(D)\geq 5$, $r^{\geq 0}(D)\geq 2$, and $D$ toric minimal. By Lemma \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres} we have $r^{\geq 0}(D)= 2$. The case that $s_1=0=s_2$ follows from the last bullet of Lemma \ref{lem: not negative semi-definite => at least one non-negative}. The remaining case is that $s_1>0, s_2=0$. We apply the toric move based at $C_2$ to obtain $\bar D$ with $\bar s_1=0=\bar s_2$ and $r(\bar D)=r(D)\geq 5$. Notice that $\bar s_3\geq s_3$ but $\bar s_3 \leq -1$ since $r^{\geq 0}(\bar D)\leq 2$ by Lemma \ref{lem: |D| > 4 => D at most two consecutive nonnegative spheres}.
If $\bar s_3\leq -2$ then $\bar D$ is toric minimal since $\bar s_i=s_i$ for $i\geq 3$, and we are done by the last bullet of Lemma \ref{lem: not negative semi-definite => at least one non-negative}. If $\bar s_3=-1$ we toric blow down $\bar C_3$ to reduce the length, and we are done by induction. \end{proof} \end{comment} For Case (iii) of Theorem \ref{prop: convex-concave}, it follows from Proposition \ref{prop: exact} that $\omega$ is not exact on $Y_D$. For Case (i) of Theorem \ref{prop: convex-concave}, $Q_D$ is negative definite and hence there is a convex plumbing neighborhood $N_D$ with contact boundary $(Y_D, \xi_D)$ by Theorem \ref{GS}. Notice that $P=X-N_D$ is a symplectic cap of $Y_D$ with vanishing $c_1$, namely, it is a Calabi-Yau cap. It follows that $Kod(Y_D, \xi_D)\leq 0$. For Case (ii) of Theorem \ref{prop: convex-concave}, it follows from Theorem \ref{concave} and Proposition \ref{prop: exact} that, up to a local symplectic deformation, there is a concave plumbing neighborhood $N_D$ with contact boundary $(Y_D, \xi_D)$. Moreover, since $D$ is symplectic and represents $c_1(X)$, for any contact primitive $\alpha$ of $\omega|_{Y_D}$, we have $c_1(N_D)\cdot [(\omega,\alpha)]= c_1(X)|_{N_D} \cdot [(\omega,\alpha)] = D\cdot [(\omega, \alpha)]=D\cdot [\omega]>0$. Thus $N_D$ is a uniruled cap. \begin{remark}\label{thm: as a support of ample line bundle} Applying Theorem \ref{prop: convex-concave}, Theorem \ref{thm: symplectic deformation class=homology classes} and Proposition 4.1 in \cite{GrHaKe12}, it is not hard to prove the following statement: For a symplectic log Calabi-Yau pair $(X, D, \omega)$ with $b^+(Q_D)= 1$, there exists a K\"ahler log Calabi-Yau pair $(\overline{X},\overline{D},\overline{\omega})$ in its symplectic deformation class such that $\overline{D}$ is the support of an ample line bundle. Then $(\overline{X}-\overline{D},\overline{\omega})$ provides a Stein filling with $b^+=0$ and $c_1=0$. 
\end{remark} \begin{comment} \subsection{Ampleness when $Q_D$ is not negative semi-definite} Applying the Trichotomy Theorem and Gross-Hacking-Keel, we can show: \begin{cor}[Ample] \label{thm: as a support of ample line bundle} Let $(X,D,\omega)$ be a symplectic log Calabi-Yau pair. If $b^+(Q_D)\geq 1$, then there exists a K\"ahler log Calabi-Yau pair $(\overline{X},\overline{D},\overline{\omega})$ in the symplectic deformation equivalence class of $(X,D,\omega)$ such that $\overline{D}$ is the support of an ample line bundle. The contact structure is Stein fillable. \end{cor} For the other direction, it is clearly true that the support of an ample line bundle does not have negative semi-definite intersection form. By the Trichotomy Theorem (ii), $\overline{\omega}$ is exact on $\partial P(\overline{D})$ and hence there is a lift $\sum\limits_{i=1}^k z_i[\overline{C_i}] \in H_2(P(\overline{D}))$ of $PD(\overline{\omega}|_{P(\overline{D})}) \in H_2(P(\overline{D}),\partial P(\overline{D}))$. In particular, we have $Q_D z = a$, where $z=(z_1,\dots,z_k)$ and $a=(\overline{\omega}[\overline{C_1}],\dots,\overline{\omega}[\overline{C_k}])$. \begin{lemma}\label{trichotomy} Let $Q$ be a $k\times k$ symmetric matrix with all off-diagonal entries non-negative. Assume that there exist $a \in (0,\infty)^k$ and $z \in \mathbb{R}^k$ with $Qz=a$. Suppose also that $Q$ is not negative definite. Then, there exists $z \in (0,\infty)^k$ such that $Qz \in (0,\infty)^k$. \end{lemma} Thus there is $\tilde{z}=(\tilde{z_1},\dots,\tilde{z_k}) \in (0,\infty)^k$ such that $Q_D \tilde{z}$ has all entries positive. This means that $\sum\limits_{i=1}^k \tilde{z_i}[\overline{C_i}]$ pairs positively with all $[\overline{C_i}]$. Then Theorem implies that $(X,D,\omega)$ is symplectic deformation equivalent to a (holomorphic) Looijenga pair $(\overline{X},\overline{D},\overline{\omega})$.
Moreover, by Proposition 4.1 of GHK12, we can first choose a complex structure compatible with $\overline{\omega}$ such that $(\overline{X},\overline{D})$ is a generic pair. By the adjunction equality and the Hodge index theorem, any algebraic curve which does not intersect $\overline{D}$ is a self-intersection $-2$ rational curve or a self-intersection $0$ elliptic curve. There is no self-intersection $-2$ rational curve for the generic pair $(\overline{X},\overline{D})$, by definition. There is also no self-intersection $0$ elliptic curve by homological reasoning. This is because we can assume there is a self-intersection $0$ component of $\overline{D}$ (possibly after toric blow-down and non-toric blow-up) which has homology class $H-E$ with $H$ being the hyperplane class and $E$ an exceptional class. Then the self-intersection $0$ elliptic curve (if it exists) pairs trivially with $H-E$ and hence is a multiple of $H-E$. However, $H-E$ intersects other components of $\overline{D}$, which means that the elliptic curve intersects $\overline{D}$. Any algebraic curve that intersects $\overline{D}$ but is not contained in $\overline{D}$ has positive pairing with $\sum\limits_{i=1}^k \tilde{z_i}[\overline{C_i}]$. Also, by the choice of $\sum\limits_{i=1}^k \tilde{z_i}[\overline{C_i}]$, it pairs positively with any irreducible curve in $\overline{D}$. Therefore, by the Nakai-Moishezon criterion, $\sum\limits_{i=1}^k \tilde{z_i}[\overline{C_i}]$ is an ample divisor and the support is $\overline{D}$. When $Q_D$ is not negative definite, we can pick $(X,D=C_1 \cup \dots \cup C_k,\omega)$ such that $X$ is K\"ahler and $D$ is the support of an ample line bundle. Then, $f=-\log(\|\phi\|)$ is an exhausting plurisubharmonic function on $X-D$ and hence gives a Stein structure on $X-D$. $(N_D, \xi_D)$ admits a CY Stein filling with vanishing first Betti number. Golla-Lisca: Stein fillings have vanishing first Chern class and vanishing first Betti numbers.
The filling is unique in the elliptic case and finite for the parabolic and hyperbolic cases in the family, and they share the same Betti numbers. Conjecture: Concave cycles of spheres have universally tight contact structures. \end{comment} \subsection{Symplectic fillings} In the context of torus bundles, Golla-Lisca investigated symplectic fillings in the case $b^+(Q_D)= 1$. Here is a summary of their results. \begin{theorem} [Theorems 1.1, 3.1, 3.5 in \cite{GoLi14}]\label{thm: GoLi} For a large family $\mathcal{F}$ of torus bundles $T_A$ arising from $D$ with $b^+(Q_D)= 1$, all Stein fillings of $(T_A=Y_D, \xi_D)$ have $c_1=0$, $b_1=0$ and the same $b_2$. Moreover, up to diffeomorphism, there are only finitely many Stein fillings, and there is a unique Stein filling if $|\mathrm{tr}\, A|<2$. Here $A$ is the monodromy matrix of $Y_D$. These results also hold for minimal symplectic fillings for this family, except possibly for 3 torus bundles with $|\mathrm{tr}\, A|<2$. \end{theorem} According to Corollary \ref{cor: finite stein}, the finiteness property holds more generally. \begin{proof}[Proof of Corollary \ref{cor: finite stein}] By Theorem \ref{prop: convex-concave}, $(Y_D, \xi_D)$ is fillable and all the symplectic fillings have $b^+=0$. For Looijenga pairs, the Stein fillability follows from Remark \ref{thm: as a support of ample line bundle}. For an elliptic pair with self-intersection $s>0$, there is an obvious Stein filling diffeomorphic to the neighborhood of a torus with self-intersection $-s$. The finiteness of symplectic fillings for elliptic pairs is proved in \cite{OhOn03} (see Theorem \ref{OO-simple elliptic}). Now observe that if $D$ is concave and (Stein) rigid then any (Stein) symplectic filling of $(Y_D, \xi_D)$ is the complement of a symplectic log CY pair with the same self-intersection sequence. 
Now we invoke the second statement of Theorem \ref{theorem: anti-canonical} and Corollary \ref{cor: finite deformation} to conclude the finiteness of Stein symplectic fillings for all Looijenga pairs and the finiteness of symplectic fillings except for the toric equivalence classes of $(-1, -2), (-1, -3)$. Clearly, the fillings have vanishing $c_1$. \end{proof} Together with Theorems 1.3 and 1.8 in \cite{LiMaYa14}, Theorem \ref{prop: convex-concave} has the following consequence: when $Q_D$ is negative definite, the Betti numbers of exact fillings of $(Y_D, \xi_D)$ are bounded. For elliptic pairs, we have the following: \begin{theorem} [Theorem 2 in \cite{OhOn03}] \label{OO-simple elliptic} Any simple elliptic singularity has a finite number of symplectic fillings, arising either from a smoothing or from the minimal resolution. \end{theorem} For Looijenga pairs, when $D$ is negative definite and toric minimal, $\xi_D$ coincides with the contact structure arising from the corresponding cusp singularity and hence is Stein fillable with a Stein filling diffeomorphic to $N_D$. Notice that $b_1(N_D)=1$ by Lemma \ref{lem: homology of neighborhood}. We provide some explicit Betti number bounds for Stein fillings below when $D$ is negative definite. \begin{prop}[\cite{LiMa17-contact}]\label{p:AfterGlue} Suppose that $D$ is toric minimal and negative definite and $V=X - N_D$. If $U$ is a Stein filling of $Y_D$, then $X_U=U \cup V$ has either $b^+=1$ or $3$, and $b^+(X_U)=1+b^+(U)+b_2^0(U)$, $b_2^0(U)+b_1(U)=1$. When $b^+(X_U)=1$, $X_U$ is rational or an integral homology Enriques surface, and $U$ is negative definite with $b_1(U)=1$. In this case $e(U)=b^-(U)$, where $e$ is the Euler number. When $b^+(X_U)=3$, $X_U$ is an integral homology $K3$, $(b_2^+(U),b_2^0(U),b_1(U))=(1,1,0)$ or $(2,0,1)$. In either case, $c_1(U)=0$ and $2\leq e(U)\leq 21$. \end{prop} \begin{comment} \begin{proof} Since $U$ is Stein, we have $1=b_1(Y_D) \ge b_1(U)$. 
By the Mayer--Vietoris sequence of the pair $(U, V)$, we have the exact sequence $H_1(Y_D) \to H_1(U) \oplus H_1(V) \to H_1(X_U)\to 0$. By Lemma \ref{lem: homology of neighborhood}, $b_1(V)=0$, so $b_1(X_U) \leq 1$. Since $V$ is a Calabi-Yau cap, it follows from $b_1(X_U)\leq 1$ and Theorem in \cite{LiMaYa14} that either (i) $b^+(X_U)=1$, $b_1(X_U)=0$ and $X_U$ is rational or an integral homology Enriques surface, or (ii) $b^+(X_U)=3$, $b_1(X_U)=0$ and $X_U$ is an integral homology $K3$. Since $b_3(X_U)=0, b_1(V)=0$, we have the exact sequence over $\mathbb{Q}$, $$0 \to H_2(Y_D;\mathbb{Q})=\mathbb{Q} \to H_2(U;\mathbb{Q})\oplus H_2(V;\mathbb{Q})\to H_2(X_U;\mathbb{Q})\to H_1(Y_D;\mathbb{Q})=\mathbb{Q}\to H_1(U;\mathbb{Q})\to 0. $$ Let $b_2^0(U)$ denote the dimension of the maximal isotropic subspace of $H_2(U;\mathbb{Q})$, which is the rank of the map $\mathbb{Q}=H_2(Y_D;\mathbb{Q})\to H_2(U;\mathbb{Q})$. Notice that $b_2^0(U)=0$ or $b_2^0(U)=1$, and $b_1(U)=0$ or $b_1(U)=1$. We claim that $b_2^0(U)+b_1(U)=1$. First observe that $b_2^0(U)=1$ means that the map $\mathbb{Q}=H_2(Y_D;\mathbb{Q})\to H_2(U;\mathbb{Q})$ is injective, and since the map $H_2(Y_D;\mathbb{Q})\to H_2(V;\mathbb{Q})$ is injective by Lemma \ref{lem: homology of neighborhood}, the map $\mathbb{Q}=H_2(Y_D;\mathbb{Q})\to H_2(U;\mathbb{Q})$ is injective if and only if the map $\mathbb{Q}=H_2(Y_D;\mathbb{Q})\to H_2(X_U;\mathbb{Q})$ is injective. Next observe that $b_1(U)=0$ if and only if the connecting homomorphism $H_2(X_U;\mathbb{Q})\to H_1(Y_D;\mathbb{Q})$ has rank $1$. Finally observe that the map $\mathbb{Q}=H_2(Y_D;\mathbb{Q})\to H_2(X_U;\mathbb{Q})$ is injective if and only if the map $H_2(X_U;\mathbb{Q})\to H_1(Y_D;\mathbb{Q})=\mathbb{Q}$ has rank $1$. We give a geometric argument. Take a smooth surface $S_1$ in $Y_D$ representing the generator of $H_2(Y_D)=\mathbb{Z}$. Suppose there is a surface $S_2$ in $X_U$ such that $[S_2 \cap Y_D]\ne 0 \in H_1(Y_D;\mathbb{Q})$. 
Since $b_1(Y_D)=1$, viewed as classes of $Y_D$, $[S_1] \cdot [S_2 \cap Y_D] \neq 0$ by Poincar\'e duality for $Y_D$. Hence $[S_1] \cdot [S_2] \neq 0$ in $X_U$, which implies that $[S_1]\ne 0 \in H_2(X_U;\mathbb{Q})$. Reversing the argument proves the converse. The three observations together give the claim $b_2^0(U)+b_1(U)=1$. Notice that the three observations also provide a decomposition of $H_2(X_U;\mathbb{Q})$. There are two cases depending on whether the map $\mathbb{Q}=H_2(Y_D;\mathbb{Q})\to H_2(X_U;\mathbb{Q})$ has rank $0$ or $1$. When the rank is zero, $H_2(X_U;\mathbb{Q})$ naturally decomposes as $H_2(U;\mathbb{Q})\oplus H_2(V;\mathbb{Q})$ and $b^{\pm}(X_U)=b^{\pm}(U)+b^{\pm}(V)$. When the rank is $1$, $H_2(X_U;\mathbb{Q})$ naturally decomposes as $H_2(U;\mathbb{Q})/[S_1] \oplus H_2(V;\mathbb{Q})/[S_1]\oplus \mathbb{Q}[S_1] \oplus \mathbb{Q}[S_2]$. The intersection pairing is non-degenerate on the orthogonal subspaces $H_2(U;\mathbb{Q})/[S_1]$ and $ H_2(V;\mathbb{Q})/[S_1]$, which implies that $b^{\pm}(X_U)\geq b^{\pm}(U)+b^{\pm}(V)+1$ since $[S_1]\cdot [S_1]=0$. Therefore we have $b^{\pm}(X_U)=b^{\pm}(U)+b^{\pm}(V)+1$ in this case since $b^+(X_U)+b^-(X_U)=b_2(X_U)=b^+(U)+b^-(U)+b^+(V)+b^-(V)+2$. Observe that $b^+(X_U)=b^+(V)+b^+(U)+b_2^0(U)$ in both cases since $b_2^0(U)=0$ in the previous case. We are now able to verify that $b^+(X_U)=1+b^+(U)+b_2^0(U)$. Since $V$ is a symplectic cap we have $b^+(V)\geq 1$. On the other hand, $b^+(X_U)=1$ implies that $b^+(V)\leq 1$. So $b^+(V)=1$. Finally, we compute the Euler number $e(U)$. Notice that $b_3(U)=b_4(U)=0, b_0(U)=1$. So $e(U)=1-b_1(U)+ b_2(U)$. When $b^+(X_U)=3$, there are two cases: $(b_2^+(U),b_2^0(U),b_1(U))=(1,1,0)$ or $(2,0,1)$. In the first case, $b_2(U)=1+1+b_2^-(U), b_1(U)=0$ so $e(U)=3+b_2^-(U)\geq 3$. In the second case, $b_2(U)=2+0+b_2^-(U), b_1(U)=1$ so $e(U)=2+b_2^-(U)\geq 2$. The inequality $e(U)\leq 21$ follows from $24=e(X_U)=e(U)+e(V)$ and $e(V)\geq 3$ (Lemma 4.3 in \cite{FM83}). 
\end{proof} \end{comment} Finally, we discuss the potential implication of Proposition \ref{p:AfterGlue} for Stein fillings of cusp singularities. By the now-confirmed Looijenga conjecture, which states that a cusp singularity is smoothable if and only if it has a rational dual, a smoothing of a cusp singularity provides a Stein filling with $b^+=1$. In light of this, Proposition \ref{p:AfterGlue} provides some evidence for the following symplectic/contact analogue of the Looijenga conjecture. \begin{speculation} If a cusp singularity does not have a rational dual, then it admits only negative definite Stein fillings. \end{speculation}
\section{Introduction} Cosmology, the study of the origin of the universe \cite{cosmobook,weinberg,baumann}, has recently been characterized by an intriguing combination of phenomenological success and theoretical ambiguity. The so-called $\Lambda$-CDM model has fitted most of the universe's macroscopic characteristics with just a few parameters, including gravitating but otherwise non-interacting ``dark matter'' and an anomalously gravitating dark energy \cite{planck}. The inflationary paradigm \cite{guth} has eliminated the need for fine-tuning to explain the global structure of spacetime, producing a nearly homogeneous and nearly flat universe from generic initial conditions via an early exponentially expanding phase driven by a dynamically changing ``temporary cosmological constant''. These parameters explain a wide variety of both present and past features, from galaxy rotation curves to structure formation. At the moment, however, this phenomenological paradigm severely lacks a particle physics underpinning. For instance, we have no idea what dark matter is composed of. It has yet to be directly detected, and models where it appears naturally, such as supersymmetry, have failed to be experimentally confirmed. Within phenomenological models, dark matter is simply assumed to be a dust of heavy but non-interacting particles, with residual interactions ever more tightly constrained experimentally \cite{constraint}. Similarly, no one knows the nature of ``the inflaton''. Its theoretical formulation is understood, at a semiclassical level, to be that of a nearly flat (``slow-rolling'') false vacuum plateau, with a true vacuum to which our universe moved after Inflation, stabilizing after reheating. No currently known particle has the required characteristics to reproduce this behaviour, and there is considerable numerical evidence \cite{owe} that the quantum structure of such a theory is not well-defined. 
This theoretical ambiguity requires new thinking, in particular about whether different kinds of physics are capable of reproducing the same scenario. Here, natural candidates are non-abelian gauge theories. Unlike scalar field theories, there is little doubt about their fundamental mathematical and theoretical soundness \cite{wilson,creutz}, and although their main qualitative features, notably confinement, are not rigorously derived from the Yang-Mills Lagrangian, there is a consensus regarding their basic nature. Crucially, this nature is ``universal'', that is, common to a family of theories which share basic properties such as asymptotic freedom. Concurrently, the heavy ion program \cite{hicexp,hicexp2,hicexp3} has done a great deal to elucidate the equilibrium and transport properties of these theories, on both a theoretical and a phenomenological level. In this work, after a concise overview of the thermal and transport properties of Yang-Mills theories, we shall argue that a Yang-Mills theory with a large number of colors ($N_c>3$) and no flavors \cite{thooft,panero}, with a strong-coupling scale of at least order \textit{TeV}, could provide a scenario that explains several independent features of standard cosmology. We then focus on Inflation and try to see how it could emerge in such a model \cite{thesis}. \section{A review of $SU(N_c)$ pure gauge theories} \subsection{Basic theory and phase structure \label{phases}} Confining pure gauge theories (gauge group $SU(N_c)$ with no fermions) have been extensively studied, as they provide a model that is much simpler, both analytically and numerically, but still qualitatively similar to QCD \cite{thooft,creutz,panero}. 
The Lagrangian of this theory is simply: \begin{equation} \label{lagrangian} \mathcal{L} = -\frac{1}{4\lambda_{YM}(Q)} \mathrm{Tr}\left( F^{\mu \nu a} F^a_{\mu \nu}\right) \phantom{AA},\phantom{AA} F_{\mu \nu}^a = \partial_\mu A_\nu^a - \partial_\nu A_\mu^a + i \frac{\lambda_{YM}(Q)}{N_c} \sum_{b,c} f^{abc} A_\mu^b A_\nu^c \end{equation} where $A^\mu_a$ are the gluon fields, $N_c$ is the number of colours (a free parameter), $f^{abc}$ are the structure constants associated with the $SU(N_c)$ group \cite{panero}, and $\lambda_{YM}(Q)$ is a bare coupling constant defined at a momentum scale $Q$ (in a thermal system, usually the temperature $T$). There are two fundamental parameters of this theory, which for the purposes here can be thought of as independent. The first is $N_c$, the number of colors, which specifies the gauge group and is evident in the choice of Lagrangian. The second is absent from the classical Lagrangian but appears when the theory is quantized: the scale $\Lambda_{YM}$ at which the coupling of the theory becomes strong. Since $\lambda_{YM}(Q \rightarrow \infty) \rightarrow 0$, there must be a scale $\Lambda_{YM}$ for which $\lambda_{YM}(\Lambda_{YM}) \sim 1$. While for QCD this scale is approximately $10^2$ MeV, generically it is an arbitrary parameter. In particular, when the $N_c \rightarrow \infty$ limit is approached continuously, $\Lambda_{YM}$ is independent of $N_c$ \cite{thooft}. This limit seems to be approached very quickly for pure gauge theory \cite{panero}, although for theories with fermions its structure is more complicated \cite{menc}. In this work, we consider a case where $\Lambda_{YM} \sim \mathrm{TeV}$. The theory without fundamental fermions is characterized by a deconfinement transition \cite{kapusta,rafelski} between two phases: a plasma of $\order{N_c^2}$ massless gluons where the Lagrangian (\ref{lagrangian}) is manifest, and a gas of massive glueballs \cite{glue1,glue2,glue3}. 
The lightest of these particles has spin zero, a mass of the order of the phase transition temperature $T_c$, and no color dependence. In the 't Hooft limit \cite{thooft,panero} they are also weakly interacting, with a coupling proportional to $N_c^{-1}$. We note that physical $SU(3)$ QCD has a light Goldstone mode, the pion, due to the presence of light quarks and, consequently, broken chiral symmetry. Therefore, the evidence against a light dark matter particle (``hot'' dark matter) means that we must assume the hidden gauge theory has no light flavors. The full equation of state of the system is sketched in Fig. \ref{eos}. For pure gauge theory, the emergence of a $Z(N_c)$ center symmetry, with the Polyakov loop expectation value as its order parameter \cite{polyakov}, ensures that at large $N_c$ the phase transition is of first order. In between, around the critical temperature, the nature of the effective degrees of freedom is unclear (they may be quasiparticles \cite{peshier}, Hagedorn states \cite{jorge} or thermal excitations \cite{teaney}), but it is reasonable to suppose they interact strongly. \begin{figure*}[h] \epsfig{width=18cm,clip=1,figure=fig_betafunc.eps} \caption{\label{betafunc} A schematic representation of the mass gap (left panel) and the coupling constant (right panel) as a function of temperature for Yang-Mills matter.} \end{figure*} \begin{figure*}[h] \epsfig{width=15cm,clip=1,figure=fig_eos.eps} \caption{\label{eos} Energy density (left panel) and speed of sound (right panel) as a function of temperature for Yang-Mills matter.} \end{figure*} The scale $\Lambda_{YM}$ determines, up to a factor $\order{1}$ calculable on the lattice \cite{creutz}, both the phase transition temperature $T_c$ and the mass of the confined low-lying state $m_h$. Both quantities depend logarithmically on the magnitude of the coupling constant at the UV renormalization scale, typically taken to be the Planck scale. 
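This logarithmic relation between $\Lambda_{YM}$ and the UV coupling is just dimensional transmutation; a minimal numerical sketch, assuming one-loop running of the 't Hooft coupling (with $b_0 = 11/(24\pi^2)$) and purely illustrative numbers, shows how a moderate Planck-scale coupling yields $\Lambda_{YM}$ at the TeV scale:

```python
import math

# One-loop running of the 't Hooft coupling lambda = g^2 * N_c of pure
# SU(N_c) Yang-Mills: d(lambda)/d(ln Q) = -b0 * lambda^2, with
# b0 = 11/(24*pi^2), independent of N_c. All numbers are illustrative.
B0 = 11.0 / (24.0 * math.pi ** 2)

def lambda_at(Q, lam_uv, Q_uv):
    """Coupling at scale Q, run down (one loop) from its UV value lam_uv at Q_uv."""
    return lam_uv / (1.0 + B0 * lam_uv * math.log(Q / Q_uv))

def strong_scale(lam_uv, Q_uv):
    """Scale where the one-loop coupling blows up: Lambda = Q_uv*exp(-1/(b0*lam_uv))."""
    return Q_uv * math.exp(-1.0 / (B0 * lam_uv))

# UV coupling tuned so that Lambda_YM = 1 TeV when the UV scale is the
# Planck scale (~1.22e16 TeV): a moderate Planck-scale coupling suffices.
M_PLANCK_TEV = 1.22e16
lam_uv = 1.0 / (B0 * math.log(M_PLANCK_TEV))
print(lam_uv, strong_scale(lam_uv, M_PLANCK_TEV))  # Lambda_YM = 1 TeV by construction
```

The coupling grows toward the infrared, so any hierarchy between the Planck scale and $\Lambda_{YM}$ translates into only an $\order{1}$ change in the UV coupling.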
In our theory, $\Lambda_{YM}$ also represents the scale of both Inflation and the formation of dark matter, hence it has to be of the order of a \textit{TeV}, although still much smaller than the Planck scale, to ensure consistency with semiclassical gravity \cite{weakg}. Thus, neglecting higher spin Regge excitations, the effective Lagrangian is \cite{thooft,glue1,glue2,glue3}: \begin{eqnarray} \label{eftlagrangian} \mathcal{L}_h = \frac{1}{2}\partial_{\mu}\phi\partial^{\mu}\phi - \frac{1}{2}m_h^{2}\phi^{2} - \sum_{n=3}^\infty \order{\frac{1}{N_c^n}} \left(\frac{p}{\Lambda_{YM}}\right)^n\phi^{n} +... \end{eqnarray} \noindent It describes weakly interacting glueballs in low-temperature systems such as the present universe. The equation of state of the system describes a massless gas at $T>T_c$ and a massive gas at $T<T_c$, both weakly interacting. The thermodynamics of both these systems is well-known \cite{rafelski}, such that at $T \gg T_c$, the EoS will be given by: \begin{eqnarray} p_g (T)= e_g(T)/3+ f_p (\lambda_{YM},T)-B \\ e_g(T) \sim N_c^2 T^4 + f_e (\lambda_{YM},T)+B \\ B \sim p_{g}(T_c) - p_{h}(T_c) \sim N_c^2 \Lambda_{YM}^4 \end{eqnarray} \noindent where $p_g$ and $p_h$ stand for the gluon and hadron (in this case glueball) pressures, respectively, and $e_g$ for the gluon energy density. $B$ represents the bag constant (latent heat), while $f_e(\lambda_{YM},T)$ and $f_p (\lambda_{YM},T)$ are interaction terms, non-trivial to calculate \cite{arnold,strickland} but negligible within an order-of-magnitude calculation. 
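Dropping the interaction terms, the deconfined equation of state above reduces to an ideal-gluon-gas (``bag model'') form. The sketch below, with an illustrative Stefan-Boltzmann-like coefficient and an illustrative bag-constant normalization (neither taken from lattice data), shows the equation-of-state parameter $w = p/e$ turning negative near the transition and approaching $1/3$ at $T \gg T_c$, as in Fig. \ref{eos}:

```python
import math

# Ideal-gluon-gas ("bag model") caricature of the deconfined EoS above,
# with the interaction terms dropped:
#   e_g(T) = sigma * N_c^2 * T^4 + B ,   p_g(T) = sigma * N_c^2 * T^4 / 3 - B .
# sigma ~ pi^2/15 mimics the Stefan-Boltzmann constant for ~2*N_c^2 gluon
# degrees of freedom; B ~ N_c^2 in units Lambda_YM = T_c = 1 is an
# illustrative normalization, not a lattice value.
SIGMA = math.pi ** 2 / 15.0

def e_gluon(T, Nc, B):
    return SIGMA * Nc ** 2 * T ** 4 + B

def p_gluon(T, Nc, B):
    return SIGMA * Nc ** 2 * T ** 4 / 3.0 - B

def w(T, Nc, B):
    """Equation-of-state parameter w = p/e."""
    return p_gluon(T, Nc, B) / e_gluon(T, Nc, B)

Nc, B = 10, 100.0
print(w(1.0, Nc, B), w(10.0, Nc, B))  # w < 0 near the transition, -> 1/3 at T >> T_c
```

The bag constant only matters near $T_c$; well above it, the $N_c^2 T^4$ term dominates and the conformal value $w = 1/3$ is recovered.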
At $T<T_c$, the EoS is given by \cite{rafelski}: \begin{eqnarray} \label{eq:Pressure_scalar} p_h(T) & = & \frac{m_h^{2}T^{2}}{2\pi^{2}}K_{2}\left(\frac{m_h}{T}\right) +\sum_{n>4} \order{\left( \frac{T}{N_c \Lambda_{YM}} \right)^n},\\ e_h(T) & = & \frac{m_h^{2}T}{2\pi^{2}}\left[ K_{2}\left(\frac{m_h}{T}\right)T+\frac{m_h} {2}\left(K_{1}\left(\frac{m_h}{T}\right)+K_{3}\left(\frac{m_h}{T}\right) \right)\right] +\sum_{n>4} \order{\left( \frac{T}{N_c \Lambda_{YM}} \right)^n} \end{eqnarray} where $K_1$, $K_2$ and $K_3$ are modified Bessel functions of the second kind and the interaction corrections start at order four. \subsection{The transport properties \label{transports}} Around the critical temperature $T_c$, the theory is expected to be strongly coupled. Thus, the shear viscosity $\eta$ and the relaxation time $\tau_\pi$ should be small. In particular, from quantum mechanics and string theory arguments \cite{mikloseta,kss}, one expects $\eta/s \sim \order{10^{-2}}$ and $\tau_\pi \sim \eta/(sT)$ \cite{hicexp2}, where $s$ is the entropy density. We also note that $\eta/s$ is generally irrelevant for a nearly homogeneous solution, since shear gradients vanish for such a solution. The effect of these anisotropies might, however, be relevant for present-day dark energy \cite{floerchinger}. The behaviour of the bulk viscosity, on the other hand, is non-trivial and might have crucial phenomenological consequences, both in heavy ion collisions and in cosmology. From symmetry arguments as well as quantitative calculations, it is generally agreed \cite{amybulk} that $\zeta/s$ vanishes at $T \gg T_c$. However, at $T \sim T_c$, the system is strongly interacting and lies in the vicinity of a phase transition where condensates form. One should remember that while shear viscosity depends on momentum transport, hence on elastic reactions, bulk viscosity depends more on diffusion {\em across the diagonal of the energy-momentum tensor}, and thus on the thermalization time of inelastic reactions \cite{weinberg,jeon}. 
Therefore, when condensates form, it is natural to expect bulk viscosity to peak even when shear viscosity is small \cite{mebulk1,mebulk2,cavitation}. In the $SU(2)$ case, this peak should diverge at the second-order deconfinement phase transition, due to the sensitivity of the Polyakov loop expectation value to $T$ around $T_c$. However, for all $SU(N_c>2)$ cases, the deconfinement transition is of first order and the peak centered at $T_c$ does not diverge \cite{kharbulk,gubser,latbulk}. Although the dependence of its height and width on $N_c$ is not clear, it is reasonable to expect that the peak height goes as the chemical thermalization timescale, which is sensitive to the difference in entropies of the gluonic and hadronic phases during their coexistence period, that is, as $N_c^2$. It is also not clear, from fundamental arguments, how the peak width changes as a function of $N_c$. The bulk viscosity of a mixture of two phases is additive \cite{jeon}, which should reduce the width to below the $\sim N_c^2$ scaling. For a high enough peak, hydrodynamic solutions such as the Hubble expansion become unstable against small perturbations \cite{mebulk2,cavitation}. This scenario, when viscous forces overwhelm advective forces in the zero chemical potential limit, can be an indication that hydrodynamics fails as an effective theory. However, this is {\em not} the case here, because momentum equilibration happens fast and the lack of chemical equilibrium is related to the presence of a phase transition. Furthermore, the experimental evidence from heavy ion collisions \cite{hicexp,hicexp2,hicexp3} suggests that matter around $T_c$ continues to behave as a very good fluid independently of the existence of a bulk viscosity peak. In summary, the likely behavior of the shear and bulk viscosities for $SU(N_c)$ theories is shown in Fig. (\ref{etafig}): bulk viscosity has a Gaussian-like peak around $T_c$, while shear viscosity has a dip in the same region. 
The Gaussian peak is likely to be very sharp around $T_c$, while the dip rises logarithmically with temperature \cite{mikloseta}. Below deconfinement, shear viscosity is large, while bulk viscosity is small because the glueballs are heavy and nearly non-interacting. Using the weakly coupled Lagrangian in the previous section (equation \ref{eftlagrangian}), we expect: \begin{equation} \frac{\eta}{s} \sim \frac{\Lambda_{YM}}{T^2} \phantom{AA},\phantom{AA} \frac{\zeta}{s} \sim \frac{T}{\Lambda_{YM}} \end{equation} \begin{figure*}[h] \epsfig{width=18cm,clip=1,figure=fig_etafig.eps} \caption{\label{etafig} Conjectured behavior of the shear viscosity (right panel) and bulk viscosity (left panel) as a function of temperature for Yang-Mills matter.} \end{figure*} \section{Cosmology with a hidden $SU(N_c)$} \subsection{Introduction \label{seccosmo}} From the discussion of the previous section, we saw that a hidden $SU(N_c)$ sector with $\Lambda_{YM} \geq 1$ TeV could {\em potentially} provide, in a natural way, several of the ingredients normally used to construct cosmological models. If, as argued in \cite{mebulk1,mebulk2}, at $T\sim T_c$ bulk viscosity overwhelms advective pressure but hydrodynamics continues to work, the effective pressure will be negative, as noted before \cite{lima}. Thus, a peak in bulk viscosity as well as a mixed phase during confinement could naturally provide the large cosmological constant needed for Inflation. Glueballs could be a suitable candidate for dark matter, and the small bulk viscosity of the glueballs' self-interactions could account for dark energy, switching on only recently. All of these features would be dictated by the single scale $\Lambda_{YM}$. Admittedly, this scenario is very different from standard cosmology \cite{baumann}. There, the universe starts out cold in a semiclassical field configuration dominated by the vacuum energy, then finds the true vacuum while converting that energy into thermally distributed particles via reheating. 
Here, expansion proceeds from an initial Planck temperature, and all of the history of the universe is reproduced by changes in the equation of state and transport coefficients. From our vantage point, this scenario could look very similar to the cosmological standard model for all observables. We note that a similar inflationary model was already explored in \cite{qcdbulk}, where the QCD phase transition would be responsible for a negative pressure driving Inflation. However, as the authors found out, the scales of Inflation and of the QCD deconfinement transition differed by orders of magnitude, hence the need for a QCD-like theory beyond the standard model. In this context, during the pre-inflationary era, between temperatures $T_P \sim G^{-1/2}$ and $T\sim \Lambda_{YM}$, the universe would be composed of a hot plasma of $N_c^2$ ``gluons'' of the hidden sector plus standard model matter and radiation. The contribution of the latter to the total entropy density is expected to be subdominant for large $N_c$. As $T \rightarrow T_c$, this plasma would become a good fluid, in local thermal equilibrium. In this regime, thermal fluctuations are approximately Poissonian ($ \frac{1}{e} \frac{de}{dx} \sim T$) and bulk viscosity is negligible. At $T=T_c$, the bulk viscosity shoots up, so the effective pressure becomes: \begin{equation} \label{zetainfl} p - \zeta(T \simeq T_c) \frac{\dot{a}}{a} \ll 0 \end{equation} \noindent{where} $a$ is the cosmological scale factor. Intuition from transport theory would suggest that if the effective pressure is negative then the Knudsen number is large and, thus, that terms beyond shear and bulk viscosity (from Israel-Stewart hydrodynamics \cite{hicexp3}), from transport theory and so on should be taken into consideration. However, bulk viscosity diverges due to non-conformal strongly coupled dynamics rather than due to the lengthening of the mean free path. 
Therefore, at least in an approximately homogeneous universe, we expect those terms to stay small \cite{mebulk1}. Driven by a negative effective pressure, the universe acquires a cosmological constant, \`a la \cite{goo,bulkdark1,bulkdark2}. However, unlike in those scenarios, this effective pressure becomes large, dominating the pressure and energy density (eq. \ref{zetainfl}), and thus closely matching the dynamics of the inflationary era until $T \leq T_c$. The duration of Inflation depends on the $N_c$ in $SU(N_c)$, since $\zeta(T)$ (Fig.~\ref{etafig}) will maintain its peak value at $T_c$ for the whole mixed phase, which spans approximately $N_c^2$ in energy density. Therefore, one can choose a convenient $N_c$ to achieve an appropriate number of efoldings. As soon as $T <T_c$, the $SU(N_c)$ plasma freezes out into a self-interacting gas of heavy glueballs ($m_g \sim \Lambda_{YM}$) whose entropy content (eq. \ref{eq:Pressure_scalar}) is much smaller than the entropy of the standard model sector, and whose bulk viscosity goes as $\zeta/s \sim T/m \ll 1$ \cite{weinberg}. Hence, Inflation naturally stops and the energy-matter content of the dark sector gets transferred to non-relativistic and non-interacting massive glueballs, which compose dark matter. \subsection{Implementation} The usual Friedmann equations are simply the continuity equation and Einstein's equation for an isotropic and homogeneous background. As such, the only dynamical variable is the scale factor of the universe $a(t)$, and the non-gravitational dynamics is provided by the equation of state and the transport coefficients. Numerically, it is usually convenient to write those expressions in conformal time ($\tau$) coordinates rather than locally Minkowski time ($t$) coordinates. 
The two are related by: \begin{equation} \tau = \int \frac{dt}{a(t)} \phantom{AA},\phantom{AA} \frac{da}{d \tau}=a' \end{equation} In these coordinates, the FRW equations \cite{cosmobook} are: \begin{equation} a'^2 + k a^2 = 2 \alpha e a^4 \label{eq1} \end{equation} \begin{equation} a'' + ka = \alpha \left( e - 3p \right) a^3 \end{equation} where $\alpha=8 \pi G/3$ and $G$ is the gravitational constant. Bulk viscosity turns the effective pressure with respect to local time into: \begin{equation} p_{ef} \rightarrow p - \zeta \frac{1}{a} \frac{da(t)}{dt} \end{equation} These equations are usually analytically or semi-analytically solvable for a simple equation of state, such as $p=c_s^2 e$ or at least a polytrope $p = C e^n$. However, for us, as seen in sections \ref{phases} and \ref{transports}, this is not the case, especially considering the effect of bulk viscosity. Nevertheless, these equations can be put into a form amenable to simple numerical integration. Defining the conformal Hubble rate $f$, one can turn equation \ref{eq1} into an algebraic relation for $a$: \begin{equation} f= \frac{a'}{a}\phantom{AA},\phantom{AA} a = \sqrt{\frac{f^2+k}{2 \alpha e}} \end{equation} which, after some reshuffling, becomes: \begin{equation} \begin{array}{cl} e' =& -3 f (e+p_{ef}) \\ f' =& \frac{1}{2} \left( \frac{e-3p_{ef}}{e} \right) \left( f^2 +k \right) - k -f^2\\ p_{ef} =& p - 3\zeta e^{3/4} \frac{f}{a} +\frac{e}{3} \end{array} \label{friedm} \end{equation} These equations can be solved numerically using the equation of state of section \ref{phases}, constructed by interpolating lattice data from \cite{panero}, and the bulk viscosity of section \ref{transports}, modelled by a Gaussian function, to get the number of efoldings in terms of the parameters of the bulk viscosity peak. We note that the position of the peak is a free parameter of the theory. The height and the width, however, should be calculable from first principles, for example from lattice simulations. 
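To make the scheme concrete, here is a minimal numerical sketch of the conformal-time system above. It is illustrative only, not the production code of \cite{thesis}: it assumes a flat universe ($k=0$), a radiation-like $p=e/3$ away from the transition, a Gaussian bulk-viscosity peak in the energy density with placeholder parameters, and the simpler textbook effective pressure $p_{ef} = p - \zeta f/a$ rather than the specific parametrization quoted in eq. (\ref{friedm}):

```python
import math

ALPHA, K = 8 * math.pi / 3.0, 0.0   # alpha = 8*pi*G/3 with G = 1; flat universe (k = 0)

def zeta(e, A, e_c=1.0, width=0.1):
    """Gaussian bulk-viscosity peak centred at the transition energy density e_c."""
    return A * math.exp(-0.5 * ((e - e_c) / width) ** 2)

def p_eff(e, f, A):
    """Effective pressure p - zeta*(1/a)*(da/dt) = p - zeta*f/a, with p = e/3."""
    a = math.sqrt((f * f + K) / (2 * ALPHA * e))   # algebraic Friedmann constraint
    return e / 3.0 - zeta(e, A) * f / a

def step(e, f, dtau, A):
    """One Euler step of e' = -3f(e+p_ef), f' = (1/2)((e-3p_ef)/e)(f^2+k) - k - f^2."""
    p = p_eff(e, f, A)
    de = -3 * f * (e + p)
    df = 0.5 * ((e - 3 * p) / e) * (f * f + K) - K - f * f
    return e + de * dtau, f + df * dtau

def evolve(e0=10.0, f0=1.0, A=0.0, dtau=1e-4, steps=20000):
    """Integrate and accumulate the number of efoldings N = int f dtau."""
    e, f, N = e0, f0, 0.0
    for _ in range(steps):
        e, f = step(e, f, dtau, A)
        N += f * dtau
    return e, f, N

# With A = 0 and p = e/3 this reduces to a radiation universe, in which
# e * a^4 stays constant -- a quick consistency check of the scheme.
e, f, N = evolve(A=0.0)
```

Turning on a sufficiently high peak ($A > 0$) drives $p_{ef}$ strongly negative as $e$ sweeps through $e_c$, which is the mechanism exploited in the following section; the peak height, width and position here are placeholders for the lattice-calculable quantities discussed above.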
Nevertheless, in this work, we treat the height and width of the peak as free parameters, because we want to examine the relationship between these quantities and the inflationary dynamics. Such a study is important in case future lattice calculations determine the precise shape of this peak, in which case our inflationary model may be falsifiable. \section{Bulk viscosity-driven Inflation} The results of the numerical calculations, which formed the bulk of the computational work in \cite{thesis}, are summarized in Fig. \ref{pefftime} and Fig. \ref{efolds}. As Fig. \ref{pefftime} shows, the time evolution of the effective pressure follows the pattern summarized in section \ref{seccosmo}. At the approach of the bulk viscosity peak, the effective pressure becomes negative, stays negative for an amount of time monotonically dependent on the peak's height, and then returns to positive values as expansion resumes. However, for peak heights above the critical value $A \simeq 0.37$, the universe enters a never-ending inflationary phase, which is quite different from the ``eternal Inflation'' of \cite{linde} and, unlike it, incompatible with the present universe. \begin{figure*}[h] \epsfig{width=13cm,clip=1,figure=fig_peff_time.eps} \caption{\label{pefftime} The effective pressure as a function of conformal time, for different peak heights of the bulk viscosity Gaussian function.} \end{figure*} We define the number of efoldings $N$ as: \begin{equation} N=\int^{t_f}_{t_i} \frac{\dot{a}}{a}dt = \int^{\tau_f}_{\tau_i} \frac{a'}{a}d\tau \end{equation} To compare with existing limits on $N$ from standard inflationary theory, we numerically find the duration of the bulk viscosity peak in conformal time and compute $N$ over this period. The result confirms that the number of efoldings depends in a diverging way on the peak's height $A$. 
Initially, $N$ increases approximately exponentially with $A$ but, as this quantity approaches a critical value, an arbitrary number of efoldings can be obtained, converging to a never-ending Inflation. \begin{figure*}[h] \epsfig{width=13cm,clip=1,figure=fig_peak_efolds.eps} \caption{\label{efolds} The number of efoldings as a function of the height of the peak in bulk viscosity, for several peak widths as a percentage of the height.} \end{figure*} The existence of this phase, which mirrors the original problems with the latent heat-driven Inflation of \cite{guth}, is actually not so surprising. We recall that the physical action of viscosity is to convert ``work'' (the expansion of the universe) into ``heat'' (entropy density), and the rate of this conversion is proportional to $\zeta (\partial s)^2$ \cite{weinberg}. We also note that, in a curved spacetime, energy is conserved only ``locally'', while the FRW equations control global dynamics. Hence, if homogeneity is imposed, it is not surprising that, for a high enough peak, entropy (and hence energy) creation is stronger than the exponential expansion of the universe, triggering an inflationary epoch that never ends. The appearance of a hot eternal Inflation, however, diminishes the naturalness appeal of our model, especially since the height of the peak is a free parameter only because of our ignorance of its first-principles value. It remains to be seen how naturally a finite period of Inflation is obtained given the expected height of the $\zeta/s$ peak. The dynamics described here is only weakly dependent on the width of the peak, as Fig. \ref{efolds} shows. However, it strongly depends on the relative location of $T_c$ with respect to the Planck scale, the essential reason why the QCD-based model of \cite{qcdbulk} did not work. To study this dependence, the only free parameter of the model is $e_c$, since $T_c \sim \Lambda_{YM}$ and the equation of state has been rewritten as $p(e)$, making its temperature dependence implicit. 
We varied $e_c/e_{planck} \sim e_c G^2$ in the interval $[10^{-5},1]$, across which, qualitatively, the curves equivalent to Fig. \ref{pefftime} look similar, differing only by a shift in the position of the bulk viscosity peak, that is, in the conformal time at which the effective pressure becomes negative. This different location affects the number of efoldings approximately linearly, as shown in Fig. \ref{efoldsec} on a logarithmic scale. Note that its shape is qualitatively similar to that of Fig. \ref{efolds}. \begin{figure*}[h] \epsfig{width=13cm,clip=1,figure=fig_efolds_ec1.eps} \epsfig{width=13cm,clip=1,figure=fig_efolds_ec.eps} \caption{\label{efoldsec} The number of efoldings as a function of the height of the peak in bulk viscosity for several peak positions as a fraction of the Planck energy density. The top figure represents a zoomed-in version of the bottom one.} \end{figure*} In the large-$N_c$ limit, under 't Hooft scaling, and assuming that the peak height is proportional to $N_c^2$, one can conclude that requiring a number of efoldings $N \geq \order{10}$ implies a constraint on $N_c$ and $\Lambda_{YM}$. This constraint together with the calculation of the abundance of dark matter (i.e. glueballs) should make our theory falsifiable. This will be discussed in the next section. \section{Discussion, challenges and prospects} Given a successful evasion of the never-ending Inflation issue outlined in the previous section, the next phenomenological challenge for our model would be a successful description of the current dark matter abundance. From the Lagrangian in Eq. \ref{eftlagrangian} it is clear that the interaction of dark matter particles is local on a scale of $\Lambda_{YM}$ and, in the 't Hooft limit, it gets suppressed by $N_c^{-2}$. At $T<T_c$, the glueballs become weakly self-interacting and come out of equilibrium, so one cannot trust the calculation of the equation of state in the previous section from the end of deconfinement onwards.
However, one can functionally think of the gas of glueballs as a ``dust'' of conserved particles, that is, a distribution of non-relativistic and non-interacting massive particles. Assuming an ideal equation of state for standard model matter (energy density $e_S$, pressure $p_S=e_S/3$, degeneracy $g_S$), the equations \ref{friedm} of the last section can be supplemented by: \begin{equation} e = m_h n + e_S \phantom{AA},\phantom{AA} p = \frac{e_S^{1/4}}{g_S m_h} n + \frac{1}{3}e_S \phantom{AA},\phantom{AA} e_S \sim g_S T^4 \end{equation} where $n$ stands for the glueball number density. We also add a conservation equation for the number density of glueballs: \begin{equation} \frac{d n}{d \tau}= f n + \order{\frac{1}{N_c^2}\left( \frac{T}{\Lambda_{YM}} \right)^{n\geq 4}} \end{equation} \noindent{where the last} term vanishes rapidly after the deconfinement phase transition, since we do not expect scattering and annihilation of glueballs to be sizeable in the confined phase. We note that the dark matter density at its formation should increase monotonically with $N_c$ and $\Lambda_{YM}$, $n(T=T_c)\sim N_c^2 \Lambda_{YM}^3$, since the dark matter mass density should be comparable to the energy density at deconfinement. In the previous section, however, we saw that, in Inflation, $N_c$ and $\Lambda_{YM}$ are also correlated for a fixed number of efoldings. Our model, therefore, predicts a correlation between the number of efoldings and the dark matter abundance which is in principle testable. These equations will be examined in a forthcoming publication. The gas of glueballs naturally tracks the perturbations that formed in the inflationary era. While the usual source of perturbations, quantum fluctuations of the inflaton field \cite{baumann}, is inapplicable here, hydrodynamic instabilities could provide an alternative mechanism for generating fluctuations, since it has been shown \cite{mebulk2} that the Hubble hydrodynamic solution is unstable against small perturbations.
Furthermore, provided that the scale separation between the macroscopic Hubble factor $\dot{a}/a$ and the microscopic bulk viscosity $\zeta/(sT)$ is wide enough, these perturbations could have a scale-free spectrum, based on Kolmogorov's arguments \cite{turbulence}. Since the glueballs are heavy, weakly self-interacting and, by assumption, flavourless, their gas naturally plays the role of the sinks of cold dark matter assumed in $\Lambda$CDM cosmology. To complete our model's scenario, we mention that a residual interaction between glueballs might have a role in the constitution of dark energy, explaining why it switched on only recently. For a weakly coupled massive gas, $\eta/s \gg 1$, but the relaxation time is also large, $\eta/(Ts) \gg (\dot{a}/a)^{-1}$. Hence, the shear viscosity will have a large turn-on time and it will only appear long after the formation of the glueballs. Quantitatively, this could be implemented by solving the FRW equations with Israel-Stewart dynamics \cite{hicexp3}, adding the Israel-Stewart equation for $\Pi$: \begin{equation} p_\zeta \rightarrow p- \Pi \phantom{AA},\phantom{AA} \tau_\pi \left( \dot{\Pi} + \Pi \frac{\dot{a}}{a} \right) + \Pi = 3 \zeta \frac{\dot{a}}{a} \end{equation} \noindent{with} $\tau_\pi \sim \eta/(Ts) \sim \Lambda_{YM} N_c^2/T$. In this context, large enough $\Lambda_{YM}$ and $N_c$ might explain a late switch on of dark energy. Finally, we should mention that a non-zero $\theta$ parameter \cite{theta} associated with the dark sector could in principle account for baryogenesis. It would lead to Eq. \ref{lagrangian} being augmented by a term of the form $\sim \theta Tr_a\left[ \epsilon_{\alpha \beta \mu \nu} F_a^{\alpha \beta} F_a^{\mu \nu} \right]$ and a suppressed interaction between the standard model and the dark sector.
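As a toy illustration of the relaxation dynamics above, one can integrate the Israel-Stewart equation for $\Pi$ at constant $\dot a/a=H$ and constant $\zeta$ (an assumption made only for this sketch, not a feature of the model); $\Pi$ then relaxes on a timescale set by $\tau_\pi$ towards the steady state $3\zeta H/(1+\tau_\pi H)$:

```python
# Hypothetical, dimensionless constants chosen only for illustration.
H, zeta, tau_pi = 1.0, 0.5, 2.0

# tau_pi * (dPi/dt + Pi * H) + Pi = 3 * zeta * H, integrated by forward Euler.
Pi, dt = 0.0, 1e-3
for _ in range(20_000):
    dPi = (3.0 * zeta * H - Pi * (1.0 + tau_pi * H)) / tau_pi
    Pi += dt * dPi

print(Pi)  # relaxes towards 3*zeta*H / (1 + tau_pi*H) = 0.5
```

A large $\tau_\pi$ slows this relaxation, which is the mechanism invoked above for the late appearance of viscous effects.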
In conclusion, in this work we have argued that some ideas developed in quark-gluon plasma physics, when extended to a hypothetical Yang-Mills theory without flavours and with a transition scale in the \textit{TeV} range, can unify apparently unrelated features of the standard cosmological model such as Inflation and dark matter. We have shown that Inflation due to the bulk viscosity peak at deconfinement can reproduce the required number of efoldings. Nevertheless, avoiding a hot never-ending Inflation phase might require some fine-tuning. We have listed the cases capable of falsifying our model and the potential ways this scenario could attempt to explain the standard features of our universe: dark matter, perturbations, dark energy and baryogenesis. We hope quantitative progress on the points listed above will confirm whether this scenario is phenomenologically viable. \vskip 0.3cm GT acknowledges support from FAPESP proc. 2017/06508-7 and CNPQ bolsa de produtividade 301996/2014-8. MM acknowledges support from CNPQ process 132657/2015-5, from Global Affairs Canada (DFATD) through the concession of an ELAP scholarship at McGill University, and from CRC-TR211 through the concession of a Master's qualification fellowship at J. W. Goethe University. We thank Marco Panero, Jose Ademir Lima, Rodolfo Valentim, Pedro Holanda and Jean-Sebastian Gagnon for discussions and suggestions.
\section*{Introduction} Among the monomial ideals, the squarefree monomial ideals play a distinguished role as they are linked in many ways to combinatorial objects such as simplicial complexes and graphs. Squarefree monomials in a polynomial ring $K[x_1, \ldots, x_n]$ which generate these ideals are monomials of the form $x_{i_1}\cdots x_{i_d}$ with $i_1 < i_2< \cdots < i_d$. In this paper, we call a monomial $x_{i_1}x_{i_2}\cdots x_{i_d}$ with $i_1\leq i_2\leq\cdots \leq i_d$ {\em $t$-spread,} if $i_j- i_{j-1}\geq t$ for $2\leq j \leq d$. Note that any monomial is $0$-spread, while the squarefree monomials are $1$-spread. A monomial ideal in $S$ is called a {\em $t$-spread monomial ideal}, if it is generated by $t$-spread monomials. For example, $I=(x_1x_4x_8,x_2x_5x_8,x_1x_5x_9,x_2x_6x_9,x_4x_9) \subset K[x_1, \ldots, x_9]$ is a $3$-spread monomial ideal, but not $4$-spread, because $x_2x_5x_8$ is not a $4$-spread monomial. Note that $2$--spread monomial ideals appear as initial ideals for the defining ideals of the fiber cones of monomial ideals in two variables \cite{HQS}. There is a well-known deformation, called polarization, which assigns to each monomial ideal a squarefree monomial ideal, preserving all homological properties of these ideals. In this way, many problems regarding monomial ideals can be reduced to the study of squarefree monomial ideals. On the other hand, in shifting theory, in particular for symmetric algebraic shifting, one uses another operator, called the {\em stretching operator}, see \cite{K}, \cite{HHBook}. To transform an arbitrary monomial $u=x_{i_1}\cdots x_{i_d}$ with $i_1 \leq i_2\leq \cdots \leq i_d$ into a squarefree monomial, one defines the stretched monomial $\sigma(u)=x_{i_1}x_{i_2+1}x_{i_3+2}\cdots x_{i_d+(d-1)}$. Let $I$ be a monomial ideal and $G(I)=\{u_1, \ldots, u_m\}$ be the unique minimal set of monomial generators of $I$.
Then $I^{\sigma}$ is defined to be the ideal with $G(I^{\sigma})=\{\sigma(u_1), \ldots, \sigma(u_m)\}$. In contrast to polarization, the stretching operator is not a deformation and in general does not preserve any of the homological properties of the ideal. For example, if $I=(x_1^2, x_2^2)$, then $I^{\sigma}= (x_1x_2, x_2x_3)$. In this example $I$ is a complete intersection, but $I^{\sigma}$ does not have this property. In fact, $I^{\sigma}$ has a linear resolution. Applying again the operator $\sigma$ to $I^{\sigma}$, we obtain the ideal $I^{\sigma^2}=(x_1x_3, x_2x_4)$ which again is a complete intersection. It can be easily seen that the $t$-fold iterated operator $\sigma^t$ establishes a bijection between all monomials in the polynomial ring $T=K[x_1, x_2, \ldots]$ and all $t$-spread monomials in $T$; see Corollary~\ref{bijection}. While, in general, $I$ and $I^{\sigma}$ may have different graded Betti numbers, it turns out that the graded Betti numbers coincide when $I$ is a strongly stable ideal. This fact has been used in shifting theory to define symmetric algebraic shifting; see for example \cite[Section 11.2.2]{HHBook}. More generally, as one of the main results of this paper, we show that $I$ is a $t$-spread strongly stable ideal if and only if $I^{\sigma}$ is a $t+1$-spread strongly stable ideal (Proposition~\ref{equal}), and that $I$ and $I^{\sigma}$ have the same graded Betti numbers; see Theorem~\ref{betti}. The concept of a $t$-spread strongly stable ideal generalizes the concepts of strongly stable and squarefree strongly stable ideals, and is defined as follows: a monomial ideal $I$ is called {\em $t$-spread strongly stable}, if for all $t$-spread monomials $u\in I$, all $j\in \supp(u)$ and all $i<j$ such that $x_i(u/x_{j})$ is $t$-spread, it follows that $x_i(u/x_j)\in I$.
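The operators involved are simple enough to script; the following sketch (representing a monomial by its sorted list of variable indices, an encoding chosen here only for illustration) implements the $t$-spread condition and the stretching operator $\sigma$, and reproduces the examples above:

```python
def is_t_spread(idx, t):
    """idx = [i_1 <= ... <= i_d], the variable indices of a monomial."""
    return all(idx[j] - idx[j - 1] >= t for j in range(1, len(idx)))

def sigma(idx):
    """Stretching operator: x_{i_1} x_{i_2} ... x_{i_d} -> x_{i_1} x_{i_2+1} ... x_{i_d+(d-1)}."""
    return [i + j for j, i in enumerate(idx)]

# I = (x_1^2, x_2^2)  ->  I^sigma = (x_1 x_2, x_2 x_3), as in the text.
print(sigma([1, 1]), sigma([2, 2]))   # [1, 2] [2, 3]

# A 3-spread monomial such as x_1 x_4 x_8 is mapped to a 4-spread one.
print(is_t_spread([1, 4, 8], 3), is_t_spread(sigma([1, 4, 8]), 4))   # True True
```

Iterating `sigma` $t$ times realizes the map $\sigma^t$ from arbitrary monomials to $t$-spread monomials.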
By using Theorem~\ref{betti} and a well-known result of Eliahou-Kervaire \cite{EK}, we obtain in Corollary~\ref{formula} an explicit formula for the graded Betti numbers of a $t$-spread strongly stable ideal. As for ordinary strongly stable ideals, one defines Borel generators of a $t$-spread strongly stable ideal $I$ as a set of $t$-spread monomials in $I$ with the property that $I$ is the smallest $t$-spread strongly stable ideal containing these generators. Of particular interest is the case when $I$ has precisely one Borel generator. In the special case when the Borel generator in $K[x_1, \ldots, x_n]$ is $u=x_{n-t(d-1)}\cdots x_{n-t} x_n,$ the resulting ideal is called a \emph{$t$-spread Veronese ideal}. It is generated by all $t$-spread monomials of degree $\deg(u)$. Theorem~\ref{big} lists the homological and algebraic properties of $t$-spread Veronese ideals and their Alexander duals. The results of this theorem are then used in Theorem~\ref{height} to determine the height of any $t$-spread strongly stable ideal. As a consequence, Cohen-Macaulay $t$-spread strongly stable ideals can be classified; see Corollary~\ref{cmclassified}. In Section~\ref{alg}, we study the toric $K$-algebras whose generators are the generators of a $t$-spread principal Borel ideal. Generalizing a result of De Negri \cite{N}, we show that these algebras are Koszul, Cohen-Macaulay normal domains. Finally, in Section~\ref{genini}, we show that the generic initial ideal of a $t$-spread strongly stable ideal is simply obtained by the inverse of the $t$-fold iterated operator $\sigma$. It should be noted that the graded Betti numbers of $I$ and $I^{\sigma}$ may coincide not only for $t$-spread strongly stable ideals.
In fact, it can be easily seen that if $I=J^{\sigma^n}$ for a monomial ideal $J \subset K[x_1, \ldots, x_n]$, then for any $t$, $I$ and $I^{\sigma^t}$ have the same graded Betti numbers, because for such ideals the application of the operator $\sigma$ simply amounts to renaming the variables. It would be interesting to determine all monomial ideals for which the graded Betti numbers of $I$ and $I^{\sigma}$ coincide. \section{$t$--spread strongly stable ideals} This section is intended to generalize the concepts of stable and squarefree stable ideals. Let $K$ be a field and $S=K[x_1,\ldots,x_n]$ the polynomial ring in $n$ variables over $K$. We denote by $\Mon(S)$ the set of all monomials in $S.$ For a monomial $u$ we denote by $\max(u)$ ($\min(u)$) the maximal (minimal) index $i$ for which $x_i$ divides $u.$ \begin{Definition} A $t$-spread monomial ideal $I \subset S$ is called {\em $t$-spread stable}, if for all $t$-spread monomials $u\in I$ and for all $i < \max(u)$ such that $x_i(u/x_{\max(u)})$ is a $t$-spread monomial, it follows that $x_i(u/x_{\max(u)})\in I$. The ideal $I$ is called {\em $t$-spread strongly stable}, if for all $t$-spread monomials $u\in I$, all $j\in \supp(u)$ and all $i<j$ such that $x_i(u/x_{j})$ is $t$-spread, it follows that $x_i(u/x_j)\in I$. \end{Definition} Note that a $t$-spread strongly stable ideal is also $t$-spread stable. \begin{Lemma}\label{genlemma} Let $I$ be a $t$-spread monomial ideal. The following conditions are equivalent: \begin{enumerate} \item[{\em (a)}] $I$ is $t$-spread strongly stable. \item[{\em (b)}] If $u \in G(I)$, $j\in \supp(u)$ and $i< j$ such that $x_i(u/x_j)$ is a $t$-spread monomial, then $x_i(u/x_j)\in I$. \end{enumerate} \end{Lemma} \begin{proof} (a) $\Rightarrow$ (b) is obvious. To prove (b) $\Rightarrow$ (a), let $u \in I$ be a $t$-spread monomial and $i<j$ such that $u'=x_i(u/x_j)$ is a $t$-spread monomial. Let $v \in G(I)$ such that $v|u$. If $x_j \notin \supp(v)$, then $v|u'$ and $u' \in I$.
Otherwise, if $x_j \in \supp(v)$, then $v'=x_i(v/x_j) \in I$ by our assumption and $v'|u'$ and again we have $u' \in I$. \end{proof} \medskip The following lemma is crucial for the study of $t$-spread strongly stable ideals. \begin{Lemma}\label{canonical} Let $I$ be a $t$-spread strongly stable ideal and $w \in I$ be a $t$-spread monomial. Then $w=w_1w_2$ such that $\max(w_1) < \min(w_2)$ for some $w_1 \in G(I)$ and $w_2 \in \Mon(S)$. \end{Lemma} \begin{proof} We may assume that $t >0$, because for $t=0$ such a decomposition for $w$ is known; see \cite[Lemma 1.1]{EK}. Now, let $w=w'_1w'_2$ with $w'_1 \in G(I)$ such that if some $v \in G(I)$ with $v | w$, then $\deg(w'_1) \leq \deg(v)$. Of course, both $w'_1$ and $w'_2$ are $t$-spread monomials. Suppose that $k=\max(w'_1) - \min(w'_2) \geq 0$. Then we show that there exists $w''_1 \in G(I)$ such that $w=w''_1 w''_2$ for some monomial $w''_2 \in \Mon(S)$ such that $\max(w''_1) - \min(w''_2) < k$. Let $j=\max(w'_1)$ and $i= \min(w'_2)$. Then $w''_1=x_i(w'_1/x_j)$ is $t$-spread because $\supp(w''_1) \subseteq \supp(w)$. Let $w''_2=x_j(w'_2/x_i)$. Then $w''_2$ is $t$-spread as well and $w=w''_1w''_2$. Since $I$ is $t$-spread strongly stable and $i<j$, we have $w''_1 \in I$. Also, $\deg(w'_1)= \deg(w''_1)$, and hence, by the assumption on $\deg(w'_1)$, we see that $w''_1 \in G(I)$. Moreover, $\max(w''_1) < \max(w'_1)$ and $\min(w'_2) < \min(w''_2)$ and $\max(w''_1) - \min(w''_2) <k$. By applying induction on $k$, we get the desired result. \end{proof} \medskip A $t$-spread stable ideal need not have linear quotients. For example, the ideal $I=(x_1x_3x_5,x_1x_4x_6)$ is $2$-spread stable, but does not have linear quotients. However, we have \begin{Theorem} \label{ayesha} The $t$-spread strongly stable ideals have linear quotients. In particular, they are componentwise linear. \end{Theorem} \begin{proof} Let $G(I)=\{u_1, u_2, \ldots, u_m\}$ be ordered with respect to the pure lexicographical order.
Let $r \leq m$ and $J=( u_1,\ldots,u_{r-1})$. Then in order to show that $J:u_r$ is generated by variables, it is enough to show that for all $1 \leq k \leq r-1$ there exists $x_i \in J:u_r$ such that $x_i$ divides $u_k/\gcd(u_k, u_r)$. Let $u_k=x_{i_1}x_{i_2}\cdots x_{i_s}$ with $i_1\leq i_2\leq \cdots \leq i_s$ and $u_r=x_{j_1}x_{j_2}\cdots x_{j_q}$ with $j_1\leq j_2\leq \cdots \leq j_q$. Since $u_k >_{\lex} u_r$, there exists $d$ with $1 \leq d \leq q$ such that $i_1=j_1, \ldots, i_{d-1}=j_{d-1}$ and $i_d < j_d$. Let $v=x_{i_d} (u_r/x_{j_d})$. Then $v=x_{j_1}x_{j_2}\cdots x_{j_{d-1}}x_{i_d}x_{j_{d+1}}\cdots x_{j_{q}}$. Since $i_d -j_{d-1} = i_d-i_{d-1} \geq t$ and $j_{d+1} -i_{d} > j_{d+1}-j_{d} \geq t$, it follows that $v$ is $t$-spread, and so $v \in I$ and $v >_{\lex} u_r$. In fact, $v \in J$. Indeed, by Lemma~\ref{canonical}, there exists $u_l \in G(I)$ such that $v =u_lw$ and $\max(u_l) < \min(w)$. Suppose that $v \notin J$. Then $u_l \leq_{\lex} u_r$. From the presentation $v=u_lw$, it follows that $v \leq_{\lex} u_r$, a contradiction. Now, as we know that $v \in J$, it follows that $x_{i_d} \in J:u_r$. This completes the proof, since $x_{i_d}$ divides $u_k/\gcd(u_k, u_r)$. \end{proof} Let $I$ be a $t$-spread strongly stable ideal with $G(I)=\{u_1, u_2, \ldots, u_m\}$ ordered with respect to the pure lexicographic order. As in \cite{HT} we define \[ \set(u_k) = \{i\;:\; x_i\in (u_1,\ldots,u_{k-1}):u_k\} \quad \text{for $k=1,\ldots,m$}. \] The proof of Theorem~\ref{ayesha} shows that $\set(u_k)$ is the set of positive integers $i$ satisfying \begin{equation}\label{henning} \text{$i<\max(u_k)$, $i\not\in\supp(u_k)$ and $i-j\geq t$ for all $j\in \supp(u_k)$ with $j<i$}. \end{equation} We set $I_j=(u_1,\ldots,u_j)$ for $j=1,\ldots, m$. Let $M(I)$ be the set of all monomials in $I$. The {\em decomposition map} $g\colon M(I)\to G(I)$ is defined as follows: for $u\in M(I)$ we let $g(u)=u_j$, where $j$ is the smallest number such that $u\in I_j$.
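The characterization (\ref{henning}) of $\set(u_k)$ is purely combinatorial and easy to script; a small sketch, with monomials again represented by sorted index lists (an encoding assumed only for illustration):

```python
def set_of(u, t):
    """set(u) per the characterization: all i < max(u) with i not in supp(u)
    and i - j >= t for every j in supp(u) with j < i."""
    supp = set(u)
    return [i for i in range(1, max(u))
            if i not in supp and all(i - j >= t for j in supp if j < i)]

# The 2-spread strongly stable ideal discussed in the text has set(x_3 x_6) = {1, 2, 5}
# and set(x_2 x_6) = {1, 4, 5}.
print(set_of([3, 6], 2))   # [1, 2, 5]
print(set_of([2, 6], 2))   # [1, 4, 5]
```

Both outputs agree with the values quoted in the regularity counterexample.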
The decomposition map is {\em regular}, if $\set(g(x_iu_k))\subset\set(u_k)$ for all $i\in\set(u_k)$ and all $u_k\in G(I)$. \medskip The resolution of monomial ideals with linear quotients and regular decomposition function can be explicitly described; see \cite[Theorem 1.12]{HT}. Stable and squarefree stable ideals have regular decomposition functions. However, even $2$-spread strongly stable monomial ideals in general do not have regular decomposition functions. For example, consider the $2$-spread strongly stable ideal \[ I=(x_1x_3,x_1x_4, x_1x_5,x_1x_6,x_2x_4, x_2x_5,x_2x_6,x_3x_5,x_3x_6). \] Then, $\set(x_3x_6)= \{1,2,5\}$, $g(x_2x_3x_6)= x_2x_6$ and $\set(g(x_2x_3x_6))=\{1,4,5\} \not\subseteq \set(x_3x_6)$. \medskip In what follows, we will establish a bijection between $t$-spread strongly stable ideals and $t+1$-spread strongly stable ideals which preserves the graded Betti numbers. Let $T= K[x_1, x_2, \ldots]$ be a polynomial ring in infinitely many variables. We denote by $\Mon(T;t)$ the set of all $t$-spread monomials in $T$. Then $\Mon(T;0)$ is just the set of all monomials of $T$ which we simply denote by $\Mon(T)$. \begin{Definition} Let $u=\prod_{j=1}^{d}x_{i_j} \in T$ with $i_1 \leq i_2 \leq \cdots \leq i_d$. Then we define $\sigma: \Mon(T) \rightarrow \Mon(T)$ by \[ \sigma(u)=\prod_{j=1}^{d} x_{i_{j}+(j-1)}. \] \end{Definition} Note that $\sigma$ induces a map $\Mon(T;t) \rightarrow \Mon(T;t+1)$ which we again denote by $\sigma$. Indeed, if $u$ is a $t$-spread monomial then $\sigma(u)$ is a $t+1$-spread monomial, because $(i_{j+1}+j)- (i_j+(j-1))= i_{j+1}- i_j+1 \geq t+1$. \begin{Lemma}\label{bijectionlemma} The map $\sigma: \Mon(T;t) \rightarrow \Mon(T;t+1)$ is bijective. \end{Lemma} \begin{proof} Let $u=\prod_{j=1}^{d}x_{i_j} \in T$ with $i_1 \leq i_2 \leq \cdots \leq i_d$. We define the inverse map $\tau:\Mon(T;t+1) \rightarrow \Mon(T;t)$ of $\sigma$ by \[ \tau(u)=\prod_{j=1}^{d} x_{i_{j}-(j-1)}.
\] \end{proof} \begin{Corollary}\label{bijection} The iterated map $\sigma^t: \Mon(T) \rightarrow \Mon(T;t)$ establishes a bijection between the set of all monomials in $T$ and the set of all $t$-spread monomials in $T$. \end{Corollary} \begin{Definition} Let $I$ be a monomial ideal. Then we let $I^{\sigma}$ be the ideal generated by the monomials $\sigma(u)$ with $u \in G(I)$. \end{Definition} Observe that if $I$ is a $t$-spread ideal then $I^{\sigma}$ is a $t+1$-spread ideal. \begin{Proposition}\label{equal} Let $I$ be a monomial ideal. Then $I$ is a $t$-spread strongly stable ideal if and only if $I^{\sigma}$ is a $t+1$-spread strongly stable ideal. \end{Proposition} \begin{proof} Let $I$ be a $t$-spread strongly stable ideal and $\sigma(u)=\prod_{j=1}^{d} x_{i_{j}+(j-1)}$ with $u \in G(I)$. We want to show that for all $j \in \supp(\sigma(u))$ and all $k<j$ such that $v=x_k(\sigma(u)/x_j)$ is a $t+1$-spread monomial, we have $v \in I^{\sigma}$. Since $x_j | \sigma(u)$, it follows that $j=i_l+(l-1)$ for some $1 \leq l \leq d$. Then \[ v=x_{i_1}x_{i_2+{1}} \cdots x_{{i_{l-1}}+(l-2)} x_k x_{{i_{l+1}}+l}\cdots x_{{i_d}+(d-1)}. \] Let $w=\tau(v)$. Then, first we show that $w \in I$. Indeed, \[ w=x_{i_1}x_{i_2} \cdots x_{{i_{l-1}}} x_{k-(l-1)} x_{{i_{l+1}}}\cdots x_{{i_d}}, \] therefore, $w=x_{k-(l-1)}(u/x_{i_l})$ and $k-(l-1) < i_l$. Moreover, $w$ is $t$-spread. Indeed, since $v$ is $t+1$-spread, we have $k-(i_{l-1}+(l-2)) \geq t+1$ which implies $k-(l-1)-i_{l-1}\geq t$, and we have $i_{l+1}+l-k\geq t+1$ which implies $i_{l+1}-(k-(l-1)) \geq t$. Since $I$ is $t$-spread strongly stable, it follows that $w \in I$. Then, by Lemma~\ref{canonical}, $w=w_1w_2$ with $w_1 \in G(I)$ and $\max(w_1) < \min(w_2)$. This implies that $v=\sigma(w)=\sigma(w_1)w'$ where $w'$ is a monomial. Therefore, $v \in I^{\sigma}$. The converse may be handled in a similar way. \end{proof} For the proof of Theorem~\ref{betti}, we use the following result which is an immediate consequence of \cite[Lemma 1.5]{HT}. \begin{Lemma}\label{set1} Let $I$ be a monomial ideal with linear quotients.
Then \[ \beta_{i,i+j}(I)= |\{(\alpha, u)\; : \; u \in G(I)_j,\ \alpha \subseteq \set (u) \text{ and } |\alpha|=i\}|, \] where $G(I)_j=\{u \in G(I)\; : \; \deg(u)=j\}$. \end{Lemma} \begin{Theorem}\label{betti} Let $I$ be a $t$-spread strongly stable ideal. Then $\beta_{i,i+j}(I)= \beta_{i,i+j}(I^{\sigma})$ for all $i$ and $j$. \end{Theorem} \begin{proof} Let $u=x_{i_1}x_{i_2}\cdots x_{i_d} \in G(I)$. Let $\set(u)=\{a_1< \cdots < a_r\}$ and \[ b_i=a_i+\max\{l \; : \; i_l <a_i\} \] for $i=1, \ldots, r$. We claim that $b_1 < \cdots < b_r$ and $\set(\sigma(u))=\{b_1, \ldots , b_r\}$. The claim together with Lemma~\ref{set1} yields the desired result. \medskip Proof of the claim: Let $k <j$ and $i_l < a_k<i_{l+1}$ and $i_m < a_j<i_{m+1}$. Then $m \geq l$ and $b_j-b_k= a_j+m -(a_k+l)= a_j-a_k+(m-l) >0$. Next, we show that $b_i \in \set(\sigma(u))$. Indeed, if $a_i \in \set(u)$ and $i_l < a_i < i_{l+1}$, then by (\ref{henning}) we have $a_i -i_l \geq t$. Therefore, $i_{l}+(l-1) < a_i+l < i_{l+1}+l$ and $a_i+l - (i_l+(l-1)) \geq t+1$. Since $b_i=a_i+l$, this shows that $b_i \in \set(\sigma(u))$. Conversely, let $c \in \set(\sigma(u))$. Then from (\ref{henning}), we see that there exists an integer $l$ such that $i_l +(l-1)< c < i_{l+1}+l$ and $c-(i_l+(l-1)) \geq t+1$. This shows that $i_l < c -l< i_{l+1}$ and $(c-l)-i_l \geq t$. Therefore, $c-l =a_i$ for some $i$ and $c=a_i+l=b_i$. \end{proof} In general, Theorem~\ref{betti} is not valid for an arbitrary $t$-spread monomial ideal. For example, let $I=(x_1^2, x_2^2)$. Then $I^{\sigma}= (x_1x_2,x_2x_3)$, and $I^{\sigma}$ has a linear resolution while $I$ does not. \begin{Corollary}\label{formula} Let $I$ be a $t$-spread strongly stable ideal. Then \[ \beta_{i,i+j}(I) = \sum_{u \in G(I)_j} \binom{ \max(u)-t(j-1)-1}{i}. \] \end{Corollary} \begin{proof} We know that $I^{\tau^t}$ is strongly stable. From \cite{EK}, we know that \[ \beta_{i, i+j}(I^{\tau^t})=\sum_{u \in G(I^{\tau^t})_j} \binom{\max(u)-1}{i}.
\] By Theorem~\ref{betti}, we have $\beta_{i, i+j}(I^{\tau^t})= \beta_{i, i+j}(I)$, therefore, \[ \beta_{i, i+j}(I)=\sum_{u \in G(I)_j} \binom{\max(\tau^t(u))-1}{i}. \] The proof follows, because $\max(\tau^t(u))= \max(u)-t(\deg(u)-1)$ for all $u \in G(I)$. \end{proof} \section{$t$--spread Borel generators} In the theory of stable ideals, Borel generators play an important role. In this section, we introduce the analogous concept for $t$-spread strongly stable ideals. Let $u_1,\ldots, u_m$ be $t$-spread monomials in $S$. There exists a unique smallest $t$-spread strongly stable ideal containing $u_1,\ldots, u_m$, which we denote by $B_t(u_1, \ldots, u_m)$. The monomials $u_1, \ldots, u_m$ are called the {\em $t$-spread Borel generators} of $B_t(u_1, \ldots, u_m)$. For example, let $I=B_2(x_2x_4, x_1x_5)$. Then $G(I)=\{x_1x_3, x_1x_4,x_1x_5,x_2x_4\}$. \begin{Proposition}\label{gen} Let $I=B_t(u_1, \ldots, u_m).$ Then $I^{\sigma} = B_{t+1}(\sigma(u_1), \ldots, \sigma(u_m))$. \end{Proposition} \begin{proof} Let $w \in G(I)$ and $w=x_{j_1}\cdots x_{j_d}$. Then there exists $u_l=x_{i_1}\cdots x_{i_d}$ such that $j_k \leq i_k$ for all $k=1, \ldots, d$. This gives $j_k + (k-1)\leq i_k+(k-1)$ for all $k=1, \ldots, d$. Therefore, $\sigma(w) \in B_{t+1}(\sigma(u_l)) \subseteq B_{t+1}(\sigma(u_1), \ldots, \sigma(u_m))$. Since $I^{\sigma}$ is generated by the elements $\sigma(w)$ with $w \in G(I)$, it follows that $I^{\sigma} \subseteq B_{t+1}(\sigma(u_1), \ldots, \sigma(u_m))$. Furthermore, $B_{t+1}(\sigma(u_1), \ldots, \sigma(u_m))$ is the smallest $t+1$-spread strongly stable ideal containing $\sigma(u_1), \ldots, \sigma(u_m)$. Therefore, $B_{t+1}(\sigma(u_1), \ldots, \sigma(u_m)) \subseteq I^{\sigma}$, because $\sigma(u_1), \ldots, \sigma(u_m) \in I^{\sigma}$ and $I^{\sigma}$ is $t+1$-spread strongly stable. \end{proof} We call a $t$-spread strongly stable ideal $I$ {\em $t$-spread principal Borel}, if there exists a $t$-spread monomial $u \in I$ such that $I=B_t(u)$.
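The membership criterion for $G(B_t(u))$ (componentwise bounds $j_k\leq i_k$ together with the $t$-spread gap condition) and the Betti number formula of Corollary~\ref{formula} lend themselves to a short computational sketch; monomials are encoded as sorted index lists (an illustrative choice), and the union of generating sets below relies on all Borel generators having the same degree, as in the example $I=B_2(x_2x_4, x_1x_5)$ above:

```python
from itertools import combinations_with_replacement
from math import comb

def principal_borel_gens(u, t, n):
    """G(B_t(u)): all (j_1,...,j_d) with j_k <= u[k] and j_k - j_{k-1} >= t."""
    d = len(u)
    return [list(j) for j in combinations_with_replacement(range(1, n + 1), d)
            if all(j[k] <= u[k] for k in range(d))
            and all(j[k] - j[k - 1] >= t for k in range(1, d))]

def betti(gens, t, i, j):
    """beta_{i,i+j}(I) of a t-spread strongly stable ideal, per the Corollary."""
    return sum(comb(max(u) - t * (j - 1) - 1, i) for u in gens if len(u) == j)

# I = B_2(x_2 x_4, x_1 x_5): G(I) = {x_1x_3, x_1x_4, x_1x_5, x_2x_4}.
gens = {tuple(v) for w in ([2, 4], [1, 5])
        for v in principal_borel_gens(w, 2, 5)}
print(sorted(gens))           # [(1, 3), (1, 4), (1, 5), (2, 4)]
print(betti(gens, 2, 0, 2))   # beta_0 = number of minimal generators = 4
print(betti(gens, 2, 1, 2))   # 4
```

The first printed set matches the generating set stated in the text for this ideal.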
\medskip Let $u=x_{i_1}\cdots x_{i_d}$. Observe that $x_{j_1}\cdots x_{j_d} \in G(B_t(u))$ if and only if \begin{enumerate} \item[(i)] $j_1 \leq i_1, \ldots, j_d \leq i_d$, and \item[(ii)] $j_k-j_{k-1} \geq t$ for $k=2, \ldots, d$. \end{enumerate} \medskip In what follows, we study an important special class of $t$-spread principal Borel ideals. \begin{Definition} Let $d\geq 1$ be an integer. A monomial ideal in $S=K[x_1, \ldots, x_n]$ is called a {\em $t$-spread Veronese ideal of degree $d$}, if it is generated by all $t$-spread monomials of degree $d$. \end{Definition} We denote by $I_{n,d,t} \subset S$ the $t$-spread Veronese ideal in $S$ generated in degree $d$. Note that $I_{n,d,t} \neq (0)$ if and only if $n > t(d-1)$. Observe that the $t$-spread Veronese ideal of degree $d$ is indeed a $t$-spread principal Borel ideal. In fact, \[ I_{n,d,t}=B_t(\prod_{i=0}^{d-1}x_{n-it}). \] Proposition~\ref{gen} implies that \[ I_{n,d,t}^{\sigma}=I_{n+d-1,d,t+1} \quad \text{and} \quad I_{n,d,t}^{\tau}=I_{n-d+1,d,t-1} \quad \text{if } t \geq 1. \] \medskip Therefore, \begin{equation}\label{id} [(x_1,\ldots, x_{n-t(d-1)})^d]^{\sigma^t}= I_{n,d,t}. \end{equation} \medskip There exists a simplicial complex $\Delta$ on the vertex set $[n]$ such that $I_{n,d,t}$ is the Stanley-Reisner ideal of $\Delta.$ We denote by $I_{n,d,t}^\vee$ the Stanley-Reisner ideal of the Alexander dual of $\Delta.$ \begin{Theorem}\label{big} Let $t\geq 1$ be an integer and $I_{n,d,t} \subset S$ the $t$-spread Veronese ideal generated in degree $d$. We assume that $\bigcup_{u\in G(I_{n,d,t})}\supp(u)=[n].$ Then we have the following: \begin{enumerate} \item[{\em (a)}] $\height (I_{n,d,t})=n-t(d-1)$.
\item[{\em (b)}] $I_{n,d,t}^\vee$ is generated by the monomials \[ \prod_{i=1}^{n} x_i/(v_{i_1,t}\cdots v_{i_{d-1},t}) \text{ with } i_{j+1}-i_j \geq t \text{ for } 1 \leq j \leq d-2, \] where $v_{i_k,t}=x_{i_k}x_{{i_k}+1}\cdots x_{{i_k}+t-1}$ for $1 \leq k \leq d-1.$ \item[{\em (c)}] $I_{n,d,t}$ is Cohen-Macaulay and has a linear resolution. \item[{\em (d)}] $\beta_i (S/I_{n,d,t})=\binom{d+i-2}{d-1}\binom{n-(t-1)(d-1)}{d+i-1}$ for all $i\geq 1.$ In particular, $\mu (I_{n,d,t})={\binom{n-(t-1)(d-1)}{d}}.$ \item[{\em (e)}] $\beta_i (S/I_{n,d,t}^\vee)=\binom{n-t(d-1)+i-1}{i-1}\binom{n-t(d-1)+d}{d-i}$ for all $i\geq 1.$ In particular, $\mu (I_{n,d,t}^\vee)={\binom{n-t(d-1)+d}{d-1}}.$ \end{enumerate} \end{Theorem} \begin{proof} Let $\Delta$ be the simplicial complex whose Stanley-Reisner ideal is $I_{n,d,t}$ and let ${\mathcal F}(\Delta)$ be the set of facets of $\Delta.$ We prove that every facet of $\Delta$ is of the form {\small \[ F=\{j_1,j_1+1,\ldots,j_1+(t-1),j_2,j_2+1,\ldots,j_2+(t-1),\ldots, j_{d-1},j_{d-1}+1,\ldots,j_{d-1}+(t-1)\} \]} for some $j_1,\ldots,j_{d-1}$ such that $j_\ell-j_{\ell-1}\geq t$ for $2\leq \ell\leq d-1.$ This shows that all the facets of $\Delta$ have the same cardinality, namely $t(d-1),$ thus $\dim \Delta=t(d-1)-1.$ It follows that $\dim(S/I_{n,d,t})=t(d-1),$ thus $\height (I_{n,d,t})=n-t(d-1)$ which proves (a). Moreover, $I_{n,d,t}$ has the primary decomposition \[I_{n,d,t}=\bigcap_{F\in {\mathcal F}(\Delta)}P_{[n]\setminus F}\] where $P_{[n]\setminus F}$ is the monomial prime ideal generated by all the variables $x_j$ with $j\in [n]\setminus F.$ By \cite[Corollary 1.5.5]{HHBook}, statement (b) holds.
To begin with, we show that every set \begin{equation}\label{eq1} F=\{j_1,\ldots,j_1+(t-1),j_2,\ldots,j_2+(t-1),\ldots, j_{d-1},\ldots,j_{d-1}+(t-1)\} \end{equation} for some $j_1,\ldots,j_{d-1}$ such that $j_\ell-j_{\ell-1}\geq t$ for $2\leq \ell\leq d-1$ is a facet of $\Delta.$ We have $F\in \Delta$ since $x_F=\prod_{j\in F}x_j\not\in I_{\Delta}$. On the other hand, we claim that $F\cup\{j\}\not\in \Delta$ for every $j\in [n]\setminus F.$ This will show that $F$ is indeed a facet of $\Delta.$ Let $j\in [n]\setminus F.$ If $j<j_1,$ we get \[x_jx_{j_1+(t-1)}\cdots x_{j_{d-1}+(t-1)}\in I_{\Delta},\] thus $\{j, j_1+(t-1),\ldots, j_{d-1}+(t-1)\}$ is a non-face of $\Delta,$ which implies that $F\cup\{j\}\not\in \Delta.$ If $j\geq j_{d-1}+t,$ we get the non-face $\{j_1,\ldots,j_{d-1},j\}$, thus $F\cup\{j\}\not\in \Delta.$ Finally, if there exists $2\leq\ell\leq d-1$ such that $j_{\ell-1}+(t-1)<j<j_\ell,$ then $\{j_1,\ldots,j_{\ell-1},j,j_{\ell}+(t-1),\ldots, j_{d-1}+(t-1)\}$ is a non-face of $\Delta.$ Consequently, $F\cup\{j\}\not\in \Delta.$ Therefore, we have shown that every set $F$ as in (\ref{eq1}) is a facet of $\Delta.$ Our purpose is to show that the sets of the form (\ref{eq1}) are the only facets. This is equivalent to showing that for every face $G\in \Delta$, there exists $F\in {\mathcal F}(\Delta)$ of the form (\ref{eq1}) which contains $G.$ Let $G\in \Delta$ and $i_1=\min G.$ Inductively, for $\ell\geq 2,$ we set \[i_\ell=\min\{i\in G: i\geq i_{\ell-1}+t\}.\] The sequence $i_1<i_2<\cdots$ has at most $d-1$ elements. Otherwise, $G\supseteq \{i_1,\ldots,i_d\}$ with $i_\ell\geq i_{\ell-1}+t$ for $2\leq \ell\leq d.$ But $\{i_1,\ldots,i_d\}\not\in \Delta$ since $x_{i_1}\cdots x_{i_d}\in I_{\Delta}.$ Thus $G\not\in\Delta,$ a contradiction. 
Therefore, $G$ has the form \[G=\{i_1,i_1+1,\ldots,i_1+q_1,\ldots, i_k,i_k+1,\ldots,i_k+q_k\}\] for some $k\leq d-1, 0\leq q_1,\ldots,q_k\leq t-1,$ and $i_\ell\geq i_{\ell-1}+t$ for $2\leq \ell\leq k.$ Obviously, $G\subseteq G^\prime$ where \[G^\prime=\{i_1,i_1+1,\ldots,i_1+(t-1),\ldots,i_{k-1},i_{k-1}+1,\ldots,i_{k-1}+(t-1),i_k,\ldots,i_k+q\}\] where we set $q=q_k.$ \textbf{Claim}. For $k\leq d-2,$ there exists $H\in \Delta,$ $H\supset G^\prime\supset G,$ with \[H=\{i^\prime_1,\ldots,i^\prime_1+(t-1),\ldots,i^\prime_{k},i^\prime_{k}+1,\ldots,i^\prime_{k}+(t-1),i^\prime_{k+1},\ldots, i^\prime_{k+1}+q^\prime\} \] for some $0\leq q^\prime\leq t-1,$ $i^\prime_1\leq t,$ and $i^\prime_{\ell}\geq i^\prime_{\ell-1}+t$ for $2\leq \ell\leq k+1.$ \emph{Proof of the Claim.} If $i_1=\min G^\prime\geq t+1,$ then $G\subset G^\prime\subset H=\{1,\ldots,t\}\cup G^\prime\in \Delta$ and the claim follows. Let now $i_1\leq t$ and assume first that $i_\ell=i_{\ell-1}+t$ for $2\leq \ell\leq k.$ Then \[i_k=i_1+(k-1)t\leq kt\leq (d-2)t\leq n-t-1.\] In the last inequality we used the condition $n\geq 1+(d-1)t$ which must be satisfied by $n.$ Then we get $i_k+t\leq n-1$, hence we may take \[H=\{i_1,i_1+1,\ldots,i_1+(t-1),\ldots,i_{k},i_{k}+1,\ldots,i_{k}+(t-1),i_{k+1}=i_k+t\}.\] To complete the proof of the Claim, we need to consider one last case, namely when there exists $\nu$ such that $i_\nu>i_{\nu-1}+t.$ Let $\ell=\max\{\nu: i_\nu>i_{\nu-1}+t\}.$ Then it follows that $i_k>i_{\ell-1}+(k-\ell+1)t$ and we may take \[H=\{i_1,\ldots,i_1+(t-1),\ldots,i_{\ell-1},\ldots,i_{\ell-1}+(t-1),i^\prime_\ell,\ldots, i^\prime_\ell+(t-1),\ldots,\] \[i^\prime_k,\ldots, i^\prime_k+(t-1),i^\prime_{k+1},\ldots,i^\prime_{k+1}+s^\prime\}\] for some $s^\prime\geq 0,$ where $i^\prime_\ell=i_{\ell-1}+t,i^\prime_{\ell+1}=i_{\ell-1}+2t,\ldots, i^\prime_{k+1}=i_{\ell-1}+(k-\ell+1)t.$ By our Claim, it is now clear that every face $G\in \Delta$ is contained in a larger face $H$ of the form
\begin{equation}\label{eq2} H=\{i_1,\ldots,i_1+(t-1),\ldots,i_{d-2},\ldots,i_{d-2}+(t-1),i_{d-1},\ldots,i_{d-1}+s\} \end{equation} for some $0\leq s\leq t-1,$ where $i_1\leq t,$ and $i_\ell\geq i_{\ell-1}+t$ for $2\leq \ell\leq d-1.$ It remains to show that there exists $F\in {\mathcal F}(\Delta)$ which contains $H.$ But this follows if we show that for every $s\leq t-2,$ $H$ is contained in a face of $\Delta$ of the form \[\{i^\prime_1,\ldots,i^\prime_1+(t-1),\ldots,i^\prime_{d-2},\ldots,i^\prime_{d-2}+(t-1),i^\prime_{d-1},\ldots, i^\prime_{d-1}+(s+1)\}.\] Let $s\leq t-2.$ Of course, if $i_{d-1}+s<n,$ then we may get the larger face immediately, just by adding to $H$ the vertex $i_{d-1}+(s+1).$ Let $i_{d-1}+s=n.$ If $i_\ell=i_{\ell-1}+t$ for all $2\leq \ell\leq d-1,$ then $i_{d-1}=i_1+(d-2)t,$ thus $i_1+(d-2)t+s=n$ which implies that \[i_1=n-(d-2)t-s\geq 1+(d-1)t-(d-2)t-s\geq 3.\] Then, we can take \[H\subset \{i^\prime_1,\ldots,i^\prime_1+(t-1),\ldots,i^\prime_{d-2},\ldots,i^\prime_{d-2}+(t-1),i^\prime_{d-1},\ldots, i^\prime_{d-1}+(s+1)\}\] where $i^\prime_1=i_1-1,i^\prime_2=i_2-1,\ldots,i^\prime_{d-1}=i_{d-1}-1.$ Finally, if there exists $\ell$ with $i_\ell>i_{\ell-1}+t,$ let us choose the maximal such $\ell.$ In this case, we take \[H\subset \{i^\prime_1,\ldots,i^\prime_1+(t-1),\ldots,i^\prime_{d-2},\ldots,i^\prime_{d-2}+(t-1),i^\prime_{d-1},\ldots, i^\prime_{d-1}+(s+1)\}\] with $i^\prime_1=i_1,\ldots, i^\prime_{\ell-1}=i_{\ell-1}, i^\prime_\ell=i_{\ell}-1, i^\prime_{\ell+1}=i_{\ell+1}-1,\ldots, i^\prime_{d-1}=i_{d-1}-1.$ \medskip In order to prove that $I_{n,d,t}$ has a linear resolution, it is enough to apply Theorem~\ref{ayesha}. Since $I_{n,d,t}$ is generated in a single degree, it follows that it has a linear resolution. Next, we show that $I_{n,d,t}^\vee$ has linear quotients. Then, by \cite[Proposition 8.2.5]{HHBook}, it follows that the simplicial complex $\Delta$ is shellable, thus, by \cite[Theorem 8.2.6]{HHBook}, $I_\Delta=I_{n,d,t}$ is Cohen-Macaulay.
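The facet description in (\ref{eq1}) can be checked by brute force for small parameters. The following Python sketch (a verification aid, not part of the proof) assumes, as above, that the non-faces of $\Delta$ are exactly the subsets of $[n]$ containing a $t$-spread subset of cardinality $d$:

```python
# Brute-force check of the facet description: facets of Delta should be
# exactly the unions of d-1 intervals of length t whose starting points
# are spaced at least t apart.
from itertools import combinations

def is_t_spread(s, t):
    s = sorted(s)
    return all(b - a >= t for a, b in zip(s, s[1:]))

def is_face(F, d, t):
    # F is a face iff it contains no t-spread subset of cardinality d
    return not any(is_t_spread(c, t) for c in combinations(sorted(F), d))

def facets(n, d, t):
    faces = {frozenset(F) for r in range(n + 1)
             for F in combinations(range(1, n + 1), r) if is_face(F, d, t)}
    return {F for F in faces if not any(F < G for G in faces)}

def predicted_facets(n, d, t):
    # sets of the form (eq1): unions of d-1 length-t intervals, starts >= t apart
    out = set()
    for js in combinations(range(1, n - t + 2), d - 1):
        if all(b - a >= t for a, b in zip(js, js[1:])):
            out.add(frozenset(x for j in js for x in range(j, j + t)))
    return out

assert facets(7, 3, 2) == predicted_facets(7, 3, 2)
```

For instance, with $n=7$, $d=3$, $t=2$ the two enumerations agree on all ten facets.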
Let $w_1,\ldots,w_q$ be the minimal monomial generators of $I_{n,d,t}^\vee$ ordered decreasingly with respect to the lexicographic order. Let \[w_i=(x_1\cdots x_n)/(v_{i_1}\cdots v_{i_{d-1}}) \text{ and } w_j=(x_1\cdots x_n)/(v_{j_1}\cdots v_{j_{d-1}})\] with $i\neq j.$ In order to simplify the notation, we removed the index $t$ in $v_{j_k,t}$ and $v_{i_k,t}.$ A simple calculation shows that \[ \frac{w_i}{\gcd(w_i,w_j)}=\frac{v_{j_1}\cdots v_{j_{d-1}}}{\gcd(v_{i_1}\cdots v_{i_{d-1}},v_{j_1}\cdots v_{j_{d-1}})}.\] Let $i<j.$ Then $w_i>_{\lex} w_j,$ that is, $v_{j_1}\cdots v_{j_{d-1}}>_{\lex} v_{i_1}\cdots v_{i_{d-1}},$ which is equivalent to the condition that there exists an integer $s\geq 1$ such that $j_1=i_1,\ldots,j_{s-1}=i_{s-1}$ and $j_s<i_s.$ We first observe that $x_{j_s}\mid (w_i/\gcd(w_i,w_j))$ since $x_{j_s}\mid v_{j_1}\cdots v_{j_{d-1}}$ and it does not divide the product $v_{i_1}\cdots v_{i_{d-1}}$ because $i_s>j_s.$ Let us assume that there exists a least integer $\ell\leq d-2$ such that $j_{\ell+1}>j_\ell+t.$ Let \[w_k=(x_1\cdots x_n)/(v_{j_1}\cdots v_{j_{s-1}}v_{j_s+1}v_{j_s+2}\cdots v_{j_{\ell}+1}v_{j_{\ell+1}} \cdots v_{j_{d-1}}).\] Obviously, $w_k>_{\lex} w_j$, thus $k<j,$ and we claim that $w_k/\gcd(w_k,w_j)=x_{j_s}.$ An easy calculation shows that \[\gcd(v_{j_1}\cdots v_{j_{s-1}}v_{j_s+1}v_{j_s+2}\cdots v_{j_{\ell}+1}v_{j_{\ell+1}}\cdots v_{j_{d-1}},v_{j_1}\cdots v_{j_{d-1}} )= \frac{v_{j_1}\cdots v_{j_{d-1}}}{x_{j_s}}.\] Then, \[\frac{w_k}{\gcd(w_k,w_j)}=\frac{v_{j_1}\cdots v_{j_{d-1}}}{((v_{j_1}\cdots v_{j_{d-1}})/x_{j_s})}=x_{j_s}.\] If $j_{\ell+1}=j_\ell+t$ for all $1\leq \ell\leq d-2,$ we get \[j_{d-1}=j_s+(d-s-1)t<i_s+(d-s-1)t\leq i_{d-1}\leq n-t+1.\] Thus $j_{d-1}+(t-1)\leq n$, and we may consider the monomial $v_{j_{d-1}+1}$. In this case we take \[w_k=(x_1\cdots x_n)/(v_{j_1}\cdots v_{j_{s-1}}v_{j_s+1}v_{j_s+2}\cdots v_{j_{d-1}+1})\] and check that $w_k/\gcd(w_k,w_j)=x_{j_s}$.
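The linear quotients property just verified can also be spot-checked by computer for small parameters. The sketch below (hypothetical small cases only) takes as generators of $I_{n,d,t}^\vee$ the complements of the facets, orders them decreasingly in the lexicographic order with $x_1>\cdots>x_n$, and tests the standard colon-ideal criterion:

```python
# Spot-check that the complements of the facets, in decreasing lex order,
# have linear quotients: each colon ideal (w_1,...,w_{j-1}) : w_j must be
# generated by variables.
from itertools import combinations

def dual_gens(n, d, t):
    # complements of the facets (unions of d-1 length-t intervals, starts >= t apart)
    gens = []
    for js in combinations(range(1, n - t + 2), d - 1):
        if all(b - a >= t for a, b in zip(js, js[1:])):
            F = {x for j in js for x in range(j, j + t)}
            gens.append(frozenset(set(range(1, n + 1)) - F))
    # decreasing lex order on squarefree monomials = decreasing exponent vectors
    gens.sort(key=lambda g: tuple(1 if i in g else 0 for i in range(1, n + 1)),
              reverse=True)
    return gens

def has_linear_quotients(gens):
    for j in range(1, len(gens)):
        # variables x_s with w_k/gcd(w_k, w_j) = x_s for some k < j
        singles = {min(gens[k] - gens[j]) for k in range(j)
                   if len(gens[k] - gens[j]) == 1}
        if not all((gens[i] - gens[j]) & singles for i in range(j)):
            return False
    return True

assert has_linear_quotients(dual_gens(7, 3, 2))
```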
Finally, for the calculation of the Betti numbers of $I_{n,d,t}$ and $I_{n,d,t}^\vee$, we employ \cite[Theorem 4.5]{BH98} which gives the Betti numbers of a Cohen-Macaulay ideal $I$ in a polynomial ring $R$ with pure resolution of type $(d_1,\ldots,d_p)$. We have \[\beta_i(R/I)=(-1)^{i+1}\prod_{j\neq i} \frac{d_j}{d_j-d_i},\quad i\geq 1.\] In our case, the type of the resolution of $S/I_{n,d,t}$ is given by $d_j=d+j-1$ for $1\leq j\leq p=n-t(d-1).$ Therefore, \[\beta_i(S/I_{n,d,t})=(-1)^{i+1}\prod_{j=1}^{i-1}\frac{d+j-1}{j-i}\prod_{j=i+1}^p\frac{d+j-1}{j-i}=\] \[ =\frac{d(d+1)\cdots (d+i-2)}{(i-1)!}\cdot \frac{(d+i)(d+i+1)\cdots (d+p-1)}{(p-i)!}=\] \[=\binom{d+i-2}{d-1}\binom{n-(d-1)(t-1)}{d+i-1}.\] By the Eagon--Reiner theorem \cite[Theorem 8.1.9]{HHBook}, it follows that $I_{n,d,t}^\vee$ is also Cohen-Macaulay and has a linear resolution. Thus, we may compute the Betti numbers of $S/I_{n,d,t}^\vee$ as we did for $S/I_{n,d,t}.$ Note that, in this case, we have $\height(I_{n,d,t}^\vee)=\projdim (S/I_{n,d,t}^\vee)=d$, and the degree of the generators of $I_{n,d,t}^\vee$ is equal to the height of $I_{n,d,t}.$ We omit the remaining part of the calculation of Betti numbers since it is completely similar to the above part of the proof. We end the proof with the following remark. One may get an alternative proof of part (d) by using (\ref{id}) and Theorem~\ref{betti}. \end{proof} As an application of Theorem~\ref{big}, we prove the following result. \begin{Theorem}\label{height} Let $I$ be a $t$-spread strongly stable ideal. Then \[ \height(I)=\max \{\min(u): u \in G(I)\}. \] \end{Theorem} \begin{proof} Let $u_0\in G(I)$ be such that $\min(u_0)=\max \{\min(u): u \in G(I)\}$, and let $P=(x_i : i \leq \min(u_0))$. Then $I \subset P$, because for all $ w \in G(I)$ one has $\min(w) \leq \min(u_0)$. This shows that $\height(I) \leq \min(u_0)$. Conversely, let $u_0=x_{i_1}\cdots x_{i_d}$. Then $u'_0= x_{i_1} x_{i_1+t}\cdots x_{i_1+t(d-1)}$ belongs to $I$ because $I$ is $t$-spread strongly stable.
Let $I'=B_t(u'_0)$. Then $I' \subset I$ and Theorem~\ref{big} implies that \[ \height(I) \geq \height(I') = i_1+t(d-1)-t(d-1)=i_1=\min(u'_0)=\min(u_0). \] \end{proof} \begin{Corollary} \label{cmclassified} Let $I \subset S=K[x_1, \ldots, x_n]$ be a $t$-spread strongly stable ideal such that $\bigcup_{u \in G(I)}\supp(u)=\{x_1, \ldots, x_n\}.$ Then $S/I$ is Cohen-Macaulay if and only if there exists $u \in G(I)$ of degree $d$ such that $u=x_{n-t(d-1)}\cdots x_{n-t} x_n$. In particular, if $I$ is generated in a single degree then $S/I$ is Cohen-Macaulay if and only if $I$ is $t$-spread Veronese. \end{Corollary} \begin{proof} From Corollary~\ref{formula}, it follows that \[ \pd(S/I)=\max \{ \max(u)-t(\deg(u)-1) \; : u \in G(I) \}, \] and from Theorem~\ref{height}, it follows that \[ \dim(S/I)=n- \max\{\min(u) \; : u \in G(I) \}. \] By using the Auslander--Buchsbaum theorem we conclude that $S/I$ is Cohen-Macaulay if and only if \begin{equation}\label{equ} \max \{ \max(u)-t(\deg(u)-1) \; : u \in G(I) \}=\max\{\min(u) \; : u \in G(I) \}. \end{equation} Let $u_0 \in G(I) $ with $\min(u_0)=\max\{\min(u) \; : u \in G(I) \}$. Since \[\min(u) \leq \max(u)-t(\deg(u)-1)\] for all $ u \in G(I),$ equality (\ref{equ}) holds if and only if \[\min(u_0)= \max(u_0)-t(\deg(u_0)-1).\] In other words, $S/I$ is Cohen-Macaulay if and only if there exists $u_0 \in G(I)$ with \[ \min(u_0)=\max\{\min(u)\; : u \in G(I)\} \text{ and } u_0=x_{i_1}x_{i_1+t}\cdots x_{i_1+t(d-1)}. \] Since $\bigcup_{u \in G(I)}\supp(u)=\{x_1, \ldots, x_n\}$, there exists $u \in G(I)$ such that $\max(u)=n$ and $\min (u) \leq \min(u_0)$. Note that \[ \max(u) -t(\deg(u)-1) \leq i_1= \min(u_0). \] Therefore, $n \leq i_1+t(\deg(u)-1)$. If $\deg (u) \leq \deg(u_0)=d$, then it follows that $n =i_1+t(d-1)$, as required. On the other hand, if $\deg (u)> d$, then $u=x_{j_1}\cdots x_{j_d}x_{j_{d+1}}\cdots x_n$ with $j_1 < j_2 < \cdots < n.$ Let $u'=x_{j_1}\cdots x_{j_d}$.
Since $u \in G(I)$, we have $j_d > i_1+t(d-1)$; otherwise, $j_k \leq i_1+t(k-1)$ for all $1 \leq k \leq d$ and then, since $I$ is $t$-spread strongly stable, we obtain $u' \in I$ and $u'|u$, a contradiction. Since $j_d > i_1+t(d-1)$, we get \[ \max(u)-t(\deg(u)-1) \geq \max(u')-t(\deg(u')-1) > i_1, \] a contradiction. \end{proof} \section{$t$--spread principal Borel algebras}\label{alg} Let $t\geq 1$ and $u \in S$ be a $t$-spread monomial. In this section, we consider the toric algebra $K[B_t(u)]$ which is generated by the monomials $v$ with $v \in G(B_t(u))$. If $u=x_{n-t(d-1)}\cdots x_{n-t}x_n$, then $B_t(u)=I_{n,d,t}$ and in this case $K[B_t(u)]$ is called a {\em $t$-spread Veronese algebra}. Let us first recall the notion of sortable sets of monomials. For more information we refer to \cite[Section 6.2]{EHBook}. Let $u,v$ be two monomials of degree $d.$ We write $uv=x_{i_1}x_{i_2}\cdots x_{i_{2d}}$ with $1\leq i_1\leq i_2\leq \cdots\leq i_{2d}$, and consider the monomials $u^\prime=x_{i_1}x_{i_3}\cdots x_{i_{2d-1}}, v^\prime =x_{i_{2}}x_{i_4}\cdots x_{i_{2d}}.$ The pair $(u^\prime,v^\prime)$ is called the \emph{sorting} of $(u,v).$ We write $(u^\prime,v^\prime)=\sort(u,v).$ A subset ${\mathcal S}\subset S_d$ is called \emph{sortable} if $\sort(u,v)\in {\mathcal S}\times {\mathcal S}$ for all $(u,v)\in {\mathcal S}\times {\mathcal S}.$ \begin{Proposition}\label{sort} The set $G(B_t(u))$ is sortable. \end{Proposition} \begin{proof} Let $u=x_{i_1}\cdots x_{i_d}$ with $i_1 \leq \cdots \leq i_d$. Let $w,v\in G(B_t(u))$ and write $wv=x_{j_1}x_{j_2}\cdots x_{j_{2d}}.$ Then $w^\prime=x_{j_1}x_{j_3}\cdots x_{j_{2d-1}}, v^\prime =x_{j_{2}}x_{j_4}\cdots x_{j_{2d}}$. By \cite[Lemma 2.7]{N}, we have $j_{2k}, j_{2k-1} \leq i_k$ for $k=1,\ldots, d$. It remains to be shown that $j_{2\ell+1}-j_{2\ell-1}\geq t$ and $j_{2\ell+2}-j_{2\ell}\geq t$ for all $1\leq \ell\leq d-1.$ We prove only the first inequality since the second one may be proved in a similar way.
If $x_{j_{2\ell-1}},x_{j_{2\ell+1}}$ divide the same monomial, say $w,$ then the inequality holds since $w\in G(B_t(u)).$ Otherwise, we may assume that $x_{j_{2\ell-1}}\mid w$ and $x_{j_{2\ell+1}}\mid v.$ If $x_{j_{2\ell}}\mid w,$ then $j_{2\ell+1}-j_{2\ell-1}\geq j_{2\ell}-j_{2\ell-1}\geq t,$ since $w\in G(B_t(u)).$ If $x_{j_{2\ell}}\mid v,$ then $j_{2\ell+1}-j_{2\ell-1}\geq j_{2\ell+1}-j_{2\ell}\geq t$ since $v\in G(B_t(u))$. \end{proof} Let $R$ be the polynomial ring $K[t_v\; | \; v \in G(B_t(u))]$, and $\phi: R \rightarrow K[B_t(u)]$ be the $K$-algebra homomorphism which maps $t_v$ to $v$ for all $v \in G(B_t(u))$. We denote by $J_u$ the kernel of $\phi$. By using properties of algebras generated by sortable sets of monomials (\cite{St95} or \cite[Theorem 6.16]{EHBook}), we obtain the following result. \begin{Theorem}\label{algebra} The set of binomials ${\mathcal G}=\{t_ut_v-t_{u^\prime}t_{v^\prime}: (u,v) \text{ unsorted }, (u^\prime, v^\prime)=\sort(u,v)\}$ is a Gr\"obner basis of the toric ideal $J_u$. \end{Theorem} Since an algebra whose defining ideal has a quadratic Gr\"obner basis is Koszul, we get the following corollary of the above theorem. \begin{Corollary} $K[B_t(u)]$ is Koszul. \end{Corollary} Theorem~\ref{algebra} has another nice consequence. \begin{Corollary} $K[B_t(u)]$ is a Cohen-Macaulay normal domain. \end{Corollary} \begin{proof} Theorem~\ref{algebra} shows, in particular, that $J_u$ has a squarefree initial ideal. By a theorem due to Sturmfels \cite{St95}, it follows that $K[B_t(u)]$ is a normal domain. Next, by a theorem of Hochster \cite{Ho72}, it follows that $K[B_t(u)]$ is Cohen-Macaulay. \end{proof} \section{The generic initial ideals of $t$-spread strongly stable ideals}\label{genini} The following theorem generalizes Theorem 11.2.7 in \cite{HHBook}. For a homogeneous ideal $I\subset S=K[x_1, \ldots, x_n]$, $\Gin(I)$ stands for the generic initial ideal of $I$ with respect to the reverse lexicographic order.
Throughout this section we assume that $\chara(K)=0$. \begin{Theorem}\label{Gin} Let $I\subset S$ be a $t$--spread strongly stable ideal. Then $I=(\Gin(I))^{\sigma^t}$. In particular, $\Gin(I)= I^{\tau^t}$ and $\Gin(I^{\sigma^t})=I$. \end{Theorem} \begin{proof} We may assume $t>0$ since the equality $I=\Gin(I)$ for strongly stable ideals is known \cite[Proposition 4.2.6]{HHBook}. The proof is very similar to the proof of \cite[Theorem 11.2.7]{HHBook}, but we present it in full detail for the convenience of the reader. We use induction on the largest value of $\max(u)$ for $u\in G(I).$ By \cite[Lemma 11.2.8]{HHBook}, we may assume that there exists $u\in G(I)$ with $\max(u)=n.$ Following the proof of \cite[Theorem 11.2.7]{HHBook}, let $I^\prime=I:(x_n)$ and $I^{\prime\prime}$ be the ideal generated by all the monomials $u\in G(I)$ with $\max(u)<n.$ Then, both ideals $I^\prime$ and $I^{\prime\prime}$ are $t$-spread strongly stable and $I^{\prime\prime}\subset I\subset I^\prime.$ By the inductive hypothesis, we have \[ I^\prime=(\Gin(I^\prime))^{\sigma^t} \text { and } I^{\prime\prime} =(\Gin(I^{\prime\prime}))^{\sigma^t} \] which implies that \[I^{\prime\prime} \subset (\Gin(I))^{\sigma^t}\subset I^\prime. \] It is enough to show that \begin{equation}\label{equGin1} I\subset (\Gin(I))^{\sigma^t}. \end{equation} Indeed, it is well known that $I$ and $\Gin(I)$ have the same Hilbert function. By Theorem~\ref{betti}, it follows that $\Gin(I)$ and $(\Gin(I))^{\sigma^t}$ have the same Hilbert function as well, therefore, $I$ and $(\Gin(I))^{\sigma^t}$ have the same Hilbert function. This remark together with (\ref{equGin1}) shows that $I= (\Gin(I))^{\sigma^t}.$ Let us now prove (\ref{equGin1}).
Since $I^{\prime\prime}\subset (\Gin(I))^{\sigma^t},$ we are reduced to proving that all the monomials $u\in G(I)$ with $\max(u)=n$ belong to $(\Gin(I))^{\sigma^t}.$ Let $w_1,\ldots,w_q$ be the monomials in $G((\Gin(I))^{\sigma^t})$ with $\max(w_j)=n$ for all $j$ and $\deg w_1\leq \cdots \leq \deg w_q.$ Since $(\Gin(I))^{\sigma^t}\subset I^\prime,$ we have $x_nw_j\in I.$ As $\max(w_j)=n$ and $I$ is a squarefree monomial ideal, it follows that $w_j\in I$ for $1\leq j\leq q. $ This implies that, for every $1\leq j\leq q$, there exists $u_j\in G(I)$ such that $u_j \mid w_j.$ Moreover, if $\max(u_j)<n,$ then $u_j\in I^{\prime\prime} \subset (\Gin(I))^{\sigma^t},$ which is impossible since $u_j\neq w_j$ and $w_j$ is a minimal generator of $(\Gin(I))^{\sigma^t}.$ Therefore, $\max(u_j)=n$ for $1\leq j\leq q.$ Let $u\in G(I)$ with $\max(u)=n.$ By Corollary~\ref{formula}, we have \[\beta_{n-t(\deg u-1)-1,n-t(\deg u-1)-1+\deg u }(I)=\sum_{\stackrel{v\in G(I)}{\deg v=\deg u}} \binom{\max(v)-t(\deg u-1)-1}{n-t(\deg u-1)-1}= \] \[ =|\{v\in G(I): \max(v)=n, \deg v=\deg u\}|. \] On the other hand, \[\beta_{n-t(\deg u-1)-1,n-t(\deg u-1)-1+\deg u }((\Gin(I))^{\sigma^t})=\] \[=\sum_{\stackrel{w\in G((\Gin(I))^{\sigma^t})}{\deg w=\deg u}} \binom{\max(w)-t(\deg u-1)-1}{n-t(\deg u-1)-1}= \] \[=|\{w\in G((\Gin(I))^{\sigma^t}): \max(w)=n, \deg w=\deg u\}|. \] But, for every $ i,j,$ we have \[\beta_{i, i+j}(I)\leq \beta_{i,i+j}(\Gin(I))=\beta_{i,i+j}((\Gin(I))^{\sigma^t}).\] Thus, we obtain: \begin{eqnarray}\label{equGin2} |\{w\in G((\Gin(I))^{\sigma^t}): \max(w)=n, \deg w=\deg u\}|\geq \\ \nonumber |\{v\in G(I): \max(v)=n, \deg v=\deg u\}|. \end{eqnarray} In particular, this implies that $\deg u_1$ cannot be strictly smaller than $\deg w_1,$ hence $\deg u_1=\deg w_1$ and $u_1=w_1$. In addition, if we assume that $u_1=w_1,\ldots,u_k=w_k,$ then $\deg u_{k+1}$ cannot be strictly smaller than $\deg w_{k+1},$ which further implies that $u_{k+1}=w_{k+1}$ as well. 
Thus, $w_j\in G(I)$ for $1\leq j\leq q$, which yields \begin{equation}\label{equGin3} \{w\in G((\Gin(I))^{\sigma^t}): \max(w)=n\}\subset \{u\in G(I): \max(u)=n\}. \end{equation} However, by (\ref{equGin2}), we have \[ |\{w\in G((\Gin(I))^{\sigma^t}): \max(w)=n\}|\geq |\{u\in G(I): \max(u)=n\}|. \] Consequently, in relation (\ref{equGin3}) we have equality. This shows that every monomial $u\in G(I)$ with $\max(u)=n$ belongs to $(\Gin(I))^{\sigma^t}.$ This proves (\ref{equGin1}) and completes the proof of the theorem. \end{proof}
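As a numerical sanity check, the Herzog--K\"uhl product from \cite[Theorem 4.5]{BH98}, used in the proof of Theorem~\ref{big}, can be compared with the closed binomial form of the Betti numbers derived there. A small Python sketch (sample parameters only):

```python
# Check that the Herzog-Kuhl product for a pure resolution of type
# (d, d+1, ..., d+p-1), with p = n - t(d-1), agrees with the closed form
# beta_i(S/I_{n,d,t}) = C(d+i-2, d-1) * C(n-(d-1)(t-1), d+i-1).
from fractions import Fraction
from math import comb

def betti_hk(n, d, t, i):
    p = n - t * (d - 1)
    val = Fraction((-1) ** (i + 1))
    for j in range(1, p + 1):
        if j != i:
            val *= Fraction(d + j - 1, j - i)   # d_j / (d_j - d_i) with d_j = d+j-1
    return val

def betti_closed(n, d, t, i):
    return comb(d + i - 2, d - 1) * comb(n - (d - 1) * (t - 1), d + i - 1)

for n, d, t in [(7, 3, 2), (9, 3, 3), (10, 4, 2)]:
    p = n - t * (d - 1)
    for i in range(1, p + 1):
        assert betti_hk(n, d, t, i) == betti_closed(n, d, t, i)
```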
\section{Introduction} \label{sec:intro} \indent\indent Phonon-mediated cryogenic detectors using massive absorbers are a mature technology extensively employed in rare event physics experiments, like neutrinoless double beta decay ({\ensuremath{0\nu DBD}}) searches~\cite{DellOro:2016tmg} (see for example CUORE~\cite{2018PhRvL.120m2501A}, CUPID-0~\cite{2018PhRvL.120w2502A}, LUMINEU~\cite{2015JInst..10P5007A}, AMoRE~\cite{2016ITNS...63..543L}...) and dark matter direct detection experiments~\cite{Goodman:1984dc,Strigari:2013iaa} (EDELWEISS~\cite{2016EPJC...76..548H}, SuperCDMS~\cite{dur24168}, CRESST~\cite{2016EPJC...76...25A}...). Generally the working temperature is $<$100~mK and the most common phonon sensors are Neutron Transmutation Doped (NTD) Ge or ion-implanted Si thermistors~\cite{McCammon}, which are glued or bonded to the detector surface and, depending on the gluing characteristics, are more or less sensitive to the ballistic component, or Transition Edge Sensors (TES)~\cite{Irwin}, which usually are sensitive to the ballistic phonons. Recently other kinds of sensors have been developed, such as Metallic Magnetic Calorimeters (MMC)~\cite{MCC} or Kinetic Inductance Detectors (KIDs, the ones used in this work)~\cite{2003Natur.425..817D}. In all cases, low-threshold detection and/or an identification of the event topology (multi-site event, bulk vs surface...) and of the nature of the interacting particle ($\alpha$, $\beta/\gamma$, nuclear recoil...) are mandatory, and hence a good understanding of the phonon transport mechanism and of the heat losses at the interfaces is fundamental. On the other hand, a good understanding of these processes could also be useful to mitigate the effects of unwanted phonon-mediated signals.
This is the case of cryogenic bolometers employed for CMB measurements in space, which are severely affected by cosmic rays~\cite{2014A&A...569A..88C,2016A&A...592A..26C}, and superconducting qubits, where phonons generated by cosmic rays and natural radioactivity can modify the qubit state~\cite{2018PhRvL.121o7701S,2018PhRvL.121k7001G}. \par Monte Carlo (MC) simulation of particle transport and interactions in matter is one of the basic ingredients for the design of a particle detector and the detection efficiency calculation. In particular the GEANT4 package~\cite{geant4}, initially developed for high energy physics, is now used by a much wider community, including astroparticle, space and medical physics. Nevertheless, despite the fundamental role phonons play in the energy collection of cryogenic bolometers, this analysis tool is not yet in generalized use at the level of phonon physics. Recently GEANT4 has incorporated condensed matter physics elements such as phonons and electron-hole pairs, essential for a more complete understanding of a cryogenic detector. The code was first developed by the CDMS cryogenic Dark Matter experiment~\cite{Agnese:2015ywx} and subsequently integrated into GEANT4 as a general open-source package called G4CMP (GEANT4 Condensed Matter Physics)~\cite{Brandt:2014imy,g4cmp}. It has been validated for germanium, reproducing quite accurately the results of some experiments using heat pulses (produced for example by a focused laser beam) to excite ballistic phonons~\cite{Brandt:2012zzb}, and also giving a good description of the CDMS detectors: Ge cylinders with interleaved ionization and grounded phonon electrodes coupled to tungsten TES to read the phonon signal. The MC simulation reproduces the arrival time of the ballistic phonons into the TES and the energy partition between the phonon and charge~\cite{Leman:2011cc,McCarthy:2011sx}.
\par A correct treatment of the phonon scattering/transmission at the interfaces is a main ingredient of the simulation when the sensitive area is a small fraction of the total detector surface. Nevertheless, phonon scattering at the interfaces is still an open question, and there is no general agreement about the model that best describes the experimental data, the most well-established ones being the acoustic mismatch model (AMM) and the diffuse mismatch model (DMM)~\cite{1989RvMP...61..605S}. AMM, which proposes specular reflection at the interface in analogy with Snell's law for light, has been very successful at low temperatures~\cite{1987ApPhL..51.2200S}, while DMM, in which phonons undergo diffuse reflection, is sensitive to surface roughness and preferred at temperatures above 1~K~\cite{2017PhRvB..95t5423H}. G4CMP includes a basic implementation of the phonon reflection mechanism based on these models, in which a phonon at the boundary undergoes a reflection (specular for the AMM model or diffuse (Lambertian) for the DMM model), or is transmitted through the boundary with a certain probability given by a transmission coefficient; however, the experimental determination of the phonon transmission coefficients at the interfaces is a hard task, and currently large uncertainties exist. \par In this work we apply the G4CMP package to model two prototypes of the CALDER project~\cite{Battistelli:2015vha}, which is part of the R\&D activities under development for the future upgrade of CUORE (the first ton-scale cryogenic detector in operation looking for {\ensuremath{0\nu DBD}}~\cite{2018PhRvL.120m2501A}). The CALDER goal is to develop large-area high-sensitivity light detectors able to measure the very weak Cherenkov light that follows a {\ensuremath{0\nu DBD}} event, allowing it to be distinguished from other backgrounds.
The light is detected by superconducting KIDs of a few mm$^2$ of active area deposited on a substrate of several cm$^2$ that acts as a light absorber and generates phonons that will be absorbed in the superconductor and produce the signal. The main advantage of using KIDs for this study is that their response can be modeled as a function of measurable parameters of the Mattis-Bardeen theory, so we are able to estimate the total energy transformed into quasi-particles, and make a direct comparison with the MC results. In addition, the small fractional area covered by the sensors with respect to the total absorber one enhances the influence of the phonon reflection/transmission model on the final results. We apply the G4CMP package to a silicon wafer read by one or several KIDs. Comparing the simulation results with our data we find notable agreement for the AMM model, and we are able to estimate the transmission coefficients at the Si-Al ({\TSiAl}) and Si-Teflon interfaces. The results that we present here can be extended to other kinds of detectors based on thin Al sensors. \par The structure of the paper is as follows: Section~\ref{sec:mc} presents a brief description of the main physics ingredients included in the MC code and the parameters used in our implementation. Section~\ref{sec:calder} describes the general aspects of our detectors, experimental setup, data analysis, and the specific experimental configurations simulated. The details of the MC simulation are outlined in Section~\ref{sec:g4cmp}, while in Section~\ref{sec:results} we compare the MC results with the experimental data and make an estimation of the relevant parameters. Finally, we present the summary in Section~\ref{sec:discussion}. \section{Phonon physics} \label{sec:mc} \indent\indent In this section we describe the basic phonon physics mechanisms implemented in the MC simulation, referring to~\cite{Brandt:2014imy,Leman:2011by} for a more complete description.
Table~\ref{tab:params} reports the numerical parameters used in our simulation, whose meaning is given in the following. \par In a phonon-mediated cryogenic detector, particles (optical photons in our case) hitting the absorber produce optical phonons that decay promptly to the acoustic branch, producing an athermal population of high energy. The interaction length of these energetic phonons is very short, so they propagate quasidiffusively, with numerous changes in direction and polarization mode as they decay to lower energy states. When the phonon energy drops sufficiently, its mean free path becomes larger than the dimensions of the crystal and it propagates following almost straight lines at the speed of sound in the material, a state that we call ballistic. If the dimensions of the sensor are small compared to the absorber size, as for CALDER detectors, ballistic phonons can undergo a large number of reflections at the substrate faces before reaching the KID, where they have a certain probability {\TSiAl} of being absorbed, or escaping detection (i.e., they are thermalized in the substrate or absorbed at the supports or the feedline). 
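The competition between absorption at the KID and the loss channels can be illustrated with a deliberately simplified, zero-dimensional sketch (this is not the G4CMP simulation used in this work; the sensor area fraction \texttt{f\_kid} and the per-bounce loss probability \texttt{p\_loss} below are hypothetical values):

```python
import random

# Toy zero-dimensional Monte Carlo (NOT the G4CMP physics): a ballistic
# phonon bounces in the substrate; at each surface hit it lands on the KID
# with probability f_kid (sensor area fraction) and is absorbed there with
# probability T_SiAl; otherwise it can be lost (thermalized in the substrate
# or absorbed at supports/feedline) with probability p_loss, else it
# reflects and keeps bouncing.
def absorbed_fraction(f_kid, T_SiAl, p_loss, n_max=1000, n_phonons=20000, seed=1):
    rng = random.Random(seed)
    absorbed = 0
    for _ in range(n_phonons):
        for _ in range(n_max):
            r = rng.random()
            if r < f_kid * T_SiAl:
                absorbed += 1          # phonon detected by the KID
                break
            if r < f_kid * T_SiAl + p_loss:
                break                  # phonon lost elsewhere
    return absorbed / n_phonons
```

With, e.g., a 2\% sensor area fraction, a transmission coefficient of 0.5 and a 0.1\% per-bounce loss, the absorbed fraction approaches the analytic value $f\,T/(f\,T+p_\mathrm{loss})$, illustrating why the reflection/transmission model dominates the collection efficiency when the sensor covers a small fraction of the surface.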
\begin{table}[ht] \begin{center} \begin{tabular}{@{\extracolsep{\fill}} l l l l} \hline \hline Symbol & Parameter description & Value & Ref \\ \hline d & density & 2.33 g/cm$^3$ & \cite{1985PhRvB..31.2574T} \\ $C_{11}$ & elastic constant & 165.6~GPa & \cite{Ashcroft76} \\ $C_{12}$ & elastic constant & 63.9~GPa & \cite{Ashcroft76} \\ $C_{44}$ & elastic constant & 79.5~GPa & \cite{Ashcroft76} \\ $\beta$ & 2$^{nd}$ order elastic constant & -42.9~GPa & \cite{1985PhRvB..31.2574T} \\ $\gamma$ & 2$^{nd}$ order elastic constant & -94.5~GPa & \cite{1985PhRvB..31.2574T} \\ $\lambda$ & Lam\'e constant & 52.4~GPa & \cite{1985PhRvB..31.2574T} \\ $\mu$ & Lam\'e constant & 68.0~GPa & \cite{1985PhRvB..31.2574T} \\ DOS(L) & density of states L & 0.093 & \cite{1991PhRvB..44.3001T} \\ DOS(FT) & density of states FT & 0.376 & \cite{1991PhRvB..44.3001T} \\ DOS(ST) & density of states ST & 0.531 & \cite{1991PhRvB..44.3001T} \\ $R_A$ & anharmonic decay rate & 7.41$\times10^{-56}~$s$^4$ & \cite{1993PhRvB..4813502T} \\ $R_I$ & isotopic scattering rate & 2.43$\times10^{-42}~$s$^3$ & \cite{1993PhRvB..4813502T} \\ ${\nuDebye}$ & Debye frequency & 15~THz & \cite{Ashcroft76} \\ ${\etapb}$ & pair-breaking efficiency & 0.57 &\cite{2000PhRvB..6111807K} \\ $\xi_\textrm{tr}$ & fraction of phonons tracked & 0.02 & \\ ${\nmax}$ & maximum number of reflections & 1000 & \\ ${\TSiAl}$ & Si-Al transmission coefficient & [0.1 - 1] & \\ ${\TSiTef}$ & Si-Teflon transmission coefficient & [0.1 - 1] & \\ \hline \hline \end{tabular} \end{center} \caption{Parameters of the G4CMP Monte Carlo simulation. 
Unless otherwise stated, their values are for Si.} \label{tab:params} \end{table} \par Phonon tracking in crystalline structures strongly differs from the usual particle propagation in GEANT code because (1) an acoustic phonon can be in three different polarization states, one longitudinal (L) and two transversal, fast (FT) and slow (ST), with different velocities for every state; (2) the direction of energy propagation, which occurs along the group velocity vector, $\nabla_k \omega(\textbf{k})$, where $\omega$ is the phonon frequency, does not in general flow parallel to the wavevector direction \textbf{k}. This fact, which depends on the crystal lattice symmetry and physical properties (mainly the elastic constants), causes the phonons to travel in preferred directions along the crystal, a phenomenon known as ``phonon focusing''~\cite{1969PhRvL..23..416T,1979PhRvL..43.1424N}. Silicon has a face-centered cubic crystal structure, for which we expect quasi-isotropic transport for longitudinal phonons, but a highly anisotropic one for the transversal modes. To check that the caustics are correctly generated in our code we perform a simulation starting with low energy phonons of around 0.1~THz produced in a small spot at the surface of the Si wafer. Phonons of this frequency are ballistic in Si, so they propagate along an almost unchanged trajectory and polarization state until they reach the opposite face, forming the characteristic phonon focusing structures observed in Si by laser-beam experiments~\cite{1985PhRvB..32.2568H} (see Fig.~\ref{fig:caustics}). \begin{figure*}[ht] \centering \includegraphics[width=1\textwidth]{SixyPol_cont.pdf} \caption[]{Simulation of 0.1~THz phonons generated in one small spot of the Si wafer surface and detected in the opposite face. The panels show the flux intensity for every polarization: L (left), ST (center) and FT (right). The image spans an angle of $\pm$72$^\circ$.
Phonon focusing structures (bright colors) are clearly formed, especially for ST and FT modes. } \label{fig:caustics} \end{figure*} \par The phonon propagation in the crystal is mainly governed by two processes: \begin{enumerate} \item isotopic scattering: when the substrate is composed of different isotopes, as is usually the case, there is a disruption in the propagation path that causes the phonon to scatter off and change direction with no energy loss. The energy-dependent rate is modeled as $R_I\nu^4$, where $R_I$ depends on the material (see Table~\ref{tab:params}) and $\nu=\omega/2\pi$ is the phonon frequency. The single scattering process depends on the inner product of the polarization vectors of the initial and final phonons, but the total expected rate is isotropic. Thus, the isotropic approximation, in which the scattered phonon has a random direction and a polarization distributed according to the density of states (DOS), is quite accurate after several scatters and much less time-consuming; \item anharmonic decay: due to nonlinear terms in the elastic coupling between adjacent lattice ions, a phonon spontaneously splits into two (or more) lower-frequency ones, with a rate that depends on the phonon frequency as $R_A\nu^5$, where $R_A$ is a material-dependent constant (see Table~\ref{tab:params}). A complete treatment of the scattering process is computationally too expensive, so usually the isotropic approximation, in which only L phonons can decay via L$\rightarrow$L+T and L$\rightarrow$T+T processes, is adopted. \end{enumerate} \par As said before, there is not yet a complete understanding of the mechanisms that govern the phonon physics at the interfaces. Ideally phonons reflect and transmit conserving the energy and the component of \textbf{k} parallel to the interface, but polarization conversion can occur, with in general three reflected waves (trirefringence) and at most two transmitted ones (birefringence)~\cite{2005imph.book.....W}.
The AMM and DMM models are the most widespread, but neither of them is sufficient to entirely explain the experimental data. \section{Phonon-mediated kinetic inductance detectors} \label{sec:calder} \indent\indent The operation principle of KIDs is based on the properties of a superconducting film biased with an AC (microwave) current. The inertia of Cooper pairs to momentum change produces an additional inductance, called kinetic inductance (L$_{KI}$), which depends on the density of Cooper pairs and can be measured by embedding the superconductor in a resonant RLC circuit with resonant frequency $\nu_0=1/(2\pi\sqrt{LC})$. An energy release larger than twice the superconductor gap $\Delta$ (about 200~$\mu$eV for thin Al films) breaks Cooper pairs into quasiparticles, modifying both the residual resistance due to quasiparticles (the only dissipative term in the RLC resonant circuit) and the inductance due to Cooper pairs, thus changing the amplitude and phase of a microwave signal transmitted past the circuit. By slightly modifying the capacitance of each resonator we can make them resonate at close but different frequencies, so that many of them can be read out on the same line. \par The detector used in this work follows a Lumped Element KID (LEKID) design~\cite{2008JLTP..151..530D} that uses a separate meander section (inductor) and an interdigital capacitor to form a resonator coupled (inductively or capacitively) to a Coplanar Waveguide (CPW) for excitation and readout. They are fabricated at Istituto di Fotonica e Nanotecnologie of CNR (Rome). They are patterned by electron beam lithography in a 60~nm Al film deposited by an electron-gun evaporator on a thin ($\sim$300~$\mu$m) high-resistivity Si (100) substrate~\cite{2016NIMPA.824..177C,2016JLTP..184..131C}. In order to reduce the thermal quasiparticle population we operate the detector well below the Al critical temperature.
The Si wafer is fixed to a copper holder by small Teflon supports that act as a thermal link to the heat sink, while the holder is anchored to the coldest point of a dilution refrigerator, at a base temperature of about 20~mK. \par KIDs are excited with a fixed-frequency signal typically in the few GHz range. After transmission through the device, the signal $S_{21}$ is amplified by a CITLF4 SiGe cryogenic low noise amplifier (with noise temperature T$_N\sim$7~K) operated at 4~K, and the rest of the electronics is at room temperature~\cite{Battistelli:2015vha}. \par The signal transmitted through the feedline can be written as a function of the frequency $\nu$ as follows: \begin{equation} \label{eq:S21} S_{21}(\nu)={\I} + i{\Q} = 1-\frac{Q/Q_c}{1+2iQ\frac{\nu-\nu_0}{\nu_0}}, \end{equation} where $S_{21}$ is the forward scattering amplitude in the standard scattering matrix representation, {\I} and {\Q} indicate the real and imaginary parts of $S_{21}$, and $Q$ is the quality factor of the resonant circuit, given by the parallel combination of the coupling quality factor $Q_c$ (which accounts for losses through the coupling) and the internal quality factor $Q_i$ (dissipation due to quasiparticles and all other losses), so that $Q^{-1} = Q_c^{-1} + Q_i^{-1}$. When $\nu$ sweeps around the resonance, the signal traces out a circle in the ${\I\Q}$ plane of diameter equal to $Q/Q_c$ (see inset of Fig.~\ref{fig:pulse}). We determine the circle center and radius, taking into account distortions introduced by the power stored in the resonator and possible impedance mismatches~\cite{2016JLTP..184..142C}, to translate the {\I}(t) and {\Q}(t) components into phase $\delta\phi$(t) and amplitude $\delta a$(t) variations relative to the center of the resonance loop (calibration).
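As a numerical check of Eq.~\ref{eq:S21}, the short sketch below (with resonator values of the order of those in Table~\ref{tab:Q}, chosen purely for illustration) verifies that every point of a frequency sweep lies on a circle of diameter $Q/Q_c$ in the IQ plane.

```python
# Illustrative resonator values, of the order of those in Table "Q";
# these are not the actual fitted parameters.
nu0, Q, Qc = 2.5e9, 149e3, 159e3

def s21(nu):
    """Forward transmission of the resonator (Eq. S21 in the text)."""
    x = (nu - nu0) / nu0
    return 1 - (Q / Qc) / (1 + 2j * Q * x)

center = 1 - Q / (2 * Qc)  # circle center on the real (I) axis
radius = Q / (2 * Qc)      # so the diameter is Q/Qc, as stated in the text

# Every point of a frequency sweep lies on the same circle in the IQ plane
for x in (-1e-4, -1e-6, 0.0, 3e-6, 1e-4):
    z = s21(nu0 * (1 + x))
    assert abs(abs(z - center) - radius) < 1e-12
print("notch depth at resonance:", Q / Qc)
```

Far from resonance $S_{21}\to 1$, while at $\nu=\nu_0$ the transmission dips to $1-Q/Q_c$; the calibration step maps each measured $(I,Q)$ point to an angle and a radial displacement on this circle.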
\par Once the resonance is calibrated we choose the most sensitive frequency (or frequencies, in the case of reading several KIDs through the same line) and excite the resonators at an adequate power level~\cite{2017ApPhL.110c3504C}. We run an amplitude threshold trigger algorithm on the continuously acquired signals to capture particle passages through the detector, and register a window of configurable length around the position of each trigger. Fig.~\ref{fig:pulse} shows a typical response to a 36~keV energy deposit in the Si substrate. The $\delta\phi(t)$ component usually features a much better signal-to-noise ratio (SNR) than $\delta a$(t), so in the following we use only this signal. \subsection{Phonon time constant} \label{sec:pulseTime} Athermal phonons arrive at the KIDs with a characteristic time distribution that depends on the detector material and geometry. In general it can be modeled by two time constants, accounting for the pulse rise (${\tph}^\textrm{rise}$) and decay (${\tph}$), so the number of phonons at the KID can be written as: \begin{equation}\label{eq:phPulse} N_{ph}(t)=\frac{N_\textrm{ph}}{{\tph}-{\tph}^\textrm{rise}}\left({\expp}^{-t/{\tph}} - {\expp}^{-t/{\tph}^\textrm{rise}}\right). \end{equation} When ${\tph}^\textrm{rise}\ll{\tph}$, as in the case of the detectors analyzed in this work, Eq.~\ref{eq:phPulse} can be approximated by a single exponential with constant {\tph}. \par In order to infer {\tph} from the KID signal, we have to disentangle the contributions of other time constants: (1) at the KID, phonons break Cooper pairs and generate quasiparticles with a probability given by the pair-breaking efficiency ${\etapb}$ (see Tab.~\ref{tab:params}), which recombine again into Cooper pairs with lifetime ${\tqp}$.
The recombination rate depends not only on the superconductor properties, but also on the quasiparticle density, and consequently also on temperature and microwave power ({\power})~\cite{2014PhRvL.112d7004D}; (2) the $Q$ factor determines the time constant at which the power dissipation decays as {${\tring}=Q/\pi\nu_0$}, hence high-$Q$ resonators are more sensitive but are also slower. The temporal evolution of the signal is a convolution of these effects: \begin{widetext} \begin{equation}\label{eq:pulse} \delta \phi(t)=\Phi_\textrm{qp}{\tqp}\left[\frac{{\tqp}{\expp}^{-t/{\tqp}}}{({\tqp}-{\tph})({\tqp}-{\tring})} + \frac{{\tph}{\expp}^{-t/{\tph}}}{({\tph}-{\tqp})({\tph}-{\tring})} + \frac{{\tring}{\expp}^{-t/{\tring}}}{({\tring}-{\tqp})({\tring}-{\tph})}\right], \end{equation} \end{widetext} where $\Phi_\textrm{qp}$ is the pulse integral, whose expression is derived in the next section. \par As we explain in the next section, we excite the substrate by a LED pulse whose duration {\Tex} is of the order of a few $\mu$s, so the final waveform results from the convolution of Eq.~\ref{eq:pulse} with a rectangular function of length {\Tex}. \par For every acquired signal we fit the $\delta\phi$ pulse evolution to the pulse shape described above, fixing {\tring} to the value corresponding to the measured $Q$ factor. In this way we obtain {\tph}, which we compare with the MC results. Superimposed on the pulses of Fig.~\ref{fig:pulse} we show the results from the fit for the $\delta\phi(t)$ and $\delta a(t)$ signals. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{pulseWithReso.pdf} \caption[]{$\delta\phi$ and $\delta a$ pulse time evolution following an energy deposition of 36~keV in the Si substrate. The signals are fitted to the pulse shape of Eq.~\ref{eq:pulse}, also taking into account {\Tex}=1~$\mu$s (in red for $\delta\phi$ and blue for $\delta a$). The $\chi^2$/NDF for the fit in the $\delta\phi$ component is 2.5.
The resulting $\delta\phi$ fit parameters are shown in the legend. Inset: resonance circle that we calibrate to obtain the $\delta\phi$ and $\delta a$ components from the real and imaginary parts of the $S_{21}$ signal. } \label{fig:pulse} \end{figure} \subsection{Response to energy absorption} We can relate the energy release in the substrate $E$ to the energy absorbed at every resonator ${\Eabs}$ through an efficiency factor $\eta$, so that ${\Eabs}=\eta E$. The efficiency can be factorized as $\eta={\etageom}{\etapb}$, where ${\etageom}$ depends on the geometry of the detector and the transmission coefficients at the interface, and is the parameter that we shall extract from the MC simulation, while ${\etapb}$ is the pair-breaking efficiency in Al, which we take as $\sim$0.57~\cite{2000PhRvB..6111807K}. Now, {\Phiqp} in Eq.~\ref{eq:pulse} represents the overall change in $\delta\phi$ corresponding to an increment in the quasiparticle population {\Nqp}={\Eabs}/$\Delta=\eta E/\Delta$, which can be calculated from the Mattis-Bardeen theory in the thin film limit. After some analytical approximations~\cite{Mazin:2005zy} we can write: \begin{equation} \label{eq:phase} {\Phiqp}=\frac{\alpha S_2(\nu,{\Tqp})Q}{N_0V\Delta({\Tqp})}\frac{\eta E}{\Delta({\Tqp})}, \end{equation} where $N_0V$ is the single spin density of states at the Fermi level (1.72$\times 10^{10}$~eV$^{-1}$~$\mu$m$^{-3}$ for Al~\cite{2003Natur.425..817D}) multiplied by the active volume of the resonator, $\alpha$ is the fraction of kinetic inductance $L_{KI}/L$, {\Tqp} is the effective temperature of the quasiparticle system, larger than the sink temperature due to {\power}, and $S_2$ is a dimensionless factor given by the Mattis-Bardeen theory.
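The pulse model of Eq.~\ref{eq:pulse} and the responsivity of Eq.~\ref{eq:phase} can be sketched numerically as below; every parameter value here is an assumed, order-of-magnitude illustration, not a fitted one.

```python
import math

def phi_pulse(t, phi_qp, tau_qp, tau_ph, tau_ring):
    """Three-exponential pulse of Eq. (pulse); the taus must be distinct."""
    taus = (tau_qp, tau_ph, tau_ring)
    s = 0.0
    for i, ti in enumerate(taus):
        tj, tk = (t_ for j, t_ in enumerate(taus) if j != i)
        s += ti * math.exp(-t / ti) / ((ti - tj) * (ti - tk))
    return phi_qp * tau_qp * s

# Illustrative time constants in microseconds (assumed, not fitted values)
tau_qp, tau_ph, tau_ring = 100.0, 25.0, 18.0

# The pulse starts from zero and its time integral equals phi_qp * tau_qp
dt = 0.05
integral = sum(phi_pulse(k * dt, 1.0, tau_qp, tau_ph, tau_ring)
               for k in range(int(2000.0 / dt))) * dt
assert abs(integral - 1.0 * tau_qp) / tau_qp < 1e-2

# Responsivity of Eq. (phase) with assumed, order-of-magnitude inputs:
alpha, S2, Q = 0.025, 2.0, 149e3     # LKI fraction, Mattis-Bardeen factor, Q
N0V = 1.72e10 * 2.4e5                # states/(eV um^3) times ~2.4e5 um^3 volume
Delta, eta, E = 179e-6, 0.13, 36e3   # gap [eV], efficiency, deposit [eV]
Phi_qp = alpha * S2 * Q / (N0V * Delta) * (eta * E / Delta)
print(f"Phi_qp ~ {Phi_qp:.2f} rad")
```

With these assumed numbers the phase excursion for a 36~keV deposit comes out as a fraction of a radian, which is the right ballpark for the measured pulses.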
The parameters $\Delta$, $\alpha$, $S_2$ and $Q$ are measurable quantities for a given {\power}, therefore from the pulse fit we can obtain $\Phi_\textrm{qp}$ and determine, through Eq.~\ref{eq:phase}, the efficiency $\eta$ of every pixel for comparison with the MC results. \subsection{Experimental configurations} \label{sec:exp} \indent\indent We study two different detector configurations with different KID characteristics and layouts. \par The first prototype (P1 in the following) consists of a single KID lithographed on a $380~\mu$m thick Si substrate with a size of 2$\times$2~cm$^2$. Fig.~\ref{fig:1kid} shows a picture of the detector mounted in the copper holder (left panel) and a schematic design of the single KID (right panel). The inductor section is a meander of 30 strips of 62.5~$\mu$m$\times$2~mm, with a gap of 5~$\mu$m between them, and the capacitor is composed of only two fingers. The total active area is 4.0~mm$^2$, excluding the gaps and including the active region that connects the inductor to the capacitor. The feedline is a 72~$\mu$m wide CPW that cuts across the Si substrate from side to side. The pixel and feedline are made of 60~nm thick Al. Four cylindrical Teflon supports, one at every corner of the substrate, fix the detector to a copper holder that is anchored to the cryostat. The contact area between Si and Teflon is lower than 3~mm$^2$ at every support. For detailed results of this prototype, see~\cite{2017ApPhL.110c3504C}. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{kid1.pdf} \caption[]{Left: Picture of the P1 prototype: A 60~nm thick Al KID deposited on a $2\times2~\textrm{cm}^2$ 380~$\mu$m thick Si substrate. Four Teflon supports, one at every corner, fix the detector to a copper holder that is anchored to the cryostat.
Right: geometry of the single pixel: An inductor made of 30 strips of 62.5~$\mu$m$\times$2~mm, with a gap of 5~$\mu$m between them, and a capacitor composed of two fingers.} \label{fig:1kid} \end{figure} \par In the second prototype, which we label as P4 (see Fig.~\ref{fig:4kid}), the wafer is $375~\mu$m thick and there are four Al KIDs, each with an inductive meander made of 14 connected strips of 80~$\mu$m$\times$2~mm closed by a capacitor made of 5 interdigitated fingers of 1.4~mm$\times$50~$\mu$m. The active area of the single pixel is $1.15\times2$~mm$^2$. The feedline is a 420~$\mu$m wide and 60~nm thick CPW. \par Compared to P1, P4 has a smaller contact area between Si and Teflon, as it is held by only two supports at opposite edges in the middle of the substrate. The contact area at every support is about 3~mm$^2$, so the total Si-Teflon interface is halved with respect to P1. In turn, the feedline is $\sim$6 times wider. \par \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{kid4Figure.pdf} \caption[]{Left: Picture of the P4 prototype: Four Al KIDs are deposited on a 300~$\mu$m thick Si substrate with a size of 2$\times$2~cm$^2$. Two cylindrical Teflon supports with a contact area of around 3~mm$^2$ each hold the substrate in the copper structure. Right: geometry of the single pixel (60~nm thick Al film): an inductive meander with 14 connected strips of 80~$\mu$m$\times$2~mm and a capacitor made of 5 interdigitated fingers of 1.4~mm$\times$50~$\mu$m.
The active area of the single pixel is $1.15\times2$~mm$^2$.} \label{fig:4kid} \end{figure} \begin{table*}[ht] \begin{center} \begin{tabular}{@{\extracolsep{\fill}} l l p{1.8cm} p{1.5cm} p{3cm} l l l l l l l l l} \hline \hline Prototype & coupling & T$_c$ & $\Delta_0$ & $\alpha$ & $P_{\mu\nu}$ & KID & $Q$ & $Q_i$ & $Q_c$ & $\tau_\textrm{ring}$ & \multicolumn{3}{c}{Source}\\ & & [K] & [$\mu$eV] & [\%] & [dBm] & & [k] & [k] & [k] & [$\mu$s] & pos [mm] & $\phi$ [mm] & {\Tex} [$\mu$s] \\ \hline P1 & inductive & 1.180$\pm$0.020 & 179$\pm$3 & 2.54 $\pm$0.9$_{stat}\pm$0.26$_{syst}$ & -76.8 & 1 & 149 & 2301 & 159 & 18.2 & (0,-6) & 4.66 & 10 \\ \hline \multirow{4}{*}{P4} & \multirow{4}{*}{capacitive} & \multirow{4}{*}{1.300$\pm$0.025} & \multirow{4}{*}{197$\pm$4} & \multirow{4}{*}{2.14$\pm$0.04$_{stat}\pm$0.27$_{syst}$} & \multirow{4}{*}{-79.1} & 1 & 18.6 & 69.7 & 25.4 & 2.23 & \multirow{4}{*}{ (0,0) } & \multirow{4}{*}{4.66} & \multirow{4}{*}{1} \\ & & & & & & 2 & 38.4 & 99.6 & 62.4 & 4.59 \\ & & & & & & 3 & 138 & 899 & 162 & 16.4 \\ & & & & & & 4 & 266 & 407 & 772 & 31.6 \\ \hline \hline \end{tabular} \end{center} \caption{Relevant experimental parameters of the simulated experiments with the P1 and P4 prototypes. See text for details.} \label{tab:Q} \end{table*} \par We operate both prototypes as described at the beginning of this section. The first step is to select the excitation power {\power}. In general, higher powers give a better SNR, as the noise in our setup is dominated by the amplifier and scales as $1/\sqrt{\power}$, but as we raise {\power} the resonances show an increasing distortion and the relationship of Eq.~\ref{eq:phase} is no longer valid~\cite{2017ApPhL.110c3504C}. Therefore we perform a power scan and select the largest {\power} before distortion. \par For every prototype we measure the parameters that enter in Eq.~\ref{eq:phase} and report their values at the selected {\power} in Table~\ref{tab:Q}.
The $Q$, $Q_i$ and $Q_c$ factors are computed by fitting the resonance circle as described in \cite{2016JLTP..184..142C}. We determine the critical temperature $T_c$ (which for thin films depends on the thickness and other parameters, such as the quality of the deposition) during the cooling-down and infer $\Delta_0$ from BCS theory. Then, we compute $\alpha$ from the resonant frequency shift as we increase the thermal quasiparticle density by increasing the base temperature of the system. We fit the resulting curve to the Mattis-Bardeen theory prediction~\cite{2006NIMPA.559..585G}, keeping $\Delta_0$ fixed in the fit. For the P4 prototype we average the results of the four resonators. \par The detectors are illuminated on the back of the substrate by an optical fiber coupled to a fast warm LED ($\lambda=$400~nm). The LED equivalent energy is calibrated with a photomultiplier, and the calibration is checked at very low intensity by photon-counting Poisson statistics~\cite{2018SuScT..31g5002C}. In Table~\ref{tab:Q} we also report the source position with respect to the center of the substrate, the diameter of the illuminated spot ($\phi$) and the optical pulse duration. \par We take $\mathcal{O}(2000)$ LED pulses for every configuration. In order to improve the SNR we apply a software low-pass filter with 100~kHz cut-off whose effect is included in the pulse fitting. Finally, we average the pulses and perform a fit as described in Sec.~\ref{sec:pulseTime} to obtain {\tph} and $\eta$. We report the results for each KID in Table~\ref{tab:data}.
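The inference of $\Delta_0$ from $T_c$ uses the weak-coupling BCS relation $\Delta_0 \simeq 1.764\, k_B T_c$; a minimal check that it reproduces the gaps quoted in Table~\ref{tab:Q}:

```python
K_B = 8.617333262e-5  # Boltzmann constant [eV/K]

def bcs_gap(tc):
    """Zero-temperature gap from the weak-coupling BCS relation."""
    return 1.764 * K_B * tc  # Delta_0 = 1.764 k_B Tc, in eV

# Measured critical temperatures of the two prototypes (Table "Q")
for name, tc in (("P1", 1.180), ("P4", 1.300)):
    print(f"{name}: Delta_0 = {bcs_gap(tc) * 1e6:.1f} ueV")
# -> about 179.4 ueV (P1) and 197.6 ueV (P4), matching the quoted
#    179+-3 and 197+-4 ueV within their errors
```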
\begin{table}[ht] \begin{center} \begin{tabular}{@{\extracolsep{\fill}} p{2cm} p{1cm} p{1.5cm} p{3.5cm}} \hline \hline Prototype & KID & ${\eta}$ [\%] & ${\tph}$ [$\mu$s] \\ \hline P1 & 1 & 13.3$\pm$1.1 & $25.4\pm0.1_{stat}\pm0.2_{syst}$ \\ \hline \multirow{4}{*}{P4} & 1 & 2.9$\pm$0.3 & $16.8\pm1.4_{stat}\pm2.3_{syst}$ \\ & 2 & 6.7$\pm$0.7 & $8.64\pm0.14_{stat}\pm0.84_{syst}$ \\ & 3 & 6.2$\pm$0.7 & $9.09\pm0.05_{stat}\pm0.56_{syst}$ \\ & 4 & 2.7$\pm$0.4 & $15.5\pm0.5_{stat}\pm2.8_{syst}$ \\ \hline \hline \end{tabular} \end{center} \caption{Experimental results of the P1 and P4 prototypes. For every KID we report the efficiency ${\eta}$ and the characteristic phonon arrival time ${\tph}$. } \label{tab:data} \end{table} The error in $\eta$ is dominated by the systematic errors in $\Delta_0$ and $\alpha$. For {\tph}, in addition to the statistical error of the fit, we estimate a systematic one by starting from different sets of initial fit parameters and by taking pulses with different {\Tex} ranging from 1 to 10~$\mu$s. The $\chi^2$/NDF of the fits ranges between 1 and 3.5 for all the KIDs except for KID3, for which we obtain values between 4 and 6.8. In the P4 prototype there is a very small ($\sim$200~$\mu$m) rightward shift of the KID layout with respect to the center of the substrate. It is not noticeable in Fig.~\ref{fig:4kid}, but it is responsible for the slightly ($\sim$7\%) larger efficiency of KIDs 1 and 2 with respect to KIDs 3 and 4, as they are slightly closer to the source. The simulation also includes this shift, so we expect to observe this small effect in the MC results as well.
\section{G4CMP MC implementation}\label{sec:g4cmp} \par The G4CMP package simulates: (1) the generation of acoustic phonons and electron-hole pairs in a material after an energy deposition; (2) their propagation in the medium, anisotropic according to the material elastic constants for phonons and driven by an electric field for the charge; (3) the two principal phonon scattering processes described in Sec.~\ref{sec:mc} in the isotropic approximation; and (4) a simplified implementation of the reflection and transmission mechanisms at interfaces, in which multirefringence is not considered: the phonon is transmitted through the boundary with a probability given by the transmission coefficient, or it is reflected back, following a specular reflection for the AMM model or a Lambertian one for DMM. So, in the current implementation no mode conversion occurs. \par In our simulation, as no electric field is applied to the detector, the charge is not taken into account and all the energy of the interaction goes to the phonon channel. Following a photon absorption in the Si substrate, acoustic phonons are generated isotropically along the incident particle track. The energy distribution of the primordial phonons is unknown; nevertheless, its details are wiped out after the quasidiffusion stage, so we take the Debye energy ($\sim$62~meV in Si) as the starting point and select the polarization L, FT or ST randomly according to the DOS in the material. The history of every phonon is followed, recording its polarization, $\omega$ and \textbf{k}, until one of the following conditions is verified: (1) it is absorbed in Al (KIDs or feedline) or Teflon; (2) its energy drops below $2\Delta$; (3) a predefined number of reflections {\nmax} is reached. \par We simulate a simplified geometry of the detector with four main components: the Si wafer, the Teflon supports, the feedline and the KIDs, the latter two made of Al (see Fig.~\ref{fig:simuGeom}).
For the sake of keeping the simulation computing time at a reasonable level, only a certain fraction $\xi_\textrm{tr}$ of the phonons is tracked (see Tab.~\ref{tab:params}) and the final results are scaled with this value. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{simulationP1P4.pdf} \caption[]{A sketch of the main components included in the MC simulation of the prototypes P1 (left) and P4 (right). In both cases the fiber spot (brown) has a diameter of 4.66~mm and fires on the opposite side of the KIDs.} \label{fig:simuGeom} \end{figure} In order to determine the effect of the reflection model and transmission coefficients we generate a batch of simulations spanning {\TSiAl} and {\TSiTef} from 0.1 to 1, for both models. It is worth mentioning that the code does not implement phonon propagation in Al, so a phonon absorbed in the KIDs generates a signal with probability {\etapb} or is killed. Hence, (1-{\TSiAl}) includes the probability of a phonon entering the Al and being reflected back to the Si substrate. \par A single simulation event starts with the generation of about 10$^4$ optical photons ($\lambda$=400~nm), uniformly distributed over the 4.66~mm diameter fiber spot, that are stopped in the first micron of the Si substrate on the face opposite to the KIDs. The spot is centered in the middle of the substrate in the P4 simulation, while for P1 it is shifted 6~mm away from the KID in the vertical direction, and the photons are distributed in time according to a square pulse of duration {\Tex} (see Table~\ref{tab:Q}). For every configuration we generate between 20 and 50 single events. The outputs of the simulation are the time, energy, position and polarization of every phonon absorbed in the Teflon, the feedline or the KIDs.
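A minimal, heavily simplified sketch of the per-phonon tracking logic described above (competing $R_I\nu^4$/$R_A\nu^5$ processes, boundary transmission, and the three stop conditions); this illustrates the bookkeeping only, it is not the actual G4CMP code, and the rate constants and gap value are placeholder assumptions.

```python
import random

H = 4.135667696e-15   # Planck constant [eV s]
TWO_DELTA = 3.6e-4    # ~ 2*Delta for a thin Al film [eV] (assumed)
DEBYE_NU = 62e-3 / H  # starting frequency from the Si Debye energy [Hz]
R_I, R_A = 2.4e-42, 7.4e-56  # placeholder rate constants [s^3], [s^4]

def track_phonon(nu, t_si_al=0.36, n_max=500):
    """Toy history of one phonon: returns (fate, number of reflections)."""
    reflections = 0
    while True:
        if H * nu < TWO_DELTA:
            return "below_gap", reflections        # cannot break Cooper pairs
        if reflections >= n_max:
            return "max_reflections", reflections  # tracking cut-off
        # Which bulk process happens first? Compare the two competing rates.
        p_decay = R_A * nu**5 / (R_I * nu**4 + R_A * nu**5)
        if random.random() < p_decay:
            nu *= random.random()  # crude decay: keep a random energy fraction
        # (elastic isotope scattering would only randomize the direction)
        # Next boundary hit: transmit into Al or reflect back into the Si
        if random.random() < t_si_al:
            return "absorbed", reflections
        reflections += 1

random.seed(42)
fates = [track_phonon(DEBYE_NU)[0] for _ in range(1000)]
print({f: fates.count(f) for f in set(fates)})
```

The real simulation tracks positions and \textbf{k} vectors through the actual geometry; this sketch collapses all of that into per-step probabilities, which is enough to see how the three termination conditions compete.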
\section{Results and discussion} \label{sec:results} \indent\indent For each fiber event in the wafer we construct the phonon pulse time evolution and integrate it to obtain the total energy absorbed in the simulation at every KID, which we denote as $\Delta$E$_{ph}$. Then, we scale with the fraction of tracked phonons $\xi_\textrm{tr}$ and the pair-breaking efficiency {\etapb} to calculate the absorbed energy, and divide by E to obtain the efficiency in a single KID as \begin{equation} \eta=\frac{1}{E}\frac{\etapb}{\xi_\textrm{tr}}\Delta E_{ph}. \end{equation} Fig.~\ref{fig:pulsesPh} displays one such event for the AMM model, {\TSiAl}=0.36 and {\TSiTef}=0.4, for both prototypes. The simulation does not include resonator-related time constants ({\tring}, {\tqp}), so the pulse shape is described by Eq.~\ref{eq:phPulse}. The rise time of the phonon pulses is around one order of magnitude smaller than the decay time, so we consider only one time constant {\tph}, calculated as {\tphSim}/2.2, where {\tphSim} is the 90$^{th}$ minus the 10$^{th}$ percentile of the phonon arrival time distribution. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{pulses1_4KID.pdf} \caption[]{Phonon distribution at the KIDs for P1 (upper panel) and P4 (bottom panel), corresponding to the AMM model, {\TSiAl}=0.36 and {\TSiTef}=0.4.} \label{fig:pulsesPh} \end{figure} \par We observe no substantial variations in arrival time among the three polarizations, despite their different velocities ($\sim$9000~m/s for the longitudinal mode, $\sim$5400~m/s for the transverse modes), since the modes are highly mixed as a consequence of the scattering processes. For example, for the P1 pulse in Fig.~\ref{fig:pulsesPh} we obtain {\tph}=(21.3,~21.3,~21.0)~$\mu$s for the (L,~FT,~ST) components separately and {\tph}=21.2~$\mu$s for the three modes together.
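The 2.2 divisor is just $\ln 9$: for a single-exponential decay the width between the 10$^{th}$ and 90$^{th}$ percentiles is $\tau\ln 9 \approx 2.2\,\tau$. A quick numerical check, with an assumed decay constant of the order of the simulated P1 pulses:

```python
import math
import random

def tau_from_percentiles(times):
    """Decay constant from the 10th-90th percentile width divided by 2.2."""
    ts = sorted(times)
    t10 = ts[int(0.10 * (len(ts) - 1))]
    t90 = ts[int(0.90 * (len(ts) - 1))]
    return (t90 - t10) / 2.2

# For an exponential, t90 - t10 = tau * ln 9 = 2.197 tau, hence the 2.2
assert abs(math.log(9) - 2.2) < 0.01

random.seed(1)
tau_true = 21.0  # microseconds; assumed value, of the order of the P1 pulses
arrivals = [random.expovariate(1.0 / tau_true) for _ in range(200_000)]
print(tau_from_percentiles(arrivals))  # recovers ~21 us to within ~1%
```

Using a percentile width rather than an exponential fit makes the estimate robust against the non-exponential early part of the simulated pulses.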
\par The choice of the {\nmax} parameter has little impact on the final results: for the configurations with low values of the transmission coefficients ({\TSiAl}$\sim$0.2, {\TSiTef}$\sim$0.1), only 1-3\% (0.1-0.4\%) of the phonons undergo more than 200 (500) reflections. For values of {\TSiAl} and {\TSiTef} around 0.4, the percentages are 0.5-1\% (0.05-0.1\%). \par We also study the amount of phonon absorption in every material as a function of the phonon frequency and show the results in Fig.~\ref{fig:freqDistrib} for the same configuration as Fig.~\ref{fig:pulsesPh}. The geometric differences between the two prototypes described in Sec.~\ref{sec:exp} (more Teflon in P1, a $\sim$6 times wider feedline in P4) are clearly reflected in the simulation: while for P1 most of the phonons are absorbed in Teflon (about 60\% of the total), in P4 the element absorbing the largest share is the feedline ($\sim$55\%), followed by the KIDs ($\sim$28\%) and then the Teflon ($\sim17$\%). The maxima of the distributions lie at phonon frequencies between 0.7 and 0.9~THz, and the distributions are slightly asymmetric, with positive skewness. When the origin of the phonon pulse is close to the absorbing element, as for the feedline and KIDs 2 and 3 in P4, the asymmetry is more pronounced, with a longer tail towards higher frequencies. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{distribPh1_4KIDArticlev2.pdf} \caption[]{Frequency distribution of the phonons absorbed in the different materials (Teflon, feedline, KIDs) for P1 (upper panel) and P4 (middle panel). In the bottom panel the P4 distribution at every KID is plotted separately.} \label{fig:freqDistrib} \end{figure} \par Finally, in Figs.~\ref{fig:1kidResults} and \ref{fig:4kidResults} we compare the MC results with the experimental data. The red (blue) lines correspond to simulations with constant values of the {\TSiAl} ({\TSiTef}) coefficient, while the points are taken from Table~\ref{tab:data}.
In order to estimate a systematic error associated with the simulation, we have identified the most sensitive parameters of the model to be the decay constants $R_A$ and $R_I$ and the elastic constants $C_{11}$, $C_{12}$ and $C_{44}$. We have considered a variation of $\pm$5\% for the elastic constants~\cite{1964JAP....35.3312M, 1985PhRvB..32.3792N, CHEN19921} and $\pm$20\% for the decay constants and calculated the variation in {\tph} and $\eta$ for some simulated configurations. The result for the AMM model, {\TSiAl}=0.36, {\TSiTef}=0.4, is $\pm$3\% in {\tph} and $\pm$2\% in $\eta$ (green lines in Fig.~\ref{fig:1kidResults}). Similar results are obtained for other configurations. As regards the fraction of tracked phonons $\xi_\textrm{tr}$, increasing it from 2\% up to 20\% produces a variation below 0.2\%. For P1, with one single KID, phonon pulses are faster and more energetic for larger {\TSiAl} values. When we increase {\TSiTef} instead, they are also faster, but less energy is collected, as phonons are lost in the Teflon. This rule no longer holds true when more than one KID is competing for the same energy deposition, as is the case for P4: the sensors far from the source (KID1 and KID4 in Fig.~\ref{fig:4kidResults}) reverse this behavior, and the collected energy is lower for larger values of {\TSiAl} because the energy is absorbed more quickly in the nearby KIDs and the feedline. The small shift of the KID positions towards the right side of the wafer in P4 is also noticeable in the simulation and results in larger energy depositions in KID1 and KID2 compared to those of KID3 and KID4. \begin{figure}[h] \centering \includegraphics[width=0.49\textwidth]{1kidTimeEneSpc_syserr.pdf} \includegraphics[width=0.49\textwidth]{1kidTimeEne.pdf} \caption[]{Comparison of the MC results with experimental data for the P1 prototype and the AMM model (upper panel) or DMM model (bottom panel).
The red (blue) lines correspond to simulations with constant values of the {\TSiAl} ({\TSiTef}) coefficient, while the points are taken from Table~\ref{tab:data}. The green error bars represent the systematic uncertainty associated with the MC parameters.} \label{fig:1kidResults} \end{figure} \begin{figure*}[t] \centering \includegraphics[width=1\textwidth]{4kidTimeEneBothModelsSameFrame.pdf} \caption[]{Same as Fig.~\ref{fig:1kidResults} for the four KIDs of P4.} \label{fig:4kidResults} \end{figure*} \par In general, simulations with the DMM model produce slower and less energetic phonon pulses than those with AMM, except when the KIDs are very close to the phonon source, as is the case for KIDs 2 and 3 in P4. An explanation for this behaviour can be found in the very different propagation patterns that phonons follow once they enter the ballistic regime under specular or diffusive reflection. In our geometry we observe a much larger density of phonon tracks in the central part of the wafer for the AMM simulation than for DMM. The origin of this different distribution could be, as pointed out by some authors \cite{1984PhRvL..52.2156N}, that phonon caustics survive to some degree with specular reflection, while a more homogeneous distribution of phonons is expected for a Lambertian reflection. For our geometry, the larger concentration of phonons in the central part of the wafer results in a more effective energy collection at the KIDs than in the Teflon. \par We obtain a consistent picture between data and simulation for both prototypes with the AMM model, while our experimental data cannot be modelled considering only diffuse reflection, unless extreme values of the transmission coefficients are introduced. In the case of P1 (the most sensitive probe for the reflection model, as the energy deposition is far away from the KID), for the same transmission coefficients, the DMM model produces phonon pulses between 2 and 4~$\mu$s slower.
This corresponds to between 10 and 20 times our experimental uncertainty. Our simulation with the DMM model is not able to produce phonon pulses as fast and as energetic as the ones we have measured for this setup, unless a transmission coefficient of almost 1 is assumed for {\TSiAl}. \par The range of values of {\TSiAl} that best describes the experimental data is [0.30-0.55]. In the case of {\TSiTef}, the P1 data point to the region [0.1-0.15]; nevertheless, the P4 simulations do not impose a strong constraint, as in general the whole {\TSiTef} range agrees with the experimental point within the 1$\sigma$ error, as a result of the reduced Si-Teflon interface. On closer inspection, the AMM P4 simulation could be affected by a systematic bias: in the MC, less energy is collected at the KIDs far from the source (KID1 and KID4) with respect to the measurement. This distance-dependent bias could suggest a deficiency of the model that appears when the number of phonon reflections is large. This could be due to the simplification of the reflection mechanisms in the simulation, which currently do not include mode conversion, or to other phenomena not considered in the present implementation. For example, a slight dependence of the transmission coefficients on the phonon frequency would result in distinct absorption for far and near KIDs, as the phonon frequency distribution is different (see Fig.~\ref{fig:freqDistrib}). A larger substrate and/or a different KID layout would be necessary to test this conjecture. \par It is worth noting that in general we expect the phonon transmission coefficient to depend on the film thickness for thin films. The experimental data presented here correspond to an Al thickness of 60~nm, and so does the {\TSiAl} transmission coefficient that we have determined. Future measurements with different films will allow us to study this dependence.
\section{Conclusion} \label{sec:discussion} We have implemented a phonon MC simulation based on the G4CMP extension of the GEANT4 code and applied it to model phonon-mediated cryogenic detectors with thin Si absorbers and Al KID readout, clamped by Teflon supports to a dilution unit at about 20~mK. We have performed two different experiments with different geometries and KID layouts and we have compared the results with those of the MC simulations, considering two different reflection mechanisms at the interfaces (a specular reflection based on the AMM model and a diffuse one for the DMM model) and transmission coefficients spanning from 0.1 to 1 for the Si-Teflon and Si-Al interfaces. We found a good agreement for Si-Al transmission coefficients in the range [0.3-0.55] and Si-Teflon in the range [0.1-0.15] for the AMM model, while the simulation with diffuse reflection based on the DMM model does not provide a realistic description of our data. The Si-Al result is valid for an Al film with a thickness of 60~nm. We also observe a hint of a systematic bias in our simulation when the number of phonon reflections is large: simulated phonon pulses are less energetic than the data. In the future we will further investigate this issue with larger detectors. The results that we have presented are applicable to other cryogenic detectors with thin Al sensors. \section*{Acknowledgments} This work was supported by the European Research Council (FP7/2007-2013) under Contract CALDER No. 335359 and by the Italian Ministry of Research under the FIRB Contract No. RBFR1269SL. The authors thank the personnel of INFN Sezione di Roma for the technical support, in particular M. Iannone. \bibliographystyle{apsrev} \input{phMC.bbl} \end{document}
\section{Introduction} \label{sec:intro} NGC 1275 is a well-known galaxy located at the center of the Perseus cluster, at a redshift of $z=0.0179$, with an active galactic nucleus (AGN) classified as a Seyfert 1.5. It is one of the few non-blazar AGNs detected in both high-energy (HE; $>0.1$ GeV) and very high-energy (VHE; $>0.1$ TeV) $\gamma$ rays so far, and is the brightest radio galaxy at GeV energies \citep[e.g.,][]{2010ApJ...720..912A, 2016arXiv161102986R}. Analyzing long-term observations of such bright sources and studying differences between low flux (quiescent) and high flux (flaring) intervals could contribute to our understanding of the physical mechanisms responsible for the $\gamma$-ray emission of AGN. {Generally, however, the flux variations of various types of AGN, originating from different emission regions such as the accretion disk, disk coronae, and jets, are of the colored noise-type \citep[e.g.,][and references therein]{2017ApJ...837..127G}.} For this reason, the distinction between quiescent and flaring states of AGN is always arbitrary; however, we can approximately categorize them into two such states by using the difference in their flux in the $\gamma$-ray band. NGC 1275 has been widely observed at different wavelengths from the radio band to the VHE $\gamma$-ray band. In the radio band, NGC 1275 hosts the exceptionally bright radio source Perseus A (also known as 3C 84) with a pair of radio jets and large-scale Fanaroff-Riley type I (FR-I) radio morphology. 3C 84 has been studied in detail with very long baseline interferometry (VLBI). These observations reveal a compact core and jet components to the south that are moving steadily outwards at 0.3 milli-arcseconds per year \citep{2009AJ....138.1874L}.
The presence of a faint counter-jet implies a jet angle to our line of sight of $\theta$ = 30\hbox{$^\circ$}--55\hbox{$^\circ$}\ on milli-arcsecond scales \citep{1994ApJ...430L..41V,1994ApJ...430L..45W,2006PASJ...58..261A}, with lower estimates of $\theta \leq 14.4\hbox{$^\circ$}$ for the smallest-scale structures \citep{1992A&A...260...33K}. Taking these constraints together indicates curvature of the jet away from the line of sight at larger scales \citep{2006MNRAS.366..758D}. Curiously, a newer sub-pc scale component was discovered near the nucleus in 2007 with continuously increasing radio flux \citep{2012MNRAS.423L.122N}, {and an even larger suggested jet angle to the line of sight of $\theta \sim 65\hbox{$^\circ$}$ \citep{2017MNRAS.465L..94F}.} The increase of the radio flux is considered to have originated in the activity of the {jets.} Furthermore, recent studies reported that {increases} in the $\gamma$-ray {flux} may be correlated with the radio {flux densities} in NGC 1275 \citep[see also Hodgson et al.]{2014MNRAS.442.2048D}. On short timescales {(days and weeks)}, the radio {flux densities were} not highly variable while the $\gamma$-ray {flux} varied widely, indicating that the $\gamma$ rays are produced closer to the core than the radio emission. A systematic $\gamma$-ray study is therefore valuable to investigate the physical origin of the high-energy emission from either the jets or regions closer to the central supermassive black hole. In the optical band, photometric observations of the core exhibited some flares and hour-scale time variations \citep{1999A&A...351...21P}. There are two scenarios ascribing the observed optical variability either to the accretion disk in the system \citep{1995A&A...296..628N} or to the unresolved segment of the jets \citep{2000MNRAS.314..359H}.
However, the low optical polarization in the core, at the level of $\sim 0.4\%$ from Kanata observations \citep{2013PASJ...65...30Y}, indicates the jet (synchrotron) emission is not a major component in the optical band. In contrast, \cite{2014A&A...564A...5A} reported that the optical core (KVA telescopes) and $\gamma$-ray (\textit{Fermi} \ Large Area Telescope, LAT) light curves from 2009 October to 2011 February are in good agreement at a $4-5~\sigma$ significance level, thereby suggesting that the $\gamma$-ray nonthermal continuum from NGC 1275 has the same origin as the optical emission. The Perseus cluster is one of the brightest clusters in X-rays, with the 0.5--8 keV emission dominated by the thermal bremsstrahlung of the intracluster medium cooling flow \citep{2003ApJ...590..225C,2011MNRAS.418.2154F}. Although $Swift$-BAT could not resolve the nucleus spatially, an excess of a nonthermal hard X-ray emission from the cluster central regions (galaxy NGC 1275) was detected in the 15--55 keV range \citep{2009ApJ...690..367A}. A correlation between the variable X-ray (5--10 keV) and HE $\gamma$-ray fluxes was reported \citep{2016arXiv160803652F}, but the origin of the nuclear X-ray emission (i.e., disc/corona versus jets) is still under debate. In high-energy $\gamma$ rays, NGC~1275 was discovered with the {$\textit{Fermi}$-LAT}, with an overall spectral energy distribution (SED) from the radio to VHE band {consistent with the standard} synchrotron self-Compton (SSC) jet model \citep{2009ApJ...699...31A,2010ApJ...715..554K}. Together with simultaneous MAGIC VHE observations (from 2009 to 2011), its $\gamma$-ray spectrum from 100 MeV to 650 GeV can be well fit either by a log-parabola or by a power-law function with a sub-exponential cutoff. {The applied SSC model indicates} a relatively small $\theta = 10$\hbox{$^\circ$}--15\hbox{$^\circ$}, and a jet bulk Lorentz factor, $\Gamma_{\rm b} \sim 10$ \citep{2014A&A...564A...5A}. 
These physical parameters indicate that NGC 1275 is a misaligned BL Lac object \citep{2017arXiv170407960X}. However, the results were obtained from an analysis over a relatively small time interval ($\sim$ 1.4 years). Because NGC 1275 exhibits different $\gamma$-ray flux states, the estimates of the jet physical parameters in the SSC model (i.e., Doppler factor, electron spectrum) in different activity states can help in gaining an understanding of the changing temporal and spectral behaviors of NGC 1275. In this study, we investigate the $\gamma$-ray emission from NGC 1275 with the increased photon statistics of 8 years of $\textit{Fermi}$-LAT observations. These data allow us to study the spectral properties, variations in flux and photon index, and the distribution of the highest-energy photons. In particular, the $\gamma$-ray flux states are well characterized in the flux and spectral hardness (i.e., hardness ratio, photon index) plane, in which NGC 1275 is known to exhibit different radiation states \citep{2014MNRAS.442.2048D}. The $\textit{Fermi}$-LAT long-term observational data help us to understand the transitions between these radiation states. We present the details of the $\textit{Fermi}$-LAT analysis and data reduction in Section \ref{sec:observations}. Our analysis results are presented in Section \ref{sec:results} and discussed in Section \ref{sec:discussion}, based on fitting the overall SED data of NGC 1275 with the one-zone SSC model. Our conclusions are presented in Section \ref{sec:conclusion}. {In this paper we assume a standard flat $\Lambda \rm CDM$ cosmology with $H_0=70 \ \rm km \ s^{-1} \ Mpc^{-1}$ and $\Omega_m=0.29$ \citep{2014ApJ...794..135B}. This corresponds to a linear scale of 1 arcsec$ \ =\ $360 pc at a luminosity distance of $D_L$ =76.7 Mpc.} Throughout this paper, the errors correspond to $1~\sigma$ confidence level. 
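As a quick numerical cross-check, the quoted angular-to-linear conversion (1 arcsec $\approx$ 360 pc at $D_L = 76.7$ Mpc for $z = 0.0179$) can be reproduced from the angular-diameter distance; this minimal sketch uses only the numbers given above, not an independent distance measurement:

```python
# Check of the quoted scale: 1 arcsec ~ 360 pc at D_L = 76.7 Mpc, z = 0.0179.
import math

z = 0.0179          # redshift of NGC 1275
D_L_Mpc = 76.7      # luminosity distance quoted in the text

# Angular-diameter distance: D_A = D_L / (1 + z)^2
D_A_Mpc = D_L_Mpc / (1.0 + z) ** 2

# 1 arcsec expressed in radians
arcsec = math.pi / (180.0 * 3600.0)

# Linear scale subtended by 1 arcsec, in parsecs
scale_pc = D_A_Mpc * 1.0e6 * arcsec
print(f"1 arcsec = {scale_pc:.0f} pc")   # ~359 pc, consistent with the quoted 360 pc
```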
\newpage \section{Observations and Analysis} \label{sec:observations} \subsection{$\textit{Fermi}$-LAT Observations} \label{sec:fermi} The LAT is a pair-conversion telescope onboard the $\textit{Fermi}$ spacecraft, which was launched in 2008, and is designed to cover the energy band from 20 MeV to greater than 300 GeV. The LAT has a large effective area ($\sim$ 9000 cm$^2$ on axis at 10 GeV) and a large field of view ($\sim$ 2.4 sr). The 68\% containment radius for $E > 10$ GeV is approximated as $\theta_{68} = 0\fdg15$, and is approximated as $\theta_{68} = 3\fdg5$ for $E = 100$ MeV. A detailed description of the detector is provided in \cite{2009ApJ...697.1071A}. The 8 years of data used in this study comprise {spacecraft data obtained in sky-survey mode} between 2008 August 4 and 2016 November 15 (MJD 54683 and 57707, respectively). We applied a zenith angle cut of 90\hbox{$^\circ$}\ to reduce the contamination due to $\gamma$ rays from the Earth's limb. The same zenith cut is considered in the exposure calculation using the \textit{Fermi}\ Science Tool {\tt gtltcube}\footnote{The \textit{Fermi}\ Science Tools and standard diffuse emission models are available from the $\textit{Fermi}$ Science Support Center, \url{http://fermi.gsfc.nasa.gov/ssc}}. We used the recommended ``Source" class events \citep{2012ApJS..203....4A} appropriate for a standard analysis. The lower energy bound was fixed at 100 MeV, and the region of interest (ROI) radius was fixed at 30\hbox{$^\circ$}\ in this study to consider the tails of the LAT point-spread function (PSF) sufficiently. The data were analyzed using the $\textit{Fermi}$ Science Tools version v10r0p5, and Instrument Response Functions (IRFs) {P8R2\_SOURCE\_V6\footnote{\url{https://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}}}. 
The Pass 8 data provide considerable improvements over the data used in earlier \textit{Fermi}-LAT studies, with enhancements in direction reconstruction and classification of events, better energy measurements, and significantly increased effective area allowing us to study the $\gamma$-ray emission from NGC 1275 more precisely. To investigate the $\gamma$-ray flux variations of NGC 1275 (RA=49\fdg951, DEC=41\fdg512; J2000), we calculated the light curve and photon index variation by using a binned {\tt gtlike} {analysis} (the standard maximum-likelihood spectral estimator provided with the $\textit{Fermi}$ science tools) with 14-day time bins. For simplicity, we fit the data with a single power-law function. The definition is provided in Section~\ref{sec:8-yearSPEC}, Eq.~\ref{eq:PLbestfit}. As the diffuse background emission should not be variable, we fixed the parameters of the Galactic (gll\_iem\_v06.fits) and isotropic diffuse (iso\_P8R2\_SOURCE\_V6\_v06.txt) templates to their maximum likelihood values for the entire 8-year data set. In addition, we left the normalization and the index of NGC 1275 free. We considered the $\gamma$-ray point sources listed in the 3rd \textit{Fermi}-LAT source catalog (3FGL) \citep{2015ApJS..218...23A} within 30\hbox{$^\circ$}\ of NGC 1275 {(3FGL J0319.8+4130)}. {We also considered a new point source (RA=48\fdg321, DEC=41\fdg526; J2000) found by generating a residual Test Statistic \citep[TS;][]{1996ApJ...461..396M} map over a $10\hbox{$^\circ$} \times 10\hbox{$^\circ$}$ region centered on NGC 1275 using the {\tt gttsmap} tool.} While the spectral parameters for the point sources within 10\hbox{$^\circ$}\ of NGC 1275 were left free in the fits, we fixed the parameters for the point sources beyond 10\hbox{$^\circ$}\ to the maximum likelihood values for the 8-year data set. Figure~\ref{fig:LC_2weeks} shows the variation of the flux and photon index and the TS value against time.
For every time bin, the obtained TS values exceed 40 (corresponding to $\sim 6~\sigma$). We note that the effects of the systematic uncertainties are not included in the error bars, and we estimate them to be on the order of 5\% based on the systematic uncertainty of the effective area\footnote{See \url{ https://fermi.gsfc.nasa.gov/ssc/data/analysis/LAT_caveats.html}}. In addition, the systematic uncertainties are not independent between time bins in the analysis. \subsection{$NuSTAR$ Observations} \label{sec:nustar} The $NuSTAR$ Observatory consists of two co-aligned telescopes focusing hard X-rays in the 3--79 keV range onto two focal plane modules, FPMA and FPMB \citep{2013ApJ...770..103H}. It provides relatively low-background imaging capabilities (18\arcsec \ full-width half-maximum) in the hard X-ray band with 2 $\mu$sec relative timing resolution. NGC 1275 was observed by $NuSTAR$ on 2015 November 3 starting at 03:21:08 UT {in target-of-opportunity mode.} The effective on-source exposure was 20.0 ksec. The FPMA and FPMB data were re-processed following the standard $NuSTAR$ data analysis system pipeline, nupipeline v0.4.5 and HEASoft (v6.19), together with the $NuSTAR$ calibration files from CALDB version 20160502. In this paper, we extracted the 3-79 keV spectrum of NGC 1275 using a region with a radius of 10\arcsec \ and evaluated {the local background in an annulus around the source with the inner and outer radii of 10\arcsec and 30\arcsec, respectively.} The resultant {\it NuSTAR} spectrum of NGC 1275 is shown in Section~\ref{SSCmodel}. \subsection{$Chandra$ Observations} \label{sec:chandra} {The Advanced CCD Imaging Spectrometer (ACIS-I) detector on board the $Chandra$ X-ray Observatory has an angular resolution of $\sim 0.5\arcsec$ on-axis operating in the range of 0.2--10 keV. Its very high resolution allows us to investigate the nonthermal emission from the vicinity of the core. 
To avoid the effects of pileup, we selected three observations (ObsId 12025, 12033, 12036; PI Fabian) with large offset angles $> 7.5 \arcmin$ from the nucleus \citep{2014A&A...564A...5A}. The exposure times are 18.2 ksec (on 2009 November 25), 19.1 ksec (on 2009 November 27) and 48.5 ksec (on 2009 December 2), respectively. The data were analyzed using the $Chandra$ Interactive Analysis of Observations (CIAO) software v4.8, and the $Chandra$ CALDB v4.7.4. Spectral analysis was performed using XSPEC v12.9. We extracted the 0.5-9.5 keV spectrum of NGC 1275 using a region with a radius of 5\arcsec \ and evaluated the local background in an annulus around the source with the inner and outer radii of 5\arcsec and 15\arcsec, respectively. The three spectra were fitted simultaneously by adopting the model $phabs \times (mekal + zphabs \times powerlaw)$ in XSPEC, where $phabs$ and $zphabs$ correspond to the Galactic and internal photoelectric absorptions, $mekal$ represents the thermal emission from the hot diffuse gas, and $powerlaw$ is the nonthermal power-law emission from the unresolved central core. We fixed the hydrogen column density from the Galaxy to $1.5 \times 10^{21} \ \rm cm^{-2}$ \citep{2013PASJ...65...30Y}, and an internal absorption column density of $(1.4 \pm 0.3) \times 10^{21} \ \rm cm^{-2}$ was obtained from the fit. The metal abundance and hydrogen density of $mekal$ were fixed to 0.7 times the solar value and $0.1 \ \rm cm^{-3}$, respectively \citep{2014A&A...564A...5A}. The fit also yielded a temperature of $14.4 \pm 3.3 \ \rm keV$. The photon index was $2.11 \pm 0.16$, and the total integral flux in the 2--10 keV band was $1.14 \times 10^{-11} \ \rm erg \ cm^{-2} \ s^{-1}$. The resultant power-law component of the {\it Chandra} X-ray spectrum (bow-tie) of NGC 1275 is shown in Section~\ref{SSCmodel}. } \section{Results} \label{sec:results} \subsection{Light curve}\label{sec:lightcurve} \begin{figure}[ht!]
\centering \includegraphics[scale=0.21, bb=58 0 1268 848]{LC_2weeks.pdf} \caption{$\textit{Fermi}$-LAT light curve and variation of photon index for NGC 1275 over the time interval from 2008 August - 2016 November in 2-week time-bins. $\bf{Top \ panel:}$ changes in the $E > 100$ MeV flux. {The magenta dashed line shows the average flux derived from the 8 year analysis.} $\bf{Middle \ panel:}$ changes in the power-law photon index. {The magenta dashed line shows the average photon index.} $\bf{Bottom \ panel:}$ changes in the {Test Statistic}. According to the differences in the {photon index behaviors during the flaring intervals,} we have divided the light curve into two time intervals showing a {large variation} (epoch A) and {no significant variation} (epoch B). In addition, we defined quiescent and the flaring intervals within each epoch as indicated by dotted and dashed double-headed arrows, respectively. {The open and filled diamonds in the middle panel represent the time of the $Chandra$ and $NuSTAR$ observations, respectively.} } \label{fig:LC_2weeks} \end{figure} As can be seen in Figure~\ref{fig:LC_2weeks}, the $\gamma$-ray flux increases gradually by a factor of eight from MJD 55200 to MJD 57300. {We divided the light curve into intervals with relatively low-flux (quiescent interval) and high-flux (flaring interval) states -- i.e., quiescent intervals (MJD 54683--54865, 55061--55369, 55607--56278) and flaring intervals (``flare 1": MJD 54865--55061, ``flare 2": MJD 55369--55607, ``flare 3": MJD 56503--57371).} To verify the validity of this separation {between the flaring and quiescent intervals}, we used the two-sample Kolmogorov--Smirnov test for the flux data, which measures the probability that a univariate dataset is drawn from the same parent population as the other dataset. The calculated probabilities that the quiescent and flaring flux distributions are the same are $0.04\%$ in epoch A and $0.0008\%$ in epoch B. 
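The two-sample Kolmogorov--Smirnov comparison described above can be sketched directly with numpy. The flux samples below are synthetic placeholders drawn for illustration, not the actual LAT light-curve bins, and `ks_2sample` is a hypothetical helper using the standard asymptotic p-value approximation:

```python
# Sketch of the two-sample KS test used to validate the quiescent/flaring
# split. Synthetic flux samples, NOT the measured LAT light-curve data.
import numpy as np

def ks_2sample(a, b):
    """Return the two-sample KS statistic and its asymptotic p-value."""
    a, b = np.sort(a), np.sort(b)
    data = np.concatenate([a, b])
    # Empirical CDFs of both samples evaluated on the pooled data
    cdf_a = np.searchsorted(a, data, side="right") / a.size
    cdf_b = np.searchsorted(b, data, side="right") / b.size
    d = np.max(np.abs(cdf_a - cdf_b))
    # Asymptotic Kolmogorov distribution (Stephens' approximation)
    n_eff = a.size * b.size / (a.size + b.size)
    lam = (np.sqrt(n_eff) + 0.12 + 0.11 / np.sqrt(n_eff)) * d
    k = np.arange(1, 101)
    p = 2.0 * np.sum((-1.0) ** (k - 1) * np.exp(-2.0 * (k * lam) ** 2))
    return d, min(max(float(p), 0.0), 1.0)

rng = np.random.default_rng(0)
quiescent = rng.normal(2.0, 0.4, size=40)   # hypothetical low-flux sample
flaring = rng.normal(5.0, 1.0, size=40)     # hypothetical high-flux sample
stat, pvalue = ks_2sample(quiescent, flaring)
print(f"KS statistic = {stat:.2f}, p-value = {pvalue:.2e}")
```

A small p-value, as with the 0.04\% and 0.0008\% probabilities quoted above, argues that the quiescent/flaring separation is not an artifact of the binning.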
{Then, we divided the light curve into epoch A and epoch B according to the {photon index behaviors during the flaring intervals.} While the photon index varies substantially during the flaring intervals at the earliest times (MJD $<$ 55607 = epoch ``A" in Figure~\ref{fig:LC_2weeks}), there is no apparent change in the photon index during later times (MJD $>$ 55607 = epoch ``B" in Figure~\ref{fig:LC_2weeks}) despite the presence of larger $\gamma$-ray flares. {In fact, the variance of the photon index in epoch A ($0.020 \pm 0.004$) is larger than that in epoch B ($0.007 \pm 0.001$) at the $\sim 3 \ \sigma$ significance level.} Regarding the quiescent intervals, we can assume that the states of epoch A and epoch B are almost the same. This division is made simply for convenience when calculating the difference spectra between quiescent and flaring intervals (in Section~\ref{sec:resolvedSPEC} and Section~\ref{SSCmodel}).} \subsection{Spectral analysis}\label{sec:spectral} \subsubsection{Eight-year accumulated spectrum}\label{sec:8-yearSPEC} We used a binned likelihood analysis with {\tt gtlike} to investigate NGC 1275's average $\gamma$-ray spectrum for the 8-year LAT data set. We first fitted the $\gamma$-ray emission with {a single power-law function} \begin{equation} \frac{dN}{dE} = N_0 \left( \frac{E}{100~\rm MeV} \right) ^{-\Gamma}. \label{eq:PLbestfit} \end{equation} \noindent The Galactic and isotropic diffuse background components are assumed to exhibit a power-law spectrum as well, and we allow their normalizations to be free. The maximum likelihood power-law parameters obtained from a binned {\tt gtlike} analysis were $N_{\rm 0} = (3.82\pm 0.04) \times 10^{-9} ~\rm {ph~cm^{-2}\,s^{-1}\,MeV^{-1}}$ at 100 MeV and $\Gamma = 2.10 \pm 0.01$, with a corresponding average flux of $F_{>100 \rm MeV} = (3.48 \pm 0.02) \times 10 ^{-7}\rm {ph~cm ^{-2}s ^{-1}}$ (only statistical uncertainties are considered throughout).
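The quoted integral flux can be checked against the best-fit parameters analytically: integrating Eq.~\ref{eq:PLbestfit} above the pivot energy for $\Gamma > 1$ gives $F(>E_0) = N_0 E_0/(\Gamma-1)$. A minimal sketch with the numbers above (the small residual arises because the quoted flux is integrated only over the LAT band, not to infinity):

```python
# Consistency check of the quoted average flux against the best-fit single
# power law: F(>E0) = N0 * E0 / (Gamma - 1) for Gamma > 1.
N0 = 3.82e-9      # ph cm^-2 s^-1 MeV^-1 at the pivot energy
E0 = 100.0        # pivot energy, MeV
gamma = 2.10      # best-fit photon index

flux_gt_100MeV = N0 * E0 / (gamma - 1.0)
print(f"F(>100 MeV) = {flux_gt_100MeV:.2e} ph cm^-2 s^-1")  # ~3.47e-7, vs quoted 3.48e-7
```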
We then obtained a $\gamma$-ray spectrum by running {\tt gtlike} separately in 22 equally-spaced logarithmic energy bands from 100 MeV to 204.8 GeV. Because the significance was low (TS = 9.3) in the 204.8 to 300.0 GeV band, we calculated a 2 $\sigma$ upper limit. In the analysis, the normalizations of the diffuse backgrounds were fixed to their maximum likelihood values from the entire 8-year data set. The energy range of the 8-year LAT spectrum extends to slightly higher energies than in previous works, which indicated significant emission up to 102.4 GeV \citep{2010ApJ...715..554K,2011MNRAS.413.2785B}. \begin{figure}[ht!] \centering \includegraphics[scale=0.45, bb=20 0 567 386]{nuFnu.eps} \caption{Average 8-year $E > 100$ MeV LAT spectrum of NGC 1275 from 2008 August 4 to 2016 November 15. The dashed line indicates the power-law function determined from {\tt gtlike} while the solid line indicates the best-fit power law with a sub-exponential cutoff. The MAGIC spectrum (from 2009 October to 2010 February) is represented by a green dashed bow-tie \citep{2014A&A...564A...5A}.} \label{fig:Cnu} \end{figure} Figure~\ref{fig:Cnu} clearly indicates a cutoff in the spectrum. Thus, we refit the data with {a sub-exponentially cutoff power-law function} \begin{equation} \frac{dN}{dE} = N_0 \left(\frac{E}{100~\rm {MeV}} \right) ^{-\Gamma} \exp \left(- \sqrt{\frac{E}{E_{\rm c}}} \right). \label{eq:PLEXPbestfit} \end{equation} \noindent The maximum likelihood parameters were $N_{\rm 0} = (3.69 \pm 0.04) \times 10^{-9} ~\rm {ph~cm^{-2}\,s^{-1}\,MeV^{-1}}$ at 100 MeV, $\Gamma = 1.93 \pm 0.01$, $E_{\rm c} = 12.0 \pm 1.7$ GeV, and an average flux, $F_{>100 \rm MeV} = (3.34 \pm 0.03) \times 10 ^{-7}\rm {ph~cm^{-2}\,s^{-1}}$. The Test Statistic is TS = 89986 corresponding to a formal significance of $\sim~300~\sigma$.
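The two spectral shapes being compared (Eqs.~\ref{eq:PLbestfit} and \ref{eq:PLEXPbestfit}) and the logarithmic binning quoted above can be sketched as follows; the parameter values are the best-fit numbers from the text, and the helper functions are illustrative:

```python
# Sketch of the two spectral shapes and the log-spaced band grid. Parameter
# defaults are the best-fit values quoted in the text.
import math
import numpy as np

def power_law(E, N0=3.82e-9, gamma=2.10, E0=100.0):
    """Eq. (1): dN/dE for a single power law; energies in MeV."""
    return N0 * (E / E0) ** (-gamma)

def cutoff_power_law(E, N0=3.69e-9, gamma=1.93, Ec=12.0e3, E0=100.0):
    """Eq. (2): power law with a sub-exponential cutoff; Ec = 12 GeV in MeV."""
    return N0 * (E / E0) ** (-gamma) * math.exp(-math.sqrt(E / Ec))

# At E = Ec the cutoff factor suppresses the power law by exactly exp(-1).
suppression = math.exp(-1.0)
# The 22 equally-spaced logarithmic bands from 100 MeV to 204.8 GeV are
# half-octave bins: each edge is a factor sqrt(2) above the previous one.
edges = np.logspace(np.log10(100.0), np.log10(204800.0), 23)  # 23 edges -> 22 bins
print(f"suppression at Ec: {suppression:.3f}, bin width factor: {edges[1]/edges[0]:.3f}")
```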
By comparing the log-likelihoods \citep{1996ApJ...461..396M}, we can conclude that {the sub-exponentially cutoff power-law function provides a better representation of the data than a single power law, with a significance of $\sqrt{\Delta \rm TS} \sim~13~\sigma$, where $\Delta \rm TS$ = 2 $\times$ ($\log{L_{\rm plexpcut}} - \log{L_{\rm pl}}$) = 169.8,} and $L_{\rm plexpcut}$ and $L_{\rm pl}$ are the likelihoods of the sub-exponentially cutoff power-law function and the single power-law function, respectively. Additionally, the obtained best-fit parameters are within $\sim~1~\sigma$ of the MAGIC VHE data up to 650 GeV \citep{2014A&A...564A...5A}, as shown in Figure~\ref{fig:Cnu}. \subsubsection{Time-resolved spectrum}\label{sec:resolvedSPEC} Next, we calculated the time-resolved spectrum {divided into 10 equally-spaced logarithmic energy bands from 100 MeV to 102.4 GeV} for each of the intervals we defined in Section~\ref{sec:lightcurve}. {According to the modeling results presented in Section~\ref{sec:8-yearSPEC}, we assume that their spectra can be well represented by sub-exponentially cutoff power-law functions.} The best-fit parameters are shown in Table~\ref{table:eachspec}, along with the significances compared with {single-power-law fits.} Moreover, to extract representative flaring-state spectra, we calculated a difference spectrum for each epoch as the difference between the flaring and quiescent data points. Figure~\ref{fig:subtracted_SED} shows the SED of the quiescent intervals and flaring intervals in epoch A (left panel) and epoch B (right panel).
{The obtained photon index of the best-fit power law with a sub-exponential cutoff function for each difference spectrum is shown in Table~\ref{table:eachspec}, where the cutoff energies were fixed to their maximum likelihood values from the entire 8-year data set.} The difference spectra in epoch A are harder than that in epoch B, which indicates that a hard spectral component is injected during the flares in epoch A. On the other hand, the {photon index} of the difference spectrum in epoch B is almost the same as the {photon index} before subtraction. These results suggest that the physical origins of the flares {in epoch A and epoch B} are different. \begin{deluxetable*}{ccccccc}[] \tablecaption{Best-fit parameters for the defined sub-intervals} \tablecolumns{7} \tablewidth{0pt} \tablehead{ \colhead{Epoch} & \colhead{State} & \colhead{$N_0 ~[10^{-9} ~\rm {ph~cm ^{-2} s ^{-1} MeV ^{-1}}$]} & \colhead{$\Gamma$} & \colhead{$E_{\rm c}~\rm[GeV]$} & \colhead{Significance\tablenotemark{a}~[$\rm \sigma$]} & \colhead{{$\Gamma$ (Difference spectrum)\tablenotemark{b}}} } \startdata A & Quiescent & $1.89 \pm 0.08$ & $1.94 \pm 0.04$ & $18 \pm 10$ & 4.2 & $-$ \\ & Flare 1 & $2.14 \pm 0.13$ & $1.87 \pm 0.05$ & $20 \pm 13$ & 3.4 & $1.51 \pm 0.10$ \\ & Flare 2 & $3.06 \pm 0.09$ & $1.79 \pm 0.04$ & $11 \pm 4$ & 6.1 & $1.72 \pm 0.04$ \\ \hline B & Quiescent & $2.69 \pm 0.05$ & $1.93 \pm 0.03$ & $8 \pm 2$ & 7.8 & $-$ \\ & Flare 3 & $5.48 \pm 0.09$ & $1.93 \pm 0.02$ & $12 \pm 3$ & 11.1 & $1.88 \pm 0.02$ \\ \enddata \tablenotetext{a}{Significance of the sub-exponentially cutoff power-law function compared with a single power-law function.} \tablenotetext{b}{{The photon index of the best-fit power law with a sub-exponential cut-off function for each difference spectrum.}} \tablecomments{The parameters of the best-fit power law with a sub-exponential cut-off function for each time-resolved spectrum are obtained by {\tt gtlike}.
The definitions of the parameters are in Eq.~\ref{eq:PLEXPbestfit}.} \label{table:eachspec} \end{deluxetable*} \begin{figure*}[] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.72, bb=20 20 360 252]{DifferenceSpectrum_former.eps} \label{subfig:A} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.72, bb=20 20 360 252]{DifferenceSpectrum_latter.eps} \label{subfig:B} \end{minipage} \caption{{Spectra and difference spectra} ($\nu F_{\nu}$), calculated as the difference between the flaring and quiescent {data points}. {The flaring states are plotted as open triangles (flare 1 in epoch A) and open circles (flare 2 in epoch A, flare 3 in epoch B). The quiescent states in both epochs are plotted as filled squares. $\bf{Left \ panel:}$ difference spectra in epoch A (flare 1: magenta filled triangles, flare 2: red filled circles) and their best-fit lines (flare 1: magenta dotted line, flare 2: red dashed line) $\bf{Right \ panel:}$ difference spectrum in epoch B (blue filled circles) and its best-fit line (blue dashed line).}} \label{fig:subtracted_SED} \end{figure*} \subsection{Angular separation of $\gamma$-ray photons} \label{sec:EMAX} To examine whether the highest-energy photons detected by $\textit{Fermi}$-LAT near NGC 1275 are associated with the galaxy, we {investigated} the angular separations of the individual $\gamma$ rays from NGC 1275, as shown in Figure~\ref{fig:EMAX}. We calculated probabilities that the detected photons are associated with NGC 1275 using the $\textit{Fermi}$ Science tool, {\tt gtsrcprob}, which indicated that the highest-energy photons {are likely} associated with NGC 1275. {To run {\tt gtsrcprob}, we performed an unbinned likelihood analysis here.} The contribution of {IC 310, which is a point-like VHE $\gamma$-ray emitter, is considered} to be small because {it lies $\sim 0.6^{\circ}$ from NGC 1275 \citep{2016A&A...589A..33A}.} \begin{figure}[ht!] 
\centering \includegraphics[scale=0.72, bb=10 0 360 252]{EMAX.eps} \caption{Angular separation of $\gamma$-ray photons against NGC 1275 as a function of photon energy ($E > 10$ GeV). The black filled circles and blue open circles describe the photons which have probabilities greater than 95\% or less than 95\% calculated by {\tt gtsrcprob}, respectively. In addition, the LAT 68\% and 95\% containment radii are indicated with a dotted magenta and blue lines, respectively. We note that the PSF curves represent averages over the field of view. The roman numerals label the five highest-energy photons in order.} \label{fig:EMAX} \end{figure} The five highest-energy photons with measured energies greater than 100 GeV, with 95\% probabilities of being associated with NGC 1275, are plotted in Figure~\ref{fig:EMAX}, with details provided in Table~\ref{table:EMAX}. Although \cite{2010ApJ...715..554K} reported that the highest-energy photon detected was 67.4 GeV during the first year of LAT observations ({MJD 54683--55061}), according to our analysis, the highest energy of the detected photons is 222 GeV during the 8-year interval considered here. Moreover, we plot the high-energy photons with energies greater than 50 GeV on the 8-year light curve, as shown in Figure~\ref{fig:LC_50GeV}. {This suggests that the arrival times of these high-energy photons are almost consistent with the flare intervals.} {In particular,} the highest-energy photon of 222 GeV was detected during the flare 2 interval in epoch A, which might imply that the electrons in the jet were accelerated to higher energies during this interval. 
\begin{deluxetable*}{cccccc}[htb] \tablecaption{Details of the five highest-energy LAT photons} \tablecolumns{6} \tablewidth{0pt} \tablehead{ \colhead{ } & \colhead{Energy [GeV]} & \colhead{Time [MJD]} & \colhead{RA [\hbox{$^\circ$}]\tablenotemark{a}} & \colhead{DEC [\hbox{$^\circ$}]\tablenotemark{a}} & \colhead{Probability} } \startdata I & 221.5 & $55402.4$ & 49.92 & 41.49 & 0.997\\ $\rm I \hspace{-.1 em} \rm I$ & 164.0 & 56760.9 & 50.03 & 41.50 & 0.997\\ $\rm I \hspace{-.1 em} \rm I \hspace{-.1 em} \rm I$ & 125.6 & 56610.8 & 50.06 & 41.48 & 0.994\\ $\rm I \hspace{-.1 em} \rm V$ & 123.3 & 56578.0 & 49.98 & 41.53 & 0.999\\ V & 109.2 & 57694.7 & 49.94 & 41.51 & 0.999\\ \enddata \tablenotetext{a}{J2000.} \tablecomments{The {Roman} numerals in the first column correspond to the order of the photon energies, as shown in Figure~\ref{fig:EMAX}. The probabilities of being associated with NGC 1275 were calculated using the $\textit{Fermi}$ science tool {\tt gtsrcprob}.} \label{table:EMAX} \end{deluxetable*} \begin{figure}[ht!] \centering \includegraphics[scale=0.75, bb=10 0 360 252]{EMAX50GeV_with_LC.eps} \caption{Arrival times and energies of high-energy photons plotted on the 8-year $E > 100$ MeV LAT light curve in 2-week bins ({black points}). The left vertical axis indicates the photon energy, while the right vertical axis indicates $\gamma$-ray flux. The blue circles indicate photons with energies {from 50 GeV to 100 GeV}. The magenta squares indicate photons with energies greater than 100 GeV (see Table \ref{table:EMAX}).} \label{fig:LC_50GeV} \end{figure} \section{Discussion} \label{sec:discussion} \subsection{Hardness ratio} To investigate the physical origins of the $\gamma$-ray flux increase in NGC 1275 for each epoch, we calculated the hardness ratio (HR), defined as the LAT-measured 1--300 GeV flux divided by the 0.1--1 GeV flux. Figure~\ref{fig:HR_30days} shows the HR in monthly bins and the flux light curve for comparison.
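A minimal sketch of the hardness ratio just defined, with the band fluxes computed analytically from a single power law for illustration (the HR used in the paper is built from measured band fluxes in monthly bins; `band_flux` is a hypothetical helper):

```python
# Minimal sketch of HR = F(1-300 GeV) / F(0.1-1 GeV), with band fluxes
# taken analytically from a single power law rather than from LAT data.
def band_flux(N0, gamma, e_lo, e_hi, E0=100.0):
    """Integral of N0 * (E/E0)^-gamma from e_lo to e_hi (MeV), gamma != 1."""
    g1 = 1.0 - gamma
    return N0 * E0 / g1 * ((e_hi / E0) ** g1 - (e_lo / E0) ** g1)

N0, gamma = 3.82e-9, 2.10          # 8-year best-fit values quoted earlier
hr = band_flux(N0, gamma, 1.0e3, 3.0e5) / band_flux(N0, gamma, 1.0e2, 1.0e3)
print(f"HR = {hr:.3f}")            # ~0.09 for the average photon index
```

A harder spectrum (smaller $\Gamma$) shifts photons into the 1--300 GeV band and raises the HR, which is the sense in which the HR traces spectral shape.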
{The HR plot indicates that the spectral shape in epoch A changes considerably with relatively large variations in flux, while the HR values do} not change significantly in epoch B even during the large flares. \begin{figure}[ht!] \centering \includegraphics[scale=0.75, bb=10 0 360 252]{HR_30days.eps} \caption{Hardness ratio defined as 1--300 GeV flux divided by 0.1--1 GeV flux against time for NGC 1275, in {1-month bins} (blue points). For comparison, we plotted the light curve with 1-month bins as well ({black points}).} \label{fig:HR_30days} \end{figure} {In Figure~\ref{fig:HR_vs_flux}, we show the {relation} between $\gamma$-ray HR and flux for the large flaring intervals in epochs A and B.} {We obtained {the Spearman's rank correlation coefficients of $-0.60$ for flare 1, $0.31$ for flare 2, and $0.01$ for flare 3; the corresponding chance probabilities of no correlation are 0.02, 0.24 and 0.92, respectively.}} {In epoch A, the HR value changes significantly during the flaring intervals.} Interestingly, the spectrum became hard (HR $\sim 0.15$) a few months after the flux peak of the flare 1 interval. {This time lag might be explained by the gradual acceleration of the injected soft spectral component in the jet.} On the other hand, in epoch B, the HR value does not change significantly during the flare 3 interval even though the flux change is very large. Considering the difference between the slopes of the lines that fit the hardness ratio--flux points, we can suggest that these defined epochs represent different emission states. \begin{figure}[ht!] \centering \includegraphics[scale=0.72, bb=10 0 360 252]{HR_vs_flux.eps} \caption{Hardness ratio against {flux} for NGC 1275, with {2-week bins}. {The data shown correspond to the three different flaring intervals, and lines denote the corresponding best-fit linear fits.} } \label{fig:HR_vs_flux} \end{figure} We assume two scenarios to explain the temporal and spectral behaviors in epochs A and B.
{For epoch A, the variations of high-energy components can be interpreted as due to injections of freshly accelerated high-energy electrons into the emission zone.} In this case, the electrons are accelerated by some mechanism such as an internal shock in the jet \citep{1978MNRAS.184P..61R}. Meanwhile, the flux variations in epoch B are due to changes in the Doppler factor ($\delta$) of a moving blob and/or changes in the electron density of the radiation zone. When the Doppler factor changes, we can observe the spectral peaks with different energies ($\nu \propto \delta$), time variability ($t_{\rm var} \propto \delta ^{-1}$), and luminosities ($L_{\rm obs} \propto \delta ^ 4$) because of relativistic beaming. According to this scenario, the photon index does not change when the flux increases. \subsection{Fractional variability} {In order to characterize in more detail the difference in the source spectral variability between epochs A and B}, we evaluated fractional variabilities of the $\gamma$-ray light curves in different energy bands: 100--178 MeV, 178--316 MeV, 316--562 MeV, 562--1000 MeV, 1--3.16 GeV, 3.16--10 GeV, 10--54.8 GeV, and 54.8--300 GeV. {First, in each energy band we calculated the excess variance $\sigma_{\rm rms}^2$, which is the net variance obtained after subtracting the noise variance from the total variance \citep[e.g.,][and references therein]{2002ApJ...572..762Z}:} \begin{equation} \sigma_{\rm rms}^2 = \frac{1}{N \overline{x}^2} \sum_{i=1}^{N} [(x_i -\overline{x}) ^2 - \sigma_i ^2] = \frac{1}{\overline{x}^2} (\sigma_{\rm tot}^2 - \sigma_{\rm noise}^2), \label{eq:Fvar} \end{equation} where $x_i$ is the flux for the $i$-th bin in the light curve and $\overline{x}$ is the mean of $x_i$. {The error estimate on $\sigma_{\rm rms}^2$ is $s_{\rm D}/(\overline{x}^2 \sqrt{N})$, where $s_{\rm D}$ is given by} \begin{equation} s_{\rm D}^2 = \frac{1}{N-1} \sum_{i=1}^{N} \{ [(x_i -\overline{x}) ^2 - \sigma_i ^2] - \sigma_{\rm rms}^2 \overline{x}^2 \}^2.
\label{eq:sD} \end{equation} Finally, we obtained the fractional variability {parameters in each energy band, $F_{\rm var} = \sqrt{\sigma_{\rm rms}^2}$, as shown in Figure~\ref{fig:Fvar} for the two epochs A and B.} \begin{figure}[ht!] \centering \includegraphics[scale=0.72, bb=15 0 360 252]{Fvar_AB.eps} \caption{Energy dependence of the variability of NGC 1275, with red filled circles for epoch A and blue open circles for epoch B. The variability parameter, i.e., the fractional variability ($F_{\rm var}$), was calculated for the eight different energy bands. {The red dotted and blue dashed lines are the best-fit logarithmic functions for each epoch.}} \label{fig:Fvar} \end{figure} {The best-fit line for each epoch in Figure~\ref{fig:Fvar} is calculated by fitting the data with a logarithmic function. The obtained best-fit slopes of epochs A and B are $12.4 \pm 2.2$ and $6.7 \pm 1.7$, respectively, and they differ at the $\sim 3\,\sigma$ significance level.} At higher energies, the values of $F_{\rm var}$ in epoch A are greater than those in epoch B.
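As a concrete illustration of the estimator defined above, the following minimal sketch computes $F_{\rm var}$ and its uncertainty from a binned light curve (the flux values below are synthetic, not the NGC 1275 data):

```python
import math

def fractional_variability(flux, err):
    """F_var = sqrt(sigma_rms^2), the fractional variability, with its error.

    sigma_rms^2 is the normalized excess variance: the sample variance minus
    the mean squared measurement error, divided by the squared mean flux.
    """
    n = len(flux)
    mean = sum(flux) / n
    # normalized excess variance: (1/(N*mean^2)) * sum[(x_i - mean)^2 - err_i^2]
    sigma_rms2 = sum((x - mean) ** 2 - e ** 2
                     for x, e in zip(flux, err)) / (n * mean ** 2)
    # error on sigma_rms^2: s_D / (mean^2 * sqrt(N))
    s_d2 = sum(((x - mean) ** 2 - e ** 2 - sigma_rms2 * mean ** 2) ** 2
               for x, e in zip(flux, err)) / (n - 1)
    err_rms2 = math.sqrt(s_d2) / (mean ** 2 * math.sqrt(n))
    fvar = math.sqrt(max(sigma_rms2, 0.0))
    return fvar, err_rms2

# synthetic light curve: flux oscillating between 8 and 12, noise level 0.5
flux = [8.0, 12.0, 8.0, 12.0, 8.0, 12.0]
err = [0.5] * 6
fvar, fvar_err = fractional_variability(flux, err)
print(fvar)  # ~0.194, i.e. ~19% intrinsic variability
```

Applied in each energy band, this computation yields the $F_{\rm var}$ values shown in Figure~\ref{fig:Fvar}.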
{This implies that the $\gamma$-ray continuum of the source in epoch A varies more strongly in the high-energy band, consistent with the idea that the origin of the flux increase in this epoch is related to changes (hardening) in the underlying electron energy distribution due to an enhanced acceleration of particles.} {In epoch B, we do observe an elevated fractional variability in the high-energy band as well, although here the difference between the low- and high-energy segments of the $\gamma$-ray continuum is much less significant, in {agreement} with the idea that in this epoch at least the bulk of the observed flux {variations} is due to changes in the jet Doppler factor rather than enhanced particle acceleration.} \subsection{Synchrotron self-Compton model fits} \label{SSCmodel} {The overall double-peaked SED of the radio galaxy NGC 1275 is similar to that of a blazar, and in particular a low-power blazar of the BL Lac type} \citep{2009ApJ...699...31A,2014A&A...564A...5A}. {Therefore, we attempt to model it with a standard homogeneous one-zone SSC model developed and widely used for BL Lac objects in general} \citep[for details see][]{1996ApJ...463..555I,2003ApJ...593..667M}. The SSC model assumes an electron energy distribution of the form $N(\gamma) = K \gamma^{-n} {(1+\gamma/\gamma_{\rm brk})^{-1}}$ for an electron Lorentz factor $\gamma$ ($\gamma_{\rm min} < \gamma < \gamma_{\rm max}$), where $\gamma_{\rm brk}$ represents the electron break Lorentz factor at which the radiative cooling time equals the acceleration time. The electron density and the electron spectrum slope are represented by $K$ and $n$, respectively.
In addition, the other physical parameters of this model are the source radius, $R$, the magnetic field, $B$, and the Doppler factor, $\delta = 1/[\Gamma_{\rm b} (1-\beta \cos \theta)]$, where $\beta$ is the bulk speed of the plasma moving along the jet (in units of $c$), $\Gamma_{\rm b} = [1-\beta^2]^{-1/2}$ is the bulk Lorentz factor, and $\theta$ is the angle between the {jet axis and the line of sight}. The source size $R$ can be approximately constrained by {the observed variability (flux doubling) timescale in the LAT band, $t_{\rm var} \simeq$ a few months, to be $R < c t_{\rm var} \delta \lesssim 10^{18}$ cm for the expected $\delta \leq 10$.} The magnetic field $B$ can be obtained from the ratio of the synchrotron and SSC luminosities, $L_{\rm sync}/L_{\rm SSC} \simeq U_B/U_{\rm rad}$, where $U_B = B^2/(8 \pi)$ is the magnetic energy density and $U_{\rm rad}$ is the synchrotron radiation energy density \citep{1979rpa..book.....R}. \subsubsection{Epoch A} The left panel of Figure~\ref{fig:SSC} shows the multi-wavelength $\nu F_{\nu}$ SED of NGC 1275 obtained with the radio-to-high-energy $\gamma$-ray data, including the LAT data for the {quiescent intervals (MJD 54683--54865, 55061--55369) and the flaring intervals (flare 1: MJD 54865--55061, flare 2: MJD 55369--55607)} {described in Section~\ref{sec:resolvedSPEC}}. In the radio band, RATAN 600 \citep{2009ApJ...699...31A}, MOJAVE \citep{2009ApJ...699...31A}, and archival NED (NASA/IPAC Extragalactic Database) data were used. We used the same radio data in {all} of the quiescent and flaring intervals because the radio emission is considered to be considerably less variable than the $\gamma$-ray emission \citep{2014MNRAS.442.2048D}. In the optical/UV band, MITSuME \citep{2009ApJ...699...31A}, {\it Swift}-UVOT, and NED data were used. As the optical emission is contaminated by the host galaxy, the optical data do not contribute to the SSC fitting.
The RATAN 600, MOJAVE, and MITSuME data are contemporaneous with the LAT quiescent data in 2008, and the {\it Swift}-UVOT data {were} obtained from an observation in 2007. The data in the X-ray band, such as {\it Chandra} {(this work; MJD 55160--55167)} and {\it Swift}-BAT \citep{2009ApJ...690..367A}, correspond to the quiescent state of NGC 1275. The VHE data are derived from the MAGIC observations from 2009 to 2010 \citep{2014A&A...564A...5A}. We fit the SED with the one-zone SSC model using the observational data from the quiescent and flaring $\gamma$-ray flux states. {The overall trend of the SEDs is adequately represented by the one-zone SSC model both in the quiescent and flaring intervals, as shown in Figure~\ref{fig:SSC}; however, a detailed comparison suggests significant deviation between the data and the model, especially in the soft X-ray band. A similar discrepancy can be seen in Figure 7 of \cite{2009ApJ...699...31A}, and that paper therefore considered a more complicated, decelerating flow model \citep{2003ApJ...594L..27G} to obtain a better fit to the data. In reality, not only the velocity gradient but also other physical parameters, such as the magnetic field strength, the jet cross section, and even the electron spectrum itself, may vary simultaneously along the jet \citep[see, for example,][]{1980ApJ...235..386M}. Nevertheless, the one-zone SSC model is a rough but useful way to investigate the origin of the SED evolution without introducing additional model complexity \citep[e.g.,][]{1997A&A...320...19M}.
Table \ref{table:SSCparam} reports the parameters obtained from our SSC model.} The derived physical parameters for the quiescent interval such as the {source radius of $R = 0.8 \times 10^{18}$ cm, the magnetic field of $B = 0.04$ G, the electron density of $K \sim 45 \ \rm cm^{-3}$, the electron spectrum slope of $n = 2.6$ and the Doppler factor of $\delta = 2.7$ are almost the same as those for the flaring intervals}, and the values of the magnetic field are in the typical range found for BL Lacs \citep{2010MNRAS.401.1570T}. However, we changed the {maximum Lorentz factor from $\gamma_{\rm max} = 2.5 \times 10^5$ (quiescent interval) to $\gamma_{\rm max} = 4.0 \times 10^5$ (flare 1) and $\gamma_{\rm max} = 3.5 \times 10^5$ (flare 2).} {We additionally changed the break Lorentz factor from $\gamma_{\rm brk} = 0.8 \times 10^5$ (quiescent interval) to $\gamma_{\rm brk} = 1.0 \times 10^5$ (flare 1) and $\gamma_{\rm brk} = 1.8 \times 10^5$ (flare 2).} This SSC fitting indicates that the flux variation between the quiescent and flaring states is explained by changing only the electron Lorentz factor parameters that are related to the acceleration of electrons, which is consistent with our hypothesis for the $\gamma$-ray flux changes during epoch A. The highest-energy photon of $\epsilon_{\rm max}=222$ GeV was detected {in the flare 2 interval} in epoch A as described in Section~\ref{sec:EMAX}. This photon is {considered to originate} from scattering in the Klein-Nishina (KN) regime, because the energy of the seed photon in the rest frame of the relativistic electron is larger than 511 keV. 
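Since in the KN regime the up-scattered photon energy saturates at roughly $\epsilon_{\rm max} \sim \gamma_{\rm max}\,\delta\, m_{\rm e}c^{2}$, the 222 GeV photon yields a quick order-of-magnitude estimate of the maximum electron Lorentz factor; a numerical sketch (the Doppler factor $\delta = 2.7$ is the value from the SSC fit):

```python
# Order-of-magnitude estimate of the maximum electron Lorentz factor from
# the highest-energy detected photon, using eps_max ~ gamma_max * delta * m_e c^2
# (inverse-Compton scattering in the Klein-Nishina regime).
MEC2_EV = 511e3        # electron rest energy m_e c^2 in eV
eps_max_ev = 222e9     # highest-energy LAT photon: 222 GeV, in eV
delta = 2.7            # Doppler factor from the SSC fit

gamma_max = eps_max_ev / (delta * MEC2_EV)
print(f"gamma_max ~ {gamma_max:.1e}")  # ~1.6e5
```

This reproduces the $\gamma_{\rm max} \sim 1.6 \times 10^5$ quoted in the text, a factor of $\sim 2$ below the value returned by the SSC fit for the flare 2 interval.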
As the energy of the scattered photon in the KN regime is given by $\epsilon_{\rm max} \sim m_{\rm e} c^2 \gamma_{\rm max} \delta$ \citep{1998ApJ...509..608T}, we can estimate the maximum electron Lorentz factor to be $\gamma_{\rm max} \sim 1.6 \times 10^5$, which is {smaller than} the result from the SSC fitting {($\gamma_{\rm max} = 3.5 \times 10^5$ in the flare 2 interval).} {However, we note that the MAGIC observations detected higher-energy photons of $\sim 650$ GeV \citep{2014A&A...564A...5A}, which implies a larger maximum electron Lorentz factor.} \subsubsection{Epoch B} The LAT data for the quiescent interval (MJD 55607--56278) and the flaring interval {(flare 3: MJD 56503--57371)} are plotted in the right panel of Figure~\ref{fig:SSC}. For epoch B, we also plot the {\it NuSTAR} data {on 2015 November 3} (this work), which correspond to {MJD 57329 in} the $\gamma$-ray flaring interval in epoch B. The other data are the same as described for epoch A. We fit the one-zone SSC model to the SED of the $\gamma$-ray quiescent and flaring states, and the best-fit parameter values are shown in Table \ref{table:SSCparam}. Moreover, we can confirm that the SED data of NGC 1275 for epoch B are well represented by the one-zone SSC model, as shown in Figure~\ref{fig:SSC}. In particular, the fit in the flaring interval is consistent with the {\it NuSTAR} data, which suggests that the X-ray variability component is the same as in the $\gamma$ rays (i.e., originating in the jet). The derived physical parameters for epoch B, such as the magnetic field of $B = 0.04$ G, the electron spectrum slope of $n=2.6$, and the maximum electron Lorentz factor of $\gamma_{\rm max} = 1.0 \times 10^5$ (which is smaller than that used to fit the data in epoch A), are unchanged between the quiescent and flaring intervals. Meanwhile, the Doppler factor of the flaring interval is $\delta =3.6$, which is larger than that during the quiescent interval, $\delta = 2.7$.
In addition, the electron density changed from $K = 48 \ \rm{cm^{-3}}$ (quiescent interval) to $K = 270 \ \rm{cm^{-3}}$ {(flare 3)}, and the source radius changed from $R = 1.0 \times 10^{18} ~\rm cm$ (quiescent interval) to $R = 0.4 \times 10^{18} ~\rm cm$ {(flare 3)}, which indicates that the physical parameters of the blobs, such as the Doppler factor $\delta$ and the viewing angle $\theta$, in the jet are not the same for the two intervals. {Interestingly, the overall SED data cannot be fitted solely by changing the bulk Lorentz factor from $\Gamma_{\rm b} = 2.0$ (quiescent interval) to $\Gamma_{\rm b} = 3.3$ {(flare 3)}. The fit also requires the jet-viewing angle to be changed from $\theta= 20\hbox{$^\circ$}$ (quiescent interval) to $\theta= 16\hbox{$^\circ$}$ {(flare 3)}.} This could indicate that the direction of motion of the blob in the jet is closer to the line of sight when the flux increases. {We can therefore assume} that the bright $\gamma$ rays are emitted {in the proximity} of the core (milli-arcsecond scales){,} considering that the VLBI observations \citep{1992A&A...260...33K,1994ApJ...430L..41V,1994ApJ...430L..45W,2006PASJ...58..261A} suggest the jet angle to the line of sight decreases with proximity to the core. Thus, we likely observed different $\gamma$-ray {fluxes} because of changes in Doppler beaming {due to} the changing location of the emission region. From the obtained SSC results, we can suggest that the origins of the flux variations of NGC 1275 are clearly different depending on the observation period.
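The quoted Doppler factors follow directly from the fitted $(\Gamma_{\rm b}, \theta)$ pairs through $\delta = 1/[\Gamma_{\rm b}(1-\beta\cos\theta)]$; a short numerical check:

```python
import math

def doppler_factor(gamma_b, theta_deg):
    """Doppler factor delta = 1 / [Gamma_b * (1 - beta * cos(theta))],
    with beta = v/c derived from the bulk Lorentz factor Gamma_b."""
    beta = math.sqrt(1.0 - 1.0 / gamma_b ** 2)
    return 1.0 / (gamma_b * (1.0 - beta * math.cos(math.radians(theta_deg))))

# epoch B quiescent interval: Gamma_b = 2.0, theta = 20 deg -> delta ~ 2.7
# epoch B flare 3:            Gamma_b = 3.3, theta = 16 deg -> delta ~ 3.6
print(doppler_factor(2.0, 20.0))  # ~2.69
print(doppler_factor(3.3, 16.0))  # ~3.61
```

Decreasing $\theta$ from $20\hbox{$^\circ$}$ to $16\hbox{$^\circ$}$ while raising $\Gamma_{\rm b}$ thus reproduces the jump from $\delta = 2.7$ to $\delta = 3.6$ quoted above.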
Although the jet-viewing angle parameters are small ($\theta \sim 20\hbox{$^\circ$}$) in both epochs compared with $\theta = 30-55\hbox{$^\circ$}$ obtained by VLBI radio observations, the fitting results {generally support the scenarios we presented here.} \begin{figure*}[] \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.72, bb=10 10 360 252]{MultiSED_former2.eps} \label{subfig:A_SSC} \end{minipage} \begin{minipage}[b]{0.5\linewidth} \centering \includegraphics[scale=0.72, bb=10 10 360 252]{MultiSED_latter.eps} \label{subfig:B_SSC} \end{minipage} \caption{Overall SED of NGC 1275 obtained with multi-wavelength data, using RATAN-600 \citep{2009ApJ...699...31A}, MOJAVE \citep{2009ApJ...699...31A}, MITSuME \citep{2009ApJ...699...31A}, NASA/IPAC Extragalactic Database, {\it Swift}-UVOT \citep{2009ApJ...699...31A}, {\it NuSTAR} (this work), \textit{Fermi}-LAT (this work), {\it Chandra} bow-tie {(this work)}, {\it Swift}-BAT bow-tie \citep{2009ApJ...690..367A}, MAGIC bow-tie \citep{2014A&A...564A...5A}, and Whipple upper limit \citep{2006ApJ...644..148P}. {The quiescent and flaring SEDs are fitted with the one-zone SSC model and denoted by blue dashed line (quiescent intervals), magenta dotted line (flare 1), red solid line (flare 2), and magenta solid line (flare 3).} $\bf{Left \ panel:}$ the SSC model fitting for epoch A. {The blue squares, magenta open triangles, and red open circles represent the LAT data in the quiescent intervals (MJD 54683--54865, 55061--55369), flare 1 (MJD 54865--55061), and flare 2 (MJD 55369--55607), respectively. {The {\it Chandra} data (brown bow-tie), which correspond to MJD 55160--55167 in the $\gamma$-ray quiescent interval, are also plotted.} } $\bf{Right \ panel:}$ the SSC model fitting for epoch B. The blue squares and magenta open circles represent the LAT data in the quiescent interval (MJD 55607--56278) and {flare 3} (MJD 56503--57371), respectively. 
The {\it NuSTAR} data (orange open triangles), which correspond to {MJD 57329 in the $\gamma$-ray flaring interval}, are also plotted.} \label{fig:SSC} \end{figure*} \begin{deluxetable*}{cccccccccccc}[htb] \tablecaption{{The fitted physical} parameters for the SSC model reported in Figure~\ref{fig:SSC}} \tablecolumns{12} \tablewidth{0pt} \tablehead{ \colhead{Epoch} & \colhead{State} & \colhead{$R$ [cm]} & \colhead{$B$ [G]} & \colhead{$K$ $[\rm cm^{-3}]$} & \colhead{$n$} & \colhead{$\gamma_{\rm min}$} & \colhead{$\gamma_{\rm brk}$} & \colhead{$\gamma_{\rm max}$} & \colhead{$\delta$} & \colhead{$\Gamma_{\rm b}$} & \colhead{$\theta$ $[^{\circ}]$} } \startdata A & Quiescent & $0.8 \times 10^{18}$ & 0.04 & 45 & 2.6 & 10.0 & $0.8 \times 10^5$ & $2.5 \times 10^5$ & 2.7 & 2.0 & 20 \\ & Flare 1 & $0.7 \times 10^{18}$ & 0.04 & 50 & 2.6 & 10.0 & $1.0 \times 10^5$ & $4.0 \times 10^5$ & 2.7 & 2.0 & 20\\ & Flare 2 & $0.6 \times 10^{18}$ & 0.04 & 50 & 2.6 & 10.0 & $1.8 \times 10^5$ & $3.5 \times 10^5$ & 2.7 & 2.0 & 20\\ \hline B & Quiescent & $1.0 \times 10^{18}$ & 0.04 & 48 & 2.6 & 10.0 & $0.8 \times 10^5$ & $1.0 \times 10^5$ & 2.7 & 2.0 & 20\\ & Flare 3 & $0.4 \times 10^{18}$ & 0.04 & 270 & 2.6 & 10.0 & $0.7 \times 10^5$ & $1.0 \times 10^5$ & 3.6 & 3.3 & 16\\ \enddata \tablecomments{The obtained physical parameters for the quiescent and flaring states in both epochs A and B. The parameters are the source radius $R$, the magnetic field $B$, the electron density $K$, the electron spectrum slope $n$, the minimum electron Lorentz factor $\gamma_{\rm min}$, the break Lorentz factor $\gamma_{\rm brk}$, the maximum electron Lorentz factor $\gamma_{\rm max}$, the Doppler factor $\delta$, the bulk Lorentz factor $\Gamma_{\rm b}$, and the angle between the {jet axis and the line of sight}, $\theta$.} \label{table:SSCparam} \end{deluxetable*} \section{Conclusions} \label{sec:conclusion} We presented an analysis of 8 years of $\textit{Fermi}$-LAT data for the nearby radio galaxy NGC 1275.
The LAT spectrum accumulated over 8 years is best described by a power law with {a sub-exponential cutoff}, with {a photon index $\Gamma = 1.93 \pm 0.01$ and a} cut-off energy $E_{\rm c} = 12.0 \pm 1.7$ GeV. This is consistent with the result in the VHE band ($\sim$ 65--650 GeV) from MAGIC observations \citep{2014A&A...564A...5A}. Based on positional coincidence, we found that the highest-energy photon during the {8 years, with $E \sim 222$ GeV, has a} $>99\%$ probability of association with NGC 1275. We analyzed the variations in the LAT light curve over the 8-year timespan and found that the correlation between the $\gamma$-ray flux and photon index changed around MJD 55607. In epoch A (MJD $<$ 55607), the emission from NGC 1275 is {interpreted as due to the injection of high-energy electrons into the jet}. On the other hand, there is no apparent correlation in epoch B (MJD $>$ 55607), despite larger flares being observed than in epoch A. To explain these evidently different behaviors, we suggested different scenarios for the two epochs, with the flux variations due to acceleration of the electrons during epoch A, and due to variations of the Doppler factor and/or the electron density during epoch B. In order to verify these hypotheses, we fit the overall SED data with one-zone SSC models for flaring and quiescent time intervals during each epoch. {The simultaneous observations of {\it Chandra} and {\it NuSTAR} can help us to obtain more accurate parameters.} The SSC fitting for epoch A requires changing the maximum Lorentz factor from $\gamma_{\rm max} = 2.5 \times 10^5$ (quiescent interval) to {$\gamma_{\rm max} = 4.0 \times 10^5$ (flare 1) and $\gamma_{\rm max} = 3.5 \times 10^5$ (flare 2).} Meanwhile, the flares in epoch B may be caused by variation of the Doppler factor from $\delta = 2.7$ (quiescent interval) to $\delta =3.6$ {(flare 3)}, which is interpreted as being due to changes of the bulk Lorentz factor and the angle between the blob velocity and the line of sight.
Although the jet-viewing angle parameter is small ($\theta \sim 20\hbox{$^\circ$}$) in both epochs compared with the VLBI radio observations \citep{1994ApJ...430L..41V,1994ApJ...430L..45W,2006PASJ...58..261A,2017MNRAS.465L..94F}, the fitting results support our scenarios. In particular, for epoch B, the fitting requires a change of the jet-viewing angle from 20\hbox{$^\circ$}\ (quiescent interval) to 16\hbox{$^\circ$}\ {(flare 3)}, which indicates that the direction of motion of the blob in the jet is closer to the line of sight when the flux increases, because of the relativistic beaming effect. Previous reports of some curvature of the jet away from the core \citep{2006MNRAS.366..758D,2012ApJ...746..140S} support this relationship between the jet-viewing angle and the flux increase. Although we considered a scenario with only one emission zone in this study, the emission region and the radiation mechanism may be more complicated. In fact, a few radio-emitting jet components, known as C1 and C3 (moving to the south), exist near the nucleus \citep{2012MNRAS.423L.122N}. Moreover, \cite{J. A. Hodgson} found that some of the $\gamma$-ray emission likely originates in the C3 region, while short-timescale variability may be better correlated with the C1 mm-radio emission, suggesting multiple simultaneous sites of $\gamma$-ray emission within the same source. Hence, we suggest that ultimately a multi-zone study might be justified when more multi-wavelength data are considered. { The multi-zone internal shock scenario involves sequential ejections of many blobs with various emission-region sizes, inducing multiple collisions at various distances from the core and a series of flares \citep{2001ApJ...560..659K,2001MNRAS.325.1559S,2003ApJ...584..153T}. In particular, the different correlations between the HR and flux during the flares in epoch A may indicate multiple emission zones in the jet.
The observed fluxes are then sums over the multiple zones; however, the spectral behavior may change due to the emission from the different dominating regions where the electrons are injected. } In conclusion, we suggest that the origins of the flux variations of NGC 1275 are different for different epochs. This result is derived from the analysis of the 8 years of \textit{Fermi}-LAT data, which included both flaring and quiescent states of $\gamma$-ray emission. It is possible that these findings are applicable to other FR-I AGNs, and we will report on investigations of long-term observations of other objects in the future. \acknowledgments The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \`a l'Energie Atomique and the Centre National de la Recherche Scientifique / Institut National de Physique Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. Additional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France. This work was performed in part under DOE Contract DE-AC02-76SF00515. Work by C.C.C. at NRL is supported in part by NASA DPR S-15633-Y. This work was supported by JSPS KAKENHI Grant Number JP17H06362 (M.A.). M.A.
acknowledges support from the JSPS Leading Initiative for Excellent Young Researchers program. \software { HEASoft (v6.19), Fermi Science Tools, nupipeline (v0.4.5) } \include{99_bibliography} \end{document}
\section{Introduction} \label{section 1} Let $(u_n)_{n\geq 0}$ be an integral linear recurrence, that is, $(u_n)_{n\geq 0}$ is a sequence of integers and there exist $a_1, \dots, a_k\in\mathbb{Z}$, with $a_k\neq 0$, such that $$u_{n}=a_{1}u_{n-1}+\cdots+a_{k}u_{n-k},$$ for all integers $n\geq k$, with $k$ a fixed positive integer. We recall that $(u_n)_{n\geq 0}$ is said to be non-degenerate if none of the ratios $\alpha_{i}/\alpha_{j}$ $(i \neq j)$ is a root of unity, where $\alpha_{1}, \dots,\alpha_{r}\in\mathbb{C}$ are all the pairwise distinct roots of the characteristic polynomial $$f_{u}(X)=X^{k}-a_{1}X^{k-1}-\cdots-a_{k}.$$ Moreover, $(u_n)_{n\geq 0}$ is said to be a Lucas sequence if $u_0=0$, $u_1=1$, and $k=2$. We note that the Lucas sequence with $a_1=a_2=1$ is known as the Fibonacci sequence. We refer the reader to \cite[Chapter 1]{EPSW} for the basic terminology and theory of linear recurrences. The function $g_{u}(n):=\gcd(n,u_n)$ has attracted the interest of several authors. For example, the set of fixed points of $g_{u}(n)$, or equivalently the set of positive integers $n$ such that $n\mid u_n$, has been studied by Alba~Gonz\'alez, Luca, Pomerance, and Shparlinski \cite{ALPS}, under the mild hypotheses that $(u_{n})_{n\geq 0}$ is non-degenerate and that its characteristic polynomial has only simple roots. Moreover, this problem has also been studied by Andr\'e-Jeannin \cite{J}, Luca and Tron \cite{LT}, Sanna \cite{S2}, Smyth \cite{SM}, and Somer \cite{SO}, when $(u_{n})_{n\geq 0}$ is a Lucas sequence or the Fibonacci sequence. On the other hand, Sanna and Tron \cite{S3, ST} have analysed the fiber $g_{u}^{-1}(y)$, when $(u_{n})_{n\geq 0}$ is non-degenerate and $y=1$, and when $(u_{n})_{n\geq 0}$ is the Fibonacci sequence and $y$ is an arbitrary positive integer. Moreover, the image $g_{u}(\mathbb{N})$ has been investigated by Leonetti and Sanna \cite{LS}, again when $(u_{n})_{n\geq 0}$ is the Fibonacci sequence.
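For concreteness, the function $g_u(n)$ and its fixed points are easy to compute numerically in the Fibonacci case; a minimal sketch (the helper names below are ours):

```python
from math import gcd

def lucas_sequence(a1, a2, length):
    """First `length` terms of the Lucas sequence with u_0 = 0, u_1 = 1
    and recurrence u_n = a1*u_{n-1} + a2*u_{n-2}."""
    u = [0, 1]
    for _ in range(length - 2):
        u.append(a1 * u[-1] + a2 * u[-2])
    return u

# the Fibonacci sequence is the Lucas sequence with a1 = a2 = 1
fib = lucas_sequence(1, 1, 70)

# g_u(n) = gcd(n, u_n)
g = {n: gcd(n, fib[n]) for n in range(1, 70)}

# fixed points of g_u, i.e. the n with n | u_n
fixed = [n for n in range(1, 70) if fib[n] % n == 0]
print(fixed)  # [1, 5, 12, 24, 25, 36, 48, 60]
```

For instance, $g_u(10) = \gcd(10, 55) = 5$, while $n = 12$ is a fixed point since $12 \mid u_{12} = 144$.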
Other important questions about the function $g_{u}(n)$ are related to its behaviour on average and its distribution as an arithmetic function. From now on, we focus on the specific case in which $(u_n)_{n \geq 0}$ is a non-degenerate Lucas sequence with non-zero discriminant $\Delta_u = a_1^2 + 4a_2$. Otherwise, the sequence reduces to $u_n=n\alpha^{n-1}$, for a suitable $\alpha\in\mathbb{Z}$, and $g_u(n)=n$, for every positive integer $n$. Even in this particular situation, it is very difficult to find information on the distribution of $g_{u}(n)$, because of its oscillatory behaviour. For this reason, it is natural to consider the flatter function $\log(g_{u}(n))$, for which an asymptotic formula for its mean value, and more generally for its moments, has been given by Sanna, who proved the following theorem \cite[Theorem 1.1]{S}. \begin{thm} \label{thm 1.1} Fix a positive integer $\lambda$ and some $\varepsilon>0$. Then, for all sufficiently large $x$ (how large depending on $a_1$, $a_2$, $\lambda$, and $\varepsilon$), we have \begin{equation} \label{eq: 1.1} \sum_{n\leq x}(\log g_{u}(n))^{\lambda}=M_{u,\lambda}x+E_{u,\lambda}(x), \end{equation} where $M_{u,\lambda}>0$ is a constant depending on $a_1$, $a_2$, and $\lambda$, and the error term is bounded by $$E_{u, \lambda}(x)\ll_{u,\lambda}x^{(1+3\lambda)/(2+3\lambda)+\varepsilon}.$$ \end{thm} Also, Sanna showed that the constant $M_{u,\lambda}$ can be expressed as a convergent series. An immediate consequence of the previous result is the following information about the distribution of $g_{u}$ \cite[Corollary 1.3]{S}. \begin{cor} \label{cor 1.2} For each positive integer $\lambda$, we have \begin{equation} \label{eq: 1.2} \#\{n\leq x: g_{u}(n)>y\}\ll_{u,\lambda}\frac{x}{(\log y)^{\lambda}}, \end{equation} for all $x,y>1$. \end{cor} In the same article, Sanna raised the question of finding an asymptotic formula for the moments of the function $g_{u}(n)$ itself.
We are not able to answer this apparently difficult question, but we can at least give a non-trivial estimate. The result is the following. \begin{thm} \label{thm 1.3} For every integer $k\geq 1$ and every non-degenerate Lucas sequence $(u_n)_{n\geq 0}$, we have \begin{equation} \label{eq: 1.3} \sum_{n\leq x} g_u(n)^{k}\leq x^{k+1}\exp\left(-\left(1+o(1)\right)\sqrt{(\log x)(\log \log x)}\right), \end{equation} as $x$ tends to infinity, where the $o(1)$ depends on $u$ and $k$. \end{thm} For each positive integer $m$ relatively prime to $a_2$, let $z_u(m)$ be the rank of appearance of $m$ in the Lucas sequence $(u_n)_{n\geq 0}$, that is, the smallest positive integer $n$ such that $m$ divides $u_n$. It is well known that $z_u(m)$ exists (see, e.g., \cite{R}). Also, put $\ell_u(m) :=\mathrm{lcm}(m, z_u(m))$. There is a simple trick to relate the moments of $g_u(n)$ to the rate of convergence of the series $\sum_{m>x, (m,a_2)=1}1/ \ell_u(m)$, which has been partially studied by several authors. We will deduce a slightly weaker version of Theorem \ref{thm 1.3}, in which the constant in the exponential is replaced by $-1/\sqrt{6}+\varepsilon+o(1)$, for every $\varepsilon>0$, from it and the following bound. \begin{prop} \label{prop 1.4} For every non-degenerate Lucas sequence $(u_n)_{n\geq 0}$, we have \begin{equation} \label{eq: 1.4} \sum_{\substack{m>x \\(m,a_2)=1}}\frac{1}{\ell_u(m)}\leq\exp(-(1/ \sqrt{6}-\varepsilon+o(1))\sqrt{(\log x)(\log \log x)}), \end{equation} when $x$ is large in terms of $\varepsilon$, where the $o(1)$ depends on $u$. \end{prop} In the proof of Proposition \ref{prop 1.4} we highlight a method, based essentially on the distribution of smooth numbers, to achieve the above bound. It seems reasonable to think that a deeper analysis of the structure of $\ell_u(n)$ could lead to a better understanding of the behaviour of $\sum_{\substack{m>x,(m,a_2)=1}}1/\ell_u(m)$ and consequently to an improvement of the result about the moments of $g_u(n)$.
Nevertheless, using a completely different and more direct approach that we will describe later, we can obtain the stronger estimate stated in Theorem \ref{thm 1.3}. It is immediate to deduce from Theorem \ref{thm 1.3} the following improvement on the distribution of $g_{u}(n)$, at least when $y$ varies in a certain range. \begin{cor} \label{cor 1.5} We have \begin{equation} \label{eq: 1.5} \#\{ n\leq x : g_u(n)>y\}\leq \frac{x^{2}}{y\exp((1+o_{u}(1))\sqrt{(\log x)(\log \log x)})}, \end{equation} for every $y\geq 1$, when $x$ is sufficiently large. \end{cor} \begin{proof} By using \eqref{eq: 1.3} with $k=1$, we obtain \begin{equation} \label{eq: 1.6} \#\{ n\leq x : g_u(n) >y\}\leq \sum_{n\leq x}\frac{g_u(n)}{y} \end{equation} $$\leq \frac{x^{2}}{y\exp((1+o_{u}(1))\sqrt{(\log x)(\log \log x)})},$$ for every $y\geq 1$. \end{proof} We observe that this is an improvement of \eqref{eq: 1.2} only for certain values of $y$, e.g., for those satisfying \begin{equation} \label{eq: 1.7} x\exp(-(1/2+o_{u}(1))\sqrt{(\log x)(\log \log x)})\leq y\leq x. \end{equation} Consider now the multiplicative function $L_u(n)$ such that $L_u(p^{k})=\ell_u(p^{k})$, for every prime number $p\nmid a_2$ and every exponent $k\geq 1$, and $L_u(p^{k})=p^{k}$ otherwise. Using arguments coming from the theory of Dirichlet series of multiplicative functions, we end up with the following estimate. \begin{prop} \label{prop 1.6} For every non-degenerate Lucas sequence $(u_n)_{n\geq 0}$, we have \begin{equation} \label{eq: 1.8} \sum_{\substack{n>x}}\frac{1}{L_u(n)}\ll_u x^{-1/3+\varepsilon}, \end{equation} for every $\varepsilon>0$, when $x$ is sufficiently large with respect to $\varepsilon$. \end{prop} The above result shows that the lack of multiplicativity of $\ell_u(n)$ is the principal cause of the weaker upper bound in \eqref{eq: 1.4}.
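A small computation in the Fibonacci case illustrates the rank of appearance $z_u$, the function $\ell_u$, and the failure of multiplicativity of $\ell_u$ alluded to above (a sketch; function names are ours):

```python
from math import gcd

def z_fib(m):
    """Rank of appearance: the smallest n >= 1 with m | F_n.
    Iterates the Fibonacci recurrence modulo m; here a2 = 1, so every
    m is coprime to a2 and z always exists."""
    a, b, n = 0, 1, 0
    while True:
        n += 1
        a, b = b, (a + b) % m  # after the update, a = F_n mod m
        if a == 0:
            return n

def lcm(a, b):
    return a * b // gcd(a, b)

def ell_fib(m):
    """ell_u(m) = lcm(m, z_u(m))."""
    return lcm(m, z_fib(m))

print(z_fib(10))               # 15, since F_15 = 610 is the first multiple of 10
print(ell_fib(2), ell_fib(3))  # 6 and 12
print(ell_fib(6))              # 12 = lcm(ell(2), ell(3)), NOT ell(2)*ell(3) = 72
```

The last line shows that $\ell_u$ satisfies $\ell_u([m,n]) = [\ell_u(m), \ell_u(n)]$ rather than full multiplicativity, which is exactly the obstruction discussed above.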
\section{Notations} For a couple of real functions $f(x), g(x)$, with $g(x)>0$, we write $f(x)=O(g(x))$ or $f(x)\ll g(x)$ to mean that there exists an absolute constant $c>0$ such that $|f(x)|\leq cg(x)$, for $x$ sufficiently large. When the implicit constant $c$ depends on a parameter $\alpha$, we indicate the above bound with $f(x)\ll_{\alpha} g(x)$ or equivalently with $f(x)=O_{\alpha}(g(x))$. Throughout, the letter $p$ is reserved for a prime number. We write $(a,b)$ and $[a,b]$ to denote the greatest common divisor and the least common multiple of integers $a,b$. As usual, we denote by $\lfloor w\rfloor$ the integer part of a real number $w$ and we indicate with $P(n)$ the greatest prime factor of a positive integer $n$. \section{Preliminaries} We begin by recalling the definition of Jordan's totient function. \begin{defi} \label{def 3.1} Jordan's totient function of degree $k$ is defined as $$J_{k}(n)=n^{k}\prod_{p\mid n}\left(1-\frac{1}{p^{k}}\right),$$ for every $k\geq 1$ and every positive integer $n$. \end{defi} Clearly, $J_{1}(n)=\varphi(n)$, Euler's totient function, and it is immediate to see that $J_k(n)$ satisfies the following identity. \begin{lem} We have \begin{equation} \label{eq: 3.1} n^{k}=\sum_{d\mid n}J_{k}(d), \end{equation} for every $k\geq 1$ and every positive integer $n$. \end{lem} The next lemma summarizes some basic properties of $\ell_u(n)$ and $z_u(n)$, which we will use implicitly later without further mention. \begin{lem} \label{lem 3.2} For all positive integers $m$, $n$ and all odd prime numbers $p$, we have: \begin{enumerate} \item $m \mid u_n$ if and only if $z_u(m) \mid n$ and $(m,a_2)=1$. \item $z_u([m, n]) = [z_u(m), z_u(n)]$, whenever $(mn,a_2)=1$. \item $m \mid \gcd(n, u_n)$ if and only if $(m, a_2)=1$ and $\ell_u(m) \mid n$. \item $\ell_u([m, n]) = [\ell_u(m), \ell_u(n)]$, whenever $(mn,a_2)=1$.
\item $\ell_u(p^{j}) = p^{j} z_u(p)$ if $p\nmid \Delta_u$, and $\ell_u(p^{j}) = p^{j}$ if $p\mid\Delta_u$, for every $p\nmid a_2$ and $j\geq 1$. \item $z_u(p)\mid p\pm 1$ if $p\nmid \Delta_u$, and $z_u(p)=p$ if $p\mid\Delta_u$, for every $p\nmid a_2$. \end{enumerate} \end{lem} For any $\gamma>0$, let us define $$\mathcal{Q}_{u,\gamma}:=\{p: p\nmid a_2,\ z_u(p)\leq p^{\gamma}\}.$$ The following is \cite[Lemma 2.1]{ALPS}. \begin{lem} \label{lem 3.3} For all $x^{\gamma},y\geq 2$ and for any non-degenerate Lucas sequence $(u_n)_{n\geq 0}$, we have $$\#\{p: z_u(p)\leq y\}\ll_u \frac{y^{2}}{\log y},\ \ \mathcal{Q}_{u,\gamma}(x)\ll_u \frac{x^{2\gamma}}{\gamma\log x}.$$ \end{lem} It has been proven by Sanna and Tron \cite[Lemma 3.2]{ST} that the series $\sum_{(n, a_2)=1}1/\ell_u(n)$ converges. We consider the following identity: \begin{equation} \label{eq: 3.2} \sum_{\substack{n>x \\ (n,a_2)=1}}\frac{1}{\ell_u(n)}=\sum_{\substack{n>x\\ P(n)>y\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}+\sum_{\substack{n>x\\ P(n)\leq y\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}. \end{equation} We note that the first sum on the right-hand side of \eqref{eq: 3.2} has already been investigated by Sanna \cite[Lemma 2.5]{S}, and we report here the result which he obtained. \begin{prop} \label{prop 3.4} We have \begin{equation} \label{eq: 3.3} \sum_{\substack{(m,a_2)=1\\P(m)>y}}\frac{1}{\ell_u(m)}\ll_u\frac{1}{y^{1/3-\varepsilon}}, \end{equation} for all $\varepsilon\in(0, 1/4]$ and $y\gg_{u,\varepsilon}1.$ \end{prop} Regarding the second sum on the right-hand side of \eqref{eq: 3.2}, we provide an estimate in the next lemma. \begin{lem} \label{lem 3.5} Supposing that $y>(\log x)^{2}$ and that $v=\log x/\log y$ tends to infinity as $x$ tends to infinity, we have \begin{equation} \label{eq: 3.4} \sum_{\substack{n>x\\ P(n)\leq y\\ (n, a_2)=1}}\frac{1}{\ell_u(n)}\ll_u (\log y)e^{-\sqrt{y}/(2\log y)}+\frac{\log y}{\log v}e^{-v\log v}.
\end{equation} \end{lem} \begin{proof} Since $\ell_u(n)\geq n$, we may write \begin{equation*} \sum_{\substack{n>x\\ P(n)\leq y\\ (n, a_2)=1}}\frac{1}{\ell_u(n)}\leq \int_{x}^{\infty}\frac{d\psi(t,y)}{t}, \end{equation*} where $\psi(t,y)$ is the counting function of the $y$-smooth numbers not exceeding $t$. Clearly, we have \begin{equation} \label{eq: 3.5} \int_{x}^{\infty}\frac{d\psi(t,y)}{t}=\frac{\psi(t,y)}{t}\bigg|_{x}^{\infty}+\int_{x}^{\infty}\frac{\psi(t,y)}{t^{2}}dt. \end{equation} To estimate the second term on the right-hand side of \eqref{eq: 3.5} we suppose first that $y>(\log x)^{2}$ and then we split it into two parts: \begin{equation*} \int_{x}^{\infty}\frac{\psi(t,y)}{t^{2}}dt=\int_{x}^{z}\frac{\psi(t,y)}{t^{2}}dt+\int_{z}^{\infty}\frac{\psi(t,y)}{t^{2}}dt, \end{equation*} where we put $z=e^{\sqrt{y}}$. Using the estimate \cite[Theorem 1, \S 5.1, Chapter III]{T} \begin{equation} \label{eq: 3.6} \psi(t,y)\ll te^{-\log t/(2\log y)}=t^{1-1/(2\log y)}, \end{equation} valid uniformly for $t\geq y\geq 2$, we obtain \begin{equation} \label{eq: 3.7} \int_{z}^{\infty}\frac{\psi(t,y)}{t^{2}}dt\ll \int_{z}^{\infty}t^{-1-1/(2\log y)}dt\ll (\log y) z^{-1/(2\log y)}=(\log y)\exp\left(-\frac{\sqrt{y}}{2\log y}\right). \end{equation} By the Corollary of Theorem 3.1 in \cite{CEP}, we know that $$\psi(t,y)\leq t\exp\left(-(1+o(1))\frac{\log t}{\log y}\log\left(\frac{\log t}{\log y}\right)\right),$$ in the region $y>\log^{2}t$. Here the $o(1)$ is with respect to $\log t/\log y\rightarrow \infty$. If $v=\log x/ \log y$ tends to infinity as $x$ tends to infinity, then we may use the simpler bound \begin{equation} \label{eq: 3.8} \psi(t,y)\leq t\exp\left(-\frac{\log t}{\log y}\log\left(\frac{\log t}{\log y}\right)\right), \end{equation} for any $x\leq t\leq z$. Note that equation \eqref{eq: 3.8} also follows from the aforementioned Corollary in \cite{CEP}. We assume from now on that we are in this situation.
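As a side remark, not needed for the argument, the counting function $\psi(t,y)$ can be computed by brute force for small parameters. The following Python sketch is an illustration only (the helper names are ours); it enumerates the $y$-smooth integers up to $t$ directly.

```python
def greatest_prime_factor(n):
    """Return P(n), the greatest prime factor of n (with the convention P(1) = 1)."""
    g, p = 1, 2
    while p * p <= n:
        while n % p == 0:
            g, n = p, n // p
        p += 1
    return n if n > 1 else g

def psi(t, y):
    """psi(t, y): number of y-smooth integers in [1, t]."""
    return sum(1 for n in range(1, t + 1) if greatest_prime_factor(n) <= y)

# For instance, psi(100, 5) counts the integers up to 100 of the form 2^a * 3^b * 5^c.
print(psi(100, 5))
```

Such brute-force values can be compared with the upper bound $t^{1-1/(2\log y)}$ for small ranges, though of course the asymptotic content of the bound only appears for large parameters.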
Now, inserting this bound and using the change of variable $s=\log t$, we get $$\int_{x}^{z}\frac{\psi(t,y)}{t^{2}}dt\leq \int_{\log x}^{\sqrt{y}}\exp\left(-\frac{s}{\log y}\log\left(\frac{s}{\log y}\right)\right)ds,$$ which after another change of variable $s=w\log y$ becomes $$(\log y)\int_{\log x/\log y}^{\sqrt{y}/\log y}\exp(-w\log w)dw.$$ Using that $w\geq v$ and putting $w\log v=r$, we find \begin{equation} \label{eq: 3.9} \int_{x}^{z}\frac{\psi(t,y)}{t^{2}}dt\leq \frac{\log y}{\log v}\int_{v\log v}^{\sqrt{y}\log v/\log y}e^{-r}dr\leq \frac{\log y}{\log v}e^{-v\log v}. \end{equation} Regarding the first term on the right-hand side of \eqref{eq: 3.5}, we note that $$\frac{\psi(t,y)}{t}\bigg|_{x}^{\infty}\leq \lim_{t\rightarrow\infty}\frac{\psi(t,y)}{t}\ll \lim_{t\rightarrow\infty}t^{-1/(2\log y)}=0,$$ by \eqref{eq: 3.6}. Collecting the results, we obtain the estimate \eqref{eq: 3.4}. \end{proof} Finally, we can deduce the stated estimate on $\sum_{n>x}1/\ell_u(n)$. \begin{proof}[Proof of Proposition \ref{prop 1.4}] By Proposition \ref{prop 3.4} and Lemma \ref{lem 3.5} we conclude that $$\sum_{\substack{n>x\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}\ll_u \frac{1}{y^{1/3-\varepsilon}}+\frac{\log y}{\log v}e^{-v\log v},$$ for every $\varepsilon>0$, if $y$ is sufficiently large in terms of $\varepsilon$. It is easy to see that the best choice for $y$ is of the form $y=\exp(C\sqrt{(\log x)(\log\log x)})$, with $C$ a suitable positive constant to be chosen later. After some straightforward computations, we obtain $$\sum_{\substack{n>x\\(n,a_2)=1}}\frac{1}{\ell_u(n)}\ll_u \exp\left(-C(1/3-\varepsilon)\sqrt{(\log x)(\log \log x)}\right)$$ $$+\exp\left(-\frac{1}{2C} (1-o(1))\sqrt{(\log x)(\log \log x)}\right),$$ where the $o(1)$ tends to zero from the right as $x$ goes to infinity.
Now, choosing $C=1/\sqrt{2(1/3-\varepsilon)}$, we see that $$\sum_{\substack{n>x\\ (n,a_2)=1}}\frac{1}{\ell_u(n)}\ll_u \exp\left(-\frac{(1-o(1))(1-\varepsilon)}{\sqrt{6}}\sqrt{(\log x)(\log \log x)}\right),$$ for every $\varepsilon>0$ and $x$ sufficiently large with respect to $\varepsilon$. \end{proof} \section{Proof of a weak version of Theorem 1.3} \begin{proof} We start by inserting identity \eqref{eq: 3.1} into our main sum: \begin{equation} \label{eq: 4.1} \sum_{n\leq x}(n, u_{n})^{k}=\sum_{n\leq x}\sum_{\substack{d\mid (n, u_n)}} J_{k}(d)=\sum_{d\leq x}J_{k}(d)\sum_{\substack{n\leq x\\ d\mid (n, u_n)}}1=\sum_{\substack{d\leq x\\ (d,a_2)=1}}J_{k}(d)\sum_{\substack{n\leq x\\ \ell_u(d)\mid n }}1, \end{equation} by part (3) of Lemma \ref{lem 3.2}. Clearly, the last expression in \eqref{eq: 4.1} equals \begin{equation} \label{eq: 4.2} \sum_{\substack{d\leq x\\ (d, a_{2})=1}}J_{k}(d)\bigg\lfloor \frac{x}{\ell_u(d)}\bigg\rfloor\leq x\sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{J_{k}(d)}{\ell_{u}(d)}\leq x\sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}. \end{equation} But now we observe that \begin{equation*} \sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}=\sum_{\substack{d\leq x^\delta \\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}+\sum_{\substack{x^\delta <d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)} \end{equation*} $$\ll x^{k\delta}+x^{k}\sum_{\substack{d>x^\delta\\ (d, a_{2})=1}}\frac{1}{\ell_{u}(d)}$$ $$\ll x^{k}\exp(-(1/\sqrt{6}-\varepsilon+o(1))\sqrt{\delta}\sqrt{(\log x)(\log \log x)}),$$ for any $\delta\in (0,1)$, where we used the convergence of the series $\sum_{n}1/\ell_u(n)$ together with the bound \eqref{eq: 1.4}; this holds for any $x$ large in terms of $\delta$ and $\varepsilon$.
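The identity \eqref{eq: 4.1} is exact and can be checked numerically. The following Python sketch is an illustration only (the helper names are ours); it uses the Fibonacci sequence, for which $a_2=1$, so the coprimality condition is vacuous and the rank of apparition $z(d)$ exists for every $d$.

```python
from math import gcd

def fib(n):
    """n-th Fibonacci number (F_0 = 0, F_1 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def z(d):
    """Rank of apparition: least n >= 1 with d | F_n."""
    a, b, n = 1 % d, 1 % d, 1  # F_1 mod d, F_2 mod d
    while a != 0:
        a, b = b, (a + b) % d
        n += 1
    return n

def ell(d):
    """ell(d) = lcm(d, z(d))."""
    return d * z(d) // gcd(d, z(d))

def jordan(k, n):
    """Jordan totient J_k(n) = n^k * prod_{p | n} (1 - p^{-k})."""
    result, m, p = n ** k, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p ** k
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:
        result -= result // m ** k
    return result

# Both sides of (4.1): d | (n, F_n) is equivalent to ell(d) | n.
x, k = 30, 2
lhs = sum(gcd(n, fib(n)) ** k for n in range(1, x + 1))
rhs = sum(jordan(k, d) * (x // ell(d)) for d in range(1, x + 1))
assert lhs == rhs
```

Of course this only confirms the combinatorial rearrangement; the analytic content of the proof lies in the subsequent estimates.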
Now, choosing $\delta$ close to 1 as a function of $\varepsilon$, and since $\varepsilon$ is arbitrary, we find \begin{equation} \label{eq: 4.3} \sum_{\substack{d\leq x\\ (d, a_{2})=1}}\frac{d^{k}}{\ell_{u}(d)}\leq x^{k}\exp(-(1/\sqrt{6}-\varepsilon+o(1))\sqrt{(\log x)(\log \log x)}), \end{equation} where the $o(1)$ depends on $u$ and $k$, and $x$ is chosen large enough with respect to $\varepsilon$. Inserting \eqref{eq: 4.3} in \eqref{eq: 4.2} and \eqref{eq: 4.2} in \eqref{eq: 4.1} completes the proof. \end{proof} \section{Proof of Theorem 1.3} \begin{proof} Let $y:=\exp(\frac{1}{2}\sqrt{(\log x)(\log\log x)})$. We cover the set $\{n: n\leq x\}$ by setting \begin{equation*} \begin{array}{lllll} E_{1}(x)=\{n\leq x: P(n)\nmid u_n\};\\ \\ E_2(x)=\{n\leq x: P(n)\leq y\};\\ \\ E_3(x)=\{n\leq x: P(n)>y^{6},\ P(n)\in Q_{u,1/3}(x)\};\\ \\ E_4(x)=\{n\leq x: P(n)>y^{6},\ P(n)\not\in Q_{u,1/3}(x)\};\\ \\ E_5(x)=\{n\leq x\}\setminus (E_1(x)\cup E_2(x)\cup E_3(x)\cup E_4(x)).\end{array} \end{equation*} Let $S_i=\sum_{n\in E_i(x)}(n, u_n)^{k}$, for every $i\in\{1,2,3,4,5\}$. We note that if $n\in E_1(x)$, then $P(n)\nmid (n,u_n)$, so $(n,u_n)\mid (n/P(n))$ and we deduce that \begin{equation} \label{eq: 5.1} S_1\leq \sum_{n\leq x}\left(\frac{n}{P(n)}\right)^{k}\leq x^{k}\sum_{n\leq x}\frac{1}{P(n)^{k}}\leq x^{k+1}\exp((-\sqrt{2k}+o(1))\sqrt{(\log x) (\log\log x)}), \end{equation} where the last inequality follows by \cite[equation 1.6]{IP}. Moreover, it is immediate to see that \begin{equation*} S_2\leq x^{k}\psi(x,y)\leq x^{k+1}\exp(-(1+o(1))v\log v), \end{equation*} by the Corollary of Theorem 3.1 in \cite{CEP}, where $v=\log x/\log y$ and the $o(1)$ tends to zero as $v$ tends to infinity. We can apply this result because our choice of $y$ is sufficiently large. Notice also that by our choice of $y$ we in fact obtain \begin{equation} \label{5.2} S_2\leq x^{k+1}\exp(-(1+o(1))\sqrt{(\log x)(\log\log x)}), \end{equation} which dominates \eqref{eq: 5.1}.
Regarding the third sum, we simply use $S_3\leq x^{k}\# E_3(x)$. Now, if $n\in E_3(x)$ we can factorize $n=P(n)m$, with $P(n)>y^{6}$ and $P(n)\in Q_{u,1/3}(x)$. This implies that $m<x/y^6$ and that $P(n)\in Q_{u,1/3}(x/m)$. Consequently, \begin{equation*} \#E_3(x)\leq \sum_{m\leq x/y^6}\#Q_{u,1/3}(x/m)\ll x^{2/3}\sum_{m\leq x/y^6}\frac{1}{m^{2/3}}\ll \frac{x}{y^{2}}, \end{equation*} by Lemma \ref{lem 3.3} and a standard final computation. This leads to \begin{equation} \label{5.3} S_3\ll x^{k+1}\exp(-2\log y), \end{equation} which is of the same order of magnitude as \eqref{5.2}. As for the fourth sum, by parts (1) and (6) of Lemma \ref{lem 3.2}, we have that $z_u(P(n))\mid n$ and $z_u(P(n))\mid P(n)\pm 1$; since the latter forces $(z_u(P(n)),P(n))=1$, it follows that $P(n)z_u(P(n))\mid n$. Note that the first two divisibility conditions hold because we may assume that $P(n)$ is odd and $P(n)\nmid a_2 \Delta_u$, since $y$ is large enough. We deduce that \begin{equation*} \#E_4(x)\leq \sum_{\substack{p>y^6 \\ p\not\in Q_{u,1/3}(x)}}\frac{x}{pz_u(p)}\leq \sum_{p>y^6}\frac{x}{p^{4/3}}\ll \frac{x}{y^{2}}, \end{equation*} by a standard computation. Therefore, we find \begin{equation} \label{5.4} S_4\leq x^{k}\# E_4(x)\ll x^{k+1}\exp(-2\log y), \end{equation} which coincides with \eqref{5.3}. We are thus left with the estimate of $S_{5}$. To this aim we closely follow an argument already employed in the proof of \cite[Theorem 2]{ALPS}. For any non-negative integer $j$, let $I_j:=[2^j,2^{j+1})$. We cover $I:=[y,y^{6})$ by these dyadic intervals, and we define $a_j$ via $2^j=y^{a_j}$. We shall assume that the variable $j$ runs over just those integers for which $I_j$ is not disjoint from $I$. For any integer $k$, define $\mathcal{P}_{j,k}$ as the set of primes $p\in I_j$ with $z_u(p)\in I_k$. Note that, by Lemma \ref{lem 3.3}, we have $\#\mathcal{P}_{j,k}\ll 4^k$.
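The divisibility facts from Lemma \ref{lem 3.2}(6) used here, namely $z_u(p)\mid p\pm 1$ for $p\nmid \Delta_u$ and $z_u(p)=p$ for $p\mid\Delta_u$, can be illustrated for the Fibonacci sequence, where $\Delta_u=5$. The following Python sketch is an illustration only (the helper name is ours).

```python
def z(d):
    """Rank of apparition for the Fibonacci sequence: least n >= 1 with d | F_n."""
    a, b, n = 1 % d, 1 % d, 1  # F_1 mod d, F_2 mod d
    while a != 0:
        a, b = b, (a + b) % d
        n += 1
    return n

# Lemma 3.2(6) for Fibonacci (Delta_u = 5): z(p) divides p - 1 or p + 1 when p != 5
for p in [2, 3, 7, 11, 13, 17, 19, 23, 29]:
    assert (p - 1) % z(p) == 0 or (p + 1) % z(p) == 0

assert z(5) == 5  # z(p) = p when p | Delta_u
```

For instance, $z(11)=10$ divides $11-1$, while $z(7)=8$ divides $7+1$; these are exactly the two cases allowed by the lemma.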
We have \begin{equation} \#E_5(x)\leq\sum_j\sum_k\sum_{p\in\mathcal{P}_{j,k}}\sum_{\substack{n\leq x\\P(n)|u_n\\ P(n)=p}}1\leq \sum_j\sum_k\sum_{p\in\mathcal{P}_{j,k}}\psi\left(\frac{x}{pz_u(p)},p\right) \end{equation} $$\leq \sum_j\sum_k\sum_{p\in\mathcal{P}_{j,k}}\frac{x}{pz_u(p)y^{2/a_j+o(1)}},$$ as $x\to\infty$, where we have used the Corollary of Theorem 3.1 in \cite{CEP} for the last estimate. For $k>j/2$, we use the estimate $$ \sum_{p\in\mathcal{P}_{j,k}}\frac1{pz_u(p)}\leq 2^{-k}\sum_{p\in I_j}\frac1p\leq 2^{-k} $$ for $x$ large. For $k\le j/2$, we use the estimate $$ \sum_{p\in\mathcal{P}_{j,k}}\frac1{pz_u(p)}\ll\frac{4^k}{2^j2^k}=2^{k-j}, $$ since, as noted before, there are at most of the order of $4^k$ such primes. Thus, \begin{equation} \sum_k\sum_{p\in\mathcal{P}_{j,k}}\frac{1}{pz_u(p)}= \sum_{k>j/2}\sum_{p\in\mathcal{P}_{j,k}}\frac{1}{pz_u(p)}+\sum_{k\le j/2}\sum_{p\in\mathcal{P}_{j,k}}\frac{1}{pz_u(p)}\ll 2^{-j/2}=y^{-a_j/2}. \end{equation} Collecting the above computations, we find $$ \#E_5(x)\leq\sum_j\frac{x}{y^{a_j/2+2/a_j+o(1)}},\ \textrm{as}\ x\to\infty. $$ Since the minimum value of $t/2+2/t$ for $t>0$ is $2$, occurring at $t=2$, we may affirm that $$\#E_5(x)\leq x/y^{2+o(1)},\ \textrm{as}\ x\to\infty,$$ which leads to an estimate for $S_5$ of the same size as the one for $S_2$. We conclude that $$\max\{S_1,S_2,S_3,S_4,S_5\}\leq x^{k+1}\exp(-(1+o(1))\sqrt{(\log x) (\log\log x)}),$$ proving Theorem \ref{thm 1.3}. \end{proof} \section{The multiplicative analogue of $\ell_u(n)$} Let us define the multiplicative function $L_u(n)$ by $L_u(p^{k})=\ell_u(p^{k})$, for every prime $p\nmid a_2$ and every $k\geq 1$, and $L_u(p^{k})=p^{k}$, otherwise. Now, consider the Dirichlet series of the function $n/L_u(n)$, given by $$\alpha(s)=\sum_{n\geq 1} \frac{n}{n^{s}L_u(n)}.$$ Suppose that it converges for $s>\sigma_{c}$, where $\sigma_c$ is the abscissa of absolute and ordinary convergence of $\alpha(s)$ (the two coincide since the coefficients are positive).
Certainly, since $\ell_u(n)\leq L_u(n)$ for every $n$, and since we know that the series of the reciprocals of $\ell_u(n)$ converges, we have $\sigma_{c}\leq 1$. Then, for any $s\in\mathbb{C}$ with $\Re(s)=\sigma>\sigma_{c}$, the Euler product converges and equals the Dirichlet series. Therefore, we can write $$ \alpha(s)=\prod_{p\nmid 2a_2\Delta_u}\left(1+\sum_{k\geq 1}\frac{f(p^{k})}{p^{ks}}\right)\beta(s), $$ where $f(n)=n/L_u(n)$ and $\beta(s)$ is an analytic function in $\Re(s)>0$. Since by property (5) of Lemma \ref{lem 3.2} we find that $f(p^{k})=1/z_u(p)$, for any $k\geq 1$ and any prime $p\nmid 2a_2\Delta_u$, we have \begin{equation} \label{eq: 6.1} \alpha(s)=\prod_{p\nmid 2a_2\Delta_u}\left(1+\frac{f(p)}{p^{s}}\frac{p^{s}}{p^{s}-1}\right)\beta(s)=\prod_{p\nmid 2a_2\Delta_u}\left(1+\frac{1}{z_u(p)(p^{s}-1)}\right)\beta(s). \end{equation} Now, the final product in \eqref{eq: 6.1} converges if and only if \begin{equation*} \sum_{p\nmid 2a_2\Delta_u}\frac{1}{z_u(p)(p^{s}-1)} \end{equation*} converges. Therefore, it suffices to prove that $$\lim_{x\rightarrow \infty}\sum_{\substack{p>x}}\frac1{z_u(p)(p^{\sigma}-1)}=0.$$ We estimate the last sum by distinguishing between primes $p\in\mathscr{Q}_{u,\gamma}$ and $p\not\in \mathscr{Q}_{u,\gamma}$. In the first case we obtain \begin{equation} \label{eq: 6.2} \sum_{\substack{p>x\\ p\in\mathscr{Q}_{u,\gamma}}}\frac{1}{z_u(p)(p^{\sigma}-1)}\ll \int_{x}^{\infty}\frac{d(\#\mathscr{Q}_{u,\gamma}(t))}{t^{\sigma}}\ll_u \frac{1}{(\sigma-2\gamma)x^{\sigma-2\gamma}}, \end{equation} by Lemma \ref{lem 3.3}, if we choose $\sigma>2\gamma$. On the other hand, in the second case we get \begin{equation} \label{eq: 6.3} \sum_{\substack{p>x\\p\not\in\mathscr{Q}_{u,\gamma}}}\frac{1}{z_u(p)(p^{\sigma}-1)}\ll \sum_{p>x}\frac{1}{p^{\sigma+\gamma}}\ll \frac{1}{(\sigma+\gamma-1)x^{\sigma+\gamma-1}}, \end{equation} if we choose $\sigma+\gamma>1$.
Comparing \eqref{eq: 6.2} with \eqref{eq: 6.3}, we are led to take $\gamma=1/3$ and we have shown that \begin{equation} \label{eq: 6.4} \sum_{\substack{p>x}}\frac1{z_u(p)(p^{\sigma}-1)}\ll_u \frac{1}{\varepsilon x^{\varepsilon}}, \end{equation} if $\sigma=2/3+\varepsilon$, for every $\varepsilon>0$, and consequently that $\alpha(s)$ converges for every $s$ with $\Re(s)>2/3$, or equivalently that $\sigma_c\leq 2/3$. An immediate application of this result is the following. Let us define $$F(s)=\sum_{n\geq 1}\frac{1}{n^{s}L_u(n)}.$$ Then $F(s)$ has abscissa of convergence $\sigma_c '\leq -1/3$. This is equivalent to a strong bound on the tail of $F(0)$. The intermediate passage is made explicit in the next lemma (see e.g. \cite[\S 11.3, Lemma 1]{A}). \begin{lem} \label{lem 6.1} Suppose that $G(s)=\sum_{n\geq 1}a_n n^{-s}$ is the Dirichlet series of a sequence $(a_n)_{n\geq 1}$ of positive real numbers, with abscissa of convergence $\sigma_c '$. Suppose that $G(0)$ converges. Then, we have $\sigma_c '=\inf\{\theta : \sum_{n>x}a_n\ll x^{\theta}\}.$ \end{lem} Since $F(s)$ satisfies the hypotheses of Lemma \ref{lem 6.1}, by \eqref{eq: 6.4} we deduce that \begin{equation*} \sum_{n>x}\frac{1}{L_u(n)}\ll_u x^{-1/3+\varepsilon}, \end{equation*} for every $\varepsilon>0$, proving Proposition \ref{prop 1.6}. \begin{rmk} We believe that a finer study of $L_u(n)$ could lead to a better understanding of the structure of $\ell_u(n)$, although the lack of multiplicativity of the latter makes it difficult to study it starting from information about the former. For instance, it can be shown that the integers $n$ which have at least two prime factors $p_1,p_2$ such that a fixed prime $q$ divides both $z_u(p_1)$ and $z_u(p_2)$ have asymptotic density $1$. Thus, when calculating $z_u(n)$ as a least common multiple, a factor $q$ cancels. In other words, for any positive real number $C$, most integers $n$ have $L_u(n)/\ell_u(n) > C$.
This suggests that the two aforementioned functions are not always very close to each other. \end{rmk} \section*{Acknowledgements} I would like to thank Carlo Sanna for suggesting this problem and for introducing me to the theory of linear recurrences. A special thanks goes also to the anonymous referee, for careful reading and useful advice. \bibliographystyle{amsplain}
\section*{Introduction} Let $\bfk$ be an algebraically closed field. Fix $q\in \bfk$, $q\ne 0,1$ and $d\in\mathbb{Z}_{\geqslant 0}$. We study in this paper several versions of Hecke and Schur algebras of type $A$, including in particular a new higher level affine Schur algebra. \subsection*{Hecke algebras and their Schur versions} To introduce the players, let $\operatorname{H}^{\operatorname{fin}}_d(q)$ be the ordinary Hecke algebra of rank $d$ over the field $\bfk$ (i.e.,\ $\operatorname{H}^{\operatorname{fin}}_d(q)$ is a $q$-deformation of the group algebra $\bfk\mathfrak{S}_d$ arising from the convolution algebra of complex valued functions on the finite group $\operatorname{GL}_d(\mathbb{F}_q)$ which are constant on double cosets for a chosen Borel subgroup). Let $\operatorname{H}_{d}(q)$ be its (extended) affine version, that is, it equals $\operatorname{H}^{\operatorname{fin}}_d(q)\otimes \bfk[X_1^{\pm 1},\ldots,X_d^{\pm 1}]$ as a vector space, with a certain multiplication such that both tensor factors are subalgebras. It naturally arises from the convolution algebra of compactly supported functions defined on the $p$-adic group $\operatorname{GL}_d(\mathbb{Q}_q)$ which are constant on double cosets for an Iwahori subgroup. These algebras play a crucial role in $p$-adic representation theory, see e.g. \cite{Bushnelletal}, \cite{KLHecke}. The algebra $\operatorname{H}_{d}(q)$ has a family of remarkable finite dimensional quotients $\operatorname{H}_{d}^{\bfQ}(q)$, called \emph{cyclotomic Hecke algebras} or {\it Ariki-Koike algebras}, which are deformations of the group algebra $\bfk (\mathfrak{S}_d\ltimes (\mathbb{Z}/\ell\mathbb{Z})^d)$. These algebras are well-studied objects in representation theory. For an excellent overview we refer to \cite{Mathascycl}. The Dipper-James-Mathas cyclotomic $q$-Schur algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ was defined in \cite{DJM} in the following way.
For each $\ell$-composition $\lambda$ of $d$ they construct a certain element $m_\lambda$ in $\operatorname{H}_{d}^{\bfQ}(q)$. Then they define the algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ as the endomorphism algebra of the right $\operatorname{H}_{d}^{\bfQ}(q)$-module $\bigoplus_\lambda m_\lambda \operatorname{H}_{d}^{\bfQ}(q)$. We would like to define an affine version $\operatorname{S}_{d,\bfQ}(q)$ of the algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ such that \begin{itemize} \item $\operatorname{S}_{d,\bfQ}(q)$ has a nice faithful polynomial representation, and \item $\operatorname{S}_{d,\bfQ}(q)$ surjects onto $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$. \end{itemize} So, we ask the following question. \centerline{\it What should be the correct definition of the algebra $\operatorname{S}_{d,\bfQ}(q)$?} One might expect that the affine version $\operatorname{S}_{d,\bfQ}(q)$ can be defined similarly as an endomorphism algebra of the $\operatorname{H}_{d}(q)$-module $\bigoplus_\lambda m_\lambda \operatorname{H}_{d}(q)$. However, this approach does not work. The reason is that in the cyclotomic case, certain polynomials appear in the definition of the element $m_\lambda$. These polynomials play an important role for the structure of the $\operatorname{H}_{d}^{\bfQ}(q)$-module $m_\lambda\operatorname{H}_{d}^{\bfQ}(q)$. But in the affine case, these polynomials have no effect on the $\operatorname{H}_{d}(q)$-module $m_\lambda \operatorname{H}_{d}(q)$. So the $\operatorname{H}_{d}(q)$-modules $m_\lambda \operatorname{H}_{d}(q)$ become rather uninteresting. However, this approach is known to work in the ``no level'' case: the (no level) affine $q$-Schur algebra is defined in \cite{vigneras} as the endomorphism algebra of an $\operatorname{H}_{d}(q)$-module, very much in parallel to \cite{DJM}. At the same time, the cyclotomic $q$-Schur algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ is defined for higher levels.
Our goal is to give a higher level version $\operatorname{S}_{d,\bfQ}(q)$ of the affine $q$-Schur algebra $\operatorname{S}_{d}(q)$. The existence of such an algebra seems natural from the analogy with the KLR algebras. Indeed, the affine higher level Schur version of the KLR algebra (the higher level quiver Schur algebra) is defined in \cite{SW}. However, the definition in \cite{SW} is purely geometric (as usually happens for KLR-like algebras), while the definitions of the Hecke-like algebras are algebraic. So the definition of the higher level quiver Schur algebra in \cite{SW} does not tell us what the definition of the higher level affine $q$-Schur algebra should be. Finally, we define the higher level affine $q$-Schur algebra $\operatorname{S}_{d,\bfQ}(q)$ in two steps. First, we define the higher level version $\operatorname{H}_{d,\bfQ}(q)$ of $\operatorname{H}_{d}(q)$ by generators and relations. After that, we define $\operatorname{S}_{d,\bfQ}(q)$ as the endomorphism algebra of some $\operatorname{H}_{d,\bfQ}(q)$-module. As we explained before, in the cyclotomic case the $q$-Schur algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ is defined in \cite{DJM} in one step from $\operatorname{H}_{d}^{\bfQ}(q)$. However, in the affine case there is no known direct way to define $\operatorname{S}_{d,\bfQ}(q)$ from $\operatorname{H}_{d}(q)$. This is probably the reason why the algebra $\operatorname{S}_{d,\bfQ}(q)$ was not known before. One more important point is to define the polynomial representation of $\operatorname{S}_{d,\bfQ}(q)$. This is easy for the KLR-like algebras because the polynomial representations appear naturally from the geometry. On the other hand, the known construction of the polynomial representation of $\operatorname{S}_{d}(q)$ proceeds via long and difficult computations. We do not want to follow this approach, but instead give a more conceptual argument.
We construct a polynomial representation of $\operatorname{S}_{d,\bfQ}(q)$ as a subrepresentation of the defining representation of $\operatorname{S}_{d,\bfQ}(q)$. We believe that our methods can be transferred to the construction and study of other types of Schur algebras. Although we stick to a very special class of algebras in this paper, our approach seems to work in much greater generality (including the case of Clifford-Hecke algebras, \cite{NazarovCliffHecke}, or affine zigzag algebras, \cite{KlMuth}). \subsection*{KLR algebras and their Schur versions} Around ten years ago, Khovanov-Lauda \cite{KL} and Rouquier \cite{Rou2KM} introduced the \emph{quiver Hecke algebra} (also called \emph{KLR algebra}) ${R}_{\nu}$. Again it arises from a convolution algebra structure, but now on the Borel-Moore homology of a Steinberg type variety defined using the moduli space of isomorphism classes of flagged representations of a fixed quiver with dimension vector $\nu$, \cite{VV}, \cite{Rou2KM}. The major interest in these algebras is due to the fact that they are naturally graded and are used to categorify the negative part of a quantum group. This holds in particular for the finite and affine type $A$ versions; the algebras arise in several categorification results on the level of 2-morphisms. They were recently also used to approach the modular representation theory of general linear groups, \cite{RW}. These KLR algebras again have a family of interesting (finite dimensional) quotients ${R}_{\nu}^{\bfQ}$ (called \emph{cyclotomic KLR algebras}). Apart from being interesting on their own, these quotients ${R}_{\nu}^{\bfQ}$ categorify simple modules over the aforementioned quantum group, \cite{LV}, but also give concrete descriptions of categories arising in geometric and super representation theory. A higher level version ${R}_{\nu,\bfQ}$ of the KLR algebra (called the \emph{tensor product algebra}) was introduced by Webster \cite{Webster}.
The cyclotomic quotient ${R}_{\nu,\bfQ}^{\bfQ}$ of the algebra ${R}_{\nu,\bfQ}$ categorifies tensor products of simple modules over a quantum group. Let us give an overview of the connections between these algebras. The cyclotomic Hecke algebra $\operatorname{H}_{d}^{\bfQ}(q)$ has a block decomposition $\operatorname{H}_{d}^{\bfQ}(q)=\bigoplus_\nu \operatorname{H}^\bfQ_{\nu}(q)$. Brundan and Kleshchev constructed in \cite{BKKL} an isomorphism between the block $\operatorname{H}^\bfQ_{\nu}(q)$ and the cyclotomic KLR algebra ${R}_{\nu}^{\bfQ}$ of type $A$. A different proof of this isomorphism was given by Rouquier in \cite{Rou2KM} as a consequence of an isomorphism between (an idempotent version of) a localization of $\operatorname{H}_{d}(q)$ and a localization of ${R}_{\nu}$. It is also possible to give a similar proof using completions instead of localizations, see \cite{Webstergraded}, \cite{MS}. (The completion/localization of $\operatorname{H}_{d}(q)$ depends on $\nu$.) To understand the relation between the parameters of the Hecke and KLR algebras, note that the Hecke algebra $\operatorname{H}_{d}(q)$ depends on $q\in\bfk\backslash\{0,1\}$, and the cyclotomic quotient $\operatorname{H}_{d}^{\bfQ}(q)$ of $\operatorname{H}_{d}(q)$ furthermore on an $\ell$-tuple $\mathbf{Q}=(Q_1,\ldots,Q_\ell)\in(\bfk^*)^\ell$. On the other hand, the KLR algebra ${R}_{\nu}$ depends on a quiver $\Gamma$ and on a dimension vector $\nu$ for $\Gamma$. The cyclotomic quotient ${R}_{\nu}^{\bfQ}$ of ${R}_{\nu}$ depends also on an $\ell$-tuple $\mathbf{Q}=(Q_1,\ldots,Q_\ell)$ of vertices of $\Gamma$. To describe the blocks $\operatorname{H}^\bfQ_{\nu}(q)$ of $\operatorname{H}_{d}^{\bfQ}(q)$ in terms of KLR algebras, we have to take the quiver $\Gamma=\Gamma_\mathcal{F}$ as in Section~\ref{subs-isom_lHeck-tens-compl}. In particular, this choice of $\Gamma$ allows us to consider $\mathbf{Q}\in(\bfk^*)^\ell$ as an $\ell$-tuple of vertices of the quiver, see \eqref{Corona}.
For this choice of $\Gamma$ we then have the isomorphism $\operatorname{H}^\bfQ_{\nu}(q)\simeq {R}_{\nu}^{\bfQ}$ from \cite{BKKL}, \cite{Rou2KM}. The second author and Webster defined in \cite{SW} the \emph{quiver Schur algebra} $A_\nu$ (that is, a Schur version of the KLR algebra ${R}_{\nu}$) and its generalizations, the higher level quiver Schur algebras ${A}_{\nu,\bfQ}$, together with a family of cyclotomic quotients $A_{\nu,\bfQ}^{\bfQ}$. Moreover, in \cite{SW}, the isomorphism $\operatorname{H}^\bfQ_{\nu}(q)\simeq {R}_{\nu}^{\bfQ}$ was extended to an isomorphism $\operatorname{S}_{\nu,\bfQ}^{\operatorname{DJM}}(q) \simeq A_{\nu,\bfQ}^{\bfQ}$, where $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)=\bigoplus_{\nu}\operatorname{S}_{\nu,\bfQ}^{\operatorname{DJM}}(q)$ is the Dipper-James-Mathas cyclotomic $q$-Schur algebra (that is, the Schur version of $\operatorname{H}_{d}^{\bfQ}(q)$). On the other hand, an affine (no level) version of the isomorphism $\operatorname{S}_{\nu,\bfQ}^{\bfQ}(q)\simeq A_{\nu,\bfQ}^{\bfQ}$ was constructed by Miemietz and the second author, \cite{MS}. It was proved in \cite[Thm.~9.7]{MS} that a completion of the affine Schur algebra $\operatorname{S}_{d}(q)$ (the completion depends on $\nu$) is isomorphic to a completion of the quiver Schur algebra $A_\nu$. \subsection*{The zoology} The algebras discussed above can be grouped into two big families: \begin{center} {\it the Hecke family} and {\it the KLR family}. \end{center} An algebra in either family can be \begin{center} affine or cyclotomic, \quad higher level or no level,\quad Schur or not Schur. \end{center} We briefly describe all the possible cases. \begin{enumerate}[(i)] \item {\bf No level, not Schur, cyclotomic.} \newline The algebra in the Hecke family is the cyclotomic Hecke algebra $\operatorname{H}_{d}^{\bfQ}(q)$; its analogue in the KLR family is the cyclotomic KLR algebra ${R}_{\nu}^{\bfQ}$.
The isomorphism between a block of the algebra $\operatorname{H}_{d}^{\bfQ}(q)$ and the algebra ${R}_{\nu}^{\bfQ}$ is due to Brundan-Kleshchev \cite{BKKL} and Rouquier \cite{Rou2KM}. \item{\bf No level, not Schur, affine. } \newline The algebra in the Hecke family is the affine Hecke algebra $\operatorname{H}_{d}(q)$; its analogue in the KLR family is the (affine) KLR algebra ${R}_{\nu}$. We have surjections $\operatorname{H}_{d}(q)\to \operatorname{H}_{d}^{\bfQ}(q)$ and ${R}_{\nu}\to{R}_{\nu}^{\bfQ}$. However, the isomorphism between a block of $\operatorname{H}_{d}^{\bfQ}(q)$ and ${R}_{\nu}^{\bfQ}$ does not in general lift to the affine level. There are however isomorphisms after suitable completions of $\operatorname{H}_{d}(q)$ and of ${R}_{\nu}$ (where the completion of $\operatorname{H}_{d}(q)$ depends on $\nu$), \cite{Webstergraded}, \cite{MS}. A similar construction using localizations instead of completions was already given in \cite{Rou2KM}. \setcounter{nameOfYourChoice}{\value{enumi}} \end{enumerate} We observe a difference between the cyclotomic case and the affine case:\\In the cyclotomic case, a block of the algebra in the Hecke family is isomorphic to the algebra in the KLR family. In the affine case, a completion of the algebra in the Hecke family is isomorphic to a completion of the algebra in the KLR family. We will see that exactly the same happens in all the remaining cases below. \begin{enumerate}[(i)] \setcounter{enumi}{\value{nameOfYourChoice}} \item {\bf Higher level, Schur, cyclotomic. } \label{3} \newline The algebra in the Hecke family is the cyclotomic Dipper-James-Mathas $q$-Schur algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ from \cite{DJM}. Its analogue in the KLR family is the cyclotomic quiver Schur algebra $A_{\nu,\bfQ}^{\bfQ}$ defined in \cite{SW}.
It is proved in \cite{SW} that each block of the algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ is isomorphic to the algebra $A_{\nu,\bfQ}^{\bfQ}$ for some $\nu$. \item {\bf No level, Schur, cyclotomic.} \newline These algebras have no special names. They (and the corresponding isomorphisms) can be obtained as idempotent truncations of the algebras in \eqref{3}. \item {\bf No level, Schur, affine.} \newline The algebra in the Hecke family is the affine $q$-Schur algebra $\operatorname{S}_{d}(q)$ from e.g. \cite{Greenaff}, \cite{vigneras}. Its analogue in the KLR family is the (no level, affine) quiver Schur algebra defined in \cite{SW}. It is proved in \cite{MS} that the algebras $\operatorname{S}_{d}(q)$ and $A_\nu$ are isomorphic after completion. (The completion of $\operatorname{S}_{d}(q)$ depends on $\nu$.) \setcounter{nameOfYourChoice}{\value{enumi}} \end{enumerate} The construction of the algebras in the Hecke family for the remaining cases is carried out in this paper. None of our constructions makes any assumption on the characteristic of the underlying field. \begin{enumerate}[(i)] \setcounter{enumi}{\value{nameOfYourChoice}} \item {\bf Higher level, not Schur, affine.}\label{six} \newline The algebra in the KLR family is Webster's tensor product algebra ${R}_{\nu,\bfQ}$, \cite{Webster}. \setcounter{nameOfYourChoice}{\value{enumi}} \end{enumerate} We define the Hecke analogue $\operatorname{H}_{d,\bfQ}(q)$ of ${R}_{\nu,\bfQ}$, called the \emph{higher level affine Hecke algebra}, by generators and relations (algebraically and diagrammatically) in Section~\ref{sec-lHeck} and then prove in Theorem~\ref{thm-isom-lHeck-tens-comp} that the algebras $\operatorname{H}_{d,\bfQ}(q)$ and ${R}_{\nu,\bfQ}$ are isomorphic after completions. On the Hecke side this is with respect to maximal ideals of the centre, which we describe in Proposition~\ref{lem-cen-Hdl}.
After having finished writing this paper, we were informed that Webster had already defined a similar algebra, with an analogous isomorphism result, in \cite[Sec.~4]{Webstergraded}. \begin{enumerate}[(i)] \setcounter{enumi}{\value{nameOfYourChoice}} \item {\bf Higher level, not Schur, cyclotomic.}\label{seven} \newline The algebra in the KLR family is here the cyclotomic quotient ${R}_{\nu,\bfQ}^{\bfQ}$ of the tensor product algebra ${R}_{\nu,\bfQ}$ defined in \cite{Webster}. \setcounter{nameOfYourChoice}{\value{enumi}} \end{enumerate} The Hecke analogue of ${R}_{\nu,\bfQ}^{\bfQ}$ is a similar quotient $\operatorname{H}^\bfQ_{d,\bfQ}(q)$, see Definition~\ref{defcylHeck}, of the algebra $\operatorname{H}_{d,\bfQ}(q)$. We prove that each block of the algebra $\operatorname{H}^\bfQ_{d,\bfQ}(q)$ is isomorphic to the algebra ${R}_{\nu,\bfQ}^{\bfQ}$ for some $\nu$, see Theorem~\ref{prop-isom-lHeck-tens-cycl}. As a byproduct, we can determine in Corollary~\ref{coro-eigenv_lHeck} the possible eigenvalues of the Laurent polynomial algebra acting on the regular representation $\operatorname{H}^\bfQ_{d,\bfQ}(q)$, which is a well-known fact for ordinary cyclotomic Hecke algebras, \cite[Prop.~3.7]{JamesMathas}. Knowing the possible eigenvalues is important in order to conclude that the corresponding algebra ${R}_{\nu,\bfQ}^{\bfQ}$ is defined with respect to the quiver $\Gamma_\mathcal{F}$ as in Section~\ref{subs-isom_lHeck-tens-compl}. In particular, this is a quiver of type $A$. \begin{enumerate}[(i)] \setcounter{enumi}{\value{nameOfYourChoice}} \item {\bf Higher level, Schur, affine.}\label{eight} \newline The algebra in the KLR family is the (affine higher level) quiver Schur algebra ${A}_{\nu,\bfQ}$, defined in \cite{SW}.
\end{enumerate} We define a Hecke analogue $\operatorname{S}_{d,\bfQ}(q)$ of this algebra in Section~\ref{sec-Schur}, and prove (Theorem \ref{thm-isom-qS-QS-comp}) that the algebras $\operatorname{S}_{d,\bfQ}(q)$ and ${A}_{\nu,\bfQ}$ are isomorphic after completion (the completion of $\operatorname{S}_{d,\bfQ}(q)$ depends on $\nu$). As a tool, which we believe is of independent interest, we construct in Corollary~\ref{prop-polrep-lS} a polynomial representation involving partially symmetric polynomials. Altogether, this completes the construction of the Hecke families in all cases together with the corresponding isomorphism theorems. All these algebras arise as algebras (or quotient algebras in the cyclotomic case) of morphisms in some monoidal category. It is the {\it universal higher level category}, Definition~\ref{univtensor}, in the not Schur cases and the {\it universal thickened higher level category}, Definition~\ref{Corona2}, in the Schur cases.\footnote{The former category is in fact a subcategory of the latter, but it is not a full subcategory and to make the embedding compatible with the relations imposed later on, one needs to choose it in a non-obvious way relying on Lemma~\ref{lem-black_cross}.} Both categories are generated on the level of objects by sets $I_b$ and $I_r$, but for the definition of the algebras the set $I_r$ is only involved in the higher level cases. These monoidal categories allow a diagrammatic approach to all the involved algebras and, in the non-Schur cases, even diagrammatic presentations. \subsection*{The structure of the paper} The definition of the higher level affine Hecke algebra $\operatorname{H}_{d,\bfQ}(q)$ by generators and relations (algebraically and diagrammatically) can be found in Section~\ref{sec-lHeck}. Next, in Section~\ref{sec-KLR_tens} we construct an isomorphism between a completion of $\operatorname{H}_{d,\bfQ}(q)$ (this completion depends on $\nu$) and a completion of ${R}_{\nu,\bfQ}$.
To do this, we use the same strategy as in \cite{MS} (namely the identification of faithful polynomial representations). This is also very much analogous to \cite{Webstergraded}, where similar algebras were introduced, but from a different point of view. Webster's approach is via weighted KLR algebras, whereas our focus is on Schur algebras. In particular, our approach is guided by the goal of giving a method to {\it Schurify} different types of Hecke algebras. A cyclotomic version of the isomorphism between the completions of $\operatorname{H}_{d,\bfQ}(q)$ and ${R}_{\nu,\bfQ}$ follows easily from the affine version. This is done in Section~\ref{sec-cycl-quot} and completes case \eqref{seven}. Section~\ref{sec-Schur} contains the definition of the {\it higher level affine Schur algebra} $\operatorname{S}_{d,\bfQ}(q)$, the Hecke analogue of the higher level quiver Schur algebra ${A}_{\nu,\bfQ}$. The isomorphism between a completion of $\operatorname{S}_{d,\bfQ}(q)$ (this completion depends on $\nu$) and a completion of ${A}_{\nu,\bfQ}$ can be found in Section~\ref{sec-QSchur}, completing case \eqref{eight}. \subsection*{\it Acknowledgments} We thank Alexander Kleshchev for sharing ideas that simplified the construction of polynomial representations of the affine Schur algebras, and also the referee for useful comments. R. M. is grateful for the support and hospitality of the MPI for Mathematics in Bonn, where a large part of this work was done. \subsubsection*{Conventions} We fix as ground field an algebraically closed field $\bfk$ and denote $\bfk^*=\bfk\setminus\{0\}$. All vector spaces, linear maps, tensor products etc. are taken over $\bfk$ if not otherwise specified. For $a,b\in\mathbb{Z}$ with $a\leqslant b$ we abbreviate $[a;b]=\{a,a+1,\ldots,b-1,b\}$. For $d\in\mathbb{Z}_{\geqslant 0}$ we denote by $\mathfrak{S}_d$ the symmetric group of order $d!$ with length function ${l}$.
\section{Higher level affine Hecke algebras ${\operatorname{H}_{d,\bfQ}(q)}$} \label{sec-lHeck} \begin{setup} \label{set-up} We fix $q\in \bfk^*$, $q\ne 1$ and integers $d\geqslant 0$, $\ell\geqslant 0$ called {\it rank} and {\it level}, and {\it parameters} $\mathbf{Q}=(Q_1,\ldots,Q_\ell)\in(\bfk^*)^\ell$. We denote $J=\{0,1\}$ and call its elements {\it colours} with $0$ viewed as {\it black} and $1$ viewed as {\it red}. \end{setup} \subsection{The algebraic version} In this section we introduce the main new player, a higher level version of the affine Hecke algebra. \begin{df} Let $J^{\ell,d}\subset J^{\ell+d}$ be the set of $(\ell+d)$-tuples ${\mathbf{c}}=(c_1,\hdots,c_{\ell+d})$ such that $\sum_{i=1}^{\ell+d}c_i=\ell$ (i.e., the tuples containing $d$ black and $\ell$ red elements). Let $\mathfrak{S}_{\ell+d}$ act on $J^{\ell,d}$ by permuting the entries of the tuple so that $\pi({\mathbf{c}})_m={\mathbf{c}}_{\pi^{-1}(m)}$ for $\pi\in \mathfrak{S}_{\ell+d}$. \end{df} \begin{df} \label{def-extaffHecke_alg} The $\ell$-\emph{affine Hecke algebra} ${\operatorname{H}_{d,\bfQ}(q)}$ is the $\bfk$-algebra generated by $e({\mathbf{c}})$ for ${\mathbf{c}}=(c_1,\ldots,c_{\ell+d})\in J^{\ell,d}$, $T_r$ for $r\in[1;\ell+d-1]$ and $X_j$, $X_j'$ for $j\in[1;\ell+d]$, subject to the following defining relations \begin{eqnarray} \sum_{{\mathbf{c}}\in J^{\ell,d}}e({\mathbf{c}})=1,&\text{and}&e({\mathbf{c}})e({\mathbf{c}}')=\delta_{{\mathbf{c}},{\mathbf{c}}'}e({\mathbf{c}}),\label{lHecke1} \end{eqnarray} \vspace{-0.5cm} \begin{eqnarray} X_ie({\mathbf{c}})=X'_ie({\mathbf{c}})=0 &&\mbox{ if }c_i=1,\label{rel_Xe=0}\label{lHecke2new}\\ X_iX'_ie({\mathbf{c}})=X'_iX_ie({\mathbf{c}})=e({\mathbf{c}})&&\mbox{ if }c_i=0,\label{lHecke7} \end{eqnarray} \vspace{-0.5cm} \begin{eqnarray} X_ie({\mathbf{c}})=e({\mathbf{c}})X_i,&\text{and}& X_i'e({\mathbf{c}})=e({\mathbf{c}})X_i',\label{lHecke4new}\\ X_iX_j=X_jX_i,&\text{and}& X'_iX'_j=X'_jX'_i,\label{lHecke5new}\\ T_{r}T_{s}=T_{s}T_{r} \mbox{ if }|r-s|>1, &\text{and}& T_rX_i=X_iT_r~\mbox{ 
if }|r-i|>1,\\ T_re({\mathbf{c}})=0 \mbox{ if } c_r=c_{r+1}=1,&\text{and}&T_re({\mathbf{c}})=e(s_r({\mathbf{c}}))T_r,\label{rel_Te=0} \end{eqnarray} \vspace{-0.5cm} \begin{eqnarray} (T_rX_{r+1}-X_rT_r)e({\mathbf{c}})&=& \begin{cases} (q-1)X_{r+1}e({\mathbf{c}}) &\mbox{ if }c_r=c_{r+1}=0,\\ 0 &\mbox{ else}, \end{cases}\\ (T_rX_{r}-X_{r+1}T_r)e({\mathbf{c}})&=& \begin{cases} -(q-1)X_{r+1}e({\mathbf{c}})& \mbox{ if }c_r=c_{r+1}=0,\\ 0& \mbox{ else}, \end{cases} \end{eqnarray} \begin{eqnarray} T_r^2e({\mathbf{c}})&=& \begin{cases} (q-1)T_re({\mathbf{c}})+qe({\mathbf{c}})& \mbox{ if }c_r=c_{r+1}=0,\\ \left(X_r-Q_{\sum_{j=1}^{r+1}c_j}\right)e({\mathbf{c}}) &\mbox{ if }c_r=0, c_{r+1}=1,\\ \left(X_{r+1}-Q_{\sum_{j=1}^{r}c_j}\right)e({\mathbf{c}}) &\mbox{ if }c_{r+1}=0, c_{r}=1,\\ \end{cases} \quad\quad \end{eqnarray} \begin{eqnarray} &&(T_{r}T_{r+1}T_{r}-T_{r+1}T_{r}T_{r+1})e({\mathbf{c}})\nonumber\\ &=& \begin{cases} 0 &\text{if $c_{r+1}=0$ and $r<\ell+d-1\quad$}\\ (1-q)X_{r+2}e({\mathbf{c}}) &\text{if $c_{r+1}=1$, $c_r=c_{r+2}=0$ and $r<\ell+d-1$\quad\quad} \end{cases} \quad \end{eqnarray} \normalsize where $i,j$ run through $[1;\ell+d]$ and $r,s$ through $[1;\ell+d-1]$. \end{df} \begin{rk} \label{ordinaryHecke} In case $\ell=0$ (i.e., $\mathbf{Q}$ is the empty tuple in $(\bfk^*)^0$) the set $J^{\ell,d}\subset J^{\ell+d}$ contains a single element ${\bf c}=(0,0,\ldots,0)$. Then $e({\mathbf{c}})=1$ by \eqref{lHecke1} and $X_r'=X_r^{-1}$ by \eqref{lHecke7}, and the algebra $\operatorname{H}_{d,\bfQ}(q)$ is nothing other than the ordinary (extended) affine Hecke algebra $\operatorname{H}_{d}(q)$, see e.g. \cite{Kirillovlect}, in the normalization from e.g. \cite{MS}. If additionally $q=1$ then we get the smash product algebra $\bfk[\mathfrak{S}_d]\#\bfk[X_1^{\pm1},\ldots,X_d^{\pm1}]$. Moreover, it contains the ordinary finite dimensional Hecke algebra $\operatorname{H}^{\operatorname{fin}}_d(q)$ attached to $\mathfrak{S}_d$ as the subalgebra generated by the $T_r$ for $r\in[1;d-1]$. 
\end{rk} \subsection{The diagrammatic version} We introduce a diagrammatic calculus generalizing the usual permutation diagrams of the symmetric group. It provides a convenient way to display elements of the higher level affine Hecke algebra and is obtained by realising the algebras as algebras of homomorphisms in a suitable monoidal category. \begin{figure}[t] \begin{eqnarray} \label{Hecke-diag-1} \begin{tikzpicture}[scale=0.6, thick] \draw (-2,0) +(0,-1) .. controls (-0.4,0) .. +(0,1); \draw (-0.4,0) +(0,-1) .. controls (-2,0) .. +(0,1); \node at (2.5,0) {\Large $=\;\;(q-1)$}; \draw(6,0) +(-1,-1) -- +(1,1); \draw(6,0) +(1,-1) -- +(-1,1); \node at (8.5,0) {\Large $+\;\;q$}; \draw (11.5,0) +(0,-1) -- +(0,1); \draw (10,0) +(0,-1) -- +(0,1); \end{tikzpicture} \end{eqnarray} \begin{equation} \label{Hecke-diag-2} \begin{tikzpicture}[scale=0.6, thick] \draw (-3,0) +(1,-1) -- +(-1,1); \draw (-3,0) +(0,-1) .. controls (-4,0) .. +(0,1); \draw (-3,0) +(-1,-1) -- +(1,1); \node at (-1,0) {\Large $=$}; \draw (1,0) +(1,-1) -- +(-1,1); \draw (1,0) +(0,-1) .. controls (2,0) .. 
+(0,1); \draw (1,0) +(-1,-1) -- +(1,1); \end{tikzpicture} \end{equation} \begin{equation} \label{Hecke-diag-3} \begin{tikzpicture}[scale=0.6, thick] \draw(-8,6) +(-1,-1) -- +(1,1); \draw(-8,6) +(1,-1) -- +(-1,1); \fill (-7.5,5.5) circle (5pt); \node at (-6,6) {\Large $=$}; \draw(-4,6) +(-1,-1) -- +(1,1); \draw(-4,6) +(1,-1) -- +(-1,1); \fill (-4.5,6.5) circle (5pt); \node at (-0.5,6) {\Large $+\;\;(q-1)$}; \draw (2,6) +(0,-1) -- +(0,1); \draw (3.5,6) +(0,-1) -- +(0,1); \fill (3.5,6) circle (5pt); \end{tikzpicture} \end{equation} \begin{equation} \label{Hecke-diag-4} \begin{tikzpicture}[scale=0.6, thick] \draw(-8,6) +(-1,-1) -- +(1,1); \draw(-8,6) +(1,-1) -- +(-1,1); \fill (-7.5,6.5) circle (5pt); \node at (-6,6) {\Large $=$}; \draw(-4,6) +(-1,-1) -- +(1,1); \draw(-4,6) +(1,-1) -- +(-1,1); \fill (-4.5,5.5) circle (5pt); \node at (-0.5,6) {\Large $+\;\;(q-1)$}; \draw (2,6) +(0,-1) -- +(0,1); \draw (3.5,6) +(0,-1) -- +(0,1); \fill (3.5,6) circle (5pt); \end{tikzpicture} \end{equation} \caption{Affine Hecke algebra relations} \end{figure} \begin{df} \label{univtensor} Let $I_b$ and $I_r$ be sets not both empty. 
The {\it universal higher level category} corresponding to this pair is the $\bfk$-linear strict monoidal category generated as monoidal category by objects $i\in I_b$, called {\it black labels}, and objects $Q\in I_r$, called {\it red labels}, and by morphisms (for any $i,j\in I_b$, $Q\in I_r$) \begin{equation*} \label{Morphmonoidal} \TikZ{[thick,scale=.5] \draw (0,0) node{} to (1,1) node{} (1,0) node{} to (0,1)node{}; \node at (0,-0.3) {\tiny i}; \node at (1,-0.3) {\tiny j};}:\;{i}\otimes{j}\longrightarrow{j}\otimes{i},\quad \TikZ{[thick,scale=.5] \draw (1,1) node{} to (0,0)node{};\draw[wei] (1,0) node{} to (0,1)node{}; \node at (1.2,-0.3) {\tiny Q};\node at (0,-0.3) {\tiny i}; } :\;{i}\otimes {Q}\longrightarrow{Q}\otimes {i},\quad \TikZ{[thick,scale=.5] \draw (1,0) node{} to (0,1)node{};\draw[wei] (1,1) node{} to (0,0)node{}; \node at (-0.2,-0.3) {\tiny Q}; \node at (1,-0.3) {\tiny i}; } :\;{Q}\otimes {i}\longrightarrow{i}\otimes {Q}\end{equation*} called {\it crossings}, and $\TikZ{[thick, scale=.5] \draw (0,0) node{} to (0,1) node{} (0,0.5) node[fill,circle,inner sep=1.5pt]{};\node at (0,-0.3) {\tiny i};}: {i}\longrightarrow {i}$ called {\it dot morphisms}. The diagrams $\TikZ{[thick, scale=.5] \draw (0,0) node{} to (0,1) node{} (0,0.5);\node at (0,-0.3) {\tiny i};}$ and $\TikZ{[thick, scale=.5] \draw[wei] (0,0) node{} to (0,1) node{} (0,0.5);\node at (0,-0.3) {\tiny Q};}$ depict the identity of the black label $i$ and of the red label $Q$, respectively. \end{df} The tensor product of morphisms is displayed by placing them horizontally next to each other, whereas for the composition of morphisms we place them vertically. We omit labels when they may be arbitrary elements of $I_b$ or $I_r$; the only requirement is that a label's colour matches the colour of its strand. Note that by definition two red strands never cross. \begin{df} \label{diag} Consider the universal higher level category attached to a pair $I_b$ and $I_r$. 
Then an {\it $(\ell,d)$-diagram} is a morphism between two objects which are both tensor products of exactly $d$ black labels and $\ell$ red labels, given as a finite composition of tensor products of generating morphisms. \end{df} Note that such a diagram can only exist if the multiset of black labels and the sequence of red labels of the two involved objects agree. \begin{df} Let $|I_b|=1$ and $I_r=\bfk^*$. We then define \label{def-extaffHeck_diag} \begin{enumerate} \item the {\it higher level affine Hecke category} as the universal higher level category modulo the {\it affine Hecke algebra relations} \eqref{Hecke-diag-1}-\eqref{Hecke-diag-4} and the {\it higher affine Hecke algebra relations} \eqref{l-Hecke-diag-1}-\eqref{l-Hecke-diag-4} on morphisms; and \item the $\ell$-\emph{affine Hecke algebra} ${\operatorname{H}_{d,\bfQ}(q)}$ as the induced algebra structure on the vector space spanned by all $(\ell,d)$-diagrams with fixed red labels $\mathbf{Q}$ read from left to right. \end{enumerate} \end{df} \begin{figure} \begin{equation} \label{l-Hecke-diag-1} \begin{tikzpicture}[scale=0.7, thick,baseline=1.6cm] \draw (-2.8,0) +(0,-1) .. controls (-1.2,0) .. +(0,1); \draw[wei] (-1.2,0) +(0,-1) .. controls (-2.8,0) .. +(0,1) node[below,at start]{$Q$}; \node at (-.3,0) {\Large $=$}; \draw[wei] (2.8,0) +(0,-1) -- +(0,1) node[below,at start]{$Q$}; \draw (1.2,0) +(0,-1) -- +(0,1); \fill (1.2,0) circle (5pt); \draw[wei] (-2.8,3) +(0,-1) .. controls (-1.2,3) .. +(0,1) node[below,at start]{$Q$}; \node at (3.5,0) {\Large $-$}; \node at (4.5,0) {\Large $Q$}; \draw[wei] (6.8,0) +(0,-1) -- +(0,1) node[below,at start]{$Q$}; \draw (5.2,0) +(0,-1) -- +(0,1); \draw (-1.2,3) +(0,-1) .. controls (-2.8,3) .. 
+(0,1); \node at (-.3,3) {\Large $=$}; \draw (2.8,3) +(0,-1) -- +(0,1); \draw[wei] (1.2,3) +(0,-1) -- +(0,1) node[below,at start]{$Q$}; \fill (2.8,3) circle (5pt); \node at (3.7,3) {\Large $-$}; \node at (4.5,3) {\Large $Q$}; \draw (6.8,3) +(0,-1) -- +(0,1); \draw[wei] (5.2,3) +(0,-1) -- +(0,1) node[below,at start]{$Q$}; \end{tikzpicture} \end{equation} \begin{equation} \label{l-Hecke-diag-2} \begin{tikzpicture}[scale=0.6, thick] \draw(-3,6) +(-1,-1) -- +(1,1); \draw[wei](-3,6) +(1,-1) -- +(-1,1); \fill (-3.5,5.5) circle (5pt); \node at (-1.5,6) {\Large $=$}; \draw(0,6) +(-1,-1) -- +(1,1); \draw[wei](0,6) +(1,-1) -- +(-1,1); \fill (.5,6.5) circle (5pt); \draw[wei](5,6) +(-1,-1) -- +(1,1); \draw(5,6) +(1,-1) -- +(-1,1); \fill (4.5,6.5) circle (5pt); \node at (6.5,6) {\Large$=$}; \draw[wei](8,6) +(-1,-1) -- +(1,1); \draw(8,6) +(1,-1) -- +(-1,1); \fill (8.5,5.5) circle (5pt); \end{tikzpicture} \end{equation} \begin{equation} \label{l-Hecke-diag-3} \begin{tikzpicture}[scale=0.6, thick] \draw[wei] (-3,3) +(1,-1) -- +(-1,1); \draw (-3,3) +(0,-1) .. controls (-4,3) .. +(0,1); \draw (-3,3) +(-1,-1) -- +(1,1); \node at (-1.5,3) {\Large $=$}; \draw[wei] (0,3) +(1,-1) -- +(-1,1); \draw (0,3) +(0,-1) .. controls (1,3) .. +(0,1); \draw (0,3) +(-1,-1) -- +(1,1); \draw (5,3) +(1,-1) -- +(-1,1); \draw (5,3) +(0,-1) .. controls (4,3) .. +(0,1); \draw[wei] (5,3) +(-1,-1) -- +(1,1); \node at (6.5,3) {\Large $=$}; \draw (8,3) +(1,-1) -- +(-1,1); \draw (8,3) +(0,-1) .. controls (9,3) .. +(0,1); \draw[wei] (8,3) +(-1,-1) -- +(1,1); \end{tikzpicture} \end{equation} \begin{equation} \label{l-Hecke-diag-4} \begin{tikzpicture}[scale=0.6, thick] \draw (-5,3) +(1,-1) -- +(-1,1); \draw[wei] (-5,3) +(0,-1) .. controls (-6,3) .. +(0,1); \draw (-5,3) +(-1,-1) -- +(1,1); \node at (-3,3) {\Large $=$}; \draw (-1,3) +(1,-1) -- +(-1,1); \draw[wei] (-1,3) +(0,-1) .. controls (0.5,3) .. 
+(0,1); \draw (-1,3) +(-1,-1) -- +(1,1); \node at (2.5,3) {\Large $-\;\;(q-1)$}; \draw (6,3) +(-1,-1) -- +(-1,1); \draw[wei] (6,3) +(0,-1) -- +(0,1); \draw (6,3) +(1,-1) -- +(1,1); \fill (7,3) circle (5pt); \end{tikzpicture} \end{equation} \hspace{1.5cm}(Omitted labels can be arbitrary, but of course fixed in each relation.) \label{lHeckepictures} \caption{Additional relations in the $\ell$-affine Hecke algebra.} \end{figure} The following easy observation justifies our notation ${\operatorname{H}_{d,\bfQ}(q)}$. \begin{lem} \label{lem:twodefs} The algebras in Definitions~\ref{def-extaffHecke_alg} and~\ref{def-extaffHeck_diag} are isomorphic. \end{lem} \begin{proof} One can easily verify by checking the relations that the following correspondence on generators defines an isomorphism of the two algebras. The idempotent $e({\mathbf{c}})$ corresponds to the diagram with vertical strands with colours determined by the sequence ${\mathbf{c}}$. The element $X_ie({\mathbf{c}})$ (resp. $X'_ie({\mathbf{c}})$) such that $c_i$ is black corresponds to the diagram with vertical strands with colours determined by the sequence ${\mathbf{c}}$ and a dot labelled by $1$ (resp. $-1$) on strand number $i$ (counted from the left). (Note that $X_ie({\mathbf{c}})$ and $X'_ie({\mathbf{c}})$ are zero if $c_i$ is red by \eqref{rel_Xe=0}.) The element $T_re({\mathbf{c}})$ such that at least one of the colours $c_r$, $c_{r+1}$ is black corresponds to the diagram with the $r$-th and $(r+1)$-th strands intersecting once and all other strands just vertical, with the colours on the bottom of the diagram determined by ${\mathbf{c}}$. (By \eqref{rel_Te=0} we have $T_re({\mathbf{c}})=0$ if $c_r=1=c_{r+1}$.) \end{proof} The usual affine Hecke algebra $\operatorname{H}_{d}(q)$ has an automorphism $\#$ given by $(X_i)^\#=X_i^{-1}$ and $(T_r)^\#=(q-1)-T_r=-q(T_r)^{-1}$. We would like to extend it to the higher level affine Hecke algebra. 
However, we do not obtain an automorphism of $\operatorname{H}_{d,\bfQ}(q)$, but rather an isomorphism between $\operatorname{H}_{d,\bfQ}(q)$ and $\operatorname{H}_{d,\bfQ^{-1}}(q)$, where $\mathbf{Q}^{-1}=(Q_1^{-1},\ldots,Q_\ell^{-1})$. The following is straightforward. \begin{lem} \label{lem-hash_isom} There is an isomorphism of algebras $$ \begin{array}{rcll} \#\colon \operatorname{H}_{d,\bfQ}(q) & \to & \operatorname{H}_{d,\bfQ^{-1}}(q),\\ e({\mathbf{c}}) &\mapsto & e({\mathbf{c}}), &\\ X_ie({\mathbf{c}}) & \mapsto & X'_ie({\mathbf{c}}), & \mbox{ if }c_i=0,\\ T_re({\mathbf{c}}) & \mapsto & ((q-1)-T_r)e({\mathbf{c}}) & \mbox{ if }c_r=c_{r+1}=0,\\ T_re({\mathbf{c}}) & \mapsto & T_re({\mathbf{c}}) & \mbox{ if }c_r=1,c_{r+1}=0,\\ T_re({\mathbf{c}}) & \mapsto & -Q_rX'_rT_re({\mathbf{c}}) & \mbox{ if }c_r=0,c_{r+1}=1. \end{array} $$ \end{lem} \subsection{The polynomial representation of ${\operatorname{H}_{d,\bfQ}(q)}$} In this section we generalize the polynomial representation of the affine Hecke algebra to our higher level version by extending the action of $\operatorname{H}_{d}(q)$ on a Laurent polynomial ring in $d$ generators to an action of $\operatorname{H}_{d,\bfQ}(q)$ on a direct sum $\operatorname{P}_{d,\bfQ}$ of Laurent polynomial rings. \begin{df} For each ${\mathbf{c}}\in J^{\ell,d}$ consider the subring \begin{eqnarray*} \operatorname{P}_{d,\bfQ}({\mathbf{c}})=\bfk[x^{\pm1}_1, \ldots, x^{\pm1}_d]\subset \bfk[X^{\pm 1}_1,\ldots,X^{\pm 1}_{\ell+d}] \end{eqnarray*} generated by the variables $x_t^{\pm 1}=X^{\pm 1}_{t_{\mathbf{c}}}$ where $1_{\mathbf{c}}<2_{\mathbf{c}}<\ldots <d_{\mathbf{c}}$ are precisely the positions of the black strands, that is, those indices where $c_{1_{\mathbf{c}}}=\cdots=c_{d_{\mathbf{c}}}=0$. Set \begin{eqnarray} \label{polrep} \operatorname{P}_{d,\bfQ}&=&\bigoplus_{{\mathbf{c}}\in J^{\ell,d}}\operatorname{P}_{d,\bfQ}({\mathbf{c}})=\bigoplus_{{\mathbf{c}}\in J^{\ell,d}}\bfk[x_1^{\pm 1},\ldots,x_d^{\pm 1}]e({\mathbf{c}}). 
\end{eqnarray} Here $e({\mathbf{c}})$ is a formal symbol distinguishing the different direct summands. \end{df} \begin{prop} \label{prop-pol_rep_lH} There is an action of ${\operatorname{H}_{d,\bfQ}(q)}$ on $\operatorname{P}_{d,\bfQ}$ defined as follows. \begin{itemize} \item The element $e({\mathbf{c}})$ acts as the projector to the direct summand $\operatorname{P}_{d,\bfQ}({\mathbf{c}})$. \item The element $X_ie({\mathbf{c}})$ acts by multiplication with $X_i$ on $\operatorname{P}_{d,\bfQ}({\mathbf{c}})$, if $c_i=0$ and by zero otherwise. (Recall that $X_ie({\mathbf{c}})=0$ if $c_i=1$.) \item The element $T_re({\mathbf{c}})$ acts non-trivially only on the summand $\operatorname{P}_{d,\bfQ}({\mathbf{c}})$, where it sends $f\in \operatorname{P}_{d,\bfQ}({\mathbf{c}})$ to \small \begin{eqnarray*} \begin{cases} -s_r(f)+(q-1)\frac{X_{r+1}}{(X_{r}-X_{r+1})}(s_r(f)-f)\in\operatorname{P}_{d,\bfQ}({\mathbf{c}})&\mbox{ if } c_r=c_{r+1}=0,\\ s_r(f)\in \operatorname{P}_{d,\bfQ}(s_r({\mathbf{c}})) &\mbox{ if } c_r=1, c_{r+1}=0,\\ \left(X_{r+1}-Q_{\sum_{j=1}^{r+1}c_j}\right)s_r(f)\in \operatorname{P}_{d,\bfQ}(s_r({\mathbf{c}})) &\mbox{ if } c_r=0, c_{r+1}=1,\\ 0 &\mbox{ if } c_r=c_{r+1}=1. \end{cases} \end{eqnarray*} \normalsize (Recall that $T_re({\mathbf{c}})=0$ if $c_r=1=c_{r+1}$.) \end{itemize} \end{prop} \begin{proof} One directly verifies the relations from Definition~\ref{def-extaffHecke_alg}. \end{proof} During the proof of Proposition~\ref{prop-basis-Hdl} we will establish a crucial fact: \begin{prop} \label{prop-faithfullHecke} The representation from Proposition~\ref{prop-pol_rep_lH} is faithful. \end{prop} \subsection{A basis of ${\operatorname{H}_{d,\bfQ}(q)}$} \label{subs-basis_lHeck} The goal of this section is to construct a basis of the algebra ${\operatorname{H}_{d,\bfQ}(q)}$. To do this, it is enough to construct a basis of $e({\mathbf{b}}){\operatorname{H}_{d,\bfQ}(q)}e({\mathbf{c}})$ for each ${\mathbf{b}},{\mathbf{c}}\in J^{\ell,d}$. 
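To fix ideas, the following small example makes the decomposition into the subspaces $e({\mathbf{b}}){\operatorname{H}_{d,\bfQ}(q)}e({\mathbf{c}})$ explicit; it follows directly from \eqref{lHecke1}. \begin{ex} Let $d=2$ and $\ell=1$. Then $J^{1,2}=\{(1,0,0),(0,1,0),(0,0,1)\}$ and $1=e(1,0,0)+e(0,1,0)+e(0,0,1)$, so ${\operatorname{H}_{2,\bfQ}(q)}$ is the sum of the nine subspaces $e({\mathbf{b}}){\operatorname{H}_{2,\bfQ}(q)}e({\mathbf{c}})$ with ${\mathbf{b}},{\mathbf{c}}\in J^{1,2}$. \end{ex}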
First we define for each $w\in \mathfrak{S}_d$, ${\mathbf{b}},{\mathbf{c}}\in J^{\ell,d}$ an element $T_w^{{\mathbf{b}},{\mathbf{c}}}\in e({\mathbf{b}}){\operatorname{H}_{d,\bfQ}(q)}e({\mathbf{c}})$. We define this element using the diagrammatic calculus as follows: Consider the permutation $w$ and draw a permutation diagram using black strands representing $w$ with a minimal possible number of crossings. Then we create the sequence ${\mathbf{b}}$ (resp. ${\mathbf{c}}$) on the top (resp. bottom) of the diagram by adding accordingly $\ell$ red points on the top and $\ell$ red points on the bottom. Finally we join the red points on the top with the red points on the bottom by red strands in such a way that there are no intersections between red strands and such that a red strand intersects each black strand at most once. The resulting element is denoted $T_w^{{\mathbf{b}},{\mathbf{c}}}$. By construction it depends on several choices, but we just fix such a choice for any triple $({\mathbf{b}},{\mathbf{c}}, w)$. \begin{ex} Let $d=3$, $\ell=2$, ${\mathbf{b}}=(1,1,0,0,0)$, ${\mathbf{c}}=(0,1,0,0,1)$, and $w=s_1s_2s_1$. Then there are precisely two choices for the permutation diagram of $w$, we displayed one on the left in \eqref{threediags}. The diagram $T_w^{{\mathbf{b}},{\mathbf{c}}}$ involves again a choice. Two of the possible choices are as follows \begin{eqnarray} \label{threediags} \begin{tikzpicture}[scale=0.6, thick] \draw (-3,3) +(1,-1) -- +(-1,1); \draw (-3,3) +(0,-1) .. controls (-4,3) .. +(0,1); \draw (-3,3) +(-1,-1) -- +(1,1); \end{tikzpicture}\quad&&\quad \begin{tikzpicture}[scale=0.6, thick] \draw (-3,3) +(1,-1) .. controls (-3.3,3) .. +(-1,1); \draw (-3,3) +(0,-1) .. controls (-3.7,3) .. +(0,1); \draw (-3,3) +(-1,-1) -- +(1,1); \draw[wei] (-3,3) +(-2,1) .. controls +(0,-0.8) .. +(-0.5, -1) ; \draw[wei] (-3.5,3)+(-1,1) .. controls +(2,-0.5) .. 
+(2.5,-1); \end{tikzpicture} \quad \begin{tikzpicture}[scale=0.6, thick] \draw (-3,3) +(1,-1) -- +(-1,1); \draw (-3,3) +(0,-1) .. controls (-4,3) .. +(0,1); \draw (-3,3) +(-1,-1) -- +(1,1); \draw[wei] (-3,3)+(-2,1) .. controls +(0,-0.6) .. +(-0.5,-1); \draw[wei] (-3.5,3)+(-1,1) .. controls +(0,-0.8) .. +(2,-1); \end{tikzpicture} \end{eqnarray} \end{ex} Let $\operatorname{H}_{d,\bfQ}(q)^{\leqslant w}$ be the span of the elements of the form $T_y^{{\mathbf{b}},{\mathbf{c}}}f$, where ${\mathbf{b}},{\mathbf{c}}\in J^{\ell,d}$, $y\leqslant w$ and $f\in\operatorname{P}_{d,\bfQ}({\mathbf{c}})$. Define $\operatorname{H}_{d,\bfQ}(q)^{<w}$ similarly. \begin{lem} \begin{enumerate} \item \label{1} The subspaces $\operatorname{H}_{d,\bfQ}(q)^{\leqslant w}$ and $\operatorname{H}_{d,\bfQ}(q)^{<w}$ of $\operatorname{H}_{d,\bfQ}(q)$ are independent of the choices of the elements $T_x^{{\mathbf{b}},{\mathbf{c}}}$. \item \label{2} The different choices of $T_w^{{\mathbf{b}},{\mathbf{c}}}$ attached to $w, {\mathbf{c}}, {\mathbf{b}}$ by the construction above are equal modulo $\operatorname{H}_{d,\bfQ}(q)^{<w}$. \end{enumerate} \end{lem} \begin{proof} We prove both parts simultaneously by induction on the length of $w$. Assume ${l}(w)=0$. In this case the definition of the element $T_w^{{\mathbf{b}},{\mathbf{c}}}$ is independent of any choice and there is nothing to show. Assume now that the statements are true for all $w$ such that ${l}(w)<n$ and let us prove them for ${l}(w)=n$. By definition, the vector space $\operatorname{H}_{d,\bfQ}(q)^{<w}$ is spanned by $\operatorname{H}_{d,\bfQ}(q)^{\leqslant z}$ for all $z<w$. By the induction hypothesis, the vector spaces $\operatorname{H}_{d,\bfQ}(q)^{\leqslant z}$ are independent of the choices of $T^{{\mathbf{b}},{\mathbf{c}}}_x$ such that $x\leqslant z$. Thus the vector space $\operatorname{H}_{d,\bfQ}(q)^{<w}$ is independent of the choices of $T^{{\mathbf{b}},{\mathbf{c}}}_y$ where $y < w$. This proves the second part of~\ref{1}.). 
To prove~\ref{2}.), consider two different choices for the diagram $T_w^{{\mathbf{b}},{\mathbf{c}}}$. Then one of them can be obtained from the other one by applying relations in Definition~\ref{def-extaffHeck_diag}, which might create additional terms, but they are all contained in $\operatorname{H}_{d,\bfQ}(q)^{<w}$ hence~\ref{2}.) holds. Now~\ref{1}.) follows from~\ref{2}.) and the part of~\ref{1}.) which we already established. \end{proof} To give a basis of ${\operatorname{H}_{d,\bfQ}(q)}$, it is convenient to introduce some new elements $x_1,\ldots,x_d\in {\operatorname{H}_{d,\bfQ}(q)}$. Set $x_r=\sum_{{\mathbf{c}}\in J^{\ell,d}}X_{r_{\mathbf{c}}}e({\mathbf{c}})$, where $r_{\mathbf{c}}$ is the number of the position in ${\mathbf{c}}$ where the colour black appears for the $r$th time (counted from the left). Then the following statement is obvious from the relations \eqref{lHecke7}-\eqref{lHecke5new}. \begin{lem} The elements $x_1,\ldots,x_d$ pairwise commute and are invertible. \end{lem} The following provides two bases of ${\operatorname{H}_{d,\bfQ}(q)}$. \begin{prop} \label{prop-basis-Hdl} For each ${\mathbf{b}},{\mathbf{c}}\in J^{\ell,d}$, the following sets \begin{eqnarray*} \{T_w^{{\mathbf{b}},{\mathbf{c}}}x_1^{m_1}\ldots x_d^{m_d}\mid w\in \mathfrak{S}_d,m_i\in \mathbb{Z}\}, && \{x_1^{m_1}\ldots x_d^{m_d}T_w^{{\mathbf{b}},{\mathbf{c}}}\mid w\in \mathfrak{S}_d,m_i\in \mathbb{Z}\} \end{eqnarray*} each form a basis of $e({\mathbf{b}}){\operatorname{H}_{d,\bfQ}(q)} e({\mathbf{c}})$. \end{prop} \begin{proof} It is clear from the defining relations of ${\operatorname{H}_{d,\bfQ}(q)}$ that the asserted basis elements span $e({\mathbf{b}}){\operatorname{H}_{d,\bfQ}(q)} e({\mathbf{c}})$. Indeed, we can use relations \eqref{Hecke-diag-1} - \eqref{l-Hecke-diag-4} to write each diagram as a linear combination of diagrams where all dots are above (resp. below) all intersections and such that two strands intersect at most twice. 
To prove the linear independence, it suffices to show that the elements act by linearly independent operators on the polynomial representation \eqref{polrep}. The element $T_w^{{\mathbf{b}},{\mathbf{c}}}$ takes $\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]e({\mathbf{c}})$ to $\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]e({\mathbf{b}})$ by sending $fe({\mathbf{c}})$ to $\sum_{y\in\mathfrak{S}_d,y\leqslant w}C_yy(f)e({\mathbf{b}})$, where the $C_y \in\bfk(x_1,\ldots,x_d)$ are rational functions such that $C_w\ne 0$. Since $y\in\mathfrak{S}_d$ acts on the polynomial $f$ by the obvious permutation $y(f)$ of variables, an expression of the form $\sum_{w}a_w T_w^{{\mathbf{b}},{\mathbf{c}}}$ or $\sum_{w} T_w^{{\mathbf{b}},{\mathbf{c}}}a_w$, where $a_w\in\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]$, $w\in\mathfrak{S}_d$, can only act by zero if each $a_w$ is zero. This implies the linear independence. \end{proof} \begin{rk} \label{rk-special} In the special case $\ell=0$ these bases are the standard bases of the affine Hecke algebra from \cite[Prop.~3.7]{Lus89}, see also \cite[Cor.~3.4]{MS}. \end{rk} \subsection{The centre of ${\operatorname{H}_{d,\bfQ}(q)}$} \label{subs-centre-lHecke} Consider the element $\omega=(1,\ldots,1,0,\ldots,0)\in J^{\ell,d}$. This means that $\omega$ contains the colour red $\ell$ times followed by the colour black $d$ times. The following lemma shows that the affine Hecke algebra $\operatorname{H}_{d}(q)$ from Remark~\ref{ordinaryHecke} can be realised as an idempotent truncation of the higher level affine Hecke algebra. In particular, our diagrams indeed generalize the ordinary permutation diagrams. \begin{lem} \label{lem-Hd_in_Hdl} There is an isomorphism of algebras $\operatorname{H}_{d}(q)\simeq e(\omega){\operatorname{H}_{d,\bfQ}(q)}e(\omega)$. \end{lem} \begin{proof} There is an obvious algebra homomorphism $\operatorname{H}_{d}(q)\to e(\omega){\operatorname{H}_{d,\bfQ}(q)}e(\omega)$ that adds $\ell$ red strands to the left of the diagram. 
It is an isomorphism, because it sends the standard basis (see Remark~\ref{rk-special}) of the affine Hecke algebra $\operatorname{H}_{d}(q)$ to the basis of $e(\omega){\operatorname{H}_{d,\bfQ}(q)} e(\omega)$ from Proposition~\ref{prop-basis-Hdl}. \end{proof} The group $\mathfrak{S}_d$ acts on the Laurent polynomial ring $\operatorname{P}_{d,\bfQ}({\mathbf{c}})$ for each ${\mathbf{c}}\in J^{\ell,d}$. Moreover, the group $\mathfrak{S}_{\ell+d}$ acts on $\operatorname{P}_{d,\bfQ}$ such that the permutation $w\in\mathfrak{S}_{\ell+d}$ sends the element $f\in \operatorname{P}_{d,\bfQ}({{\mathbf{c}}})$ to $w(f)\in\operatorname{P}_{d,\bfQ}({w({\mathbf{c}})})$. For each ${\mathbf{c}}\in J^{\ell,d}$, the restriction of the projection $\operatorname{P}_{d,\bfQ}\to \operatorname{P}_{d,\bfQ}({\mathbf{c}})$ to $\operatorname{P}_{d,\bfQ}^{\mathfrak{S}_{\ell+d}}$ yields an isomorphism $\operatorname{P}_{d,\bfQ}^{\mathfrak{S}_{\ell+d}}\simeq \operatorname{P}_{d,\bfQ}({\mathbf{c}})^{\mathfrak{S}_{d}}$ of vector spaces. By identifying $\operatorname{P}_{d,\bfQ}({\mathbf{c}})=\bfk[x_1^{\pm 1},\ldots,x_d^{\pm 1}]e({\mathbf{c}})$, we can view $\operatorname{P}_{d,\bfQ}$ as a subalgebra of ${\operatorname{H}_{d,\bfQ}(q)}$ containing the algebra $\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]$ embedded diagonally. Moreover, the subalgebra $\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]^{\mathfrak{S}_d}$ coincides with $\operatorname{P}_{d,\bfQ}^{\mathfrak{S}_{\ell+d}}$. The centre $Z({\operatorname{H}_{d,\bfQ}(q)})$ of ${\operatorname{H}_{d,\bfQ}(q)}$ is then given as follows. \begin{prop} \label{lem-cen-Hdl} We have $Z({\operatorname{H}_{d,\bfQ}(q)})=\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]^{\mathfrak{S}_d}=\operatorname{P}_{d,\bfQ}^{\mathfrak{S}_{\ell+d}}$. \end{prop} \begin{proof} It is clear that $\operatorname{P}_{d,\bfQ}^{\mathfrak{S}_{\ell+d}}\subset Z({\operatorname{H}_{d,\bfQ}(q)})$. It suffices to show that the centre contains not more elements. Let $z\in Z({\operatorname{H}_{d,\bfQ}(q)})$. 
Write $z=\sum_{{\mathbf{c}}\in J^{\ell,d}}z_{\mathbf{c}}$, where $z_{\mathbf{c}}=ze({\mathbf{c}})$. Then $z_\omega\in Z(e(\omega){\operatorname{H}_{d,\bfQ}(q)}e(\omega))$. Since the centre of the affine Hecke algebra is formed by symmetric Laurent polynomials, \cite[Prop.~3.11]{Lus89}, there exists, by Lemma~\ref{lem-Hd_in_Hdl}, some $f\in \operatorname{P}_{d,\bfQ}(\omega)^{\mathfrak{S}_d}$ such that $z_\omega=f$. To complete the proof, it is enough to show that $z_{w(\omega)}=w(f)\in \operatorname{P}_{d,\bfQ}(w(\omega))$ for each $w\in \mathfrak{S}_{\ell+d}$. Let $T=T_{\mathrm{Id}}^{w(\omega),\omega}$. Since $z$ commutes with $T$, we must have $z_{w(\omega)}T=Tz_{\omega}$. On the other hand we have $Tz_{\omega}=Tf=w(f)T$. This implies $z_{w(\omega)}=w(f)$ because the map $e(w(\omega)){\operatorname{H}_{d,\bfQ}(q)}e(w(\omega))\longrightarrow e(w(\omega)){\operatorname{H}_{d,\bfQ}(q)}e(\omega),$ $y\longmapsto yT $ is injective by Proposition~\ref{prop-basis-Hdl}. \end{proof} \subsection{Completion} \label{subs-compl-Hecke} For our main result we have to complete the higher level affine Hecke algebra. We first recall the completion $\widehat{\operatorname{H}}_{\bfa}(q)$ of $\operatorname{H}_{d}(q)$ from \cite[Sec.~3.3]{MS} at a maximal ideal of $Z(\operatorname{H}_{d}(q))$. Recall that $\bfk$ is assumed to be algebraically closed. For each $\mathbf{a}=(a_1,\ldots,a_d)\in(\bfk^*)^d$ consider the central character $\chi_\mathbf{a}\colon Z(\operatorname{H}_{d}(q))=\bfk[X_1^{\pm 1},\ldots,X_d^{\pm 1}]^{\mathfrak{S}_d}\to \bfk$ obtained by restriction of the algebra homomorphism which sends $X_1,\ldots,X_d$ to $a_1,\ldots,a_d$ respectively. Two such central characters $\chi_\mathbf{a}$ and $\chi_{\mathbf{a}'}$ coincide if and only if $\mathbf{a}'$ is a permutation of $\mathbf{a}$. Fix now $\mathbf{a}$. 
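For orientation, here is the smallest nontrivial instance of these central characters; the computation is immediate from the definition. \begin{ex} Let $d=2$ and $\mathbf{a}=(a_1,a_2)\in(\bfk^*)^2$. Since $\bfk[X_1^{\pm 1},X_2^{\pm 1}]^{\mathfrak{S}_2}$ is generated by $X_1+X_2$ and $(X_1X_2)^{\pm 1}$, the character $\chi_{\mathbf{a}}$ is determined by $\chi_{\mathbf{a}}(X_1+X_2)=a_1+a_2$ and $\chi_{\mathbf{a}}((X_1X_2)^{\pm 1})=(a_1a_2)^{\pm 1}$. In particular $\chi_{(a_1,a_2)}=\chi_{(a_2,a_1)}$, in accordance with the fact that $\chi_{\mathbf{a}}$ only depends on $\mathbf{a}$ up to permutation. \end{ex}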
\begin{df} We denote by $\widehat{\operatorname{H}}_{\bfa}(q)$ the completion of $\operatorname{H}_{d}(q)$ with respect to the ideal $\mathfrak{m}_{\mathbf{a}}$ of $\operatorname{H}_{d}(q)$ generated by $\ker \chi_\mathbf{a}$. \end{df} Each finite dimensional $\widehat{\operatorname{H}}_{\bfa}(q)$-module decomposes into its generalised eigen\-spaces $M=\bigoplus_{{\mathbf{i}}\in \mathfrak{S}_d\mathbf{a}}M_{{\mathbf{i}}}$, for the $\bfk[X_1^{\pm 1},\ldots,X_d^{\pm 1}]$-action, where \begin{eqnarray} \label{dec} M_{{\mathbf{i}}}&=&\{m\in M\mid\exists N\in\mathbb{Z}_{\geqslant 0} \mbox{ such that } (X_r-i_r)^Nm=0~\forall r\}. \end{eqnarray} For each ${\mathbf{i}}\in\mathfrak{S}_d\mathbf{a}$, there is an idempotent $e({\mathbf{i}})\in \widehat{\operatorname{H}}_{\bfa}(q)$ which projects onto $M_{{\mathbf{i}}}$ when applied to $M$. Obviously, $1=\sum_{{\mathbf{i}}}e({\mathbf{i}})$ holds. \begin{df} By a {\it topological basis} or {\it Schauder basis} of a topological $\bfk$-vector space $V$ we mean a sequence $v_i$, $i\in\mathbb{Z}_{\geqslant 0}$ of vectors in $V$ such that every element of $V$ can be expressed uniquely as a convergent series of the form $\sum_{i\in\mathbb{Z}_{\geqslant 0}}a_iv_i$ with $a_i\in\bfk$. \end{df} We consider now $\widehat{\operatorname{H}}_{\bfa}(q)$ with its $\mathfrak{m}_{\mathbf{a}}$-adic topology. It comes with the usual $\mathfrak{m}_{\mathbf{a}}$-adic order function, namely the order of an element $f$ is the minimal number $j$ such that $f$ is not in $\mathfrak{m}_{\mathbf{a}}^j$. This defines a norm on $\widehat{\operatorname{H}}_{\bfa}(q)$ and hence we can talk about topological bases, see \cite[VII]{ZS} for more details.
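The decomposition \eqref{dec} is the familiar simultaneous generalised eigenspace decomposition from linear algebra for the commuting operators $X_1,\ldots,X_d$. The following sketch (in sympy, with an arbitrary example matrix of our choosing; it is an illustration only, not part of the construction) shows the single-operator case, where $M_i=\ker (X-i)^N$ for $N$ large enough:

```python
import sympy as sp

# One operator X on M = k^3, with generalised eigenvalue 2 (a Jordan block
# of size 2, so the honest eigenspace is strictly smaller) and eigenvalue 5.
X = sp.Matrix([[2, 1, 0],
               [0, 2, 0],
               [0, 0, 5]])
n = X.rows

def gen_eigenspace(X, i):
    # M_i = ker (X - i)^N; N = dim M always suffices.
    return ((X - i * sp.eye(X.rows)) ** X.rows).nullspace()

M2, M5 = gen_eigenspace(X, 2), gen_eigenspace(X, 5)
assert len(M2) == 2 and len(M5) == 1               # dim M_2 + dim M_5 = dim M
assert len((X - 2 * sp.eye(n)).nullspace()) == 1   # honest eigenspace is smaller
```

The idempotent $e({\mathbf{i}})$ of the completion plays the role of the projection onto the summand $M_{{\mathbf{i}}}$ along the other summands.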
\begin{prop}[{\cite[Lemma~3.8]{MS}}] The following set (viewed as a sequence by picking any total ordering) $$ \left\{T_w(X_1-i_1)^{m_1}\ldots (X_d-i_d)^{m_d}e({\mathbf{i}})\mid w\in \mathfrak{S}_d,m_i\in \mathbb{Z}_{\geqslant 0},{\mathbf{i}}\in \mathfrak{S}_d\mathbf{a}\right\} $$ forms a topological basis of $\widehat{\operatorname{H}}_{\bfa}(q)$. \end{prop} Informally speaking, this means that every element in $\widehat{\operatorname{H}}_{\bfa}(q) e({\mathbf{i}})$ can be written uniquely as a power series in the $(X_r-i_r)$ with coefficients in $\operatorname{H}^{\operatorname{fin}}_d(q)$, see \cite[VII (8)]{ZS} for a precise statement. In particular, $\operatorname{H}_{d}(q)$ is everywhere dense in $\widehat{\operatorname{H}}_{\bfa}(q)$ in the sense of \cite[VII, Lemma 1]{ZS}. \begin{prop}[{\cite[Cor.~3.13]{MS}}] \label{MS} The algebra $\widehat{\operatorname{H}}_{\bfa}(q)$ acts faithfully on \begin{eqnarray*} \widehat{\operatorname{P}}_{\bfa}&=&\bigoplus_{{\mathbf{i}}\in\mathfrak{S}_d\mathbf{a}}\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{i}}). \end{eqnarray*} \end{prop} By Proposition~\ref{lem-cen-Hdl}, the algebra $Z(\operatorname{H}_{d,\bfQ}(q))$ is independent of the level $\ell$ and so we can consider $\chi_\mathbf{a}$ as a central character of $\operatorname{H}_{d,\bfQ}(q)$ as well. Let $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ be the completion of ${\operatorname{H}_{d,\bfQ}(q)}$ with respect to the ideal $\mathfrak{m}_\mathbf{a}$ generated by the kernel of $\chi_\mathbf{a}$ in $\operatorname{H}_{d,\bfQ}(q)$. We have again the decomposition \eqref{dec} for each finite dimensional $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$-module $M$ and an idempotent $e({\mathbf{i}})\in \widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ projecting onto $M_{{\mathbf{i}}}$.
The idempotents $e({\mathbf{i}})$ for ${\mathbf{i}}\in \mathfrak{S}_d\mathbf{a}$ commute with the idempotents $e({\mathbf{c}})$ for ${\mathbf{c}}\in J^{\ell,d}$. Thus we may define idempotents $e({\mathbf{c}},{\mathbf{i}})=e({\mathbf{c}})e({\mathbf{i}})$ in $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$. We have $1=\sum_{{\mathbf{c}},{\mathbf{i}}}e({\mathbf{c}},{\mathbf{i}})$. \begin{prop} \label{prop_basis-lHecke-comp} \begin{enumerate} \item The following set (viewed as a sequence by picking any total ordering) \begin{equation*} \left\{T_w^{{\mathbf{b}},{\mathbf{c}}}(x_1-i_1)^{m_1}\ldots (x_d-i_d)^{m_d}e({\mathbf{c}},{\mathbf{i}})\left| \begin{array}[c]{ll} w\in \mathfrak{S}_d,&m_i\in \mathbb{Z}_{\geqslant 0},\\ {\mathbf{b}},{\mathbf{c}}\in J^{\ell,d},&{\mathbf{i}}\in \mathfrak{S}_d\mathbf{a} \end{array} \right.\right\} \end{equation*} forms a topological basis of $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$. \item \label{prop_rep-lHecke-comp} The algebra $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ acts (extending the actions from Propositions~\ref{prop-pol_rep_lH} and~\ref{MS}) faithfully on \begin{eqnarray*} \widehat{\operatorname{P}}_{\bfa,\bfQ}=\bigoplus_{{\mathbf{c}}\in J^{\ell,d},{\mathbf{i}}\in \mathfrak{S}_d\mathbf{a}}\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{c}},{\mathbf{i}}), \end{eqnarray*} where $e({\mathbf{c}},{\mathbf{i}})$ is just a formal symbol on which $e({\mathbf{c}},{\mathbf{i}})$ acts as the identity and all other $e({\mathbf{c}}',{\mathbf{j}})$ act as zero. \end{enumerate} \end{prop} \begin{proof} All statements follow directly from the definitions except the faithfulness. The action is such that $e({\mathbf{c}},{\mathbf{i}})=e({\mathbf{c}})e({\mathbf{i}})$ acts as the projector onto the direct summand $\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{c}},{\mathbf{i}})$.
We then write $\widehat{\operatorname{P}}_{\bfa,\bfQ}=\bigoplus_{{\mathbf{c}}\in J^{\ell,d}} P({\mathbf{c}})$, where $P({\mathbf{c}})=\bigoplus_{{\mathbf{i}}\in \mathfrak{S}_d\mathbf{a} }\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{c}},{\mathbf{i}})$. Then the subalgebra $\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{i}})\subset\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ acts just by the obvious multiplication on $\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{i}})$ and by zero on the other summands. There is an action of $\mathfrak{S}_d$ on $P({\mathbf{c}})$ such that $w\in \mathfrak{S}_d$ sends $f(x_1-i_1,\hdots,x_d-i_d)e({\mathbf{c}},{\mathbf{i}})$ to $f(x_{w(1)}-i_1,\hdots,x_{w(d)}-i_d)e({\mathbf{c}},w({\mathbf{i}}))$ where $w({\mathbf{i}})=(i_{w^{-1}(1)},\hdots, i_{w^{-1}(d)})$. The action of $\mathfrak{S}_d$ on $P({\mathbf{c}})$ can therefore be extended to an action on $$ \bigoplus_{{\mathbf{i}}\in \mathfrak{S}_d\mathbf{a}}\bfk((x_1-i_1,\ldots,x_d-i_d))e({\mathbf{c}},{\mathbf{i}}). $$ Then the element $T_w^{{\mathbf{b}},{\mathbf{c}}}$ takes $P({\mathbf{c}})$ to $P({\mathbf{b}})$ and sends an element $fe({\mathbf{c}})$, $$ f\in \bigoplus_{{\mathbf{i}}\in \mathfrak{S}_d\mathbf{a}}\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{c}},{\mathbf{i}}), $$ to an element of the form $\sum_{y\in\mathfrak{S}_d,y\leqslant w}y(\varphi_yf)e({\mathbf{b}})$, where we have $$\varphi_y\in \bigoplus_{{\mathbf{i}}\in\mathfrak{S}_d\mathbf{a}}\bfk((x_1-i_1,\hdots,x_d-i_d))e({\mathbf{c}},{\mathbf{i}}) $$ and $\varphi_w\ne 0$. This implies that an expression of the form $\sum_{w\in \mathfrak{S}_d}T^{{\mathbf{b}},{\mathbf{c}}}_wa_w$ with $a_w\in\bfk[[x_1-i_1,\ldots,x_d-i_d]]e({\mathbf{i}})$ acts on $\widehat{\operatorname{P}}_{\bfa,\bfQ}$ by zero only if each $a_w$ is zero. This means exactly that the set from the statement of Proposition~\ref{prop_basis-lHecke-comp} acts on $\widehat{\operatorname{P}}_{\bfa,\bfQ}$ by linearly independent operators.
It is clear that this set spans the algebra $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ in the topological sense. Hence it forms a topological basis of $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$, and the representation $\widehat{\operatorname{P}}_{\bfa,\bfQ}$ is faithful. \end{proof} \section{Affine KLR and tensor product algebras ${R}_{\nu,\bfQ}(\Gamma)$} \label{sec-KLR_tens} The next goal is to identify our higher level Hecke algebras, after completion, with Webster's tensor product algebras, \cite{Webster}, attached to a type $A$ quiver depending on $q$ and $\mathbf{Q}$. Let $J$ be as in Setup~\ref{set-up}. \subsection{Tensor product algebras} Let $\Gamma=(I,A)$ be a quiver without loops with set of vertices $I$ and set of arrows $A$. We call elements in $I$ {\it labels} since they will be used later as black and red labels. Consider the set $I_{\operatorname{col}}=J\times I$ with the two obvious projections $c\colon I_{\operatorname{col}}\to J$ and $\gamma\colon I_{\operatorname{col}}\to I$ which forget the label and the colour, respectively. Obviously, elements $z\in I_{\operatorname{col}}$ are determined by their colour $c(z)$ and their label $\gamma(z)$, thus we call them {\it coloured labels}. We call $z$ {\it black} if $c(z)=0$ and {\it red} otherwise. One can also think of $I_{\operatorname{col}}$ as two copies of $I$, one copy coloured in black and the other copy coloured in red. We fix an $\ell$-tuple $\mathbf{Q}=(Q_1,\ldots,Q_\ell)\in I^\ell$. \begin{df} Let $\mathbf{\nu}\in I^d$. Then $I_{\operatorname{col}}(\nu,\bfQ)$ denotes the set of $(\ell+d)$-tuples ${\mathbf{t}}=(t_1,\cdots,t_{\ell+d})\in I_{\operatorname{col}}^{\ell+d}$ such that \begin{itemize} \item $\sum_{i=1}^{\ell+d}c(t_i)=\ell$ (i.e., ${\mathbf{t}}$ contains $d$ black elements and $\ell$ red elements), \item the labels of black elements in ${\mathbf{t}}$ form a permutation of $\mathbf{\nu}$, \item the labels of the red elements of ${\mathbf{t}}$ are $Q_1,\ldots, Q_\ell$ (in this order).
\end{itemize} \end{df} \begin{df} A $\Gamma$-$(\ell,d)$-diagram is an $(\ell,d)$-diagram in the sense of Definition~\ref{diag} for the set $I_b=I_r=I$ of vertices of $\Gamma$. It is of type $(\mathbf{\nu},\mathbf{Q})$ if the sequence of coloured labels is in $I_{\operatorname{col}}(\nu,\bfQ)$. \end{df} As before, the labels are read from left to right at the bottom of the diagram. Since red strands never cross, we could read off the type (although possibly realized via a different sequence in the same orbit) at any horizontal slice of the diagram instead of at the bottom. \begin{ex} Take $\mathbf{\nu}=(i,i,j)\in I^3$, $\mathbf{Q}=(i,k)\in I^2$ (in particular, we have $d=3$ and $\ell=2$). Then the tuple ${\mathbf{t}}=((i,1),(j,0),(i,0),(i,0),(k,1))$ is an element of $I_{\operatorname{col}}(\nu,\bfQ)$. The labels of black elements in ${\mathbf{t}}$ are $(j,i,i)$, which is a permutation of $\mathbf{\nu}$. The labels of red elements in ${\mathbf{t}}$ are $(i,k)$, which coincides with $\mathbf{Q}$. If we forget the labels in ${\mathbf{t}}$, we get the tuple of colours $c({\mathbf{t}})=(1,0,0,0,1)\in J^{2,3}$.
\end{ex} \begin{figure} \begin{eqnarray*} \begin{tikzpicture}[scale=0.6] \draw[thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start] {$i$}; \draw[thick](-4,0) +(1,-1) -- +(-1,1) node[below,at start] {$j$}; \fill (-4.5,.5) circle (5pt); \node at (-2,0){=}; \draw[thick](0,0) +(-1,-1) -- +(1,1) node[below,at start] {$i$}; \draw[thick](0,0) +(1,-1) -- +(-1,1) node[below,at start] {$j$}; \fill (.5,-.5) circle (5pt); \node at (4,0){unless $i=j$}; \end{tikzpicture} \end{eqnarray*} \begin{eqnarray*} \begin{tikzpicture}[scale=0.6,thick] \draw[thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start] {$i$}; \draw[thick](-4,0) +(1,-1) -- +(-1,1) node[below,at start] {$i$}; \fill (-4.5,.5) circle (5pt); \node at (-2,0){=}; \draw[thick](0,0) +(-1,-1) -- +(1,1) node[below,at start] {$i$}; \draw[thick](0,0) +(1,-1) -- +(-1,1) node[below,at start] {$i$}; \fill (.5,-.5) circle (5pt); \node at (2,0){+}; \draw[thick](4,0) +(-1,-1) -- +(-1,1) node[below,at start] {$i$}; \draw[thick](4,0) +(0,-1) -- +(0,1) node[below,at start] {$i$}; \end{tikzpicture} && \begin{tikzpicture}[scale=0.6,thick] \draw[thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start] {$i$}; \draw[thick](-4,0) +(1,-1) -- +(-1,1) node[below,at start] {$i$}; \fill (-4.5,-.5) circle (5pt); \node at (-2,0){=}; \draw[thick](0,0) +(-1,-1) -- +(1,1) node[below,at start] {$i$}; \draw[thick](0,0) +(1,-1) -- +(-1,1) node[below,at start] {$i$}; \fill (.5,.5) circle (5pt); \node at (2,0){+}; \draw[thick](4,0) +(-1,-1) -- +(-1,1) node[below,at start] {$i$}; \draw[thick](4,0) +(0,-1) -- +(0,1) node[below,at start] {$i$}; \end{tikzpicture} \end{eqnarray*} \begin{eqnarray*} \begin{tikzpicture}[scale=0.6, thick] \draw (-4,0) +(0,-1) .. controls (-2.4,0) .. +(0,1) node[below,at start]{$i$}; \draw (-2.4,0) +(0,-1) .. controls (-4,0) .. +(0,1) node[below,at start]{$i$}; \node at (-1.5,0){$=$}; \node at (-0.5,0){$0$}; \node at (0.7,0){and}; \draw (2,0) +(0,-1) .. controls (3.6,0) .. +(0,1) node[below,at start]{$i$}; \draw (3.6,0) +(0,-1) .. 
controls (2,0) .. +(0,1) node[below,at start]{$j$} ; \node at (4.5,0){$=$}; \draw (7.8,0) +(0,-1) -- +(0,1) node[below,at start]{$j$}; \draw (7,0) +(0,-1) -- +(0,1) node[below,at start]{$i$}; \node[inner xsep=10pt,fill=white,draw,inner ysep=7pt] at (7.4,0) {$\mathcal{Q}_{ij}(y_1,y_2)$}; \node at (13.5,0) {if $i\ne j$}; \end{tikzpicture} \end{eqnarray*} \begin{eqnarray*} \begin{tikzpicture}[thick,scale=0.6] \draw (-3,0) +(1,-1) -- +(-1,1) node[below,at start]{$k$}; \draw (-3,0) +(-1,-1) -- +(1,1) node[below,at start]{$i$}; \draw (-3,0) +(0,-1) .. controls (-4,0) .. +(0,1) node[below,at start]{$j$}; \node at (-1,0) {=}; \draw (1,0) +(1,-1) -- +(-1,1) node[below,at start]{$k$}; \draw (1,0) +(-1,-1) -- +(1,1) node[below,at start]{$i$}; \draw (1,0) +(0,-1) .. controls (2,0) .. +(0,1) node[below,at start]{$j$}; \node at (5,0) {unless $i=k\neq j$}; \end{tikzpicture} \end{eqnarray*} \begin{eqnarray*} \begin{tikzpicture}[thick,scale=0.6] \draw (-3,0) +(1,-1) -- +(-1,1) node[below,at start]{$i$}; \draw (-3,0) +(-1,-1) -- +(1,1) node[below,at start]{$i$}; \draw (-3,0) +(0,-1) .. controls (-4,0) .. +(0,1) node[below,at start]{$j$}; \node at (-1,0) {=}; \draw (1,0) +(1,-1) -- +(-1,1) node[below,at start]{$i$}; \draw (1,0) +(-1,-1) -- +(1,1) node[below,at start]{$i$}; \draw (1,0) +(0,-1) .. controls (2,0) .. +(0,1) node[below,at start]{$j$}; \node at (3,0){$+$}; \draw (6.2,0)+(1,-1) -- +(1,1) node[below,at start]{$i$}; \draw (6.2,0)+(-1,-1) -- +(-1,1) node[below,at start]{$i$}; \draw (6.2,0)+(0,-1) -- +(0,1) node[below,at start]{$j$}; \node[inner ysep=8pt,inner xsep=5pt,fill=white,draw,scale=.6] at (6.2,0){$\displaystyle \frac{\mathcal{Q}_{ij}(y_3,y_2)-\mathcal{Q}_{ij}(y_1,y_2)}{y_3-y_1}$}; \node at (12,0) {if $i\ne j$}; \end{tikzpicture} \end{eqnarray*} \caption{Tensor product algebra relations I: The KLR relations} \label{defKLR} \end{figure} To define the tensor product algebras we need one more definition. 
For each $i,j\in I$ we denote by $h_{i,j}$ the number of arrows in the quiver $\Gamma$ going from $i$ to $j$, and define for $i\ne j$ the polynomials $$ \mathcal{Q}_{ij}(u,v)=(u-v)^{h_{i,j}}(v-u)^{h_{j,i}}. $$ \begin{df} \label{tpalg} Fix a $d$-tuple $\mathbf{\nu}\in I^d$. The \emph{tensor product algebra} ${R}_{\nu,\bfQ}(\Gamma)$ (or simply ${R}_{\nu,\bfQ}$) is the induced algebra structure on the vector space spanned by all $\Gamma$-$(\ell,d)$-diagrams of type $(\mathbf{\nu},\mathbf{Q})$ modulo the {\it tensor product algebra relations of KLR type} from Figure~\ref{defKLR} and the {\it tensor product algebra relations of the second type} from Figure~\ref{reltensor2}. \end{df} \begin{rk} The special case where we only allow black strands (that is, $\ell=0$) is the KLR algebra ${R}_{\nu}$ originally introduced in \cite{KL} and \cite{Rou2KM}. The following elements (defined for ${\mathbf{i}}=(i_1,\cdots,i_d)\in I^\mathbf{\nu}$, $i\in[1;d]$ and $r\in[1;d-1]$) $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-.7,.25) {$e({\mathbf{i}})=$}; \draw (0,0) --(0,.5) node[below,at start]{$i_1$}; \draw (.4,0) --(.4,.5) node[below,at start]{$i_2$}; \node at (.7,.25) {$\cdots$}; \draw (1,0) --(1,.5) node[below,at start]{$i_i$}; \node at (1.3,.25) {$\cdots$}; \draw (1.6,0) --(1.6,.5) node[below,at start]{$i_{d-1}$}; \draw (2,0) --(2,.5) node[below,at start]{$i_d$}; } $$ and $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-.9,.25) {$y_ie({\mathbf{i}})=$}; \draw (0,0) --(0,.5) node[below,at start]{$i_1$}; \draw (.4,0) --(.4,.5) node[below,at start]{$i_2$}; \node at (.7,.25) {$\cdots$}; \draw (1,0) --(1,.5) node[below,at start]{$i_i$}; \fill (1,.25) circle (1pt); \node at (1.3,.25) {$\cdots$}; \draw (1.6,0) --(1.6,.5) node[below,at start]{$i_{d-1}$}; \draw (2,0) --(2,.5) node[below,at start]{$i_d$}; } $$ and $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-.67,.25) {$\psi_re({\mathbf{i}})=$}; \draw (0,0) --(0,.5) node[below,at start]{$i_1$}; \node at (.3,.25) {$\cdots$};
\draw (.6,0) --(.6,.5) node[below,at start]{$i_{r-1}$}; \draw (.9,0) --(1.3,.5) node[below,at start]{$i_r$}; \draw (1.3,0) --(.9,.5) node[below,at start]{$i_{r+1}$}; \draw (1.6,0) --(1.6,.5) node[below,at start]{$i_{r+2}$}; \node at (1.9,.25) {$\cdots$}; \draw (2.2,0) --(2.2,.5) node[below,at start]{$i_d$}; } $$ generate the algebra, see \cite{KL}, \cite{Rou2KM}. \end{rk} Now, for ${\mathbf{i}}\in I_{\operatorname{col}}(\nu,\bfQ)$, $r\in[1;\ell+d-1]$, $j\in [1;\ell+d]$ we define more generally elements $e({\mathbf{i}})$, $\psi_re({\mathbf{i}})$, $Y_je({\mathbf{i}})$ that will generate the algebra ${R}_{\nu,\bfQ}$. \begin{df} Let $e({\mathbf{i}})\in {R}_{\nu,\bfQ}$ be the idempotent given by the diagram with only vertical strands with colours and labels determined by the sequence ${\mathbf{i}}$. Let $Y_je({\mathbf{i}})$ be the same diagram with additionally a dot on the strand number $j$ (counting from the left) in case $i_j$ is black, and set $Y_je({\mathbf{i}})=0$ if $i_j$ is red. Finally, let $\psi_re({\mathbf{i}})$ be the same diagram as $e({\mathbf{i}})$ except that the $r$-th and $(r+1)$-th strands intersect once, provided that $i_r$ and $i_{r+1}$ are not both red, and set $\psi_re({\mathbf{i}})=0$ otherwise. \end{df} \begin{ex} For example, for ${\mathbf{i}}=((i,1),(j,0),(i,0),(i,0),(k,1))$, we have $$ \begin{tikzpicture}[thick, scale=0.4] \node at (2.8,-.2){$ \Large{e({\mathbf{i}})\;\;= }$} ; \draw[wei] (6.5,0) +(-2,-1) -- +(-2,1) node[at start,below]{$i$}; \draw (6.5,0) +(-1,-1) -- +(-1,1) node [at start,below]{$j$}; \draw (6.5,0) +(0,-1) -- +(0,1)node [at start,below]{$i$}; \draw (6.5,0) +(1,-1) -- +(1,1) node[at start,below]{$i$}; \draw[wei] (6.5,0) +(2,-1) -- +(2,1) node[at start,below]{$k$}; \end{tikzpicture} $$ with $Y_re({\mathbf{i}})=0$ for $r=1$ and $r=5$. \end{ex} We preferred here to define the algebras diagrammatically instead of giving a cumbersome definition similar to Definition~\ref{def-extaffHecke_alg}.
Analogously to the situation for the algebra ${\operatorname{H}_{d,\bfQ}(q)}$, it is convenient to introduce the elements $y_1,\ldots,y_d\in {R}_{\nu,\bfQ}$ defined as $y_r=\sum_{{\mathbf{i}}\in I_{\operatorname{col}}(\nu,\bfQ)}Y_{r_{\mathbf{i}}}e({\mathbf{i}})$, with $r_{\mathbf{i}}$ being the number of the position in ${\mathbf{i}}$ where the colour black appears for the $r$-th time (counted from the left). \subsection{Polynomial representation} Let $\it{Pol}_{\nu,\bfQ}$ be the direct sum \begin{eqnarray*} \it{Pol}_{\nu,\bfQ}&=&\bigoplus_{{\mathbf{i}}\in I_{\operatorname{col}}(\nu,\bfQ)}\bfk[y_1,\ldots,y_d]e({\mathbf{i}}), \end{eqnarray*} of polynomial rings, where again $e({\mathbf{i}})$ is just a formal symbol. We can also view $e({\mathbf{i}})$ as a projector in $\it{Pol}_{\nu,\bfQ}$ to the summand $\bfk[y_1,\ldots,y_d]e({\mathbf{i}})$. For $r\in [1;d-1]$ denote by $\partial_r$ the {\it Demazure operator} \begin{eqnarray} \label{defDemazure} \partial_r\colon \bfk[y_1,\ldots,y_d]\to \bfk[y_1,\ldots,y_d], \qquad f\mapsto (f-s_r(f))/(y_r-y_{r+1}). \end{eqnarray} For each $i,j\in I$ such that $i\ne j$, consider the following polynomial $P_{i,j}(u,v)=(u-v)^{h_{i,j}}$. In the case $\ell=0$ we write ${R}_{\nu}$ instead of ${R}_{\nu,\bfQ}$ and $\it{Pol}_{\nu}$ instead of $\it{Pol}_{\nu,\bfQ}$. (The algebra ${R}_{\nu}$ is the usual KLR algebra.) Then we have the following faithful representation, see \cite[Sec.~2.3]{KL}.
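The Demazure operator \eqref{defDemazure} is well defined because $f-s_r(f)$ is always divisible by $y_r-y_{r+1}$; it squares to zero and kills symmetric polynomials. A quick sanity check for $d=2$ (a computational sketch in sympy; the package is our choice and plays no role in the text):

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')

def demazure(f):
    # d_1(f) = (f - s_1(f)) / (y1 - y2), where s_1 swaps y1 and y2;
    # the numerator is divisible by y1 - y2, so the result is a polynomial.
    swapped = f.xreplace({y1: y2, y2: y1})
    return sp.cancel((f - swapped) / (y1 - y2))

f = y1**3 * y2
assert sp.expand(demazure(f) - y1**2*y2 - y1*y2**2) == 0
assert demazure(demazure(f)) == 0            # d_1 squares to zero
assert demazure(y1*y2 + y1 + y2) == 0        # symmetric polynomials are killed
```

These are exactly the properties used implicitly when $\psi_r$ acts by $\partial_r$ in the polynomial representation below.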
\begin{lem} \label{lem-polrep_KLR} The algebra ${R}_{\nu}$ has a faithful representation on $\it{Pol}_{\nu}$ such that \begin{itemize} \item the element $e({\mathbf{i}})$ acts as the projector onto $\bfk[y_1,\ldots,y_d]e({\mathbf{i}})$, \item the element $y_re({\mathbf{i}})$ acts by multiplication with $y_r$ on $\bfk[y_1,\cdots,y_d] e({\mathbf{i}})$ and by zero on all other direct summands of $\it{Pol}_{\nu}$, \item the element $\psi_re({\mathbf{i}})$ acts nontrivially only on $\bfk[y_1,\ldots,y_d]e({\mathbf{i}})$ and there as \begin{eqnarray*} fe({\mathbf{i}})&\mapsto& \begin{cases} \partial_r(f)e({\mathbf{i}}) &\mbox{ if } i_r=i_{r+1},\\ P_{i_r,i_{r+1}}(y_r,y_{r+1})s_r(f)e(s_r({\mathbf{i}})) &\mbox{ else.} \end{cases} \end{eqnarray*} \end{itemize} \end{lem} \begin{figure} \begin{eqnarray*} \begin{tikzpicture}[scale=0.7, thick] \draw (-3,0) +(1,-1) -- +(-1,1) node[at start,below]{$i$}; \draw (-3,0) +(-1,-1) -- +(1,1)node [at start,below]{$j$}; \draw[wei] (-3,0) +(0,-1) .. controls (-4,0) .. +(0,1)node [at start,below]{$k$}; \node at (-1,0) {=}; \draw (1,0) +(1,-1) -- +(-1,1) node[at start,below]{$i$}; \draw (1,0) +(-1,-1) -- +(1,1) node [at start,below]{$j$}; \draw[wei] (1,0) +(0,-1) .. controls (2,0) .. +(0,1)node [at start,below]{$k$}; \node at (2.8,0) {$+ $}; \draw (6.5,0) +(1,-1) -- +(1,1) node[at start,below]{$i$}; \draw (6.5,0) +(-1,-1) -- +(-1,1) node [at start,below]{$j$}; \draw[wei] (6.5,0) +(0,-1) -- +(0,1)node [at start,below]{$k$}; \node at (3.8,-.2){$ \delta_{i,j,k} $} ; \end{tikzpicture} \end{eqnarray*} \begin{eqnarray*} \begin{tikzpicture}[scale=0.7,thick,baseline=2.85cm] \draw[wei] (-3,3) +(1,-1) -- +(-1,1); \draw (-3,3) +(0,-1) .. controls (-4,3) .. +(0,1); \draw (-3,3) +(-1,-1) -- +(1,1); \node at (-1,3) {=}; \draw[wei] (1,3) +(1,-1) -- +(-1,1); \draw (1,3) +(0,-1) .. controls (2,3) ..
+(0,1); \draw (1,3) +(-1,-1) -- +(1,1); \end{tikzpicture} &\quad\quad\quad& \begin{tikzpicture}[scale=0.7,thick,baseline=2.85cm] \draw (-3,3) +(1,-1) -- +(-1,1); \draw (-3,3) +(0,-1) .. controls (-4,3) .. +(0,1); \draw[wei] (-3,3) +(-1,-1) -- +(1,1); \node at (-1,3) {=}; \draw (1,3) +(1,-1) -- +(-1,1); \draw (1,3) +(0,-1) .. controls (2,3) .. +(0,1); \draw[wei] (1,3) +(-1,-1) -- +(1,1); \end{tikzpicture} \end{eqnarray*} \begin{eqnarray*} \begin{tikzpicture}[scale=0.7,thick] \draw(-3,6) +(-1,-1) -- +(1,1); \draw[wei](-3,6) +(1,-1) -- +(-1,1); \fill (-3.5,5.5) circle (5pt); \node at (-1,6) {=}; \draw(1,6) +(-1,-1) -- +(1,1); \draw[wei](1,6) +(1,-1) -- +(-1,1); \fill (1.5,6.5) circle (5pt); \end{tikzpicture} &\quad\quad\quad& \begin{tikzpicture}[scale=0.7,thick] \draw[wei](-3,6) +(-1,-1) -- +(1,1); \draw(-3,6) +(1,-1) -- +(-1,1); \fill (-3.5,6.5) circle (5pt); \node at (-1,6) {=}; \draw[wei](1,6) +(-1,-1) -- +(1,1); \draw(1,6) +(1,-1) -- +(-1,1); \fill (1.5,5.5) circle (5pt); \end{tikzpicture} \end{eqnarray*} \begin{eqnarray*} \begin{tikzpicture}[scale=0.7,thick] \draw (-2.8,0) +(0,-1) .. controls (-1,0) .. +(0,1) node[below,at start]{$i$}; \draw[wei] (-1.2,0) +(0,-1) .. controls (-3,0) .. +(0,1) node[below,at start]{$j$}; \node at (-.3,0) {=}; \draw[wei] (2.8,0) +(0,-1) -- +(0,1) node[below,at start]{$j$}; \draw (1.2,0) +(0,-1) -- +(0,1) node[below,at start]{$i$}; \fill (1.2,0) circle (5pt) node[right=5pt]{$\delta_{i,j}$}; \end{tikzpicture} &\quad\quad\quad& \begin{tikzpicture} [scale=0.7,thick] \draw[wei] (-2.8,0) +(0,-1) .. controls (-1,0) .. +(0,1) node[below,at start]{$j$}; \draw (-1.2,0) +(0,-1) .. controls (-3,0) .. 
+(0,1) node[below,at start]{$i$}; \node at (-.3,0) {=}; \draw (2.8,0) +(0,-1) -- +(0,1) node[below,at start]{$i$}; \draw[wei] (1.2,0) +(0,-1) -- +(0,1) node[below,at start]{$j$}; \fill (2.8,0) circle (5pt) node[right=5pt]{$\delta_{i,j}$}; \end{tikzpicture} \end{eqnarray*} \caption{Tensor product algebra relations II involving red strands} \label{reltensor2} \end{figure} The following may be deduced from \cite[Prop.~4.7,~Prop.~4.9]{SW} (see also \cite[Fig.~3]{SW}). Hereby $\it{Pol}_{\nu,\bfQ}$ is realized as a subring of $\bigoplus_{{\mathbf{i}}\in I_{\operatorname{col}}(\nu,\bfQ)}\bfk[Y_1,\ldots,Y_{\ell+d}]e({\mathbf{i}})$ via $ P(y_1,\ldots,y_d)e({\mathbf{i}})\mapsto P(Y_{1_{\mathbf{i}}},\ldots,Y_{d_{\mathbf{i}}})e({\mathbf{i}}). $ \begin{lem} \label{lem-polrep_tenspr} The algebra ${R}_{\nu,\bfQ}$ has a faithful representation on $\it{Pol}_{\nu,\bfQ}$ such that \begin{itemize} \item the element $e({\mathbf{i}})$ acts as the projector onto $\bfk[y_1,\ldots,y_d]e({\mathbf{i}})$, \item the element $y_re({\mathbf{i}})$ acts by multiplication with $y_r$ on $\bfk[y_1,\cdots,y_d] e({\mathbf{i}})$ and by zero on all other direct summands of $\it{Pol}_{\nu,\bfQ}$, \item the element $\psi_re({\mathbf{i}})$ acts nontrivially only on $\bfk[y_1,\ldots,y_d]e({\mathbf{i}})$, where it sends $fe({\mathbf{i}})$ to \end{itemize} \begin{eqnarray*} && \begin{cases} \partial_r(f)e({\mathbf{i}}) &\mbox{\rm if } c(i_r)=c(i_{r+1})=0,~ i_r=i_{r+1},\\ P_{\gamma(i_r),\gamma(i_{r+1})}(Y_r,Y_{r+1})s_r(f)e(s_r({\mathbf{i}})) &\mbox{\rm if } c(i_r)=c(i_{r+1})=0,~ i_r\ne i_{r+1},\\ 0 &\mbox{\rm if } c(i_r)=c(i_{r+1})=1,\\ Y_{r+1}s_r(f)e(s_r({\mathbf{i}})) &\mbox{\rm if } c(i_r)=0,~ c(i_{r+1})=1,~ \gamma(i_r)=\gamma(i_{r+1}),\\ s_r(f)e(s_r({\mathbf{i}})) &\mbox{\rm for all other cases}.\\ \end{cases} \end{eqnarray*} \end{lem} \subsection{Completion} Let $\mathfrak{m}$ be the ideal in $\bfk[y_1,\ldots,y_d]$ generated by all $y_r$, $1\leqslant r\leqslant d$.
\begin{df} Denote by $\widehat{{R}}_{\nu}$ the completion of the algebra ${R}_{\nu}$ at the sequence of ideals ${R}_{\nu} \mathfrak{m}^j {R}_{\nu}$. Denote by $\widehat{{R}}_{\nu,\bfQ}$ the completion of the algebra ${R}_{\nu,\bfQ}$ at the sequence of ideals ${R}_{\nu,\bfQ} \mathfrak{m}^j {R}_{\nu,\bfQ}$. \end{df} \begin{rk} The faithful polynomial representation of ${R}_{\nu}$ on $\it{Pol}_{\nu}$ (see Lemma~\ref{lem-polrep_KLR}) yields a faithful representation of $\widehat{{R}}_{\nu}$ on \begin{eqnarray} \label{faithful1} \widehat{\it{Pol}}_{\nu}=\bigoplus_{{\mathbf{i}}\in I^\nu}\bfk[[y_1,\cdots,y_d]]e({\mathbf{i}}). \end{eqnarray} The faithful polynomial representation of ${R}_{\nu,\bfQ}$ on $\it{Pol}_{\nu,\bfQ}$ (see Lemma~\ref{lem-polrep_tenspr}) yields a faithful representation of $\widehat{{R}}_{\nu,\bfQ}$ on \begin{eqnarray} \label{faithful2} \widehat{\it{Pol}}_{\nu,\bfQ}&=&\bigoplus_{{\mathbf{i}}\in I_{\operatorname{col}}(\nu,\bfQ)}\bfk[[y_1,\cdots,y_d]]e({\mathbf{i}}). \end{eqnarray} \end{rk} \subsection{The isomorphisms $ \widehat{{R}}_{\nu}\simeq \widehat{\operatorname{H}}_{\bfa}(q)$ and $ \widehat{{R}}_{\nu,\bfQ}\simeq\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$} \label{subs-isom_lHeck-tens-compl} Fix $q\in\bfk$ such that $q\not\in\{0,1\}$. Fix an $\ell$-tuple $\mathbf{Q}=(Q_1,\ldots,Q_\ell)\in (\bfk^*)^\ell$. Consider the following set \begin{equation} \label{Corona} \mathcal{F}=\{q^nQ_m\mid n\in\mathbb{Z}, m\in[1;\ell]\}\subset\bfk^*. \end{equation} We can consider $\mathcal{F}$ as the vertex set of a quiver $\Gamma_\mathcal{F}$ such that for $i,j\in\mathcal{F}$ we have an arrow $i\to j$ if and only if we have $j=qi$. If $q$ is a primitive $e$-th root of unity, then the quiver $\Gamma_\mathcal{F}$ is a disjoint union of at most $\ell$ oriented cycles of length $e$. If $q$ is not a root of unity, then the quiver $\Gamma_\mathcal{F}$ is a disjoint union of at most $\ell$ (two-sided) infinite oriented linear quivers.
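For a concrete instance of $\Gamma_\mathcal{F}$ one can work over a finite field, where root-of-unity arithmetic is exact. The following sketch checks the claimed cycle structure; the choices $\bfk=\mathbb{F}_7$, $q=2$ (of multiplicative order $e=3$) and $\mathbf{Q}=(1,3)$ are ours, made purely for illustration:

```python
# Vertices of Gamma_F are the elements q^n * Q_m of F_7^*, with one arrow
# i -> q*i out of every vertex.
p, q, e = 7, 2, 3          # assumption: 2^3 = 8 = 1 mod 7, so q has order 3
Q = (1, 3)

vertices = {(pow(q, n, p) * Qm) % p for n in range(e) for Qm in Q}
arrows = {i: (q * i) % p for i in vertices}   # each vertex has one outgoing arrow

# Since q has order e, every connected component is an oriented e-cycle:
for i in vertices:
    j, length = arrows[i], 1
    while j != i:
        j, length = arrows[j], length + 1
    assert length == e

# and there are at most ell = len(Q) such cycles:
assert len(vertices) // e <= len(Q)
```

Here the two orbits $\{1,2,4\}$ and $\{3,6,5\}$ realise the two disjoint $3$-cycles, matching the count "at most $\ell$ cycles of length $e$".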
Then $\mathbf{Q}$ can be considered as an $\ell$-tuple of vertices of the quiver $\Gamma_\mathcal{F}$. In this section we assume that the KLR algebra and the tensor product algebra are defined with respect to the quiver $\Gamma_\mathcal{F}$. In particular we have $I=\mathcal{F}$. We also assume $\nu=\mathbf{a}$. Then we have $I^\nu=\mathfrak{S}_d\mathbf{a}$. First, we recall the isomorphism $\widehat{{R}}_{\nu}\simeq \widehat{\operatorname{H}}_{\bfa}(q)$ from \cite[Thm.~7.3]{MS}. For this we identify the vector spaces $\widehat{\it{Pol}}_{\nu}$ and $\widehat{\operatorname{P}}_{\bfa}$ via \begin{eqnarray} \label{identifyPol} \widehat{\it{Pol}}_{\nu}\to\widehat{\operatorname{P}}_{\bfa}, && -i_ry_re({\mathbf{i}})\mapsto (X_r-i_r)e({\mathbf{i}}). \end{eqnarray} \begin{prop}[{\cite[Thm.~7.3]{MS}}] \label{prop-isom-Heck-KLR-comp} There is an isomorphism $\widehat{{R}}_{\nu}\simeq \widehat{\operatorname{H}}_{\bfa}(q)$ of algebras sending $e({\mathbf{i}})$ to $e({\mathbf{i}})$, $y_re({\mathbf{i}})$ to $-\gamma(i_r)^{-1}(X_r-\gamma(i_r))e({\mathbf{i}})$ and $\psi_re({\mathbf{i}})$ to the expression in \eqref{imagepsi} below. \end{prop} \begin{proof} It is enough to check that the induced actions of the generators and their images agree on the (faithful) polynomial representations \eqref{identifyPol}. This is straightforward noting that the element $\psi_re({\mathbf{i}})\in \widehat{{R}}_{\nu}$ acts as \begin{eqnarray} \label{imagepsi} \begin{cases} -\frac{i_r}{X_r-qX_{r+1}}(T_r+1) e({\mathbf{i}})&\mbox{if }i_r=i_{r+1},\\ i_r^{-1}q^{-1}((X_r-X_{r+1})T_r+(q-1)X_{r+1}) e({\mathbf{i}})&\mbox{if } qi_r=i_{r+1},\\ \left(1-\frac{X_r-X_{r+1}}{X_r-qX_{r+1}}(T_r+1)\right)e({\mathbf{i}}) &\mbox{else},\\ \end{cases} \end{eqnarray} and so the claim follows. \end{proof} We extend this now to an isomorphism $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)\simeq \widehat{{R}}_{\nu,\bfQ}$. 
First, note that we have an obvious bijection $I_{\operatorname{col}}(\nu,\bfQ)\simeq J^{\ell,d}\times \mathfrak{S}_d\mathbf{a}$. This is important because the algebra $\widehat{{R}}_{\nu,\bfQ}$ has idempotents parametrised by $I_{\operatorname{col}}(\nu,\bfQ)$ and the algebra $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ has idempotents parametrised by $J^{\ell,d}\times \mathfrak{S}_d\mathbf{a}$. We identify the vector spaces underlying the polynomial representations, $\widehat{\it{Pol}}_{\nu,\bfQ}$ for $\widehat{{R}}_{\nu,\bfQ}$ and $\widehat{\operatorname{P}}_{\bfa,\bfQ}$ for $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$, via \begin{eqnarray} \label{identifyPol2} \widehat{\it{Pol}}_{\nu,\bfQ}\to\widehat{\operatorname{P}}_{\bfa,\bfQ}, \qquad -\gamma(i_r)Y_re({\mathbf{i}})\mapsto (X_r-\gamma(i_r))e({\mathbf{i}}) \quad \mbox{ if $c(i_r)=0$}. \end{eqnarray} (Recall that both $Y_re({\mathbf{i}})$ and $X_re({\mathbf{i}})$ are zero if $c(i_r)=1$.) \begin{thm} \label{thm-isom-lHeck-tens-comp} There is an isomorphism of algebras $ \widehat{{R}}_{\nu,\bfQ}\simeq\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ extending the isomorphism from Proposition~\ref{prop-isom-Heck-KLR-comp}. \end{thm} \begin{proof} Abbreviate $(\dagger)=q^{-1}\frac{1}{\gamma(i_r)}((X_r-X_{r+1})T_r+(q-1)X_{r+1}) $.
We claim that sending $\psi_re({\mathbf{i}})\in \widehat{{R}}_{\nu,\bfQ}$ to the element \begin{eqnarray*} \begin{cases} \frac{-\gamma(i_r)}{X_r-qX_{r+1}}(T_r+1) &\mbox{if }i_r=i_{r+1},~ c(i_r)=c(i_{r+1})=0,\\ (\dagger)&\mbox{if } q\gamma(i_r)=\gamma(i_{r+1}),~c(i_r)=c(i_{r+1})=0,\\ (1-\frac{X_r-X_{r+1}}{X_r-qX_{r+1}}(T_r+1))e({\mathbf{i}}) &\mbox{for all other cases with } c(i_r)=c(i_{r+1})=0,\\ T_re({\mathbf{i}}) &\mbox{if } c(i_r)=1,~c(i_{r+1})=0,\\ \frac{-1}{\gamma(i_r)}T_re({\mathbf{i}}) &\mbox{if } c(i_r)=0,~c(i_{r+1})=1,~\gamma(i_r)=\gamma(i_{r+1}),\\ \frac{1}{(X_{r+1}-\gamma(i_{r+1}))}T_re({\mathbf{i}}) &\mbox{if } c(i_{r})=0,~c(i_{r+1})=1,~\gamma(i_r)\ne \gamma(i_{r+1}), \end{cases} \end{eqnarray*} defines an isomorphism as claimed. Clearly this makes the map unique, since we specified the images of a set of generators, and moreover surjective, since the generators of $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ are in the image. To show well-definedness and that it is an isomorphism, it suffices to show that the action of the generators agrees with that of their images on the (faithful) polynomial representations \eqref{identifyPol2}. For the idempotents $e({\mathbf{i}})\in \widehat{{R}}_{\nu,\bfQ}$ this is clear, and the element $Y_re({\mathbf{i}})\in \widehat{{R}}_{\nu,\bfQ}$ acts as $-\gamma(i_r)^{-1}(X_r-\gamma(i_r))e({\mathbf{i}})\in\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ if $c(i_r)=0$. (Recall that if $c(i_r)=1$ then both $X_re({\mathbf{i}})$ and $Y_re({\mathbf{i}})$ are zero.) Since $\psi_re({\mathbf{i}})\in \widehat{{R}}_{\nu,\bfQ}$ acts exactly as its proposed image (recalling that if $c(i_r)=c(i_{r+1})=1$ then both $T_re({\mathbf{i}})$ and $\psi_re({\mathbf{i}})$ are zero), the claim follows. \end{proof} \begin{rk} It is useful to give an explicit inverse of the isomorphism from Theorem~\ref{thm-isom-lHeck-tens-comp}.
The element $T_re({\mathbf{i}})$ acts on the polynomial representation by the same operator as \begin{eqnarray*} \begin{cases} \left(-1+{(q-1+Y_r-qY_{r+1})}\psi_r\right) e({\mathbf{i}}) &\mbox{if }i_r=i_{r+1},~ c(i_r)=c(i_{r+1})=0,\\ \left(\frac{q(q-1)(Y_{r+1}-1)}{1-q-Y_r+qY_{r+1}}+\frac{q\psi_r}{q-1-qY_r+Y_{r+1}}\right)e({\mathbf{i}}) &\mbox{if }q\gamma(i_r)=\gamma(i_{r+1}),~ c(i_r)=c(i_{r+1})=0,\\ \left(\frac{(1-q)\gamma(i_{r+1})(1-Y_{r+1})}{\gamma(i_r)(1-Y_r)-\gamma(i_{r+1})(1-Y_{r+1})}-\right.&\\ \left.\frac{\gamma(i_{r+1})(1-Y_r)-q\gamma(i_{r})(1-Y_{r+1})}{\gamma(i_{r+1})(1-Y_r)-\gamma(i_{r})(1-Y_{r+1})}\psi_r\right)e({\mathbf{i}}), &\mbox{otherwise, with } c(i_r)=c(i_{r+1})=0,\\ \psi_re({\mathbf{i}}) &\mbox{ if } c(i_r)=1, c(i_{r+1})=0,\\ (\gamma(i_r)(1-Y_{r+1})-\gamma(i_{r+1}))\psi_re({\mathbf{i}}) &\mbox{ if }\gamma(i_{r})\ne \gamma(i_{r+1}), c(i_r)=0, c(i_{r+1})=1,\\ -\gamma(i_r)\psi_re({\mathbf{i}}) &\mbox{ if }\gamma(i_{r})=\gamma(i_{r+1}), c(i_r)=0, c(i_{r+1})=1.\\ \end{cases} \end{eqnarray*} \end{rk} \section{Higher level affine Schur algebras $\operatorname{S}_{d,\bfQ}(q)$} \label{sec-Schur} We recall the definition of the (ordinary) affine Schur algebra as it appears for instance in \cite{Greenaff}, \cite{MS}, \cite{vigneras} and then generalize it to a higher level version. \subsection{Affine Schur algebras} \label{subs-affSchur} For each non-negative integer $d$, a \emph{composition} of $d$ is a tuple $\lambda=(\lambda_1,\hdots,\lambda_r)$ (the number $r$, called the {\it length} ${l}(\lambda)$ of $\lambda$, is not fixed) such that $\sum_{i=1}^{r}\lambda_i=d$ and $\lambda_i>0$. If $\lambda$ is a composition of $d$, we write $|\lambda|=d$. Denote by $\mathcal{C}_d$ the set of compositions of $d$. We use the convention that $\mathcal{C}_0$ contains a unique composition which is empty. 
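For concreteness, the set $\mathcal{C}_d$ is easy to enumerate: a composition of $d$ corresponds to a choice of cut points inside $\{1,\hdots,d-1\}$, so $|\mathcal{C}_d|=2^{d-1}$ for $d\geqslant 1$. The following minimal Python sketch of this enumeration is illustrative only (the function name is ours, not from any library):

```python
from itertools import combinations

def compositions(d):
    """All compositions of d, i.e. tuples of positive integers summing to d.
    A composition corresponds to a set of cut points inside {1, ..., d-1}."""
    if d == 0:
        return [()]  # the unique (empty) composition of 0
    result = []
    for r in range(d):
        for cuts in combinations(range(1, d), r):
            points = (0,) + cuts + (d,)
            result.append(tuple(points[i + 1] - points[i]
                                for i in range(len(points) - 1)))
    return result

print(len(compositions(4)))          # 2^(4-1) = 8
print((2, 1, 1) in compositions(4))  # True
```

Each subset of cut points yields a distinct composition, so the count $2^{d-1}$ is immediate from the construction.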
For each $\lambda=(\lambda_1,\hdots,\lambda_r)\in \mathcal{C}_d$ denote by $\mathfrak{S}_\lambda$ the parabolic (or Young) subgroup \begin{eqnarray} \label{Young} \mathfrak{S}_\lambda&=&\mathfrak{S}_{\lambda_1}\times\hdots\times \mathfrak{S}_{\lambda_r}\subset \mathfrak{S}_d. \end{eqnarray} Its unique longest element is denoted by $w_\lambda$. Moreover, let $\operatorname{D}_{\lambda,\mu}$ be the set of shortest length representatives for the double cosets $\mathfrak{S}_\lambda\backslash\mathfrak{S}_d/\mathfrak{S}_\mu$. We also write $\operatorname{D}_{\emptyset,\mu}$ and $\operatorname{D}_{\lambda,\emptyset}$ for the sets of shortest length representatives of the cosets $\frak{S}_d/\frak{S}_\mu$ and $\frak{S}_\lambda\backslash \frak{S}_d$ respectively. Attached to this subgroup, let $m_\lambda\in \operatorname{H}_{d}(q)$ be defined by \begin{eqnarray} \label{defm} m_\lambda&=&\sum_{w\in \mathfrak{S}_\lambda}(-q)^{l(w_\lambda)-l(w)}T_w. \end{eqnarray} We consider $m_\lambda \operatorname{H}_{d}(q)$ as a right $\operatorname{H}_{d}(q)$-module. \begin{df} The {\it affine Schur algebra} is the algebra \begin{eqnarray} \label{affSchurDef} \operatorname{S}_{d}(q)&=&\mathrm{End}_{\operatorname{H}_{d}(q)}\left(\bigoplus_{\lambda\in \mathcal{C}_d}m_\lambda \operatorname{H}_{d}(q)\right). \end{eqnarray} The algebra $\operatorname{S}_{d}(q)$ has idempotents $e(\lambda)$, $\lambda\in\mathcal{C}_d$ given by the projection to $m_\lambda \operatorname{H}_{d}(q)$. \end{df} \subsection{Generators of $\operatorname{S}_{d}(q)$ and thick calculus} Next we introduce the {\it thick calculus} for the algebra $\operatorname{S}_{d}(q)$. \label{subs-gen_S} Let $\lambda, \mu\in \mathcal{C}_d$ and assume that $\mu$ is obtained from $\lambda$ by splitting one component of $\lambda$. 
In other words, there is an index $t$ such that $\mu$ is of the form $(\lambda_1,\hdots,\lambda_{t-1},\lambda'_t,\lambda''_t,\lambda_{t+1},\hdots,\lambda_{l(\lambda)})$, where $\lambda'_t$ and $\lambda''_t$ are positive integers such that $\lambda'_t+\lambda''_t=\lambda_t$. In this case we say that $\mu$ is a \emph{split} of $\lambda$ and that $\lambda$ is a \emph{merge} of $\mu$ (at position $t$). \begin{df} Assume $\mu$ is a split of $\lambda$. We define the special elements in $\operatorname{S}_{d}(q)$: \begin{eqnarray*} \text{the {\it split morphism}} &&m_\lambda x\mapsto m_\mu x\in \mathrm{Hom}_{\operatorname{H}_{d}(q)}(m_\lambda\operatorname{H}_{d}(q),m_\mu\operatorname{H}_{d}(q)),\\ \text{the {\it merge morphism}} &&m_\mu x\mapsto m_\lambda x\in \mathrm{Hom}_{\operatorname{H}_{d}(q)}(m_\mu\operatorname{H}_{d}(q),m_\lambda\operatorname{H}_{d}(q)). \end{eqnarray*} \end{df} More generally, if $\mu$ is a refinement of the composition $\lambda$ we have the corresponding split morphism, denoted $(\lambda\rightarrow\mu)$, and the corresponding merge morphism, denoted $(\mu\rightarrow\lambda)$, defined in the obvious way. They are the compositions of the splits (respectively merges) describing the refinement. Note that the order of composition does not matter because of the associativity property of splits and merges, \cite[Lemma~6.5 (twisted with the automorphism $\sharp$)]{MS}. The idempotents $e(\lambda)$, splits, merges and multiplication with (invariant) polynomials generate the algebra $\operatorname{S}_{d}(q)$, see \cite[Prop.~6.19]{MS}. We draw the generators as diagrams that are similar to the diagrams for $\operatorname{H}_{d}(q)$ from Definition~\ref{def-extaffHeck_diag}.
The differences are that the black strands are now allowed to have a higher thickness (corresponding to multiplicities of the labels given by a nonnegative integer), the diagrams representing the generators are now of the form \begin{equation} \label{diag-split-merge} \tikz[thick,xscale=2.5,yscale=-1.5]{ \draw (0,0) node[above] {$a$} to [out=90,in=-90](.3,.5) (.6,0) node[above] {$b$} to [out=90,in=-90] (.3,.5) (.3,.5) -- (.3,.8) node[below] {$a+b$}; } \qquad \tikz[thick,xscale=2.5,yscale=1.5]{ \draw (0,0) node[below] {$a$} to [out=90,in=-90](.3,.5) (.6,0) node[below] {$b$} to [out=90,in=-90] (.3,.5) (.3,.5) -- (.3,.8) node[above] {$a+b$}; } \end{equation} and each strand with thickness $b$ is allowed to carry now any symmetric Laurent polynomial in $b$ variables instead of dots. \begin{df} Let $\lambda,\mu\in\mathcal{C}_d$. We draw the idempotent $e(\lambda)\in \operatorname{S}_{d}(q)$ given by the identity endomorphism of the right $\operatorname{H}_{d}(q)$-module $m_\lambda\operatorname{H}_{d}(q)$ as a diagram with ${l}(\lambda)$ vertical strands labelled by the parts of $\lambda$, $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-.9,.25) {$e(\lambda)\quad\mapsto\quad$}; \draw (-0.3,0) --(-0.3,.5) node[below,at start]{$\lambda_1$}; \draw (-0,0) --(-0,.5) node[below,at start]{$\lambda_2$}; \node at (0.25,.25) {$\cdots$}; \draw (0.5,0) --(0.5,.5) node[below,at start]{$\lambda_{{l}(\lambda)}$}; } $$ Let $f\in \bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]^{\mathfrak{S}_\lambda}$ be of the form $f=f_1\cdots f_{l(\lambda)}$, where $f_j$ is a symmetric Laurent polynomial containing only variables with indices in $[\lambda_1+\ldots+\lambda_{j-1}+1;\;\lambda_1+\ldots+\lambda_{j}]$. 
Then we associate to $fe(\lambda)\in \operatorname{S}_{d}(q)$ the diagram $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-1,.35) {$fe(\lambda)\quad\mapsto\quad$}; \draw (-0.3,0) --(-0.3,.2) node[below,at start]{$\lambda_1$}; \draw (-0.3,0.5) --(-0.3,.7); \draw (-0.15,0.2) rectangle (-0.45,0.5); \node at (-0.3,0.35) {$f_1$}; \draw (0.1,0) --(0.1,.2) node[below,at start]{$\lambda_2$}; \draw (0.1,0.5) --(0.1,.7); \draw (-0.05,0.2) rectangle (0.25,0.5); \node at (0.1,0.35) {$f_2$}; \node at (0.45,.35) {$\cdots$}; \draw (0.8,0) --(0.8,.2) node[below,at start]{$\lambda_{{l}(\lambda)}$}; \draw (0.8,0.5) --(0.8,.7); \draw (0.6,0.2) rectangle (1.0,0.5); \node at (0.8,0.35) {$f_{{l}(\lambda)}$}; } $$ Since any $f\in \bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]^{\mathfrak{S}_\lambda}$ can be written as a sum of polynomials of the form $f_1\ldots f_{{l}(\lambda)}$, the notation $fe(\lambda)$ makes sense for any such $f$. In the special case where $\lambda_i=1$ the $i$th strand is allowed to carry any Laurent polynomial in the variable $x_i$; in particular, it can carry dots as in our notation before. We assign to a split $\lambda\rightarrow \mu$ of the form $(a+b)\to (a,b)$ (respectively a merge $(a,b)\to(a+b)$) the first (resp. second) diagram in \eqref{diag-split-merge}, and if the compositions have more parts we additionally add vertical strands to the left and to the right, labelled by the remaining components. \end{df} \begin{rk} \label{rkcat} The algebra $\operatorname{S}_{d}(q)$ can be realized more conceptually as a quotient of the algebra structure on the direct sum of homomorphism spaces of certain objects in the following category with $I_b=(\mathbb{Z}_{>0},+)$. The objects we take are all tensor products of black labels such that the sum of the labels is $d$, and $R_a$ is the ring of symmetric Laurent polynomials in $a$ variables. A strand with label $\boxed{f}$ is then the image of the dot morphism for $f$. Note that hereby $I_r$ does not play any role.
\end{rk} \begin{df} \label{Corona2} Let $I_r$ be a set and $I_b$ an additive monoid. The {\it universal thickened higher level category} corresponding to this pair is the $\bfk$-linear strict monoidal category generated as monoidal category by objects ${a}\in I_b$, called {\it black labels}, and objects $Q\in I_r$, called {\it red labels}, and by the following morphisms: \begin{itemize} \item the {\it split morphisms} $a+b\longrightarrow a\otimes b$ and the {\it merge morphisms} $a\otimes b\longrightarrow a+b$ for any $a,b\in I_b$ given abstractly by diagrams \eqref{diag-split-merge}, \item the {\it crossings} $\TikZ{[thick,scale=.5] \draw (1,1) node{} to (0,0)node{};\draw[wei] (1,0) node{} to (0,1)node{}; \node at (1.2,-0.3) {\tiny Q};\node at (0,-0.3) {\tiny a};} :\;{a}\otimes{Q}\longrightarrow{Q}\otimes {a}$ and $\TikZ{[thick,scale=.5] \draw (1,0) node{} to (0,1)node{};\draw[wei] (1,1) node{} to (0,0)node{}; \node at (-0.2,-0.3) {\tiny Q}; \node at (1,-0.3) {\tiny a}; } :\;{Q}\otimes {a}\longrightarrow{a}\otimes {Q},$ \item the {\it dot morphisms} $\TikZ{[thick, scale=.5] \draw (0,0) node{} to (0,1) node{} (0,0.5) node[fill,circle,inner sep=1.5pt]{};\node at (0,-0.3) {\tiny a};\node at (0.4,0.5) {\tiny f};}:{a}\longrightarrow {a}$ for $a\in I_b$ and $f\in R_a$ for some fixed commutative ring $R_a$ depending on $a$. \end{itemize} The monoidal structure is again horizontal placement, the composition of morphisms vertical placement. We impose the relation that the composition of two dot morphisms with the same thickness, say for $f_1$ and $f_2\in R_a$, equals the dot morphism for $f_1f_2\in R_a$. \end{df} \begin{rk} The affine Hecke algebra $\operatorname{H}_{d}(q)$ is an idempotent truncation of $\operatorname{S}_{d}(q)$.
The thick calculus in $\operatorname{S}_{d}(q)$ generalizes the diagrammatic calculus in $\operatorname{H}_{d}(q)$ in the sense that each usual Hecke strand can be viewed as a strand of thickness $1$ and $R_1=\bfk[X]$, the usual polynomial ring in one variable. The dot morphism using $\bullet$ is just the abbreviation for the dot morphism for $X\in R_1$. \end{rk} \begin{rk} One would like to have an analogue of Lemma~\ref{lem:twodefs} for the Schur algebras, that is, an explicit presentation in terms of the generators in the universal thickened higher level category modulo explicit relations. This is so far not known. In \cite{Greenaff}, the analogous problem was solved in the case of generic $q$ for a Morita equivalent algebra, using similar generators. It would be nice to be able to generalize this result to our setting. \end{rk} It is also convenient to explicitly specify a few more elements of $\operatorname{S}_{d}(q)$. For this let $\lambda,\mu\in\mathcal{C}_d$ and assume that $\mu$ is obtained from $\lambda$ by swapping $\lambda_t$ and $\lambda_{t+1}$ for some $t$. Let $\nu$ be the merge of $\lambda$ at position $t$. Denote by $w(\lambda/\mu)$ the shortest coset representative in $\mathfrak{S}_\nu/ \mathfrak{S}_{\lambda}$ of $w_{\nu}$. (As a permutation diagram one might draw a cross as displayed in \eqref{crossdiag} indicating that $\lambda_t$ elements get swapped with $\lambda_{t+1}$ elements while keeping the order inside the groups.) Then with $T=T_{w(\lambda/\mu)}\in \operatorname{H}_{d}(q)$ it holds that $T m_\lambda=m_\mu T$ in $\operatorname{H}_{d}(q)$. \begin{df} The corresponding \emph{black crossing} is the element of $\operatorname{S}_{d}(q)$ which is only nonzero on the summand $m_\lambda \operatorname{H}_{d}(q)$ and there given by $m_\lambda \operatorname{H}_{d}(q)\to m_\mu \operatorname{H}_{d}(q)$, $m_\lambda h\mapsto T m_\lambda h=m_\mu T h$. We draw this element in the following way.
\begin{equation} \label{crossdiag} \tikz[thick,xscale=2.5,yscale=1.5]{ \draw (-1,0) --(-1,.5) node[below,at start]{$\lambda_1$}; \node at (-0.75,0.25) {$\cdots$}; \draw (-0.6,0) --(-0.3,.5) node[below,at start]{$\lambda_t$}; \draw (-0.3,0) --(-0.6,.5) node[below,at start]{$\lambda_{t+1}$}; \node at (-0.15,0.25) {$\cdots$}; \draw (0.1,0) --(0.1,.5) node[below,at start]{$\lambda_{{l}(\lambda)}$}; } \end{equation} \end{df} \begin{lem} \label{lem-black_cross} A black crossing can be written as a product of splits, merges and Laurent polynomials. \begin{proof} This follows from \cite[Prop.~6.19]{MS}, using \cite[(3.6)]{MS} and the definition \cite[(4.3)]{MS}. \end{proof} \end{lem} \subsection{Demazure operators} \label{subs_Demazure} For each $w\in\mathfrak{S}_d$, fix a reduced expression $w=s_{k_1}\ldots s_{k_r}$ and define $\partial_w=\partial_{k_1}\ldots\partial_{k_r}$ using the Demazure operators from \eqref{defDemazure}. This definition is independent of the choice of a reduced expression, see \cite[Thm.~1]{Demazure}. \begin{df} Set $D_d=\partial_{w_d}$, where $w_d$ is the longest element in $\mathfrak{S}_d$. For positive integers $a$ and $b$ such that $a+b=d$ let $D_{a,b}=\partial_{w_{a,b}}$ with $w_{a,b} \in\mathfrak{S}_d$ the permutation $$ w_{a,b}(i)= \begin{cases} i+b & \mbox{ if } 1\leqslant i\leqslant a,\\ i-a & \mbox{ if } a< i\leqslant a+b. \end{cases} $$ \end{df} We need the following well-known symmetrizing properties of these operators: \begin{lem} \begin{enumerate} \item For each polynomial $f$, the polynomial $D_d(f)$ is symmetric. \item If $f$ is $\mathfrak{S}_a\times \mathfrak{S}_b$-symmetric, then $D_{a,b}(f)$ is symmetric. \end{enumerate} \end{lem} \begin{proof} The first property follows directly from the definition. Moreover, it is easy to see that each symmetric polynomial is in the image of the operator $D_d$.
Then the second statement follows because for each $\mathfrak{S}_a\times\mathfrak{S}_b$-symmetric polynomial $f$ we can find a polynomial $g$ such that $D_{a,b}(f)=D_{a+b}(g)$. \end{proof} \subsection{Polynomial representation of $\operatorname{S}_{d}(q)$} \label{subs-polrep_S} By definition, the algebra $\operatorname{S}_{d}(q)$ has a faithful representation on the vector space $\bigoplus_{\lambda\in \mathcal{C}_d}m_\lambda \operatorname{H}_{d}(q)$. We will construct a faithful polynomial representation of $\operatorname{S}_{d}(q)$ on \begin{eqnarray*} \operatorname{sP}_{d}&=&\bigoplus_{\lambda\in \mathcal{C}_d}\bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]^{\mathfrak{S}_\lambda}e(\lambda) \end{eqnarray*} realized as a subrepresentation of the defining representation. Fix $\lambda\in \mathcal{C}_d$. We will say that the indices $i,j\in[1;d]$ {\it are in the same block for $\lambda$} if there exists some $t$ such that $$ \sum_{a=1}^{t-1}\lambda_a<i,j\leqslant \sum_{a=1}^{t}\lambda_a. $$ \begin{df} \label{defpolys} We consider the following polynomials depending on $\lambda$: $$ \overrightarrow{p}_\lambda=\prod_{i<j}(x_i-qx_j),\quad \overleftarrow{p}_\lambda=\prod_{i<j}(x_j-qx_i) $$ where the product is taken over all $i,j\in [1;d]$ such that $i$ and $j$ are in the same block with respect to $\lambda$. Set also $n_\lambda=\sum_{w\in\mathfrak{S}_\lambda}T_w$ and $n'_\lambda=\sum_{w\in \operatorname{D}_{\lambda,\emptyset}}T_w$. \end{df} For instance, if $\lambda=(2,3)$ then \begin{eqnarray*} \overrightarrow{p}_\lambda&=&(x_1-qx_2)(x_3-qx_4)(x_3-qx_5)(x_4-qx_5),\\ \overleftarrow{p}_\lambda&=&(x_2-qx_1)(x_4-qx_3)(x_5-qx_3)(x_5-qx_4). \end{eqnarray*} \begin{df} For each $\lambda\in \mathcal{C}_d$ we define the following linear map \begin{eqnarray} \label{inclusiondef} \Phi_\lambda\colon\;\bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]^{\mathfrak{S}_\lambda}\to m_\lambda \operatorname{H}_{d}(q),&& f\mapsto m_\lambda \overrightarrow{p}_\lambda f n'_\lambda.
\end{eqnarray} This map is in fact an inclusion by Corollary~\ref{coro_exch-mu-Pol} below, since $m_\lambda\overrightarrow{p}_\lambda f n'_\lambda=\overleftarrow{p}_\lambda f n_d$. \end{df} \begin{lem} \label{lem-polrep-S-inside} Let $\lambda,\mu\in \mathcal{C}_d$ and assume that $\mu$ is a split of $\lambda$. \begin{enumerate} \item The split in $\mathrm{Hom}_{\operatorname{H}_{d}(q)}(m_\lambda\operatorname{H}_{d}(q),m_\mu\operatorname{H}_{d}(q))$ applied to the image of $\Phi_\lambda$ is contained in the image of $\Phi_\mu$. \item The merge in $\mathrm{Hom}_{\operatorname{H}_{d}(q)}(m_\mu\operatorname{H}_{d}(q),m_\lambda\operatorname{H}_{d}(q))$ applied to the image of $\Phi_\mu$ is contained in the image of $\Phi_\lambda$. \end{enumerate} \end{lem} The proof will be given in Section~\ref{subs_some-rel}. We will also need the following auxiliary polynomials. Assume that $a$ and $b$ are positive integers such that $a+b=d$. \begin{eqnarray*} \overrightarrow{p}'_{a,b}=\prod_{1\leqslant i\leqslant a< j\leqslant a+b }(x_i-qx_j), && \overleftarrow{p}'_{a,b}=\prod_{1\leqslant i\leqslant a< j\leqslant a+b }(x_j-qx_i). \end{eqnarray*} \begin{prop} \label{prop-polrep_S} The algebra $\operatorname{S}_{d}(q)$ has a faithful representation in $\operatorname{sP}_{d}$ such that the generators act as follows, using the abbreviation $P=\bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]$. \begin{itemize} \item The idempotent $e(\lambda)$, $\lambda\in \mathcal{C}_d$, acts on $\operatorname{sP}_{d}$ as the projection to $P^{\mathfrak{S}_\lambda}e(\lambda)$. \item For each $g\in P^{\mathfrak{S}_\lambda}$, $\lambda\in \mathcal{C}_d$, the element $ge(\lambda)$ sends $fe(\lambda)\in P^{\mathfrak{S}_\lambda}e(\lambda)$ to $gfe(\lambda)$. \item Assume $\mu$ is a split of $\lambda$ at position $j$.
Then the split map $\lambda\rightarrow\mu$ acts by sending $fe(\lambda)\in P^{\mathfrak{S}_\lambda}e(\lambda)$ to $\overleftarrow{p}'_{a,b}fe(\mu)$ and the merge map acts by sending $fe(\mu)\in P^{\mathfrak{S}_\mu}e(\mu)$ to $D_{a,b}(f)e(\lambda)$ in case $\lambda=(a+b)$ and $\mu=(a,b)$ with $a+b=d$. In the general case they act by the same formulas with $a=\mu_j$, $b=\mu_{j+1}$, where $\overleftarrow{p}'_{a,b}$ and $D_{a,b}$ are defined with respect to the variables $x_i$ such that $i\in [\lambda_1+\ldots+\lambda_{j-1}+1;\lambda_1+\ldots+\lambda_{j-1}+\lambda_j]$. \end{itemize} \end{prop} \begin{proof} The existence of such a representation follows from \eqref{affSchurDef}, Lemma~\ref{lem-polrep-S-inside} and from the fact that the algebra $\operatorname{S}_{d}(q)$ is generated by the idempotents $e(\lambda)$, splits, merges and multiplications with (invariant) polynomials. Assume this representation is not faithful. Then we can find $\lambda,\mu\in \mathcal{C}_d$ and a nonzero $\phi\in \mathrm{Hom}_{\operatorname{H}_{d}(q)}(m_\lambda\operatorname{H}_{d}(q),m_\mu\operatorname{H}_{d}(q))$ such that $\phi$ acts by zero on the polynomial representation. Let us compose $\phi$ with the split $(d)\to\lambda$ on the right. Then we get (as splits are injective) a nonzero element $\psi\in \mathrm{Hom}_{\operatorname{H}_{d}(q)}(m_d\operatorname{H}_{d}(q),m_\mu\operatorname{H}_{d}(q))$ that acts by zero. By construction of the polynomial representation, this implies $\psi(m_d\overrightarrow{p}_d)=0$ and thus $\psi(m_d)\overrightarrow{p}_d=0$. Since $\operatorname{H}_{d}(q)$ is a free right $P$-module, this implies $\psi(m_d)=0$ and thus $\psi=0$. This is a contradiction. \end{proof} \subsection{Some useful relations in the affine Hecke algebra} \label{subs_some-rel} To prove Lemma~\ref{lem-polrep-S-inside} we need to establish some explicit formulas which we think are of interest by themselves.
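These formulas can also be tested numerically in small rank. The following Python sketch is illustrative only and not part of the formal development: it assumes the conventions $\partial_r(f)=(f-s_rf)/(x_r-x_{r+1})$ for the Demazure operators of Section~\ref{subs_Demazure} and $T_r=-s_r-(q-1)X_{r+1}\partial_r$ for the action on the polynomial representation, and evaluates all operators pointwise over the rationals. For $d=2$ it confirms that $m_2=-qT_e+T_{s_1}$ acts as $D_2\overleftarrow{p}_2$ and that $n_2=T_e+T_{s_1}$ acts as $\overrightarrow{p}_2D_2$ (as in Lemma~\ref{lem_m-on-Pol-rep} below), and for $d=3$ that $\partial_w$ is independent of the chosen reduced expression.

```python
from fractions import Fraction as F

def swap(r, f):
    """Precompose f with the transposition s_r of variables r, r+1 (0-indexed)."""
    def g(x):
        y = list(x); y[r], y[r + 1] = y[r + 1], y[r]
        return f(tuple(y))
    return g

def dem(r, f):
    """Demazure operator: dem(r, f)(x) = (f(x) - (s_r f)(x)) / (x_r - x_{r+1}).
    Pointwise evaluation is exact away from the diagonal x_r = x_{r+1}."""
    sf = swap(r, f)
    return lambda x: (f(x) - sf(x)) / (x[r] - x[r + 1])

def T(r, q, f):
    """Hecke generator on polynomials: T_r = -s_r - (q-1) X_{r+1} d_r (assumed convention)."""
    sf, df = swap(r, f), dem(r, f)
    return lambda x: -sf(x) - (q - 1) * x[r + 1] * df(x)

q = F(3)                        # sample value of q
f = lambda x: x[0] ** 2 * x[1]  # sample polynomial in two variables
pt = (F(2), F(5))               # sample evaluation point

# m_2 = (-q) T_e + T_{s_1} should act as f |-> D_2(p f) with p = x_2 - q x_1.
lhs = -q * f(pt) + T(0, q, f)(pt)
rhs = dem(0, lambda x: (x[1] - q * x[0]) * f(x))(pt)
print(lhs == rhs)  # True

# n_2 = T_e + T_{s_1} should act as f |-> (x_1 - q x_2) D_2(f).
print(f(pt) + T(0, q, f)(pt) == (pt[0] - q * pt[1]) * dem(0, f)(pt))  # True

# partial_w is independent of the reduced expression: d_1 d_2 d_1 = d_2 d_1 d_2.
g3 = lambda x: x[0] ** 3 * x[1] ** 2 + x[2]
p3 = (F(1), F(4), F(9))
print(dem(0, dem(1, dem(0, g3)))(p3) == dem(1, dem(0, dem(1, g3)))(p3))  # True
```

Exact rational arithmetic avoids rounding issues, and pointwise evaluation is legitimate here because each operator produces a genuine polynomial, so its values off the diagonals determine it.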
In particular, we want to understand the action of the special elements \begin{equation*} m_d=\sum_{w\in{\mathfrak{S}_d}}(-q)^{{l}(w_d)-{l}(w)}T_w, \quad n_d=\sum_{w\in{\mathfrak{S}_d}}T_w,\quad n'_{a,b}=\sum_{w\in \operatorname{D}_{(a,b),\emptyset}}T_w \end{equation*} from Sections~\ref{subs-gen_S},~\ref{subs-polrep_S} on the polynomial representation of $\operatorname{H}_{d}(q)$. \begin{lem} \hfill \label{lem_m-on-Pol-rep} \begin{enumerate} \item The element $m_d$ acts on the polynomial representation as $D_d\overleftarrow{p}_d$. \item The element $n_d$ acts on the polynomial representation as $\overrightarrow{p}_d D_d$. \end{enumerate} \end{lem} \begin{proof} Let $A=\bfk(X_1,\ldots,X_d)\#\bfk[\frak{S}_d]$ be the subalgebra of linear endomorphisms of $\bfk(X_1,\ldots,X_d)$ generated by the multiplications with the $X_i$'s and by the permutations of variables $w\in\mathfrak{S}_d$. The algebra $A$ is free as a left $\bfk(X_1,\ldots,X_d)$-module with, for instance, the bases \begin{eqnarray} \{w\mid w\in \mathfrak{S}_d\}&\text{respectively}&\{T_w\mid w\in\mathfrak{S}_d\}, \end{eqnarray} where $T_w$ is the endomorphism given as the composition of endomorphisms $T_r=T_{s_r}=-s_r-(q-1)X_{r+1}\partial_r$ according to a reduced expression. Let $N$ be the left $\bfk(X_1,\ldots,X_d)$-submodule of $A$ generated by $\{w\in\mathfrak{S}_d\mid w\not=w_d\}$. Note that $N$ is equal to the left submodule of $A$ generated by $\{T_w\mid w\not=w_d\}$. Moreover, the submodule $N$ does not change if we replace ``left'' by ``right''. To prove the first statement write $D_d\overleftarrow{p}_d$ in the form $D_d\overleftarrow{p}_d=\sum_{w}T_w a_w$, where $a_w\in\bfk(X_1,\ldots,X_d)$. We need to show $a_w=(-q)^{{l}(w_d)-{l}(w)}$.
Since the Demazure operator $D_d$ sends rational functions to symmetric rational functions and $T_r$ acts by $-1$ on symmetric rational functions, we have $T_rD_d \overleftarrow{p}_d= -D_d \overleftarrow{p}_d$ for each $r$, hence $$ T_r\left(\sum_w T_wa_w\right)=-\left(\sum_w T_w a_w\right). $$ This implies $-qa_{s_rw}=a_{w}$ for each $w$ such that ${l}(w)<{l}(s_rw)$, and it suffices to show $a_{w_d}=1$. Because $T_r$ can be written as $T_r=-\frac{X_r-qX_{r+1}}{X_r-X_{r+1}}s_r-\frac{(q-1)X_{r+1}}{X_{r}-X_{r+1}}$ we have \begin{eqnarray*} T_{w_d}&\equiv&(-1)^{{l}(w_d)}\prod_{1\leqslant a<b\leqslant d}\frac{X_a-qX_b}{X_a-X_b}~w_d\equiv (-1)^{{l}(w_d)}w_d\prod_{1\leqslant a<b\leqslant d}\frac{X_b-qX_a}{X_b-X_a}, \end{eqnarray*} where $\equiv$ means equality modulo the subspace $N$. Thus \begin{eqnarray*} w_d\equiv (-1)^{{l}(w_d)}T_{w_d}\prod_{1\leqslant a<b\leqslant d}\frac{X_b-X_a}{X_b-qX_a}. \end{eqnarray*} Finally, we can write \begin{eqnarray*} D_d&\equiv&\prod_{1\leqslant a<b\leqslant d}\frac{1}{X_a-X_b}~w_d\equiv (-1)^{{l}(w_d)}w_d\prod_{1\leqslant a<b\leqslant d}\frac{1}{X_b-X_a} \end{eqnarray*} and therefore $D_d=T_{w_d}\prod_{1\leqslant a<b\leqslant d}\frac{1}{X_b-qX_a}+n$ for some $n\in N$. This implies $a_{w_d}=1$ and hence the first statement follows. To prove the second statement write $D_d$ in the form $D_d=\sum_w b_w T_w$, where $b_w\in \bfk(X_1,\ldots,X_d)$. It then suffices to show $b_w=\frac{1}{\overrightarrow{p}_d}$. Since $D_d$ sends rational functions to symmetric rational functions and $T_r$ acts by $-1$ on symmetric rational functions, we have $T_rD_d=-D_d$. This yields \begin{eqnarray*} T_r\left(\sum_w b_w T_w\right)&=&-\left(\sum_w b_w T_w\right). \end{eqnarray*} Using the relation $T_rb_w=s_r(b_w)T_r-(q-1)X_{r+1}\partial_r(b_w)$ we deduce that for each $w$ with ${l}(s_rw)>{l}(w)$ we have \begin{eqnarray} \label{eq-relations_for_b} -b_{s_rw}&=&s_r(b_w)+(q-1)s_r(b_{s_rw})-(q-1)X_{r+1}\partial_r(b_{s_rw}).
\end{eqnarray} Clearly, the rational functions $b_w$ are determined by $b_{w_d}$ and \eqref{eq-relations_for_b}. Thus it suffices to show $b_{w_d}=\frac{1}{\overrightarrow{p}_d}$ and that $b_w=\frac{1}{\overrightarrow{p}_d}$ satisfies the relations \eqref{eq-relations_for_b}. We must check \begin{eqnarray*} -\frac{1}{\overrightarrow{p}_d}&=&s_r\left(\frac{1}{\overrightarrow{p}_d}\right)+\left(q-1\right)s_r\left(\frac{1}{\overrightarrow{p}_d}\right)-\left(q-1\right)X_{r+1}\partial_r\left(\frac{1}{\overrightarrow{p}_d}\right), \end{eqnarray*} and since $\overrightarrow{p}_d$ is a product of $X_r-qX_{r+1}$ by an element that commutes with $s_r$ and $\partial_r$, it is enough to verify that $-\frac{1}{X_r-qX_{r+1}}$ equals \begin{eqnarray*} s_r\left(\frac{1}{X_r-qX_{r+1}}\right)+(q-1)s_r\left(\frac{1}{X_r-qX_{r+1}}\right)-(q-1)X_{r+1}\partial_r\left(\frac{1}{X_r-qX_{r+1}}\right), \end{eqnarray*} which is straightforward. The proof of $b_{w_d}=\frac{1}{\overrightarrow{p}_d}$ is similar to the arguments in the first part, namely we have \begin{eqnarray*} D_d&\equiv&\prod_{1\leqslant a<b\leqslant d}\frac{1}{X_a-X_b}~w_d\equiv \prod_{1\leqslant a<b\leqslant d}\frac{1}{X_a-qX_b}~T_{w_d}, \end{eqnarray*} which implies the claim. \end{proof} We obtain the following generalization of the easy equality in $\operatorname{H}_{d}(q)$ \begin{equation} \label{eq_exch-(T-q)-pol} (T_r-q)(X_r-qX_{r+1})=(X_{r+1}-qX_r)(T_r+1). \end{equation} \begin{coro} \label{coro_exch-mu-Pol} We have the equality $m_d\overrightarrow{p}_d= \overleftarrow{p}_dn_d$ in $\operatorname{H}_{d}(q)$. \end{coro} \begin{proof} This follows from Lemma~\ref{lem_m-on-Pol-rep} and from the symmetry of $\overleftarrow{p}_d\overrightarrow{p}_d$. \end{proof} We also need to know how $n'_{a,b}$ acts on the polynomial representation. In light of Lemma~\ref{lem_m-on-Pol-rep} it would be natural to expect that $n'_{a,b}$ acts as $\overrightarrow{p}'_{a,b} D_{b,a}$. Unfortunately, this is not true in general.
However, the following lemma shows that this becomes true in the presence of $n_an_b^{\small{+a}}$ on the left of $n'_{a,b}$. Here we mean that $n_b^{\small{+a}}$ is defined with respect to the shifted indices $a+1,\ldots,a+b$, i.e., with the composition $\nu=(1,1,\ldots,1,b)$ of $d$ (with $a$ ones) we have \begin{eqnarray*} n_b^{\small{+a}}&=&\sum_{w\in {\mathfrak{S}_\nu}}T_w. \end{eqnarray*} We will analogously use the notations $m_b^{\small{+a}}$, $\overrightarrow{p}_b^{\small{+a}}$, $\overleftarrow{p}_b^{\small{+a}}$ and $D_b^{\small{+a}}$. \begin{lem} \label{lem_n'-on-Pol-rep} The element $n_a n_b^{\small{+a}} n'_{a,b}$ acts on the polynomial representation as $(\overrightarrow{p}_a D_a)(\overrightarrow{p}_b^{\small{+a}} D_b^{\small{+a}})(\overrightarrow{p}'_{a,b} D_{b,a})$. \end{lem} \begin{proof} The statement follows directly from Lemma~\ref{lem_m-on-Pol-rep} $(2)$. Indeed, the product $n_a n_b^{\small{+a}} n'_{a,b}$ is equal to $n_{a+b}$ and the product $(\overrightarrow{p}_a D_a)(\overrightarrow{p}_b^{\small{+a}} D_b^{\small{+a}})(\overrightarrow{p}'_{a,b} D_{b,a})$ is equal to $\overrightarrow{p}_{a+b}D_{a+b}$. \end{proof} \begin{proof}[Proof of Lemma~\ref{lem-polrep-S-inside}] It is enough to prove these statements in the case where $\mu$ has only two components and $\lambda$ has only one component. Assume therefore $\mu=(a,b)$ and $\lambda=(a+b)$. We have $m_\mu=m_am_b^{\small{+a}}$ and $m_\lambda=m_{a+b}$. To prove the first part, fix $f\in \bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]^{\mathfrak{S}_\lambda}$. Then $\Phi_\lambda(f)=m_{a+b} \overrightarrow{p}_{a+b} f$, and the split sends $\Phi_\lambda(f)$ to the element $m_{a+b} \overrightarrow{p}_{a+b} f\in m_\mu \operatorname{H}_{d}(q)$. We have to check that it is in the image of $\Phi_\mu$.
Now, we have \begin{equation*} \begin{array}{lllll} m_{a+b} \overrightarrow{p}_{a+b} f&=&\overleftarrow{p}_{a+b}n_{a+b} f &=&\overleftarrow{p}_{a}\overleftarrow{p}_{b}^{\small{+a}}\overleftarrow{p}'_{a,b}n_an_b^{\small{+a}}n'_{a,b}f\\ &=&\overleftarrow{p}_a\overleftarrow{p}_b^{\small{+a}}n_an_b^{\small{+a}}\overleftarrow{p}'_{a,b}fn'_{a,b} &=&m_am_b^{\small{+a}}\overrightarrow{p}_a\overrightarrow{p}_b^{\small{+a}}\overleftarrow{p}'_{a,b}fn'_{a,b}\\ &=&\Phi_\mu(\overleftarrow{p}'_{a,b}f). \end{array} \end{equation*} Here the first and the fourth equalities follow from Corollary~\ref{coro_exch-mu-Pol}. The third equality follows since $\overleftarrow{p}'_{a,b}$ is symmetric with respect to the first $a$ and the last $b$ variables, and $f$ is symmetric. Hence the split $(a+b)\to (a,b)$ sends $\Phi_\lambda(f)$ to $\Phi_\mu(\overleftarrow{p}'_{a,b}f)$. This proves the first statement. To prove the second part, fix $f\in \bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]^{\mathfrak{S}_\mu}$. We show that $\Phi_\mu(f)$ is sent by the merge to $\Phi_\lambda(D_{a,b}(f))$, in formulas \begin{eqnarray} \label{eq_check-for-merge-before} m_{a+b} \overrightarrow{p}_{a}\overrightarrow{p}^{\small{+a}}_{b} f n'_{a,b}&=&m_{a+b}\overrightarrow{p}_{a+b}D_{a,b}(f). \end{eqnarray} By Lemmas~\ref{lem_m-on-Pol-rep} and~\ref{lem_n'-on-Pol-rep} it suffices to verify, for any $g\in \bfk[x_1^{\pm 1},\ldots,x_d^{\pm 1}]$, that \begin{eqnarray} D_{a+b}\left(\overleftarrow{p}_{a+b}\overrightarrow{p}_a\overrightarrow{p}^{\small{+a}}_bf\overrightarrow{p}'_{a,b}D_{b,a}(g)\right)&=&D_{a+b}\left(\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}D_{a,b}(f)g\right). \label{eq_check-for-merge} \end{eqnarray} (Note that it is not obvious that we are allowed to apply Lemma~\ref{lem_n'-on-Pol-rep} here, because we have no ``$n_an_b^{\small{+a}}$'' on the left of ``$n'_{a,b}$'' in the formula on the left hand side of \eqref{eq_check-for-merge-before}.
But we can write $m_{a+b}$ in the form $xm_am_b^{\small{+a}}$ and rewrite the left hand side of \eqref{eq_check-for-merge-before} using Corollary~\ref{coro_exch-mu-Pol} as follows \begin{equation*} m_{a+b} \overrightarrow{p}_{a}\overrightarrow{p}_{b}^{\small{+a}} f n'_{a,b} = xm_am_b^{\small{+a}} \overrightarrow{p}_{a}\overrightarrow{p}^{\small{+a}}_{b} f n'_{a,b} = x\overleftarrow{p}_a\overleftarrow{p}_b^{\small{+a}}fn_an_b^{\small{+a}} n'_{a,b}, \end{equation*} which allows us to apply the lemma.) Since the polynomials $\overrightarrow{p}_{a+b}\overleftarrow{p}_{a+b}$ and $D_{a,b}(f)$ are symmetric, the right hand side of \eqref{eq_check-for-merge} is equal to $\overrightarrow{p}_{a+b}\overleftarrow{p}_{a+b}D_{a,b}(f)D_{a+b}(g)$. It agrees with the left hand side of \eqref{eq_check-for-merge} by the calculation \begin{equation*} \begin{array}{lllll} &D_{a+b}\left(\overleftarrow{p}_{a+b}\overrightarrow{p}_a\overrightarrow{p}_b^{\small{+a}}f\overrightarrow{p}'_{a,b}D_{b,a}(g)\right) =&D_{a+b}\left(\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}fD_{b,a}(g)\right)\\ =&\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}D_{a+b}\left(fD_{b,a}(g)\right) =&\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}D_{a,b}D_{a}D^{+a}_{b}\left(fD_{b,a}(g)\right)\\ =&\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}D_{a,b}\left(fD_aD^{+a}_bD_{b,a}(g)\right) =&\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}D_{a,b}\left(fD_{a+b}(g)\right)\\ =&\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}D_{a,b}(f)D_{a+b}(g). \end{array} \end{equation*} Here the second equality follows since $\overleftarrow{p}_{a+b}\overrightarrow{p}_{a+b}$ is symmetric. The fourth equality follows because $f$ is symmetric in the first $a$ and last $b$ variables. The sixth equality follows since $D_{a,b}(f)$ is symmetric. This proves \eqref{eq_check-for-merge}.
\end{proof} \subsection{Higher level affine Schur algebra} Now we define the higher level version $\operatorname{S}_{d,\bfQ}(q)$ of the algebra $\operatorname{S}_{d}(q)$ depending on $\mathbf{Q}=(Q_1,\hdots,Q_\ell)\in \bfk^\ell$. \begin{df} \label{def-multicomp} An \emph{$(\ell+1)$-composition} of $d$ is an $(\ell+1)$-tuple $\lambda=(\lambda^{(0)},\hdots,\lambda^{(\ell)})$ such that $\lambda^{(0)},\hdots,\lambda^{(\ell)}$ are compositions (of some non-negative integers) such that $\sum_{i=0}^\ell|\lambda^{(i)}|=d$. Denote by $\mathcal{C}^\ell_d$ the set of $(\ell+1)$-compositions of $d$. For $\lambda=(\lambda^{(0)},\hdots,\lambda^{(\ell)})\in \mathcal{C}^\ell_d$ let $\mathfrak{S}_\lambda=\mathfrak{S}_{\lambda^{(0)}}\times\hdots\times \mathfrak{S}_{\lambda^{(\ell)}}\subset \mathfrak{S}_d$ be the corresponding parabolic subgroup of $\mathfrak{S}_d$. For each $(\ell+1)$-composition $\lambda$ of $d$ we denote by $[\lambda_r^{(k)}]$ the subset of $\{1,2,\ldots,d\}$ that contains the elements $$ \mbox{from}\quad 1+\sum_{i=0}^{k-1}|\lambda^{(i)}|+\sum_{j=1}^{r-1}\lambda_j^{(k)} \quad \mbox{to} \quad \sum_{i=0}^{k-1}|\lambda^{(i)}|+\sum_{j=1}^{r}\lambda_j^{(k)}. $$ Note that the set $[\lambda_r^{(k)}]$ depends on $\lambda$, $r$ and $k$ (not only on the number $\lambda_r^{(k)}$). 
\end{df} To $\lambda\in \mathcal{C}^\ell_d$ we attach the following element $m_\lambda\in\operatorname{H}_{d,\bfQ}(q)$, \begin{eqnarray*} \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-.9,.25) {$m_\lambda\;\;=$}; \node at (-0.25,0.25) {$m_{\lambda^{(0)}}$}; \draw[wei] (0,0) --(0,.5) node[below,at start]{$Q_1$}; \node at (0.25,0.25) {$m_{\lambda^{(1)}}$}; \draw[wei] (.5,0) --(.5,.5) node[below,at start]{$Q_2$}; \node at (0.85,.25) {$\cdots$}; \draw[wei] (1.2,0) --(1.2,.5) node[below,at start]{$Q_{\ell-1}$}; \node at (1.45,0.25) {$m_{\lambda^{(\ell-1)}}$}; \draw[wei] (1.7,0) --(1.7,.5) node[below,at start]{$Q_\ell$}; \node at (1.95,0.25) {$m_{\lambda^{(\ell)}}$}; } \end{eqnarray*} We consider $m_\lambda \operatorname{H}_{d,\bfQ}(q)$ as a right $\operatorname{H}_{d,\bfQ}(q)$-module. \begin{df} The {\it affine Schur algebra (of level $\ell$)} is the algebra \begin{eqnarray} \label{defhigherlevSchur} \operatorname{S}_{d,\bfQ}(q)&=&\mathrm{End}_{\operatorname{H}_{d,\bfQ}(q)}\left(\bigoplus_{\lambda\in \mathcal{C}^\ell_d}m_\lambda \operatorname{H}_{d,\bfQ}(q)\right). \end{eqnarray} \end{df} We could define $n_\lambda$ similarly to $m_\lambda$ and consider the following modification of the affine Schur algebra defined in terms of $n_\lambda$ instead of $m_\lambda$: \begin{eqnarray*} \operatorname{\overline{S}}_{d,\bfQ}(q) &= &\mathrm{End}_{\operatorname{H}_{d,\bfQ}(q)}\left(\bigoplus_{\lambda\in \mathcal{C}^\ell_d}n_\lambda \operatorname{H}_{d,\bfQ}(q)\right). \end{eqnarray*} Using the isomorphism $\#\colon \operatorname{H}_{d,\bfQ}(q)\to \operatorname{H}_{d,\bfQ^{-1}}(q)$ in Lemma~\ref{lem-hash_isom} we have $(n_\lambda)^\#=m_\lambda$ (up to a sign). This directly implies the following. \begin{lem} \label{lem-isom-Sbar-Sop} There is an isomorphism of algebras $\operatorname{\overline{S}}_{d,\bfQ}(q)\to \operatorname{S}_{d,\bfQ^{-1}}(q)$.
\end{lem} We now introduce the thick calculus for the algebra $\operatorname{S}_{d,\bfQ}(q)$, extending the diagrammatic calculus for $\operatorname{H}_{d,\bfQ}(q)$ and $\operatorname{S}_{d}(q)$. We draw special elements of this algebra as diagrams that are similar to the special diagrams for $\operatorname{H}_{d,\bfQ}(q)$. The difference is that the black strands are also allowed to have ``multiplicities'' (which are positive integers). We also allow the diagrams to contain locally elements of the form \eqref{diag-split-merge}. Instead of dots, a segment of a strand of multiplicity $b$ is allowed to carry a symmetric Laurent polynomial in $b$ variables. \subsection{Generators of $\operatorname{S}_{d,\bfQ}(q)$} \label{subs-gen_lS} For each $\lambda\in\mathcal{C}^\ell_d$ there is an idempotent $e(\lambda)\in \operatorname{S}_{d,\bfQ}(q)$ given by the identity endomorphism of the summand $m_\lambda\operatorname{H}_{d,\bfQ}(q)$. We draw it as $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-.9,.25) {$e(\lambda)=$}; \draw (-0.6,0) --(-0.6,.5) node[below,at start]{$\lambda^{(0)}_1$}; \draw (-0.4,0) --(-0.4,.5) node[below,at start]{$\lambda^{(0)}_2$}; \node at (-0.2,.25) {$\cdots$}; \draw[wei] (0,0) --(0,.5) node[below,at start]{$Q_1$}; \draw (0.2,0) --(0.2,.5) node[below,at start]{$\lambda^{(1)}_1$}; \draw (0.4,0) --(0.4,.5) node[below,at start]{$\lambda^{(1)}_2$}; \node at (0.6,.25) {$\cdots$}; \draw[wei] (0.8,0) --(0.8,.5) node[below,at start]{$Q_2$}; \draw (1.0,0) --(1.0,.5) node[below,at start]{$\lambda^{(2)}_1$}; \draw (1.2,0) --(1.2,.5) node[below,at start]{$\lambda^{(2)}_2$}; \node at (1.4,.25) {$\cdots$}; \node at (1.6,.25) {$\cdots$}; \node at (1.8,.25) {$\cdots$}; \draw[wei] (2.0,0) --(2.0,.5) node[below,at start]{$Q_\ell$}; \draw (2.2,0) --(2.2,.5) node[below,at start]{$\lambda^{(\ell)}_1$}; \draw (2.4,0) --(2.4,.5) node[below,at start]{$\lambda^{(\ell)}_2$}; \node at (2.6,.25) {$\cdots$}; } $$ Let $\mu$ be another $(\ell+1)$-composition of $d$.
We say that $\mu$ is a \emph{split} of $\lambda$ (and $\lambda$ is a {\it merge} of $\mu$) if there is a $t$ such that the component $\mu^{(t)}$ of $\mu$ is a split of the component $\lambda^{(t)}$ of $\lambda$ (in the sense of Section~\ref{subs-gen_S}) and $\mu^{(i)}=\lambda^{(i)}$ if $i\ne t$. In this case we can define the split map $\lambda\to \mu$ and the merge map $\mu\to\lambda$ in $\operatorname{S}_{d,\bfQ}(q)$ in the same way as in Section~\ref{subs-gen_S}. We draw the split and merge map for $\lambda= (a+b)$ and $\mu=(a,b)$ as in \eqref{diag-split-merge} and for arbitrary $\lambda$, $\mu$ by adding the appropriate vertical strands to the left and right. \begin{df} Assume $\lambda,\mu \in\mathcal{C}^\ell_d$ such that $\mu$ is obtained from $\lambda$ by moving the first component of $\lambda^{(t)}$ to the end of $\lambda^{(t-1)}$ for some $t\in[1;\ell]$. More precisely, we assume $\lambda^{(i)}=\mu^{(i)}$ for $i\ne t-1, t$ and $\mu^{(t-1)}=(\lambda^{(t-1)}_1,\lambda^{(t-1)}_2,\hdots,\lambda^{(t-1)}_{{l}(\lambda^{(t-1)})},\lambda^{(t)}_1)$ and $\mu^{(t)}=(\lambda^{(t)}_2,\lambda^{(t)}_3,\hdots,\lambda^{(t)}_{{l}(\lambda^{(t)})})$. In this case we say that $\mu$ is a \emph{left crossing} of $\lambda$ and that $\lambda$ is a \emph{right crossing} of $\mu$. 
\end{df} To a left crossing $\mu$ of $\lambda$ we assign the two special elements in $\operatorname{S}_{d,\bfQ}(q)$ given by left multiplication with \begin{eqnarray} \label{eq_def-lsh} \begin{tikzpicture}[thick,baseline=9pt] \node at (-0.5,0.5) {$$}; \draw[wei] (0,0) node[below]{$Q_t$} -- (1.5,1); \draw (1.5,0) -- (0.9,1); \draw (1.2,0) -- (0.6,1); \node at (0.5,0.6) {$\hdots$}; \draw (0.6,0) -- (0,1); \node at (1.8,0.5) {$$}; \end{tikzpicture} &\text{respectively}& \begin{tikzpicture}[thick,baseline=9pt] \node at (-0.5,0.5) {$$}; \draw (0,0) -- (0.6,1); \draw (0.3,0) -- (0.9,1); \node at (1,0.6) {$\hdots$}; \draw (0.9,0) -- (1.5,1); \draw[wei] (1.5,0) node[below]{$Q_t$} -- (0,1); \node at (1.8,0.5) {$$}; \end{tikzpicture} \end{eqnarray} where in either case we have $\lambda^{(t)}_1$ parallel black strands crossing the involved red strand and all other strands (which we did not draw) are just vertical. Such a multiplication yields an element of $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q))$ respectively of $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\mu\operatorname{H}_{d,\bfQ}(q),m_\lambda\operatorname{H}_{d,\bfQ}(q))$ because of the relations \eqref{l-Hecke-diag-3}. Thus, by extending by zero to the other summands, we indeed obtain an element of $\operatorname{S}_{d,\bfQ}(q)$.
We call these elements of $\operatorname{S}_{d,\bfQ}(q)$ \emph{left crossings} respectively \emph{right crossings}, denote them $\lambda\to\mu$ respectively $\mu\to\lambda$ and usually draw them just as \begin{eqnarray} \label{eq_def-lsh2} \tikz[thick,xscale=2.5,yscale=1.5, baseline=0.8cm]{ \draw[wei] (0,0) node[below] {$Q_t$} to (.6,1) ; \draw (.6,0) node[below] {$\lambda^{(t)}_1$} to (0,1) ; } &\quad\text{respectively}\quad& \tikz[thick,xscale=2.5,yscale=1.5, baseline=0.8cm]{ \draw (0,0) node[below] {$\lambda^{(t)}_1$} to (.6,1) ; \draw[wei] (.6,0) node[below] {$Q_t$} to (0,1) ; } \end{eqnarray} (with possibly vertical strands to the left and right). Similarly to Section~\ref{subs-gen_S}, for each $\lambda\in\mathcal{C}^\ell_d$ and $f\in \bfk[x_1^{\pm 1},\ldots, x_d^{\pm 1}]$ we have an element $fe(\lambda)\in \operatorname{S}_{d,\bfQ}(q)$. \begin{rk} Similarly to Section~\ref{subs-gen_S}, we could introduce a black crossing in $\operatorname{S}_{d,\bfQ}(q)$. But this element can be expressed in terms of other generators of $\operatorname{S}_{d,\bfQ}(q)$. \end{rk} \begin{rk} One could again realize $\operatorname{S}_{d,\bfQ}(q)$ as a quotient of an algebra structure on the direct sum of homomorphism spaces in the universal thickened higher level category, where we take $I_r=\bfk^*$, $I_b=(\mathbb{Z}_{>0},+)$ and again for $R_a$ the ring of symmetric Laurent polynomials in $a$ variables. The objects to consider are all tensor products of black and red labels, such that the sum of the black labels is $d$ and the sequence of red labels is $\mathbf{Q}$. Since we do not know the defining relations for $\operatorname{S}_{d,\bfQ}(q)$ we do not follow this viewpoint here. In particular, the faithful representation constructed below becomes crucial. Similar remarks also apply to the (higher level) quiver Schur algebras in Section~\ref{sec-QSchur}.
\end{rk} \subsection{Polynomial representation of $\operatorname{S}_{d,\bfQ}(q)$} \label{subs:Pol_rep_lS} By definition \eqref{defhigherlevSchur}, the algebra $\operatorname{S}_{d,\bfQ}(q)$ has a faithful representation on the vector space $\bigoplus_{\lambda\in \mathcal{C}^\ell_d}m_\lambda\operatorname{H}_{d,\bfQ}(q)$. In this section we are going to construct a polynomial representation $$ \operatorname{sP}_{d,\bfQ}=\bigoplus_{\lambda\in \mathcal{C}^\ell_d}\bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]^{\mathfrak{S}_\lambda}e(\lambda) $$ of the algebra $\operatorname{S}_{d,\bfQ}(q)$ sitting inside the defining representation. \begin{df} For each $\lambda\in \mathcal{C}^\ell_d$ we denote \begin{itemize} \item by $\overline\lambda$ the element of $\mathcal{C}_d$ obtained by concatenation of the $\ell+1$ components of $\lambda$, i.e., we have $\overline\lambda=\lambda^{(0)}\cup\ldots \cup \lambda^{(\ell)}$, where $\cup$ denotes the concatenation of compositions; and \item by $e^0(\lambda)$ the idempotent in $\operatorname{H}_{d,\bfQ}(q)$ obtained from $e(\lambda)$ by replacing each vertical black strand of multiplicity $a$ (for each positive integer $a$) by $a$ usual (multiplicity $1$) vertical black strands; and \item by $\mathfrak{r}_\lambda$ the element of $\operatorname{H}_{d,\bfQ}(q)$ represented by the diagram defined by the following three properties. The top part of the diagram corresponds to the idempotent $e^0(\lambda)$. At the bottom of the diagram, each red strand is on the left of each black strand. The diagram may contain left crossings, but neither dots, splits, merges nor right crossings. \end{itemize} \end{df} \begin{ex} Take $\ell=2$, $\lambda=((1),(2,1),(1,2))$.
In this case we have \begin{eqnarray*} \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-1.3,.25) {$e(\lambda)=$}; \draw (-1.0,0) --(-1.0,.5) node[below,at start]{$1$}; \draw[wei] (-0.8,0) --(-0.8,.5) node[below,at start]{$Q_1$}; \draw (-0.6,0) --(-0.6,.5) node[below,at start]{$2$}; \draw (-0.4,0) --(-0.4,.5) node[below,at start]{$1$}; \draw[wei] (-0.2,0) --(-0.2,.5) node[below,at start]{$Q_2$}; \draw (0,0) --(0,.5) node[below,at start]{$1$}; \draw (0.2,0) --(0.2,.5) node[below,at start]{$2$}; } &\quad\text{and}\quad& \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-1.3,.25) {$e^0(\lambda)=$}; \draw (-1.0,0) --(-1.0,.5); \draw[wei] (-0.8,0) --(-0.8,.5) node[below,at start]{$Q_1$}; \draw (-0.6,0) --(-0.6,.5); \draw (-0.4,0) --(-0.4,.5); \draw (-0.2,0) --(-0.2,.5); \draw[wei] (0,0) --(0,.5) node[below,at start]{$Q_2$}; \draw (0.2,0) --(0.2,.5); \draw (0.4,0) --(0.4,.5); \draw (0.6,0) --(0.6,.5); } \end{eqnarray*} as well as $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-1.5,.25) {$\mathfrak{r}_\lambda=$}; \draw (-0.4,0) --(-0.6,.5); \draw (-0.2,0) --(-0.4,.5); \draw (0,0) --(-0.2,.5); \draw[wei] (-0.8,0) --(0,.5) node[below,at start]{$Q_2$}; \draw (0.2,0) --(0.2,.5); \draw (0.4,0) --(0.4,.5); \draw (0.6,0) --(0.6,.5); \draw[wei] (-1.0,0) --(-0.8,.5) node[below,at start]{$Q_1$}; \draw (-0.6,0) --(-1.0,.5); } $$ \end{ex} Denote by $\iota$ the obvious inclusion of $\operatorname{H}_{d}(q)$ into $\operatorname{H}_{d,\bfQ}(q)$ obtained by adding $\ell$ red strands on the left. This defines an inclusion \begin{eqnarray} \label{inclhigherlevel} \Phi_\lambda\colon \bfk[x_1^{\pm 1},\ldots,x_d^{\pm 1}]^{\mathfrak{S}_\lambda}\to m_\lambda \operatorname{H}_{d,\bfQ}(q), \qquad f\mapsto \mathfrak{r}_{\lambda}\iota(\Phi_{\overline\lambda}(f)). \end{eqnarray} \begin{ex} Let $\lambda=((2,1),(1,2))$. Then the element $\Phi_\lambda(f)$ is displayed on the right hand side in Figure~\ref{diag-ex_Phi}.
It equals the left hand side, since relations \eqref{l-Hecke-diag-2}-\eqref{l-Hecke-diag-3} allow dots and black-black crossings to slide through red strands. This argument shows in general that the element $\Phi_\lambda(f)$ is indeed in $m_\lambda \operatorname{H}_{d,\bfQ}(q)$. (Although this is obvious for the left hand side of the equality in Figure~\ref{diag-ex_Phi}, this was not completely obvious for the original definition of $\Phi_\lambda(f)$.) \begin{figure} \begin{equation*} \tikz[thick,xscale=2.5,yscale=1.5]{ \draw (-0.6,1.4) --(-0.6,1.6); \draw (-0.4,1.4) --(-0.4,1.6); \draw (-0.1,1) --(-0.1,1.6); \node at (-0.5,1.25) {$m_2$}; \draw (-0.8,1.1) rectangle (-.2,1.4); \draw (0.3,1) --(0.3,1.6); \draw (0.6,1.4) --(0.6,1.6); \draw (0.8,1.4) --(0.8,1.6); \node at (0.7,1.25) {$m_2$}; \draw (0.4,1.1) rectangle (1,1.4); \draw (-0.6,0.5) --(-0.6,.6); \draw (-0.6,.9) --(-0.6,1.1); \draw (-0.4,0.5) --(-0.4,.6); \draw (-0.4,.9) --(-0.4,1.1); \draw (-0.1,0.5) --(-0.1,1.1); \node at (-0.5,0.75) {$x_1-qx_2$}; \draw (-0.8,.6) rectangle (-.2,0.9); \draw[wei] (0.1,0.5) --(0.1,1.6); \draw (0.3,0.5) --(0.3,1.1); \draw (0.6,0.5) --(0.6,.6); \draw (0.6,.9) --(0.6,1.1); \draw (0.8,0.5) --(0.8,.6); \draw (0.8,.9) --(0.8,1.1); \node at (0.7,0.75) {$x_5-qx_6$}; \draw (0.4,.6) rectangle (1,0.9); \draw (-0.4,0) --(-0.6,.5); \draw (-0.2,0) --(-0.4,.5); \draw (0.1,0) --(-0.1,.5); \draw[wei] (-0.6,0) --(0.1,.5) ; \draw (0.3,0) --(0.3,.5); \draw (0.6,0) --(0.6,.5); \draw (0.8,0) --(0.8,.5); \node at (0.25,-0.15) {$f$}; \draw (-0.5,0) rectangle (1,-0.3); \draw[wei] (-0.6,-.4) -- (-0.6,0); \draw (-0.4,-0.4) --(-0.4,-0.3); \draw (-0.3,-0.4) --(-0.3,-0.3); \draw (0.1,-0.4) --(0.1,-0.3); \draw (0.3,-0.4) --(0.3,-0.3); \draw (0.6,-0.4) --(0.6,-0.3); \draw (0.8,-0.4) --(0.8,-0.3); \node at (0.25,-0.55) {\small{$n'_{\overline\lambda}$}}; \draw (-0.5,-0.4) rectangle (1,-0.7); \draw[wei] (-0.6,-0.8) -- (-0.6,-0.4) node[below,at start]{$Q_1$}; \draw (-0.4,-0.9) --(-0.4,-0.7); \draw (-0.2,-0.9) 
--(-0.2,-0.7); \draw (0.1,-0.9) --(0.1,-0.7); \draw (0.3,-0.9) --(0.3,-0.7); \draw (0.7,-0.9) --(0.7,-0.7); \draw (0.9,-0.9) --(0.9,-0.7); \node at (1.6,0.5) {$=$}; \draw[wei] (2.2,0.9) --(3.2,1.6); \draw (2.6,0.9) -- (2.4,1.6); \draw (2.8,0.9) -- (2.6,1.6); \draw (3.1,0.9) -- (2.9,1.6); \draw (3.3,1) --(3.3,1.6); \draw (3.6,1.4) --(3.6,1.6); \draw (3.8,1.4) --(3.8,1.6); \node at (3.7,1.25) {$m_2$}; \draw (3.4,1.1) rectangle (4,1.4); \draw (3.3,0.5) --(3.3,1.1); \draw (3.6,0.5) --(3.6,.6); \draw (3.6,.9) --(3.6,1.1); \draw (3.8,0.5) --(3.8,.6); \draw (3.8,.9) --(3.8,1.1); \node at (3.7,0.75) {$x_5-qx_6$}; \draw (3.4,.6) rectangle (4,0.9); \draw (2.6,0.8) --(2.6,0.9); \draw (2.8,0.8) --(2.8,0.9); \draw (3.1,0.5) --(3.1,0.9); \node at (2.7,0.65) {$m_2$}; \draw (2.4,.5) rectangle (3,0.8); \draw (2.6,0) --(2.6,0.1); \draw (2.6,0.4) --(2.6,0.5); \draw (2.8,0) --(2.8,0.1); \draw (2.8,0.4) --(2.8,0.5); \draw (3.1,0) --(3.1,0.5); \node at (2.7,0.25) {$x_1-qx_2$}; \draw (2.4,.1) rectangle (3,0.4); \draw (3.3,0) --(3.3,.5); \draw (3.6,0) --(3.6,.5); \draw (3.8,0) --(3.8,.5); \node at (3.25,-0.15) {$f$}; \draw (2.5,0) rectangle (4,-0.3); \draw (2.6,-0.4) --(2.6,-0.3); \draw (2.8,-0.4) --(2.8,-0.3); \draw (3.1,-0.4) --(3.1,-0.3); \draw (3.3,-0.4) --(3.3,-0.3); \draw (3.6,-0.4) --(3.6,-0.3); \draw (3.8,-0.4) --(3.8,-0.3); \node at (3.25,-0.55) {$n'_{\overline\lambda}$}; \draw (2.5,-0.4) rectangle (4,-0.7); \draw[wei] (2.2,-0.8) -- (2.2,0.9) node[below,at start]{$Q_1$}; \draw (2.6,-0.9) --(2.6,-0.7); \draw (2.8,-0.9) --(2.8,-0.7); \draw (3.1,-0.9) --(3.1,-0.7); \draw (3.3,-0.9) --(3.3,-0.7); \draw (3.6,-0.9) --(3.6,-0.7); \draw (3.8,-0.9) --(3.8,-0.7); } \end{equation*} \caption{Well-definedness of the polynomial representation.} \label{diag-ex_Phi} \end{figure} \end{ex} \begin{lem} \label{lem-polrep-lS-inside-split-merge} Let $\lambda, \mu\in \mathcal{C}^\ell_d$ and assume that $\mu$ is a split of $\lambda$. 
\begin{enumerate} \item The split map in $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q))$ applied to the image of $\Phi_\lambda$ is contained in the image of $\Phi_\mu$. \item The merge in $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\mu\operatorname{H}_{d,\bfQ}(q),m_\lambda\operatorname{H}_{d,\bfQ}(q))$ applied to the image of $\Phi_\mu$ is contained in the image of $\Phi_\lambda$. \end{enumerate} \end{lem} \begin{proof} The proof is totally analogous to the proof of Lemma~\ref{lem-polrep-S-inside}. \end{proof} \begin{lem} \label{lem-polrep-lS-inside-crossings} Let $\lambda\in \mathcal{C}^\ell_d$. Assume that $\mu$ is a left crossing of $\lambda$. \begin{enumerate} \item The left crossing in $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q))$ applied to the image of $\Phi_\lambda$ is contained in the image of $\Phi_\mu$. \item The right crossing in $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\mu\operatorname{H}_{d,\bfQ}(q),m_\lambda\operatorname{H}_{d,\bfQ}(q))$ applied to the image of $\Phi_\mu$ is contained in the image of $\Phi_\lambda$. \end{enumerate} \end{lem} \begin{proof} Fix $f\in \bfk[x_1^{\pm 1},\ldots,x_d^{\pm 1}]^{\mathfrak{S}_{\lambda}}$. It is clear from the definitions that the left crossing map $\lambda\to\mu$ acts by sending $\Phi_\lambda(fe(\lambda))$ to $\Phi_\mu(fe(\mu))$. Let $t$ be the index such that $\lambda^{(t)}=(\mu^{(t-1)}_{{l}(\mu^{(t-1)})})\cup\mu^{(t)}$. Set $a=\lambda^{(t)}_1$ and $b=\sum_{i=0}^{t-1}|\lambda^{(i)}|$. Relation \eqref{l-Hecke-diag-1} implies that $\mu\to\lambda$ sends $\Phi_\mu(fe(\mu))$ to $\Phi_\lambda(gfe(\lambda))$, where $g=\prod_{i=b+1}^{b+a}(x_i-Q_t)$.
\end{proof} Lemmas~\ref{lem-polrep-lS-inside-split-merge} and~\ref{lem-polrep-lS-inside-crossings} will be used to deduce the following result, which as a special case also establishes Proposition~\ref{prop-polrep_S}. \begin{prop} \label{prop-polrep-lS} There is a unique action of the algebra $\operatorname{S}_{d,\bfQ}(q)$ on $\operatorname{sP}_{d,\bfQ}$ satisfying the following properties, where we use the abbreviation $P=\bfk[x^{\pm 1}_1,\hdots,x^{\pm 1}_d]$. \begin{itemize} \item The idempotent $e(\lambda)$, $\lambda\in \mathcal{C}^\ell_d$, acts on $\operatorname{sP}_{d,\bfQ}$ as the projection to $P^{\mathfrak{S}_\lambda}e(\lambda)$. \item For each $g\in P^{\mathfrak{S}_\lambda}$, the element $ge(\lambda)$ sends $fe(\lambda)$ to $gfe(\lambda)$. \item Splits and merges act in the same way as in Proposition~\ref{prop-polrep_S}. \item Left crossing maps $\lambda\to \mu$ act by sending $fe(\lambda)$, $f\in P^{\mathfrak{S}_\lambda}$, to $fe(\mu)$. \item Right crossing maps $\mu\to\lambda$ act by sending $fe(\mu)$, $f\in P^{\mathfrak{S}_\mu}$, to $gfe(\lambda)$, where $g=\prod_{i\in [\lambda^{(t)}_1]}(x_i-Q_t)$ and $t$ is as in the definition of left crossings. \end{itemize} Moreover, the obtained representation of $\operatorname{S}_{d,\bfQ}(q)$ in $\operatorname{sP}_{d,\bfQ}$ is faithful. \end{prop} \begin{proof} The proof of the existence and uniqueness will be spread over the whole next section and then finally follow from Lemma~\ref{lem-generate_lS} and the above lemmas. The proof of faithfulness is similar to that of Proposition~\ref{prop-polrep_S}: Assume that there exist $\lambda,\mu\in \mathcal{C}^\ell_d$ and a nonzero element $\phi\in \mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q))$ such that $\phi$ acts on $\operatorname{sP}_{d,\bfQ}$ by zero. Consider the split $\lambda'\to \lambda$ such that for each $r\in\{0,1,\ldots,\ell\}$, we have $\lambda'^{(r)}=(|\lambda^{(r)}|)$ (i.e., $\lambda'$ is the coarsest possible).
Then, after composing $\phi$ with this split, we get a nonzero element $\psi\in\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_{\lambda'}\operatorname{H}_{d,\bfQ}(q),m_{\mu}\operatorname{H}_{d,\bfQ}(q))$ that acts by zero on $\operatorname{sP}_{d,\bfQ}$. The fact that $\psi$ acts by zero on $\operatorname{sP}_{d,\bfQ}$ implies $ \psi(m_{\lambda'}\overrightarrow{p}_{\lambda'}\mathfrak{r}_{\lambda'})=\psi(\Phi_{\lambda'}(1))=0, $ where $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-.9,.25) {$\overrightarrow{p}_{\lambda'}\;\;=$}; \node at (-0.25,0.25) {$\overrightarrow{p}_{\lambda'^{(0)}}$}; \draw[wei] (0,0) --(0,.5) node[below,at start]{$Q_1$}; \node at (0.25,0.25) {$\overrightarrow{p}_{\lambda'^{(1)}}$}; \draw[wei] (.5,0) --(.5,.5) node[below,at start]{$Q_2$}; \node at (0.85,.25) {$\cdots$}; \draw[wei] (1.2,0) --(1.2,.5) node[below,at start]{$Q_{\ell-1}$}; \node at (1.45,0.25) {$\overrightarrow{p}_{\lambda'^{(\ell-1)}}$}; \draw[wei] (1.7,0) --(1.7,.5) node[below,at start]{$Q_\ell$}; \node at (1.95,0.25) {$\overrightarrow{p}_{\lambda'^{(\ell)}}$}; } $$ This implies $\psi(m_{\lambda'})\overrightarrow{p}_{\lambda'}\mathfrak{r}_{\lambda'}=0$. Moreover, it is clear from \eqref{l-Hecke-diag-1} that the element $\mathfrak{r}_{\lambda'}$ can be multiplied by an element of $\operatorname{H}_{d,\bfQ}(q)$ on the right such that the product is of the form $Qe^0(\lambda')$, where $Q\in\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]$, $Q\ne 0$. We get $$ \psi(m_{\lambda'})\overrightarrow{p}_{\lambda'}Q=\psi(m_{\lambda'})\overrightarrow{p}_{\lambda'}Qe^0(\lambda')=0. $$ Thus, $\psi(m_{\lambda'})=0$, because $\operatorname{H}_{d,\bfQ}(q)$ is free as a right $\bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]$-module. \end{proof} \subsection{A basis of $\operatorname{S}_{d,\bfQ}(q)$} The goal of this section is to obtain a basis of $\operatorname{S}_{d,\bfQ}(q)$.
For this we first describe the space $\mathrm{Hom}(\lambda,\mu)=\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q))$, for $\lambda,\mu\in\mathcal{C}^\ell_d$, in terms of the finite Hecke algebra $\operatorname{H}^{\operatorname{fin}}_d(q)$, see Remark~\ref{ordinaryHecke}. For $\lambda\in\mathcal{C}_d$, denote by $\operatorname{H}^{\operatorname{fin}}_\lambda(q)\subset\operatorname{H}^{\operatorname{fin}}_d(q)$ the Hecke algebra corresponding to $\mathfrak{S}_\lambda$ (see \eqref{Young}) and by $\epsilon_\lambda$ the sign representation of $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$. The following is well-known: \begin{lem} \label{lem-ind_sing} We have an isomorphism of right $\operatorname{H}_{d}(q)$-modules \begin{eqnarray} \label{easy} m_\lambda\operatorname{H}_{d}(q)&\simeq& \epsilon_\lambda\otimes_{\operatorname{H}^{\operatorname{fin}}_\lambda(q)}\operatorname{H}_{d}(q) \end{eqnarray} \end{lem} \begin{proof} Let $v$ be a generator of the (one-dimensional) vector space $\epsilon_\lambda$. We have an obvious morphism of $\operatorname{H}_{d}(q)$-modules $$ \epsilon_\lambda\otimes_{\operatorname{H}^{\operatorname{fin}}_\lambda(q)}\operatorname{H}_{d}(q)\to m_\lambda\operatorname{H}_{d}(q),\quad v\otimes x\mapsto m_\lambda\cdot x. $$ It is well-defined because we have $m_\lambda T_r=-m_\lambda$ for each $r$ such that $s_r\in\mathfrak{S}_\lambda$. The bijectivity of this morphism follows from the fact that $\operatorname{H}_{d}(q)$ is a free left $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$-module. \end{proof} Now, we would like to extend \eqref{easy} to the higher level affine Hecke algebra $\operatorname{H}_{d,\bfQ}(q)$.
Given $\lambda\in\mathcal{C}^\ell_d$, denote again by $\operatorname{H}^{\operatorname{fin}}_\lambda(q)\subset\operatorname{H}^{\operatorname{fin}}_d(q)$ the Hecke algebra corresponding to $\mathfrak{S}_\lambda$ (the group $\mathfrak{S}_\lambda$ is as in Definition~\ref{def-multicomp}). We can identify $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$ with the unitary subalgebra in $e^0(\lambda)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$ generated by the elements $T_re^0(\lambda)$ where the indices $r$ correspond to simple reflections in $\mathfrak{S}_\lambda$. \begin{lem} We have an isomorphism of right $\operatorname{H}_{d,\bfQ}(q)$-modules \begin{eqnarray} \label{noteasy} m_\lambda\operatorname{H}_{d,\bfQ}(q)&\simeq& \epsilon_\lambda\otimes_{\operatorname{H}^{\operatorname{fin}}_\lambda(q)} e^0(\lambda)\operatorname{H}_{d,\bfQ}(q) \end{eqnarray} \end{lem} \begin{proof} Let $\operatorname{H}_\lambda(q)$ be the (non-unitary) subalgebra of $\operatorname{H}_{d,\bfQ}(q)$ generated by $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$ and $\bfk[x_1^{\pm 1},\ldots,x_d^{\pm 1}]$. (This algebra is clearly isomorphic to a tensor product of the algebras $\operatorname{H}_{\lambda_{i}^{(j)}}(q)$.) We have $$ \begin{array}{rcl} \epsilon_\lambda\otimes_{\operatorname{H}^{\operatorname{fin}}_\lambda(q)} e^0(\lambda)\operatorname{H}_{d,\bfQ}(q) & \simeq & \epsilon_\lambda\otimes_{\operatorname{H}^{\operatorname{fin}}_\lambda(q)}\operatorname{H}_\lambda(q)\otimes_{\operatorname{H}_\lambda(q)} e^0(\lambda)\operatorname{H}_{d,\bfQ}(q)\\ & \simeq & m_\lambda \operatorname{H}_\lambda(q)\otimes_{\operatorname{H}_\lambda(q)} e^0(\lambda)\operatorname{H}_{d,\bfQ}(q)\\ & \simeq & m_\lambda\operatorname{H}_{d,\bfQ}(q). \end{array} $$ The first isomorphism is obvious, the second follows from Lemma~\ref{lem-ind_sing} and the third is true because the right $\operatorname{H}_\lambda(q)$-module $e^0(\lambda)\operatorname{H}_{d,\bfQ}(q)$ is free by Proposition~\ref{prop-basis-Hdl}.
\end{proof} We thus have that $\mathrm{Hom}(\lambda,\mu)$ is isomorphic to \begin{eqnarray} \label{eq-Hom_via_sing} \begin{array}{rcl} &&\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}\left(\epsilon_\lambda\otimes_{\operatorname{H}^{\operatorname{fin}}_\lambda(q)}e^0(\lambda)\operatorname{H}_{d,\bfQ}(q),\epsilon_\mu\otimes_{\operatorname{H}^{\operatorname{fin}}_\mu(q)}e^0(\mu)\operatorname{H}_{d,\bfQ}(q)\right)\\ &\simeq& \mathrm{Hom}_{\operatorname{H}^{\operatorname{fin}}_\lambda(q)}\left(\epsilon_\lambda,\epsilon_\mu\otimes_{\operatorname{H}^{\operatorname{fin}}_\mu(q)}e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)\right). \end{array} \end{eqnarray} Above, we used the adjunction $$ \mathrm{Hom}_{A_1}(M\otimes_{A_2}B,N)=\mathrm{Hom}_{A_2}(M,\mathrm{Hom}_{A_1}(B,N)), $$ where $A_1$ and $A_2$ are rings, $N$ is a right $A_1$-module, $M$ is a right $A_2$-module and $B$ is an $(A_2,A_1)$-bimodule. This adjunction is applied to $A_1=\operatorname{H}_{d,\bfQ}(q)$, $A_2=\operatorname{H}^{\operatorname{fin}}_\lambda(q)$, $N=\epsilon_\mu\otimes_{\operatorname{H}^{\operatorname{fin}}_\mu(q)}e^0(\mu)\operatorname{H}_{d,\bfQ}(q)$, $M=\epsilon_\lambda$, $B=e^0(\lambda)\operatorname{H}_{d,\bfQ}(q)$. Now, we see that to get a basis of $\operatorname{S}_{d,\bfQ}(q)$, we should understand the structure of the $(\operatorname{H}^{\operatorname{fin}}_\mu(q),\operatorname{H}^{\operatorname{fin}}_\lambda(q))$-bimodule $e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$ for $\lambda,\mu\in\mathcal{C}^\ell_d$. \begin{df} Let $\lambda,\mu, \nu\in\mathcal{C}_d$. Denote by $\lambda\cap\mu$ the composition in $\mathcal{C}_d$ such that $\frak{S}_{\lambda\cap\mu}=\frak{S}_\lambda\cap \frak{S}_\mu$. Recall from Section~\ref{subs-affSchur} that we denote by $\operatorname{D}_{\lambda,\mu}$ the set of minimal length representatives of the double cosets $\frak{S}_\lambda\backslash \frak{S}_d/\frak{S}_\mu$. 
If $\frak{S}_\lambda, \frak{S}_\mu$ are subgroups of $\frak{S}_\nu$ we denote $\operatorname{D}^\nu_{\lambda,\mu}=\frak{S}_\nu\cap \operatorname{D}_{\lambda,\mu}$. \end{df} Let $\mathcal{X}$ be the set of Laurent monomials $x_1^{a_1}x_2^{a_2}\ldots x_d^{a_d}$ with $a_r\in\mathbb{Z}$. Denote by $\mathcal{X}_\lambda^+$ the subset of $\mathcal{X}$ that contains only the monomials such that $(a_1,a_2,\ldots,a_d)$ is non-decreasing inside each component of $\lambda$, i.e., we have \begin{eqnarray} \label{Xdomla} \mathcal{X}_\lambda^+=\left\{ x_1^{a_1}\cdots x_d^{a_d}\in \mathcal{X}\mid a_r\leqslant a_{r+1}, \mbox{ unless } r=\lambda_1+\ldots+\lambda_t \mbox{ for some }t \right\}. \end{eqnarray} For $p=x_1^{a_1}\cdots x_d^{a_d}\in\mathcal{X}_\lambda^+$, denote by $\lambda\cap p$ the unique composition that is finer than $\lambda$ and such that its components correspond precisely to the segments where $(a_1,a_2,\ldots,a_d)$ is constant. In other words, the indices $r,r+1\in \{1,2,\ldots,d\}$ are in the same component of the composition $\lambda\cap p$ if and only if they are in the same component of the composition $\lambda$ and $a_r=a_{r+1}$. \begin{ex} If for instance $\lambda=(2,3)$, then $p=x_1^3x_2^3x_3^2x_4^6x_5^6\in\mathcal{X}_\lambda^+$ because $3\leqslant 3$ and $2\leqslant 6\leqslant 6$, and $\lambda\cap p=(2,1,2)$. \end{ex} Assume $\lambda,\mu\in \mathcal{C}_d$, $w\in \operatorname{D}_{\lambda,\mu}$. Denote by $\lambda\cap w(\mu)$ the unique composition in $\mathcal{C}_d$ such that $\mathfrak{S}_{\lambda\cap w(\mu)}=\mathfrak{S}_\lambda\cap w\mathfrak{S}_\mu w^{-1}$. (Note that $w(\mu)$ itself makes no sense as a composition. Note also that $\lambda\cap w(\mu)$ is not defined for an arbitrary permutation $w$ that is not an element of $\operatorname{D}_{\lambda,\mu}$.) Recall that for each $(\ell+1)$-composition $\lambda\in \mathcal{C}^\ell_d$ we denote by $\overline \lambda$ the associated composition (i.e., the concatenation of the components of $\lambda$).
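For concreteness, the combinatorics of $\mathcal{X}^+_\lambda$ and of the refinement $\lambda\cap p$ can be checked by a short brute-force script operating on exponent vectors (a sketch; the helper names are ours, not notation from the text):

```python
# Brute-force sketch (our helper names, not notation from the text):
# membership in X^+_lambda and the refinement "lambda cap p".

def blocks(lam):
    """Index blocks of {1,...,d} cut out by the composition lam."""
    out, start = [], 1
    for part in lam:
        out.append(list(range(start, start + part)))
        start += part
    return out

def in_X_plus(lam, a):
    """Is x_1^{a_1}...x_d^{a_d} in X^+_lambda, i.e. is the exponent
    vector a non-decreasing inside each block of lam?"""
    return all(a[i - 1] <= a[i] for blk in blocks(lam) for i in blk[:-1])

def cap(lam, a):
    """The composition lam cap p: cut each block of lam further at
    every place where the exponent changes."""
    parts = []
    for blk in blocks(lam):
        run = 1
        for i in blk[1:]:
            if a[i - 1] == a[i - 2]:  # exponent at i equals exponent at i-1
                run += 1
            else:
                parts.append(run)
                run = 1
        parts.append(run)
    return parts

# the worked example above: lambda = (2,3), p = x1^3 x2^3 x3^2 x4^6 x5^6
assert in_X_plus([2, 3], [3, 3, 2, 6, 6])
assert cap([2, 3], [3, 3, 2, 6, 6]) == [2, 1, 2]
```

The cut at $r=\lambda_1+\ldots+\lambda_t$ in \eqref{Xdomla} corresponds to the fact that `in_X_plus` never compares exponents across two different blocks.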
If $\lambda$, $\mu$ and $\nu$ are $(\ell+1)$-compositions in $\mathcal{C}_d^\ell$, we can also use the notation $\operatorname{D}_{\lambda,\mu}$, $\operatorname{D}^{\nu}_{\mu,\lambda}$, $\lambda\cap\mu$, $\lambda\cap w(\mu)$, $\mathcal{X}_\lambda^+$ etc. instead of $\operatorname{D}_{\overline\lambda,\overline\mu}$, $\operatorname{D}^{\nu}_{\overline\mu,\overline\lambda}$, $\overline\lambda\cap\overline\mu$, $\overline\lambda\cap w(\overline\mu)$, $\calX^+_{\overline\lambda}$ etc. (in these situations we just consider each $(\ell+1)$-composition as the associated composition). For $p\in\mathcal{X}^+_{(d)}$, denote by $\frakS_p$ the stabilizer of $p$ in $\frakS_d$. Then the notation $\operatorname{D}_{p,\emptyset}$ also makes sense. \begin{lem} \label{lem:basis-Heck} The set $$ B=\{T_wpT_z\mid w\in \frakS_d,p\in \mathcal{X}^+_{(d)}, z\in \operatorname{D}_{p,\emptyset} \} $$ is a basis of $\operatorname{H}_{d}(q)$. \end{lem} \begin{proof} First we show that $B$ spans; that is, we prove that the set $$ B'=\{pT_z\mid p\in \mathcal{X}^+_{(d)}, z\in \operatorname{D}_{p,\emptyset} \} $$ generates the left $\operatorname{H}^{\operatorname{fin}}_d(q)$-module $\operatorname{H}_{d}(q)$. To do this, it is enough to show that each monomial $p\in \mathcal{X}$ can be written as an $\operatorname{H}^{\operatorname{fin}}_d(q)$-linear combination of elements of $B'$. This can be proved by induction using the equality $$ p=q^{-1}T_rs_r(p)T_r+(q^{-1}-1)T_r\partial_r(x_rp). $$ For the linear independence it is enough to check that the elements of $B$ act on the polynomial representation $\bfk[x_1^{\pm 1},\ldots,x_d^{\pm 1}]$ by linearly independent operators. This can be done similarly to the proof of Proposition~\ref{prop-basis-Hdl}. \end{proof} \begin{coro} \label{coro-two_set_invert} Fix $\lambda\in \mathcal{C}_d$. Consider the left $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$-module $\operatorname{H}_{d}(q)$.
Consider two sets in this module: $$ \mathcal{X}\subset \operatorname{H}_{d}(q),\qquad\text{and}\qquad\{pT_z\mid p\in \mathcal{X}^+_\lambda,z\in \operatorname{D}^{\lambda}_{\lambda\cap p,\emptyset}\}\subset \operatorname{H}_{d}(q). $$ The elements of the two sets above can be expressed in terms of each other with an invertible change of basis matrix. \end{coro} \begin{proof} The statement follows from the fact that both of the sets above form bases in the left $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$-module $ \operatorname{H}_{\lambda_1}(q)\otimes\ldots\otimes \operatorname{H}_{\lambda_{l(\lambda)}}(q). $ \end{proof} \begin{rk} Let $\lambda,\mu\in\mathcal{C}^\ell_d$ and pick $w\in \operatorname{D}_{\mu,\lambda}$ and $z\in \frakS_{\lambda\cap w^{-1}(\mu)}$. Setting $z'=wzw^{-1}$ we obtain the equality $wz=z'w$ and also $T_wT_z=T_{z'}T_w$ in the Hecke algebra $\operatorname{H}_{d}(q)$. Now, let ${\mathbf{b}},{\mathbf{c}}\in J^{\ell,d}$ be such that we have $e^0(\mu)=e({\mathbf{b}})$ and $e^0(\lambda)=e({\mathbf{c}})$. We would also like to have the following version of this equality in $\operatorname{H}_{d,\bfQ}(q)$ (see Section~\ref{subs-basis_lHeck} for the notation) \begin{equation} \label{eq-lT_{wz}} T^{\mathbf{b},\mathbf{c}}_w T_z=T_{z'}T^{\mathbf{b},\mathbf{c}}_w. \end{equation} This is slightly delicate, because the element $T^{\mathbf{b},\mathbf{c}}_w$ depends on some choices. We can however make these choices in a way such that \eqref{eq-lT_{wz}} indeed holds. To do this, we first choose for each $w\in \operatorname{D}_{\mu,\lambda}$ some $T^{\mathbf{b},\mathbf{c}}_w$ arbitrarily and then define $T^{\mathbf{b},\mathbf{c}}_y$ for any other $y\in\frakS_\mu w \frakS_\lambda$ (depending on these choices) by induction on the length. Assuming we have constructed $T^{\mathbf{b},\mathbf{c}}_{y}$ for some $y$ such that $y(\mathbf{c})=\mathbf{b}$, then for each simple reflection $s\in \frakS_{\lambda}$ such that ${l}(ys)={l}(y)+{l}(s)$ (resp.
for each simple reflection $s'\in \mathfrak{S}_{\mu}$ such that ${l}(s'w)={l}(s')+{l}(w)$) we set $T^{\mathbf{b},s(\mathbf{c})}_{ws}=T^{\mathbf{b},\mathbf{c}}_w T_s$ (resp. $T^{s'(\mathbf{b}),\mathbf{c}}_{s'w}=T_{s'}T^{\mathbf{b},\mathbf{c}}_w$). \end{rk} \begin{lem} \label{lem-basis_bimodule} The set \begin{eqnarray*} \mathcal{B}&=&\left\{T_xT^{{\mathbf{b}},{\mathbf{c}}}_wpT_y\mid w\in \operatorname{D}_{\mu,\lambda}, x\in \frakS_\mu, p\in \mathcal{X}^+_{\lambda\cap w^{-1}(\mu)}, y\in \operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu)\cap p,\emptyset}\right\} \end{eqnarray*} is a basis of $e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$. \end{lem} \begin{proof} It is a standard fact that each $y\in\mathfrak{S}_d$ has a unique presentation of the form $y=xwz$, where $w\in \operatorname{D}_{\mu,\lambda}$, $x\in \frakS_\mu$, $z\in \operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu),\emptyset}$ and ${l}(y)={l}(x)+{l}(w)+{l}(z)$. Together with Proposition~\ref{prop-basis-Hdl} this shows that the left $\operatorname{H}^{\operatorname{fin}}_\mu(q)$-module $e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$ is free with a basis \begin{eqnarray*} \mathcal{B}_1&=&\left\{T^{{\mathbf{b}},{\mathbf{c}}}_w T_y p\mid w\in \operatorname{D}_{\mu,\lambda}, y\in \operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu), \emptyset}, p\in\mathcal{X} \right\} \end{eqnarray*} or alternatively with a basis \begin{eqnarray*} \mathcal{B}_2&=& \{T^{{\mathbf{b}},{\mathbf{c}}}_w pT_y\mid w\in \operatorname{D}_{\mu,\lambda}, y\in \operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu), \emptyset}, p\in\mathcal{X} \}. \end{eqnarray*} Indeed, we can find a bijection between $\mathcal{B}_1$ and $\mathcal{B}_2$ such that, for an appropriate ordering of the bases, the base change matrix is triangular with invertible entries on the diagonal.
For $w\in \operatorname{D}_{\mu,\lambda}, y\in \operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu), \emptyset}$, we define \begin{eqnarray*} \mathcal{B}_2^{w,y}=\{T^{{\mathbf{b}},{\mathbf{c}}}_w pT_y\mid p\in\mathcal{X}\},&& \mathcal{B}_3^{w,y}=\{T^{{\mathbf{b}},{\mathbf{c}}}_w pT_{zy}\mid p\in\mathcal{X}^+_{\lambda\cap w^{-1}(\mu)},z\in \operatorname{D}^{\lambda\cap w^{-1}(\mu)}_{{\lambda\cap w^{-1}(\mu)}\cap p,\emptyset}\}. \end{eqnarray*} By Corollary~\ref{coro-two_set_invert}, the elements of the sets $\mathcal{B}_2^{w,y}$ and $\mathcal{B}_3^{w,y}$ can be written as $\operatorname{H}^{\operatorname{fin}}_{w(\lambda)\cap \mu}(q)$-linear combinations of each other with an invertible change of basis matrix. Since $\mathcal{B}_2=\coprod_{w,y}\mathcal{B}_2^{w,y}$ is a basis of the left $\operatorname{H}^{\operatorname{fin}}_\mu(q)$-module $e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$, so is $\mathcal{B}_3=\coprod_{w,y}\mathcal{B}_3^{w,y}$. The set $\mathcal{B}_3$ can be written in a slightly different way as \begin{eqnarray*} \mathcal{B}_3&=&\{T^{{\mathbf{b}},{\mathbf{c}}}_wpT_y\mid w\in \operatorname{D}_{\mu,\lambda}, p\in \mathcal{X}^+_{\lambda\cap w^{-1}(\mu)}, y\in \operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu)\cap p,\emptyset}\}. \end{eqnarray*} This implies that $\mathcal{B}$ is a basis of the vector space $e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$. \end{proof} For each $w\in \operatorname{D}_{\mu,\lambda}$ and $p\in \mathcal{X}^+_{\lambda\cap w^{-1}(\mu)}$ consider the element \begin{eqnarray*} b^{w,p}\in\mathrm{Hom}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q)),&&m_\lambda h\mapsto m_\mu T^{{\mathbf{b}},{\mathbf{c}}}_{w}p\Bigl(\sum_{y} (-q)^{r-{l}(y)}T_y\Bigr) h, \end{eqnarray*} where $y$ runs through $\operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu)\cap p,\emptyset}$ and $r$ denotes the length of the longest element therein.
\begin{coro} \label{Corbasis} The following is a basis of $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q))$: $$ \{b^{w,p}\mid w\in \operatorname{D}_{\mu,\lambda},p\in \mathcal{X}^+_{\lambda\cap w^{-1}(\mu)}\}. $$ \end{coro} \begin{proof} We have seen in \eqref{eq-Hom_via_sing} that $\mathrm{Hom}_{\operatorname{H}_{d,\bfQ}(q)}(m_\lambda\operatorname{H}_{d,\bfQ}(q),m_\mu\operatorname{H}_{d,\bfQ}(q))$ is in bijection with the vector subspace of elements of $\epsilon_\mu\otimes_{\operatorname{H}^{\operatorname{fin}}_\mu(q)}e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$ on which $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$ acts from the right by the sign representation. By Lemma~\ref{lem-basis_bimodule}, the right $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$-module $\epsilon_\mu\otimes_{\operatorname{H}^{\operatorname{fin}}_\mu(q)}e^0(\mu)\operatorname{H}_{d,\bfQ}(q) e^0(\lambda)$ is a direct sum of submodules $M_{w,p}$, for $w\in \operatorname{D}_{\mu,\lambda}$ and $p\in \mathcal{X}^+_{\lambda\cap w^{-1}(\mu)}$, with vector space basis $$ \{\epsilon_\mu \otimes T^{\mathbf{b},\mathbf{c}}_w p T_y\mid y\in \operatorname{D}^{\lambda}_{\lambda\cap w^{-1}(\mu)\cap p,\emptyset}\}. $$ We claim that the vector subspace of vectors of $M_{w,p}$ that transform as a sign representation of $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$ is one-dimensional. Indeed, the right $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$-module $M_{w,p}$ is isomorphic to $\epsilon_\xi\otimes_{\operatorname{H}^{\operatorname{fin}}_\xi(q)} \operatorname{H}^{\operatorname{fin}}_\lambda(q)$, where $\xi=\lambda\cap w^{-1}(\mu)\cap p$. An element of $\epsilon_\xi\otimes_{\operatorname{H}^{\operatorname{fin}}_\xi(q)} \operatorname{H}^{\operatorname{fin}}_\lambda(q)$ can be written uniquely in the form $\sum_{y\in \operatorname{D}^\lambda_{\xi,\emptyset}}a_y(\epsilon_\xi\otimes T_y)$, where $a_y\in \bfk$.
This element transforms as a sign representation of $\operatorname{H}^{\operatorname{fin}}_\lambda(q)$ if and only if for each $i$ we have $$ \left(\sum_{y\in \operatorname{D}^\lambda_{\xi,\emptyset}}a_y(\epsilon_\xi\otimes T_y)\right)T_i=-\left(\sum_{y\in \operatorname{D}^\lambda_{\xi,\emptyset}}a_y(\epsilon_\xi\otimes T_y)\right). $$ A standard computation shows that this is equivalent to the condition $-a_y=qa_{ys_i}$ whenever $y,ys_i\in \operatorname{D}^\lambda_{\xi,\emptyset}$ with ${l}(ys_i)>{l}(y)$. But this condition is simply equivalent to the fact that the element is proportional to $\sum_{y\in \operatorname{D}^\lambda_{\xi,\emptyset}}\epsilon_\xi\otimes (-q)^{r-{l}(y)}T_y$, where $r$ is the length of the longest element of $\operatorname{D}^\lambda_{\xi,\emptyset}$. Under the isomorphism \eqref{eq-Hom_via_sing} this corresponds to the basis element $b^{w,p}$. \end{proof} We can write the morphism $b^{w,p}$ as a composition as follows: \begin{eqnarray*} m_\lambda\operatorname{H}_{d,\bfQ}(q)\;\stackrel{b^{1,1}}{\longrightarrow}\;m_{\lambda\cap w^{-1}(\mu)}\operatorname{H}_{d,\bfQ}(q)&\stackrel{b^{1,p}}{\longrightarrow}&m_{\lambda\cap w^{-1}(\mu)}\operatorname{H}_{d,\bfQ}(q)\\ &\stackrel{b^{w,1}}{\longrightarrow}&m_{w(\lambda)\cap \mu}\operatorname{H}_{d,\bfQ}(q)\;\stackrel{b^{1,1}}{\longrightarrow}\;m_{\mu}\operatorname{H}_{d,\bfQ}(q). \end{eqnarray*} Note that the first and the last morphisms in this decomposition are obviously a split and a merge, the morphism $b^{1,p}$ is multiplication by a polynomial, whereas $b^{w,1}$ is a composition of left, right and black crossings. The discussion above together with Lemma~\ref{lem-black_cross} proves the following lemma. \begin{lem} \label{lem-generate_lS} The algebra $\operatorname{S}_{d,\bfQ}(q)$ is generated by the idempotents $e(\lambda)$, for $\lambda\in\mathcal{C}_d^\ell$, the splits, the merges, the left/right crossings and the polynomials.
\end{lem} \subsection{Completion} This section is very similar to \cite[Sec.~5]{MS}. As in Section~\ref{subs-compl-Hecke}, we fix $\mathbf{a}\in (\bfk^*)^d$. The affine Schur algebra considered in \cite{MS} corresponds to the case $\ell=0$ (no red lines). But the completion procedure only involves the black strands. We consider $m_\lambda\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ as a right $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$-module. \begin{df} We set $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)=\mathrm{End}_{\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)}\left(\bigoplus_{\lambda\in \mathcal{C}^\ell_d}m_\lambda \widehat{\operatorname{H}}_{\bfa,\bfQ}(q)\right)$. \end{df} As for Hecke algebras, the affine Schur algebra gains more idempotents after completion. They can be constructed in the following way. For each ${\mathbf{i}}\in \mathfrak{S}_d\mathbf{a}$, we have an idempotent $e(\lambda,{\mathbf{i}})=\sum_{{\mathbf{j}}\in\mathfrak{S}_\lambda{\mathbf{i}}}e({\mathbf{j}})\in\operatorname{H}_{d,\bfQ}(q)$. It is clear that $e(\lambda,{\mathbf{i}})$ depends only on the $\mathfrak{S}_\lambda$-orbit of ${\mathbf{i}}$. Similarly to \cite[Lemma~5.3]{MS}, the idempotent $e(\lambda,{\mathbf{i}})$ commutes with $m_\lambda$. Then we obtain \begin{eqnarray*} \widehat{\operatorname{S}}_{\bfa,\bfQ}(q)&=&\mathrm{End}_{\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)}(\bigoplus_{\lambda\in \mathcal{C}^\ell_d,{\mathbf{i}}\in \mathfrak{S}_\lambda\backslash\mathfrak{S}_d\mathbf{a}}e(\lambda,{\mathbf{i}})m_\lambda \widehat{\operatorname{H}}_{\bfa,\bfQ}(q)). \end{eqnarray*} In particular, $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ has idempotents $e(\lambda,{\mathbf{i}})$ projecting to $e(\lambda,{\mathbf{i}})m_\lambda \widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$.
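For a small illustration of these completed idempotents (a sketch with illustrative parameters), take $d=2$ and $\mathbf{a}=(a_1,a_2)$ with $a_1\ne a_2$. If $\lambda$ has a single black block of size $2$, then $\mathfrak{S}_\lambda=\mathfrak{S}_2$ acts transitively on $\mathfrak{S}_2\mathbf{a}=\{(a_1,a_2),(a_2,a_1)\}$, so there is a single idempotent $$ e(\lambda,{\mathbf{i}})=e((a_1,a_2))+e((a_2,a_1)). $$ If instead $\lambda$ has two black blocks of size $1$, then $\mathfrak{S}_\lambda$ is trivial and we obtain the two idempotents $e(\lambda,(a_1,a_2))$ and $e(\lambda,(a_2,a_1))$.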
\begin{rk} It is possible to give an equivalent definition of $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ as a completion of $\operatorname{S}_{d,\bfQ}(q)$ with respect to some sequence of ideals (see \cite[Sec.~5.1]{MS}, where this is done for $\ell=0$). In particular, this realizes $\operatorname{S}_{d,\bfQ}(q)$ as a subalgebra of $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$. The idempotent $e(\lambda)\in \operatorname{S}_{d,\bfQ}(q)$ decomposes in $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ as $e(\lambda)=\sum_{{\mathbf{i}}\in \mathfrak{S}_\lambda\backslash\mathfrak{S}_d\mathbf{a}} e(\lambda,{\mathbf{i}})$. \end{rk} \subsection{Generators of $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$} Let $\lambda,\mu\in\mathcal{C}^\ell_d$ be such that $\mu$ is a split of $\lambda$. Fix ${\mathbf{i}}\in\mathfrak{S}_d\mathbf{a}$. Then we can define the following elements of $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$: \begin{equation*} \begin{array}{lllcl} \text{the {\it split} element}:&&(\lambda,{\mathbf{i}})\to(\mu,{\mathbf{i}})&=&e(\mu,{\mathbf{i}})(\lambda\to\mu)e(\lambda,{\mathbf{i}}),\\ \text{the {\it merge} element}:&&(\mu,{\mathbf{i}})\to(\lambda,{\mathbf{i}})&=&e(\lambda,{\mathbf{i}})(\mu\to\lambda)e(\mu,{\mathbf{i}}), \end{array} \end{equation*} where $\lambda\to\mu$ and $\mu\to\lambda$ are the images of the usual split and merge with respect to the inclusion $\operatorname{S}_{d,\bfQ}(q)\subset \widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$. If now $\mu$ is obtained from $\lambda$ by a left crossing, then we define the left crossing $(\lambda,{\mathbf{i}})\to(\mu,{\mathbf{i}})$, respectively the right crossing $(\mu,{\mathbf{i}})\to(\lambda,{\mathbf{i}})$, in the same way as for splits and merges.
\begin{prop} The algebra $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ acts faithfully on \begin{eqnarray*} \widehat{\operatorname{sP}}_{\bfa,\bfQ}=\bigoplus_{\lambda\in \mathcal{C}^\ell_d,{\mathbf{i}}\in \mathfrak{S}_\lambda\backslash\mathfrak{S}_d\mathbf{a}} \bfk[[x_1-i_1,\ldots,x_d-i_d]]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}e(\lambda,{\mathbf{i}}), \end{eqnarray*} where $\mathfrak{S}_{\lambda,{\mathbf{i}}}$ is the stabilizer of ${\mathbf{i}}$ in $\mathfrak{S}_\lambda$. \end{prop} \begin{proof} This can be proved as \cite[Prop.~5.18]{MS}. \end{proof} \subsection{Modified representation of $\operatorname{S}_{d,\bfQ}(q)$} We now construct a modification of the representation of $\operatorname{S}_{d,\bfQ}(q)$ in $\operatorname{sP}_{d,\bfQ}$ which will be relevant later, see Remark~\ref{rk:modified}. Let $\lambda\in \mathcal{C}^\ell_d$. Let $\overleftarrow{p}'_\lambda$ be the polynomial such that $\overleftarrow{p}_\lambda \overleftarrow{p}'_\lambda=\overleftarrow{p}_d$. (In other words, we have $\overleftarrow{p}'_\lambda=\prod_{1\leqslant i<j\leqslant d}(x_j-qx_i)$, where the product is taken only over those $i$ and $j$ that lie in different components of $\lambda$.) Note that this notation is a generalization of $\overleftarrow{p}'_{a,b}$ used above. \begin{df} Let $\operatorname{sP}_{d,\bfQ}'$ be equal to $\operatorname{sP}_{d,\bfQ}$ as a vector space, but equipped with a different action of $\operatorname{S}_{d,\bfQ}(q)$. In this new action the element $x\in \mathrm{Hom}(\lambda, \mu)\subset\operatorname{S}_{d,\bfQ}(q)$ acts on $\operatorname{sP}_{d,\bfQ}'$ as $ (\overleftarrow{p}'_\mu)^{-1} x \overleftarrow{p}'_\lambda $ acts on $\operatorname{sP}_{d,\bfQ}$. \end{df} A priori, the action of $\operatorname{S}_{d,\bfQ}(q)$ defined above is only well-defined on some localization of $\operatorname{sP}_{d,\bfQ}'$ (not on $\operatorname{sP}_{d,\bfQ}'$ itself).
But it can be checked on generators (idempotents, polynomials, splits, merges, left and right crossings) that this action is also well-defined on $\operatorname{sP}_{d,\bfQ}'$. The following lemma describes this action. \begin{lem} \begin{enumerate} \item The idempotents $e(\lambda)$, the ($\mathfrak{S}_\lambda$-symmetric) Laurent polynomials, and the left and right crossings in $\operatorname{S}_{d,\bfQ}(q)$ act on $\operatorname{sP}_{d,\bfQ}'$ in the same way as on $\operatorname{sP}_{d,\bfQ}$. \item Let $\mu$ be a split of $\lambda$. Then in the case $\lambda=(a+b)$ and $\mu=(a,b)$ with $a+b=d$, the split map $\lambda\rightarrow\mu$ acts by sending $fe(\lambda)\in P^{\mathfrak{S}_\lambda}e(\lambda)$ to $fe(\mu)$, whereas the merge map acts by sending $fe(\mu)\in P^{\mathfrak{S}_\mu}e(\mu)$ to $D_{a,b}(\overleftarrow{p}'_{a,b}f)e(\lambda)$. In the general case the split and the merge act by the same formulae, but in the variables from the two blocks of $\mu$ that form one block of $\lambda$. \end{enumerate} \end{lem} \begin{proof} The statement follows directly from Proposition~\ref{prop-polrep-lS}. \end{proof} The faithfulness of the representation $\operatorname{sP}_{d,\bfQ}$ implies the faithfulness of the representation $\operatorname{sP}_{d,\bfQ}'$. A modification $\widehat{\operatorname{sP}}_{\bfa,\bfQ}'$ of the faithful representation $\widehat{\operatorname{sP}}_{\bfa,\bfQ}$ of $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ can be defined similarly. \section{(Higher level) Quiver Schur algebras ${A}_{\nu,\bfQ}$} \label{sec-QSchur} \subsection{Quiver Schur algebras} In this section we restrict the form of the quiver $\Gamma=(I,A)$. We assume that the quiver $\Gamma$ has no loops and that each vertex of the quiver has exactly one incoming arrow and exactly one outgoing arrow. (This assumption means that each connected component of the quiver is either an oriented cycle of length $\geqslant 2$ or an infinite oriented chain.)
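The two basic shapes allowed by this assumption can be pictured as follows (the vertex labels here are illustrative): the oriented cycle with vertex set $\mathbb{Z}/e\mathbb{Z}$ for some $e\geqslant 2$, and the infinite oriented chain with vertex set $\mathbb{Z}$, in both cases with arrows $i\to i+1$: $$ 0\to 1\to\cdots\to e-1\to 0, \qquad\qquad \cdots\to -1\to 0\to 1\to\cdots. $$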
Note that the quiver $\Gamma_\mathcal{F}$ in Section~\ref{subs-isom_lHeck-tens-compl} always satisfies this assumption. We make this assumption here because the quiver Schur algebra is defined in \cite{SW} only in type $A$; the definition from \cite{SW} could easily be generalized, but this is not our focus here. As above, we fix $\nu\in I^d$ and $\mathbf{Q}\in I^\ell$. We first recall the definition of the quiver Schur algebra ${A}_{\nu,\bfQ}$, introduced by the second author and Webster in \cite{SW}. For each $\lambda\in \mathcal{C}^\ell_d$ and ${\mathbf{i}}\in I^\nu$, let $\mathfrak{S}_{\lambda,{\mathbf{i}}}$ be the stabilizer of ${\mathbf{i}}$ in $\mathfrak{S}_\lambda$, and let $\mathcal{C}^\ell_{\nu}$ be the set of pairs $(\lambda,{\mathbf{i}})$ with $\lambda\in \mathcal{C}^\ell_d$ and ${\mathbf{i}}\in \mathfrak{S}_\lambda\backslash I^\nu$. Consider the following vector space \begin{eqnarray} \label{sPol} \it{sPol}_{\nu,\bfQ}&=&\bigoplus_{(\lambda,{\mathbf{i}})\in \mathcal{C}^\ell_\nu} \bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}e(\lambda,{\mathbf{i}}). \end{eqnarray} \begin{rk} Note that if ${\mathbf{i}},{\mathbf{j}}\in I^\nu$ are in the same $\mathfrak{S}_\lambda$-orbit, and $w$ is an element of $\mathfrak{S}_\lambda$ such that $w({\mathbf{i}})={\mathbf{j}}$, then we have a canonical isomorphism $$ \bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}\simeq \bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\lambda,{\mathbf{j}}}}, \quad P(y_1,\ldots,y_d)\mapsto P(y_{w(1)},\ldots,y_{w(d)}). $$ This shows that $\it{sPol}_{\nu,\bfQ}$ is well-defined. \end{rk} The following was introduced in \cite{SW}. \begin{df} \label{def-QS} The {\it quiver Schur algebra} ${A}_{\nu,\bfQ}$ is the subalgebra of $\mathrm{End}(\it{sPol}_{\nu,\bfQ})$ generated by the following endomorphisms.
\begin{itemize} \item The {\it idempotents}: $e(\lambda,{\mathbf{i}})$ for $(\lambda,{\mathbf{i}})\in \mathcal{C}^\ell_\nu$, \\ defined as the projection onto the summand $\bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}e(\lambda,{\mathbf{i}})$. \item The {\it polynomials}: $Pe(\lambda,{\mathbf{i}})$ for any $(\lambda,{\mathbf{i}})\in \mathcal{C}^\ell_\nu$ and $P\in \bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}$, \\ defined as multiplication by $P$ on the summand $\bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}e(\lambda,{\mathbf{i}})$ (and by zero on other summands). \item The {\it split}: $(\lambda,{\mathbf{i}})\to (\mu,{\mathbf{i}})$ for any $(\lambda,{\mathbf{i}}), (\mu,{\mathbf{i}})\in\mathcal{C}^\ell_\nu$ (the $d$-tuple ${\mathbf{i}}\in I^\nu$ is the same for both pairs) such that $\mu$ is a split of $\lambda$ in the component $\lambda^{(r)}$ at position $j$. It acts non-trivially only on the component $\bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}e(\lambda,{\mathbf{i}})$ and we have there (in the notation from Definition~\ref{def-multicomp}) $$ fe(\lambda,{\mathbf{i}})\mapsto fe(\mu,{\mathbf{i}}). $$ \item The {\it merge}: $(\mu,{\mathbf{i}})\to (\lambda,{\mathbf{i}})$ for any $(\lambda,{\mathbf{i}})$ and $(\mu,{\mathbf{i}})$, as above. It acts non-trivially only on the component $\bfk[y_1,\ldots,y_d]^{\mathfrak{S}_{\mu,{\mathbf{i}}}}e(\mu,{\mathbf{i}})$. There it acts by $$ fe(\mu,{\mathbf{i}})\mapsto(\prod_{i\in I}D_{a_i,b_i})\left (\prod_{n\in [\mu^{(r)}_j],m\in[\mu^{(r)}_{j+1}]}(y_n-y_m)\right)fe(\lambda,{\mathbf{i}}), $$ where the Demazure operator $D_{a_i,b_i}$ is defined as in Section~\ref{subs_Demazure} with respect to the $a_i+b_i$ polynomial variables $y_r$ with indices $r\in[\mu^{(r)}_j]\cup[\mu^{(r)}_{j+1}]$ such that $i_r=i$, and the product is taken only over the indices $n,m$ such that $i_n\to i_m$. Here $a_i$ (resp.
$b_i$) denotes the number of occurrences of $i$ among the entries of ${\mathbf{i}}$ with indices in $[\mu^{(r)}_j]$ (resp. $[\mu^{(r)}_{j+1}]$). \item The {\it left crossing}: $(\lambda,{\mathbf{i}})\to (\mu,{\mathbf{i}})$ for any $(\lambda,{\mathbf{i}}), (\mu,{\mathbf{i}})\in\mathcal{C}^\ell_\nu$ such that $\mu$ is a left crossing of $\lambda$, defined as $fe(\lambda,{\mathbf{i}})\mapsto fe(\mu,{\mathbf{i}})$. \item The {\it right crossing}: $(\mu,{\mathbf{i}})\to (\lambda,{\mathbf{i}})$ for any $(\lambda,{\mathbf{i}}), (\mu,{\mathbf{i}})\in\mathcal{C}^\ell_\nu$ such that $\lambda$ is a right crossing of $\mu$, moving the last component of $\mu^{(r)}$ to the first component of $\mu^{(r+1)}$; it is defined as $ fe(\mu,{\mathbf{i}})\mapsto (\prod_{n\in [\lambda^{(r+1)}_1],i_n=Q_{r+1}}y_n)fe(\lambda,{\mathbf{i}}). $ \end{itemize} \end{df} \begin{rk} \label{rk:modified} The definition of ${A}_{\nu,\bfQ}$ differs slightly from the original definition in \cite{SW}. The difference is that the multiplication by the Euler class is moved from the split to the merge and the Euler class is also reversed. The two algebras are however isomorphic, as proved (with an explicit isomorphism) in \cite[Sec.~9.2-9.3]{MS} for $\ell=0$. The arguments directly generalize to arbitrary $\ell$. Passing to this modified quiver Schur algebra is necessary in order to identify the completion of the algebra ${A}_{\nu,\bfQ}$ with the completion of the algebra $\operatorname{S}_{d,\bfQ}(q)$ via an identification of the polynomial representations. This approach does not work if we use the polynomial representation of ${A}_{\nu,\bfQ}$ considered in \cite{SW}. The modification $\operatorname{sP}_{d,\bfQ}'$ of $\operatorname{sP}_{d,\bfQ}$ was defined for the same reason. For a geometric interpretation of ${A}_{\nu,\bfQ}$ we refer to \cite{Tomasz}. \end{rk} It is possible to introduce a diagrammatic calculus for ${A}_{\nu,\bfQ}$ similar to the diagrammatic calculus for $\operatorname{S}_{d,\bfQ}(q)$ (see \cite{SW} for more details).
The only difference is that black strands in the diagrams for ${A}_{\nu,\bfQ}$ have labels in $\mathbb{Z}_{\geqslant 0} I$ instead of $\mathbb{Z}_{> 0}$ (here $\mathbb{Z}_{\geqslant 0} I$ is the set of formal $\mathbb{Z}_{\geqslant 0}$-linear combinations of elements of $I$). We draw the idempotent $e(\lambda,{\mathbf{i}})\in {A}_{\nu,\bfQ}$ by the same diagram as the idempotent $e(\lambda)\in \operatorname{S}_{d,\bfQ}(q)$, except that we replace each integer label $\lambda_r^{(t)}$ on a black strand by the label $\sum_{j\in [\lambda_r^{(t)}]}i_j\in \mathbb{Z}_{\geqslant 0} I$. We draw polynomials, splits, merges, left and right crossings in ${A}_{\nu,\bfQ}$ in the same way as for $\operatorname{S}_{d,\bfQ}(q)$. Let $\widehat{{A}}_{\nu,\bfQ}$ be the completion of ${A}_{\nu,\bfQ}$ with respect to the ideal generated by the homogeneous polynomials of positive degrees. The definitions give rise to the following completed version of the faithful representation \eqref{sPol} of ${A}_{\nu,\bfQ}$. \begin{lem} The algebra $\widehat{{A}}_{\nu,\bfQ}$ has a faithful representation in $$ \widehat{\it{sPol}}_{\nu,\bfQ}=\bigoplus_{(\lambda,{\mathbf{i}})\in \mathcal{C}^\ell_\nu}\bfk[[y_1,\ldots,y_d]]^{\mathfrak{S}_{\lambda,{\mathbf{i}}}}e(\lambda,{\mathbf{i}}). $$ \end{lem} \subsection{The isomorphisms $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)\simeq \widehat{{A}}_{\nu,\bfQ}$} Fix $q\in\bfk$ such that $q\not\in\{0,1\}$. Fix an $\ell$-tuple $\mathbf{Q}=(Q_1,\ldots,Q_\ell)\in (\bfk^*)^\ell$. As in Section~\ref{subs-isom_lHeck-tens-compl}, we consider the quiver $\Gamma_\mathcal{F}$ with the vertex set $$ \mathcal{F}=\{q^nQ_r\mid n\in\mathbb{Z},r\in[1;\ell]\}\subset\bfk^* $$ and consider the algebra ${A}_{\nu,\bfQ}$ defined with respect to this quiver. We take $\nu=\mathbf{a}$. We know that $\widehat{{A}}_{\nu,\bfQ}$ acts faithfully on $\widehat{\it{sPol}}_{\nu,\bfQ}$ and $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ acts faithfully on $\widehat{\operatorname{sP}}_{\bfa,\bfQ}'$.
On the other hand, there is an obvious isomorphism of algebras \begin{eqnarray*} \widehat{\it{sPol}}_{\nu,\bfQ}\simeq \widehat{\operatorname{sP}}_{\bfa,\bfQ}',&& P(-i_1y_1,\ldots,-i_dy_d)e(\lambda,{\mathbf{i}})\mapsto P(x_1-i_1,\ldots,x_d-i_d)e(\lambda,{\mathbf{i}}). \end{eqnarray*} To prove that the algebras $\widehat{{A}}_{\nu,\bfQ}$ and $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ are isomorphic, it is enough to identify their actions on $\widehat{\it{sPol}}_{\nu,\bfQ}\simeq \widehat{\operatorname{sP}}_{\bfa,\bfQ}'$. As a result, we obtain the following isomorphism. \begin{thm} \label{thm-isom-qS-QS-comp} There is an isomorphism of algebras $\widehat{{A}}_{\nu,\bfQ}\simeq \widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$. \end{thm} \begin{proof} It is clear that the idempotents $e(\lambda,{\mathbf{i}})$ act on the faithful representation in the same way. Obviously, the power series in $\widehat{{A}}_{\nu,\bfQ}$ yield the same operators on the faithful representation as the power series in $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$. It remains to match splits, merges and left/right crossings. Since splits and merges only use black strands, it is enough to treat the case $\ell=0$. This is already done in \cite[Sec.~9]{MS}. It is also easy to see that the left crossings in $\widehat{{A}}_{\nu,\bfQ}$ and $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ act in the same way on the polynomial representations. Indeed, both of them just change the idempotent without changing the power series. Let now $\lambda$ be a right crossing of $\mu$, moving the last component of $\mu^{(t)}$ to the first component of $\mu^{(t+1)}$, and fix ${\mathbf{i}}$. We compare the actions of the right crossings $(\mu,{\mathbf{i}})\to (\lambda,{\mathbf{i}})$ in $\widehat{{A}}_{\nu,\bfQ}$ and $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$.
The right crossing in $\widehat{{A}}_{\nu,\bfQ}$ acts by $$ P(y_1,\ldots,y_d)e(\mu,{\mathbf{i}})\mapsto\left (\prod_{n\in[\lambda^{(t+1)}_1],i_n=Q_{t+1}}y_n\right)P(y_1,\ldots,y_d)e(\lambda,{\mathbf{i}}). $$ The right crossing in $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ acts by $$ P(x_1,\ldots,x_d)e(\mu,{\mathbf{i}})\mapsto \left(\prod_{n\in[\lambda^{(t+1)}_1]}(x_n-Q_{t+1})\right)P(x_1,\ldots,x_d)e(\lambda,{\mathbf{i}}). $$ Then it is clear that these operators can be expressed in terms of each other, because we can divide by $(x_n-Q_{t+1})$ if $i_n\ne Q_{t+1}$. This proves the theorem. \end{proof} \section{Cyclotomic quotients and the isomorphism theorem} \label{sec-cycl-quot} We finish by establishing a higher level version of the (cyclotomic) Brundan-Kleshchev-Rouquier isomorphism. As above, we fix $\mathbf{Q}=(Q_1,\ldots,Q_\ell)\in(\bfk^*)^\ell$ and $q\in \bfk^*$ with $q\ne 1$, and consider the quiver $\Gamma_\mathcal{F}$ as in Section~\ref{subs-isom_lHeck-tens-compl}. We assume that all KLR algebras and tensor product algebras in this section are defined with respect to the quiver $\Gamma_\mathcal{F}$. We take $\nu=\mathbf{a}$. \subsection{Cyclotomic $\ell$-Hecke algebras and tensor product algebras} \label{subs-cycl-lHeck} \begin{df} \label{defcylHeck} The \emph{cyclotomic $\ell$-Hecke algebra} $\operatorname{H}^\bfQ_{d,\bfQ}(q)$ is the quotient of the algebra $\operatorname{H}_{d,\bfQ}(q)$ by the ideal generated by the idempotents $e(\mathbf{c})$ such that $\mathbf{c}$ is of the form $\mathbf{c}=(0,\ldots)$. In other words, we kill all diagrams that have a piece of a black strand on the left of all red strands. \end{df} \begin{lem} \label{lem-eigenv_H2} Let $X_1$, $X_2$ and $T$ be three endomorphisms of a vector space $V$, satisfying the relations of $\operatorname{H}_2(q)$, i.e., $$ \begin{array}{lclcrcl} X_1T&=&TX_2-(q-1)X_2,&& (T-q)(T+1)&=&0,\\ X_2T&=&TX_1+(q-1)X_2,&& X_1X_2&=&X_2X_1. \end{array} $$ (We do not assume that $X_1$ and $X_2$ are invertible.)
Let $\lambda_1,\lambda_2\in \bfk^*$ be such that $\lambda_1\ne q^{\pm 1}\lambda_2$. If $V$ has a simultaneous eigenvector for $X_1$, $X_2$ with eigenvalues $\lambda_1$, $\lambda_2$, then $V$ also has a simultaneous eigenvector with eigenvalues $\lambda_2$, $\lambda_1$, respectively. \end{lem} \begin{proof} Let $v\in V$, $v\not=0$, be such that $X_1(v)=\lambda_1v$ and $X_2(v)=\lambda_2v$. Consider the vector $ w=(q-1)\lambda_2v+(\lambda_1-\lambda_2)T(v). $ It follows directly from the relations that $X_1(w)=\lambda_2w$ and $X_2(w)=\lambda_1w$. Note that $w=0$ implies that $T(v)$ is proportional to $v$. In this case we have either $T(v)=-v$ or $T(v)=qv$, and then $\lambda_2$ must equal $q^{\pm 1}\lambda_1$. But this is impossible by the assumptions on $\lambda_1$ and $\lambda_2$. \end{proof} \begin{coro} \label{coro-eigenv_lHeck} Let $V$ be a finite dimensional representation of $\operatorname{H}^\bfQ_{d,\bfQ}(q)$. Then for each $r\in\{1,2,\ldots,d\}$, all eigenvalues of the action of $x_r$ on $V$ are in $\mathcal{F}$. \end{coro} \begin{proof} Assume that some $x_r$ has an eigenvalue $\lambda\notin \mathcal{F}$. Since $x_r$ is invertible, we have $\lambda\ne 0$. Then there exists an idempotent $e({\mathbf{c}})$, ${\mathbf{c}}\in J^{\ell,d}$, such that $\lambda$ is an eigenvalue of $x_re({\mathbf{c}})$. (This simply means that $e({\mathbf{c}})$ does not annihilate the $\lambda$-eigenspace of $x_r$.) Let $t$ be such that $X_te({\mathbf{c}})=x_re({\mathbf{c}})$ in $\operatorname{H}_{d,\bfQ}(q)$ and set $k=\sum_{i=1}^tc_i$ (i.e., $k$ is the number of red strands to the left of the dot in the diagram of $x_re({\mathbf{c}})$). We assume that the index $t$ as above is minimal (over all possible $r$ and $\mathbf{c}$). We clearly have $t>1$, because $X_1=0$ in $\operatorname{H}^\bfQ_{d,\bfQ}(q)$ and $\lambda\ne 0$. Assume $c_{t-1}=1$. Let $v$ be an eigenvector of $x_re({\mathbf{c}})$ with eigenvalue $\lambda$ (in particular $e({\mathbf{c}})(v)=v$).
Then $T_{t-1}(v)\ne 0$. Indeed, we have $T^2_{t-1}e({\mathbf{c}})=(X_t-Q_k)e({\mathbf{c}})$. This implies $ T_{t-1}^2(v)=T_{t-1}^2e({\mathbf{c}})(v)=(X_t-Q_k)e({\mathbf{c}})(v)=(\lambda-Q_k)v\ne 0. $ Moreover, the vector $T_{t-1}(v)$ is clearly an eigenvector of $x_re(s_{t-1}({\mathbf{c}}))=X_{t-1}e(s_{t-1}({\mathbf{c}}))$ corresponding to the eigenvalue $\lambda$. This contradicts the minimality of $t$. Assume $c_{t-1}=0$. Then we can find a vector $v\in V$ such that $v$ is a common eigenvector for $x_{r-1}$ and $x_r$ with $e({\mathbf{c}})(v)=v$ and $x_r(v)=\lambda v$. Let $\mu$ be such that $x_{r-1}(v)=\mu v$. We have $\mu\ne 0$ because $x_{r-1}$ is invertible. Moreover, the eigenvalue $\mu$ must be in $\mathcal{F}$ (otherwise this contradicts the minimality of $t$). Then we can apply Lemma~\ref{lem-eigenv_H2} to $x_{r-1}e({\mathbf{c}})$, $x_re({\mathbf{c}})$ and $T_{t-1}e({\mathbf{c}})$. This shows that $\lambda$ is an eigenvalue of $x_{r-1}e({\mathbf{c}})=X_{t-1}e({\mathbf{c}})$. This contradicts the minimality of $t$. \end{proof} In $\operatorname{H}^\bfQ_{d,\bfQ}(q)$, we have the idempotents $e({\mathbf{i}})$ such that $1=\sum_{{\mathbf{i}}\in\mathcal{F}^d}e({\mathbf{i}})$ and for each index $r$, the element $(x_r-i_r)e({\mathbf{i}})$ is nilpotent (see Corollary~\ref{coro-eigenv_lHeck}). Moreover, for each $\mathbf{a}\in\mathcal{F}^d$ we have a central idempotent $1_\mathbf{a}=\sum_{{\mathbf{i}}\in \mathfrak{S}_d \mathbf{a}}e({\mathbf{i}})$. Set $\operatorname{H}^\bfQ_{\bfa,\bfQ}(q)=1_\mathbf{a}\operatorname{H}^\bfQ_{d,\bfQ}(q)$. Then there is the following direct sum decomposition of algebras $\operatorname{H}^\bfQ_{d,\bfQ}(q)=\bigoplus_{\mathbf{a}\in\mathcal{F}^d/\mathfrak{S}_d}\operatorname{H}^\bfQ_{\bfa,\bfQ}(q)$.
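\begin{rk} For completeness, the computation behind the eigenvector $w$ in the proof of Lemma~\ref{lem-eigenv_H2} can be spelled out as follows. From the relation $X_1T=TX_2-(q-1)X_2$ we get $X_1T(v)=\lambda_2T(v)-(q-1)\lambda_2v$, hence $$ X_1(w)=(q-1)\lambda_1\lambda_2v+(\lambda_1-\lambda_2)\bigl(\lambda_2T(v)-(q-1)\lambda_2v\bigr)=(q-1)\lambda_2^2v+\lambda_2(\lambda_1-\lambda_2)T(v)=\lambda_2w. $$ Similarly, the relation $X_2T=TX_1+(q-1)X_2$ gives $X_2T(v)=\lambda_1T(v)+(q-1)\lambda_2v$, and the analogous computation yields $X_2(w)=\lambda_1w$. \end{rk}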
\begin{df} The \emph{cyclotomic tensor product algebra} ${R}_{\nu,\bfQ}^{\bfQ}$ is the quotient of the algebra ${R}_{\nu,\bfQ}$ by the ideal generated by the idempotents $e({\mathbf{i}})$ with ${\mathbf{i}}\in I_{\operatorname{col}}(\nu,\bfQ)$ such that $c(i_1)=0$. In other words, we kill all diagrams that have a piece of a black strand on the left of all red strands. \end{df} It is clear from the definitions that the algebra $\operatorname{H}^\bfQ_{\bfa,\bfQ}(q)$ is a quotient of $\widehat{\operatorname{H}}_{\bfa,\bfQ}(q)$ and the algebra ${R}_{\nu,\bfQ}^{\bfQ}$ is a quotient of $\widehat{{R}}_{\nu,\bfQ}$. We obtain \begin{thm} \label{prop-isom-lHeck-tens-cycl} There is an isomorphism of algebras $\operatorname{H}^\bfQ_{\bfa,\bfQ}(q)\simeq {R}_{\nu,\bfQ}^{\bfQ}$. \end{thm} \begin{proof} This follows immediately from Theorem~\ref{thm-isom-lHeck-tens-comp}. \end{proof} \subsection{Classical Brundan-Kleshchev-Rouquier isomorphism} In this section we show how to deduce from Theorem~\ref{prop-isom-lHeck-tens-cycl} the usual Brundan-Kleshchev-Rouquier isomorphism for cyclotomic KLR and Hecke algebras. \begin{df} The \emph{cyclotomic Hecke algebra} $\operatorname{H}_{d}^{\bfQ}(q)$ is the quotient of the algebra $\operatorname{H}_{d}(q)$ by the ideal generated by the polynomial $(X_1-Q_1)\ldots(X_1-Q_\ell)$. \end{df} For each ${\mathbf{i}}=(i_1,\ldots,i_d)\in \mathcal{F}^d$ we have an idempotent $e({\mathbf{i}})\in\operatorname{H}_{d}^{\bfQ}(q)$ such that $1=\sum_{{\mathbf{i}}\in\mathcal{F}^d}e({\mathbf{i}})$ and for each index $r$, the element $(X_r-i_r)e({\mathbf{i}})$ is nilpotent. Moreover, for each $\mathbf{a}\in\mathcal{F}^d$ we have a central idempotent $1_\mathbf{a}=\sum_{{\mathbf{i}}\in \mathfrak{S}_d \mathbf{a}}e({\mathbf{i}})$. Set $\operatorname{H}^\bfQ_{\bfa}(q)=1_\mathbf{a}\operatorname{H}_{d}^{\bfQ}(q)$. There is a direct sum decomposition of algebras $\operatorname{H}_{d}^{\bfQ}(q)=\bigoplus_{\mathbf{a}\in\mathcal{F}^d/\mathfrak{S}_d}\operatorname{H}^\bfQ_{\bfa}(q)$.
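To make the definition of the cyclotomic Hecke algebra concrete in the smallest case (a sketch, assuming the convention $\operatorname{H}_1(q)=\bfk[X_1^{\pm 1}]$, so that there are no generators $T_i$), for $d=1$ we get $$ \operatorname{H}_{1}^{\bfQ}(q)=\bfk[X_1^{\pm 1}]/\bigl((X_1-Q_1)\ldots(X_1-Q_\ell)\bigr), $$ and $X_1$ stays invertible in the quotient since every $Q_r$ lies in $\bfk^*$. If the parameters $Q_1,\ldots,Q_\ell$ are pairwise distinct, the Chinese remainder theorem yields $\operatorname{H}_{1}^{\bfQ}(q)\simeq\bfk^{\ell}$, and the idempotents $e({\mathbf{i}})$ are the projections onto the corresponding $X_1$-eigenspaces.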
\begin{df} The \emph{cyclotomic KLR algebra} ${R}_{\nu}^{\bfQ}$ is the quotient of the algebra ${R}_{\nu}$ by the ideal generated by $y_1^{\Lambda_{i_1}}e({\mathbf{i}})$. Here, $\Lambda_i$ is the multiplicity of $i\in \mathcal{F}$ in $\mathbf{Q}$. \end{df} Recall the idempotent $e(\omega)\in\operatorname{H}_{d,\bfQ}(q)$ such that $e(\omega)\operatorname{H}_{d,\bfQ}(q) e(\omega)\simeq \operatorname{H}_{d}(q)$, see Section~\ref{subs-centre-lHecke}. We have a similar idempotent $e(\omega)\in{R}_{\nu,\bfQ}$ with $e(\omega){R}_{\nu,\bfQ} e(\omega)\simeq {R}_{\nu}$. The following is proved in \cite[Thm.~4.18]{Webster}. \begin{lem} There is an isomorphism of algebras $e(\omega){R}_{\nu,\bfQ}^{\bfQ} e(\omega)\simeq {R}_{\nu}^{\bfQ}$. \end{lem} We can prove the following analogue of this statement. \begin{lem} There is an isomorphism of algebras $e(\omega)\operatorname{H}^\bfQ_{d,\bfQ}(q) e(\omega)\simeq \operatorname{H}_{d}^{\bfQ}(q)$. \end{lem} \begin{proof} We will identify $\operatorname{H}_{d}(q)$ with $e(\omega)\operatorname{H}_{d,\bfQ}(q) e(\omega)$ as in Lemma~\ref{lem-Hd_in_Hdl}. Denote by $K_1$ the kernel of $\operatorname{H}_{d}(q)\to \operatorname{H}_{d}^{\bfQ}(q)$. Denote by $K_2$ the kernel of $e(\omega)\operatorname{H}_{d,\bfQ}(q) e(\omega)\to e(\omega)\operatorname{H}^\bfQ_{d,\bfQ}(q) e(\omega)$. We have to prove that $K_1=K_2$. First of all, it is clear that $K_1\subset K_2$, because we have $$ \begin{tikzpicture}[thick, scale=0.6] \draw[wei] (6.5,0) +(-2,-1) -- +(-2,1) node[at start,below]{$Q_1$}; \draw[wei] (6.5,0) +(-1,-1) -- +(-1,1) node [at start,below]{$Q_2$}; \node at (6.5,0) {$\cdots$} ; \draw[wei] (6.5,0) +(1,-1) -- +(1,1) node[at start,below]{$Q_\ell$}; \draw (8.5,-1) .. controls (2.5,0) .. 
(8.5,1); \node at (9.5,0) {$=$}; \draw[wei] (12.5,0) +(-2,-1) -- +(-2,1) node[at start,below]{$Q_1$}; \draw[wei] (12.5,0) +(-1,-1) -- +(-1,1) node [at start,below]{$Q_2$}; \node at (12.5,0) {$\cdots$} ; \draw[wei] (12.5,0) +(1,-1) -- +(1,1) node[at start,below]{$Q_\ell$}; \draw (14.5,-1) -- (14.5,-0.5); \draw (14.5,0.5) -- (14.5,1); \draw (14,-0.5) rectangle (20,0.5); \node at (17,0) {$(x_1-Q_1)\ldots (x_1-Q_\ell)$}; \end{tikzpicture} $$ Let us show $K_2\subset K_1$. We need to show that, for each ${\mathbf{c}}\in J^{\ell,d}$ with $c_1=0$, we have $e(\omega)\operatorname{H}_{d,\bfQ}(q) e({\mathbf{c}})\operatorname{H}_{d,\bfQ}(q) e(\omega)\subset K_1$. Denote by $({\mathbf{c}}\to \omega)$ the unique element of $e(\omega)\operatorname{H}_{d,\bfQ}(q) e({\mathbf{c}})$ that is presented by a diagram that contains right crossings only. Similarly, denote by $(\omega\to {\mathbf{c}})$ the unique element of $e({\mathbf{c}})\operatorname{H}_{d,\bfQ}(q) e(\omega)$ that is presented by a diagram that contains left crossings only. For example, for ${\mathbf{c}}=(0,1,0,0,0,1)$, we have $$ \tikz[thick,xscale=2.5,yscale=1.5]{ \node at (-1.5,.25) {$(\omega\to{\mathbf{c}})=$}; \draw (-0.4,0) --(-0.6,.5); \draw (-0.2,0) --(-0.4,.5); \draw (0,0) --(-0.2,.5); \draw[wei] (-0.8,0) --(0,.5); \draw[wei] (-1.0,0) --(-0.8,.5); \draw (-0.6,0) --(-1.0,.5); \node at (1,.25) {$({\mathbf{c}}\to\omega)=$}; \draw (1.9,0) -- (2.1,0.5); \draw (2.1,0) -- (2.3,0.5); \draw (2.3,0) -- (2.5,0.5); \draw[wei] (2.5,0) -- (1.7,0.5); \draw[wei] (1.7,0) -- (1.5,0.5); \draw (1.5,0) -- (1.9,0.5); } $$ By Proposition~\ref{prop-basis-Hdl}, each element of $e(\omega)\operatorname{H}_{d,\bfQ}(q) e({\mathbf{c}})$ can be written as $a\cdot({\mathbf{c}}\to\omega)$ with $a\in e(\omega) \operatorname{H}_{d,\bfQ}(q) e(\omega)$. Similarly, each element of $e({\mathbf{c}}) \operatorname{H}_{d,\bfQ}(q) e(\omega)$ can be written as $(\omega\to{\mathbf{c}})\cdot b$ with $b\in e(\omega) \operatorname{H}_{d,\bfQ}(q) e(\omega)$. 
Then each element of $e(\omega)\operatorname{H}_{d,\bfQ}(q) e({\mathbf{c}})\operatorname{H}_{d,\bfQ}(q) e(\omega)$ can be written as $a\cdot({\mathbf{c}}\to\omega)\cdot (\omega\to{\mathbf{c}})\cdot b$. Since $c_1=0$, the element $({\mathbf{c}}\to\omega)\cdot (\omega\to{\mathbf{c}})$ can be written as $e(\omega)P$, where $P\in \bfk[x_1,\ldots,x_\ell]$ is a polynomial divisible by $(x_1-Q_1)\ldots (x_1-Q_\ell)$. This implies $K_2\subset K_1$. \end{proof} Consequently, we get the Brundan-Kleshchev-Rouquier isomorphism, \cite[Thm.~1.1]{BKKL}, \cite[Cor.~3.20]{Rou2KM}: \begin{coro} \label{prop-isom-Heck-KLR-cycl} There is an isomorphism of algebras $\operatorname{H}^\bfQ_{\bfa}(q)\simeq {R}_{\nu}^{\bfQ}$. \end{coro} \subsection{The DJM $q$-Schur algebra} We now establish a connection with the cyclotomic $q$-Schur algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ defined in \cite{DJM}. Denote by $\mathcal{C}^{0,\ell}_d$ the subset of $\mathcal{C}^\ell_d$ that contains all $\lambda$ such that $\lambda^{(0)} = 0$ (here $0$ is the unique (empty) composition of $0$). For each $\lambda\in{\mathcal{C}^{\ell}_d}$, set $u_\lambda=\prod(X_r-Q_t)\in \operatorname{H}_{d}^{\bfQ}(q)$, where the product is taken over all indices $r$ and $t$ with $1\leqslant t\leqslant \ell$ and $1\leqslant r\leqslant |\lambda^{(0)}|+\ldots+|\lambda^{(t-1)}|$. \begin{ex} For $\ell=3$ and $\lambda=(0,(1,1),(2),(1,2))$, we have $$ |\lambda^{(0)}|=0,\quad |\lambda^{(1)}|=2,\quad|\lambda^{(2)}|=2,\quad |\lambda^{(3)}|=3 $$ and $ u_\lambda=(X_1-Q_2)(X_2-Q_2)(X_1-Q_3)(X_2-Q_3)(X_3-Q_3)(X_4-Q_3). $ \end{ex} We consider $u_\lambda n_\lambda \operatorname{H}_{d}^{\bfQ}(q)$ as a right $\operatorname{H}_{d}^{\bfQ}(q)$-module. 
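The index set of the product defining $u_\lambda$ can be computed mechanically. The following sketch (Python; the helper name and the data encoding are ours, purely illustrative) lists the pairs $(r,t)$ for which $(X_r-Q_t)$ is a factor of $u_\lambda$, and reproduces the example above:

```python
def u_lambda_factors(lam):
    """List the pairs (r, t) such that (X_r - Q_t) is a factor of u_lambda.
    `lam` encodes the multicomposition (lambda^(0), ..., lambda^(ell)) as a
    list of lists; the factor occurs iff r <= |lambda^(0)| + ... + |lambda^(t-1)|."""
    sizes = [sum(comp) for comp in lam]
    factors = []
    for t in range(1, len(lam)):            # t runs over 1, ..., ell
        bound = sum(sizes[:t])              # |lambda^(0)| + ... + |lambda^(t-1)|
        factors.extend((r, t) for r in range(1, bound + 1))
    return factors

# The example above: ell = 3, lambda = (0, (1,1), (2), (1,2)).
print(u_lambda_factors([[], [1, 1], [2], [1, 2]]))
# [(1, 2), (2, 2), (1, 3), (2, 3), (3, 3), (4, 3)]
```

Note that if $|\lambda^{(0)}|>0$, then $r=1$ is allowed for every $t$, so $(X_1-Q_1)\ldots(X_1-Q_\ell)$ divides $u_\lambda$.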
\begin{df} The Dipper-James-Mathas cyclotomic $q$-Schur algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ is the algebra \begin{eqnarray*} \operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)&=&\mathrm{End}_{\operatorname{H}_{d}^{\bfQ}(q)}(\bigoplus_{\lambda\in\mathcal{C}^{\ell}_d}u_\lambda n_\lambda \operatorname{H}_{d}^{\bfQ}(q)). \end{eqnarray*} \end{df} \begin{rk} The algebra $\operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)$ is defined in \cite{DJM} with respect to the set $\mathcal{C}^{0,\ell}_d$ instead of $\mathcal{C}^{\ell}_d$. But this makes no difference because $u_\lambda=0$ in $\operatorname{H}_{d}^{\bfQ}(q)$ if $\lambda\in \mathcal{C}^\ell_d\backslash \mathcal{C}^{0,\ell}_d$. Indeed, note that if $\lambda\in \mathcal{C}^\ell_d\backslash \mathcal{C}^{0,\ell}_d$, then $(X_1-Q_1)\ldots (X_1-Q_\ell)$ divides $u_\lambda$. This means that $u_\lambda=0$ in $\operatorname{H}_{d}^{\bfQ}(q)$. \end{rk} \begin{lem} \label{lem:qSchur-from-lHeck} There is an isomorphism of algebras \begin{eqnarray*} \operatorname{S}_{d,\bfQ}^{\operatorname{DJM}}(q)\simeq \mathrm{End}_{\operatorname{H}^\bfQ_{d,\bfQ}(q)}\left(\bigoplus_{\lambda\in \mathcal{C}^{\ell}_d}n_\lambda \operatorname{H}^\bfQ_{d,\bfQ}(q)\right). \end{eqnarray*} \end{lem} \begin{proof} A similar description of the $q$-Schur algebra is given in \cite[(5.8)]{SW}. To get the statement we only need to identify $\operatorname{H}^\bfQ_{d,\bfQ}(q)$ with ${R}_{d,\bfQ}^{\bfQ}$, where ${R}_{d,\bfQ}^{\bfQ}=\bigoplus_{\nu\in \mathfrak{S}_d\backslash\mathcal{F}^d}{R}_{\nu,\bfQ}^{\bfQ}$. \end{proof} \subsection{The Schur version} In this section we give the most general version of the isomorphism above: the (higher level) Schur version. We consider $m_\lambda \operatorname{H}^\bfQ_{d,\bfQ}(q)$ as a right $\operatorname{H}^\bfQ_{d,\bfQ}(q)$-module. 
\begin{df} The \emph{cyclotomic $q$-Schur algebra} $\operatorname{S}_{d,\bfQ}^{\bfQ}(q)$ is the algebra \begin{eqnarray*} \operatorname{S}_{d,\bfQ}^{\bfQ}(q) &=& \mathrm{End}_{\operatorname{H}^\bfQ_{d,\bfQ}(q)}(\bigoplus_{\lambda\in \mathcal{C}^{\ell}_d}m_\lambda \operatorname{H}^\bfQ_{d,\bfQ}(q)). \end{eqnarray*} \end{df} It is clear from the definition that the algebra $\operatorname{S}_{d,\bfQ}^{\bfQ}(q)$ is a quotient of $\operatorname{S}_{d,\bfQ}(q)$. \begin{rk} \label{rk-cyS-DJM} By Lemmas~\ref{lem-isom-Sbar-Sop} and~\ref{lem:qSchur-from-lHeck} we have $\operatorname{S}_{d,\bfQ}^{\bfQ}(q)\simeq \operatorname{S}_{d,\bfQ^{-1}}^{\operatorname{DJM}}(q)$ as algebras. \end{rk} Similarly to the set $\mathcal{C}_\nu^\ell$ defined above, we denote by $\mathcal{C}_\mathbf{a}^\ell$ the set of pairs $(\lambda,{\mathbf{i}})$, where $\lambda\in\mathcal{C}_d^\ell$ and ${\mathbf{i}}\in \mathfrak{S}_\lambda\backslash \mathfrak{S}_d\mathbf{a}$. The algebra $\operatorname{S}_{d,\bfQ}^{\bfQ}(q)$ contains idempotents $e(\lambda,{\mathbf{i}})\in\operatorname{S}_{d,\bfQ}^{\bfQ}(q)$ such that $1=\sum_{(\lambda,{\mathbf{i}})\in\mathcal{C}^\ell_\mathbf{a}}e(\lambda,{\mathbf{i}})$ and such that for each Laurent polynomial $P(x_1,\ldots,x_d)\in \bfk[x^{\pm 1}_1,\ldots,x^{\pm 1}_d]^{\mathfrak{S}_\lambda}$, the element $(P(x_1,\ldots,x_d)-P(i_1,\ldots,i_d))e(\lambda,{\mathbf{i}})$ is nilpotent. Moreover, for each $\mathbf{a}\in\mathcal{F}^d$ we have a central idempotent $1_\mathbf{a}=\sum_{(\lambda,{\mathbf{i}})\in \mathcal{C}^\ell_\mathbf{a}}e(\lambda,{\mathbf{i}})$. Set $\operatorname{S}_{\bfa,\bfQ}^{\bfQ}(q)=1_\mathbf{a}\operatorname{S}_{d,\bfQ}^{\bfQ}(q)$. We have the following direct sum decomposition of algebras $\operatorname{S}_{d,\bfQ}^{\bfQ}(q)=\bigoplus_{\mathbf{a}\in\mathcal{F}^d}\operatorname{S}_{\bfa,\bfQ}^{\bfQ}(q)$. 
\begin{df} The \emph{cyclotomic quiver Schur algebra} $A_{\nu,\bfQ}^{\bfQ}$ is the quotient of the algebra ${A}_{\nu,\bfQ}$ by the ideal generated by the idempotents of the form $e(\lambda,{\mathbf{i}})$ such that $l(\lambda^{(0)})\ne 0$. In other words, we kill all diagrams that have a piece of a black strand on the left of all red strands. \end{df} It is clear from the definitions that the algebra $\operatorname{S}_{\bfa,\bfQ}^{\bfQ}(q)$ is a quotient of $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)$ and the algebra $A_{\nu,\bfQ}^{\bfQ}$ is a quotient of $\widehat{{A}}_{\nu,\bfQ}$. Theorem~\ref{thm-isom-qS-QS-comp} implies the following: \begin{prop} \label{prop-isom-qS-QS-cycl} There is an isomorphism of algebras $\operatorname{S}_{\bfa,\bfQ}^{\bfQ}(q)\simeq A_{\nu,\bfQ}^{\bfQ}$. \end{prop} \begin{proof} It is clear from the definitions that for each $\lambda\in \mathcal{C}^\ell_d$ such that $l(\lambda^{(0)})\ne 0$, the idempotent $e(\lambda)$ is in the kernel of $\operatorname{S}_{d,\bfQ}(q)\to \operatorname{S}_{d,\bfQ}^{\bfQ}(q)$. This implies that the isomorphism $\widehat{\operatorname{S}}_{\bfa,\bfQ}(q)\simeq \widehat{{A}}_{\nu,\bfQ}$ in Theorem~\ref{thm-isom-qS-QS-comp} yields a surjective homomorphism $A_{\nu,\bfQ}^{\bfQ}\to \operatorname{S}_{\bfa,\bfQ}^{\bfQ}(q)$. To prove that this is an isomorphism, it is enough to show that these algebras have the same dimensions. We have $$ \dim(\operatorname{S}_{\bfa,\bfQ}^{\bfQ}(q))=\dim(\operatorname{S}^{\operatorname{DJM}}_{\mathbf{a},\mathbf{Q}^{-1}}(q))=\dim(A^{\mathbf{Q}^{-1}}_{\nu,\mathbf{Q}^{-1}})=\dim(A_{\nu,\bfQ}^{\bfQ}). $$ The first equality holds by Remark~\ref{rk-cyS-DJM}, the second by \cite[Thm.~6.2]{SW}, and the third since the quivers $\Gamma_\mathcal{F}$ defined with respect to $\mathbf{Q}$ and $\mathbf{Q}^{-1}$ are isomorphic. \end{proof} \bibliographystyle{abbrv}
\section{Introduction} The aim of this paper is to address the question of the well-posedness of the Cauchy problems to a class of shallow water wave equations, such as the Camassa-Holm equation, Degasperis-Procesi equation, Novikov equation and other related models, in the Sobolev space $H^{3/2}$ or Besov spaces $B_{2,r}^{3/2},$ $1<r<\infty$, and in both the real-line and torus cases (only the real-line case for Novikov). The methods we use in this paper are very simple and apply equally well to all these equations. Thus, we focus on the Camassa-Holm equation and give the results for the other equations as remarks. Consider the Cauchy problem to the Camassa-Holm (CH) equation \Ncase{\label{eq:CH} u_t-u_{txx}+3uu_x &= 2u_xu_{xx}+uu_{xxx},\quad t>0,\, x\in {\mathbb K},\\ \hfill{} u(0,x) &= u_0(x), } with ${\mathbb K}={\mathbb R}$ or the torus ${\mathbb K}={\mathbb T}={\mathbb R}/2\pi {\mathbb Z}$, which models the unidirectional propagation of shallow water waves over a flat bottom. Here the function $u(t,x)$ represents the water's free surface above the flat bottom. The CH equation \eqref{eq:CH} appeared initially in the context of hereditary symmetries studied by Fuchssteiner and Fokas \cite{F-Fokas} as a bi-Hamiltonian generalization of the KdV equation. Later, Camassa and Holm \cite{C-H} derived it by an approximation directly in the Hamiltonian for Euler's equations in the shallow water regime. After the CH equation \eqref{eq:CH} was derived physically in the context of water waves, a large body of literature has been devoted to its study, which we do not attempt to survey exhaustively. Here we only recall some results concerning the well-posedness of the Cauchy problem \eqref{eq:CH} (see also the survey \cite{Molinet}). Li and Olver \cite{Li-Olver} (see also \cite{Rodriguez-Blanco}) proved that the Cauchy problem \eqref{eq:CH} is locally well-posed for initial data $u_0(x)\in H^s({\mathbb R})$ with $s>3/2$ (see \cite{C-E1} for earlier results in $H^s({\mathbb R})$, $s\geq 3$). 
The analogous well-posedness result on the torus was shown in \cite{Misiolek1} (see \cite{C-E2} for an earlier result in $H^3({\mathbb T})$). Danchin \cite{Danchin1} considered local well-posedness in Besov spaces, and proved well-posedness in $B^s_{p,r}$ if $1\leq p\leq \infty$, $1\leq r< \infty$ and $s>\max\{1+1/p,3/2\}$ (for continuous dependence, see Li and Yin \cite{Li-Yin}). Note that if one wants to include the case $r=\infty$ then one has to weaken the notion of well-posedness, since the continuity of the solution with values in $B^s_{p,\infty}$ as well as the continuity of the data-to-solution map with values in $L^\infty(0,T; B^s_{p,\infty})$ are not known to hold. For the endpoints, local well-posedness in the space $B^{3/2}_{2,1}$, and ill-posedness in $B^{3/2}_{2,\infty}$ (the data-to-solution map $u_0\mapsto u$ is not continuous, as shown using peakon solutions; note however that, as mentioned above, the continuity of this map is not known for any $s\in{\mathbb R}$) were established in \cite{Danchin2}. Moreover, as mentioned in \cite{Danchin2}, there is no definitive answer for the intermediate cases $B^{3/2}_{2,r}$, $1<r<\infty$. The critical Sobolev space for well-posedness is $H^{3/2}$, since CH is ill-posed in $H^s$ for $s<3/2$ in the sense of norm inflation (Byers \cite{Byers}). For the other equations there are similar results, but we do not list them here. To the best of our knowledge, well-posedness in $H^{3/2}$ is still an open problem. In this paper we solve this problem by proving norm inflation and hence ill-posedness of the CH equation \eqref{eq:CH} in $H^{3/2}$ and in $B^{3/2}_{2,r},\ 1<r<\infty$. \begin{thm}\label{thm1} Let ${\mathbb K}={\mathbb R}\mbox{ or }{\mathbb T}$, $1\leq p\leq \infty$ and $1<r\leq \infty$. 
$\forall\ \varepsilon>0$, there exists $u_0 \in H^\infty({\mathbb K})$, real-valued, such that the following hold: (1) $\norm{u_0}_{B^{1+1/p}_{p,r}({\mathbb K})}\leq \varepsilon$; (2) There is a unique solution $u\in C([0,T); H^\infty({\mathbb K}))$ to the Cauchy problem of \eqref{eq:CH} with a maximal lifespan $T<\varepsilon$; (3) $\limsup_{t\to T^-}\norm{u(t)}_{B^{1+1/p}_{p,r}({\mathbb K})}\geq \limsup_{t\to T^-}\norm{u(t)}_{B^{1}_{\infty,\infty}({\mathbb K})}=\infty$. \end{thm} \begin{rem}\label{remmain} We would like to give a few remarks. (1) Taking $p=r=2$, we see that our theorem implies the ill-posedness in the critical Sobolev space $H^{3/2}$ (non-continuity of the flow map at $0$). The norm inflation is even stronger than in the classical sense, namely the weak norm $\norm{u(t)}_{B^{1}_{\infty,\infty}({\mathbb K})}$ blows up. Recently, for the Euler equations, Bourgain and Li \cite{BL, BL1} proved strong local ill-posedness in the borderline Besov spaces $B^{d/p+1}_{p,r}$ for $1\leq p<\infty$ and $1<r\leq\infty$ when $d=2,3$. The hyperbolic structure of CH and the other equations is similar to that of the Euler equations. Compared to their proof, our proof of Theorem \ref{thm1} is much simpler, since for CH we have a simple blow-up result using symmetry. Our methods are inspired by the works of Saut on the Burgers equation (see \cite{Saut, Saut2}). (2) Theorem \ref{thm1} also holds for the following related equations: \begin{itemize} \item Degasperis-Procesi (DP) equation \begin{align}\label{eq:DP} u_t-u_{txx}+4uu_x=3u_xu_{xx}+uu_{xxx}, \quad t>0, \, x\in {\mathbb K}. \end{align} \item General $b$-family of equations with $1<b\leq 3$: \begin{align}\label{eq:b-family} u_t-u_{txx}+(b+1)uu_x=bu_xu_{xx}+uu_{xxx}, \quad t>0,\, x\in {\mathbb K}. \end{align} Note that CH corresponds to $b=2$ and DP corresponds to $b=3$. 
\end{itemize} (3) Theorem \ref{thm1} also holds for the Novikov equation on the real-line (see Remark \ref{remlast}): \EQn{\label{eq:Novikov} u_t-u_{txx}+4u^2u_x=3uu_xu_{xx}+u^2u_{xxx}, \quad t>0, x\in {\mathbb R}. } (4) Theorem \ref{thm1} also holds for the two-component CH, DP and $b$-family equations. Namely, we have norm inflation in $B^{1+1/p}_{p,r}\times B^{1/p}_{p,r}$ for $1<r\leq \infty$, $1\leq p\leq \infty$. \end{rem} \section{Preliminaries} \subsection{Notations} Throughout this paper, we use $C$ to denote a universal constant which may vary from line to line. For $1\leq r\leq \infty$, we denote by $r'=\frac{r}{r-1}$ its conjugate exponent. For a function $f$ defined on ${\mathbb R}$ or ${\mathbb T}$, we define its Fourier transform $\hat{f}(\xi)$ by \EQ{ \hat{f}(\xi)=\int_{\mathbb R} f(x)e^{-2\pi ix\xi}dx, \quad \xi\in {\mathbb R}; \quad \hat{f}(\xi)=\frac{1}{2\pi}\int_{\mathbb T} f(x)e^{-i\xi x}dx, \quad \xi \in {\mathbb Z}. } For convenience, we use ${\mathbb K}^*$ to denote ${\mathbb R}$ or ${\mathbb Z}$, endowed with their natural measure $\mu$. $H^s({\mathbb K})$ denotes the usual Sobolev space consisting of tempered distributions $f\in \mathcal{S}'({\mathbb K})$ such that $\hat{f}\in L^2_{loc}({\mathbb K}^*)$ and \[\|f\|_{H^s({\mathbb K})}:=\big[\int_{{\mathbb K}^*}(1+|\xi|^2)^s|\hat{f}(\xi)|^2d\mu(\xi)\big]^{1/2}<\infty.\] Let $\eta: {\mathbb R}\to [0, 1]$ be an even, smooth, non-negative and radially decreasing function which is supported in $\{\xi:|\xi|\leq \frac{8}{5}\}$ and $\eta\equiv 1$ for $|\xi|\leq \frac{5}{4}$. For $k\in {\mathbb Z}$, let $\chi_k(\xi)=\eta(\frac{\xi}{2^k})-\eta(\frac{\xi}{2^{k-1}})$ and $\chi_{\leq k}(\xi)=\eta(\frac{\xi}{2^k})$, and define the Littlewood-Paley operators $P_k, P_{\leq k}$ on $L^2({\mathbb K})$ by $\widehat{P_kf}(\xi)=\chi_k(|\xi|)\widehat{f}(\xi),\,\widehat{P_{\leq k}f}(\xi)=\chi_{\leq k}(|\xi|)\widehat{f}(\xi)$. 
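The cutoffs above satisfy the telescoping identity $\chi_{\leq K}=\chi_{\leq 0}+\sum_{k=1}^{K}\chi_k$, which holds for any profile $\eta$; a quick numerical check confirms it (Python; the stand-in profile below is ours and does not have the exact support properties of the $\eta$ in the text, which do not matter for this identity):

```python
def eta(x):
    """Stand-in cutoff profile; the telescoping identity below does not
    depend on the specific support properties of the eta in the text."""
    return 1.0 / (1.0 + abs(x) ** 4)

def chi(k, xi):       # chi_k(xi) = eta(xi / 2^k) - eta(xi / 2^(k-1))
    return eta(xi / 2 ** k) - eta(xi / 2 ** (k - 1))

def chi_leq(k, xi):   # chi_{<= k}(xi) = eta(xi / 2^k)
    return eta(xi / 2 ** k)

# Telescoping: chi_{<=K} = chi_{<=0} + chi_1 + ... + chi_K.
for xi in [0.1, 1.0, 7.3, 100.0]:
    lhs = chi_leq(10, xi)
    rhs = chi_leq(0, xi) + sum(chi(k, xi) for k in range(1, 11))
    assert abs(lhs - rhs) < 1e-12
```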
We can then define the Besov space $B^s_{p,r}$ with the norm $\norm{f}_{B^s_{p,r}}=\normb{2^{ks}\norm{P_k f}_{L^p({\mathbb K})}}_{l_k^r}$. \subsection{Useful Lemmas} In this subsection, we collect some results that we need. The first one is the well-posedness result (see \cite{Li-Olver, Rodriguez-Blanco}). For the other equations in Remark \ref{remmain}, we have the same results. \begin{lem}[\cite{Li-Olver, Rodriguez-Blanco}]\label{lem1} Given $u_0(x)\in H^s$, $s>3/2$, there exists a maximal $T\ge \tilde{T}(\norm{u_0}_{H^{\frac{3}{2}+}})>0$ and a unique solution $u$ to \eqref{eq:CH} such that \[u\in C([0,T);H^s)\cap C^1([0,T);H^{s-1}).\] Moreover, the solution depends continuously on the initial data, i.e. for any $T'<T$ the mapping $u_0\mapsto u: H^s\mapsto C([0,T'];H^s)\cap C^1([0,T'];H^{s-1})$ is continuous on a $H^s$-neighborhood of $u_0$, and if $T<\infty$, then $\lim_{t\to T^-}\norm{u(t)}_{H^s}=\infty$. \end{lem} The proof of the well-posedness relies on rewriting the equation as a perturbation of the Burgers equation. For example, the general $b$-family equations are equivalent to \begin{align}\label{eq:CH1} u_t+uu_x=-\partial_x\Lambda^{-2}(\frac{b}{2}u^2+\frac{3-b}{2}u_x^2)=-\partial_xp*(\frac{b}{2}u^2+\frac{3-b}{2}u_x^2); \end{align} where $\Lambda=(1-\partial_x^2)^{1/2}$ and \EQ{ p(x)=\CAS{e^{-|x|}/2, &\quad x\in {\mathbb R}; \\\dfrac{\cosh(x-[x]-1/2)}{2\sinh(1/2)}, &\quad x\in {\mathbb T}.} } Thus, semilinear well-posedness is not expected; indeed, non-uniform continuity of the flow map for CH in $H^s$ for large $s$ was shown in \cite{Himonas-K}. The CH equation is integrable and has infinitely many conservation laws, in particular, the energy conservation \EQ{ E(u)=\int u^2+u_x^2\,dx. } So $H^1$ is the most natural setting for the physically relevant weak solutions. Global weak solutions in $H^1$ were established in \cite{Xin-Zhang}. 
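As a concrete instance of the conserved energy, for the peakon profile $u(x)=c\,e^{-|x|}$ one has $\int u^2\,dx=\int u_x^2\,dx=c^2$, so $E(u)=2c^2$. A quadrature sanity check (Python; a toy of ours, not part of the cited analysis):

```python
import math

def peakon_energy(c, L=40.0, n=200000):
    """Trapezoid approximation of E(u) = int (u^2 + u_x^2) dx for the
    peakon profile u(x) = c * exp(-|x|); note u_x^2 = u^2 almost everywhere."""
    h = 2 * L / n
    total = 0.0
    for i in range(n + 1):
        x = -L + i * h
        u = c * math.exp(-abs(x))
        w = 0.5 if i in (0, n) else 1.0
        total += w * (u * u + u * u) * h   # u^2 + u_x^2
    return total

# Analytically E(u) = 2c^2 (each of the two integrals equals c^2):
print(abs(peakon_energy(3.0) - 18.0) < 1e-3)  # True
```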
In the $H^1$ setting one has to associate a measure to keep track of possible singularities and of whether one imposes conservation of energy or permits dissipation (see \cite{C-Molinet, B-C1,B-C2}). On the other hand, one can apply the classical energy method to \eqref{eq:CH1} and prove well-posedness in $H^s$ for $s>3/2$ in the sense of Hadamard. The restriction $s>3/2$ comes from the energy estimates: let $u$ be a smooth solution to \eqref{eq:CH1}, then \EQn{\label{eq:energy estimates} \frac{d}{dt}\norm{u}_{H^s}^2\leq C\norm{u_x}_{L^\infty} \norm{u}_{H^s}^2. } We need $s>3/2$ to ensure the embedding $H^s({\mathbb R})\hookrightarrow Lip$ ($Lip$ here denotes the bounded Lipschitz functions). For $s=3/2$, we could replace $H^s$ by $B^{3/2}_{2,1}$. The energy estimates \eqref{eq:energy estimates} hold for all the equations in Remark \ref{remmain} except the Novikov equation \eqref{eq:Novikov}. For $s=2$, we can derive them by just differentiating the equation and integrating by parts. For $3/2<s<2$, we need commutator estimates. For the Novikov equation, we have the following energy estimates: let $u$ be a smooth solution to \eqref{eq:Novikov}, then \EQn{\label{eq: Nov energy} \frac{d}{dt}\norm{u}_{H^2}^2\leq C\norm{u}_{L^\infty}\norm{u_x}_{L^\infty} \norm{u}_{H^2}^2. } Indeed, let $y=u-\partial_x^2u$. Then the Novikov equation is equivalent to \begin{align*} y_t+u^2y_x+3yu_xu=0. \end{align*} Multiplying by $y$ on both sides of the above equation and then integrating by parts, we get \eqref{eq: Nov energy}. A crucial difference between CH and KdV is that for the CH equation smooth data can develop singularities in finite time. We have the following classical blow-up result (see \cite{C-E1}). For the other equations in Remark \ref{remmain} except the Novikov equation we have similar blow-up results. The blow-up of smooth solutions for CH can only occur in the form of wave breaking: the solution remains bounded but its slope becomes unbounded (see \cite{C-E3, Con, C-E4}). 
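The wave-breaking mechanism can be illustrated on the model Riccati equation $g'=-g^2/2$ for the slope, whose explicit solution $g(t)=2g_0/(2+g_0t)$ with $g_0<0$ diverges at $t^*=-2/g_0$. A numerical sanity check (Python; the model ODE and names are ours, not taken from the cited proofs):

```python
def g(t, g0):
    """Exact solution of the model slope ODE g' = -g^2/2, g(0) = g0,
    valid for 0 <= t < -2/g0 when g0 < 0 (blow-up time t* = -2/g0)."""
    return 2.0 * g0 / (2.0 + g0 * t)

g0 = -4.0
t_star = -2.0 / g0                  # = 0.5
# The closed form satisfies the ODE (centred-difference check):
t, h = 0.3, 1e-6
assert abs((g(t + h, g0) - g(t - h, g0)) / (2 * h) + 0.5 * g(t, g0) ** 2) < 1e-4
# The slope diverges to -infinity as t approaches t*:
assert g(t_star - 1e-8, g0) < -1e8
```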
\begin{lem}[\cite{C-E1}]\label{lem3} Assume $u_0(x)\in H^3({\mathbb K})$ is a real-valued odd function and $u'_0(0)<0$. Then the corresponding strong solution to \eqref{eq:CH} blows up in finite time. Moreover, the maximal time of existence is bounded by $2/|u'_0(0)|$. \end{lem} The proof of Lemma \ref{lem3} is quite simple and is based on the observation that the anti-symmetry $u(t,x)=-u(t,-x)$ is preserved under the flow of the CH equation \eqref{eq:CH}. We sketch it here. If $u_0(x)\in H^3$ is odd, then the solution of the CH equation satisfies $u(t,x)=-u(t,-x)$ and thus $u_{xx}(t,0)=0$. Setting $g(t):=u_x(t,0)$ for $t\in [0,T)$, we deduce from \eqref{eq:CH1} that \begin{align*} \dfrac{d}{dt}g(t)+\dfrac{1}{2}g^2(t)=-\left(p*(u^2+\dfrac{u_x^2}{2})\right) (0)\leq0. \end{align*} Thus, we have \begin{align*} \dfrac{d}{dt}g(t)\leq -\dfrac{1}{2}g^2(t), \quad \ t\in[0,T). \end{align*} Since $g(0)<0$, we have $g(t)<0$ and \begin{align*} 0>\dfrac{1}{g(t)}\geq\dfrac{1}{g_0}+\dfrac{t}{2}, \quad \ t\in[0,T), \end{align*} which implies that $T<-2/u'_0(0)$. For the general $b$-family equations, the above arguments also work for $1<b\leq 3$. Indeed, similarly we get \begin{align*} \dfrac{d}{dt}g(t)+\dfrac{b-1}{2}g^2(t)=-\left(p*(\frac{b}{2}u^2+\frac{3-b}{2}u_x^2)\right) (0)\leq0. \end{align*} Then the maximal time of existence satisfies $T<-\frac{2}{(b-1)u'_0(0)}$. For the Novikov equation on the real-line, the above argument does not work, but there are similar blow-up results (e.g. see \cite{YanLZ}). However, no explicit initial data was constructed in \cite{YanLZ}. To apply their result, we construct an explicit example (see Remark \ref{remlast}). We are not aware of any blow-up results for the periodic Novikov equation. \begin{lem}[\cite{YanLZ}]\label{lem4} Assume $u_0(x)\in H^3({\mathbb R})$ is a real-valued function and let $y_0=(1-\partial_x^2)u_0$. If $y_0(0)=0$, $y_0(x)\geq 0$ for $x\leq 0$ and $y_0(x)\leq 0$ for $x\geq 0$, and \EQ{ u_0(0)u_0'(0)<-\frac{1}{2}\norm{u_0}_{H^1}^2. 
} Then the corresponding strong solution to \eqref{eq:Novikov} blows up in finite time. Moreover, the maximal time of existence $T$ is bounded as follows \[T\leq \min\left\{-\frac{2}{(1-\delta)m(0)}, \frac{2}{\norm{u_0}_{H^1}^2}\ln\frac{m(0)-\frac{1}{2}\norm{u_0}_{H^1}^2}{m(0)+\frac{1}{2}\norm{u_0}_{H^1}^2}\right\}\] where $-\sqrt{\delta}m(0)=\frac{1}{2}\norm{u_0}_{H^1}^2$, $m(0)=u_0(0)u_0'(0)$. \end{lem} \section{Proof of Theorem \ref{thm1}} In this section we prove Theorem \ref{thm1}. We rely on the following Gronwall type estimates. \begin{lem}\label{lem:gronwall} Let $I=[0,T)$, where $T>0$ may be infinite. Assume $A(t)\in C^1(I)$, $A(t)>0$ and there exists a constant $B>0$ such that \EQn{\label{eq:At} \dfrac{d}{dt}A(t)\leq BA(t)\ln (2+A(t)), \quad \forall\ t\in I. } Then we have \EQn{ A(t)\leq (2+A(0))^{e^{Bt}}, \quad \forall\ t\in I. } \end{lem} \begin{proof} By assumption, and since $A(s)\leq 2+A(s)$, we have \EQ{ \frac{A'(s)}{[2+A(s)]\ln [2+A(s)]}\leq B, \quad \forall\ s\in I. } Integrating in $s$ over the interval $[0,t)$ for $t\in I$, we obtain the bound. \end{proof} \begin{lem}\label{leminfty} Assume $u\in H^2({\mathbb K})$. We have \EQ{ \norm{u_x}_{L^\infty({\mathbb K})}\leq C \norm{u}_{B^1_{\infty,\infty}}\cdot\log_2(2+\norm{u}_{H^2}^2)+C. } \end{lem} \begin{proof} Fixing an integer $N>0$, we get \EQ{ \norm{u_x}_{L^\infty}\leq& \sum_{k\leq N-1}\norm{P_ku_x}_{L^\infty}+\sum_{k\geq N}\norm{P_ku_x}_{L^\infty}\\ \leq& CN \norm{u}_{B^1_{\infty,\infty}}+C\sum_{k\geq N}2^{-k/2}2^{2k}\norm{P_ku}_{L^2}\\ \leq& CN \norm{u}_{B^1_{\infty,\infty}}+C2^{-N/2}\norm{u}_{H^2}. } Setting $N=\log_2(2+\norm{u}_{H^2}^2)$, we complete the proof. \end{proof} Now we are ready to prove Theorem \ref{thm1}. Fix $1\leq p\leq \infty, 1<r\leq \infty$ and $\varepsilon>0$. 
We define $h(x)$ by \EQ{ h(x)=\sum_{k\geq 1} \frac{1}{2^{2k}k^{\frac{2}{1+r}}}h_k(x) } with $h_k$ given by the Fourier transform $\widehat{h}_k(\xi)=i2^{-k}\xi\tilde{\chi}(2^{-k}\xi)$, $\xi\in {\mathbb K}^*$, where $\tilde{\chi}$ is an even, non-negative, non-zero $C_0^\infty$ function such that $\tilde{\chi}\chi_0=\tilde{\chi}$. Then clearly we see that $h$ is a real-valued odd function and $(P_k h)(x)=\frac{1}{2^{2k}k^{\frac{2}{1+r}}}h_k(x)$. We also have $\norm{P_kh}_{L^p}\sim \frac{2^{k/p'}}{2^{2k}k^{\frac{2}{1+r}}}$ and thus \EQ{ \norm{h}_{B^{1+1/p}_{p,q}({\mathbb K})}\sim \normo{\frac{1}{k^{\frac{2}{1+r}}}}_{l_k^q}. } From this we see that $h\in B^{1+1/p}_{p,r}({\mathbb K})\setminus B^{1+1/p}_{p,1}({\mathbb K})$, and \[h'(0)=\int \widehat{h'}(\xi)d\xi=\int i2\pi\xi \widehat{h}(\xi)d\xi=-\infty.\] For $\varepsilon>0$, we take $u_{0,\varepsilon}=\norm{h}_{B^{1+1/p}_{p,r}}^{-1}\cdot \varepsilon P_{\leq K}(h)$ with $K$ sufficiently large such that $u_{0,\varepsilon}'(0)<-2\varepsilon^{-10}$. Then $u_{0,\varepsilon}$ is a real-valued odd function, $u_{0,\varepsilon}\in H^\infty({\mathbb K})$, and $\norm{u_{0,\varepsilon}}_{B^{1+1/p}_{p,r}}\leq \varepsilon$. By Lemma \ref{lem1} and Lemma \ref{lem3}, there is a unique associated solution $u_\varepsilon \in C([0,T); H^\infty({\mathbb K}))$ with a maximal lifespan $T_\varepsilon<\varepsilon^{10}$. To prove Theorem \ref{thm1} it suffices to show \EQn{\label{Claim2}\limsup_{t\to T_\varepsilon^-}\norm{u_\varepsilon (t)}_{B^{1}_{\infty,\infty}}=\infty.} We prove \eqref{Claim2} by contradiction. 
If \eqref{Claim2} fails, then there exists $M_\varepsilon >1$ such that \[\sup_{t\in [0,T_\varepsilon)}\norm{u_\varepsilon (t)}_{B^{1}_{\infty,\infty}}\leq M_\varepsilon.\] By the energy estimates \eqref{eq:energy estimates} and Lemma \ref{leminfty}, we get \EQ{ \frac{d}{dt}\norm{u_\varepsilon}_{H^2}^2\leq& C\norm{u_{\varepsilon,x}}_{L^\infty} \norm{u_\varepsilon}_{H^2}^2\\ \leq& C( \norm{u_\varepsilon}_{B^1_{\infty,\infty}}\log_2(2+\norm{u_\varepsilon}_{H^2}^2)+1)\norm{u_\varepsilon}_{H^2}^2\\ \leq& CM_\varepsilon \norm{u_\varepsilon}_{H^2}^2\log_2(2+\norm{u_\varepsilon}_{H^2}^2). } Using the Gronwall inequality in Lemma \ref{lem:gronwall} we get $\sup_{t\in [0,T_\varepsilon)}\norm{u_\varepsilon(t)}^2_{H^2}<\infty$, which contradicts the blow-up criterion in Lemma \ref{lem1}. \begin{rem}\label{remlast} To show the norm inflation for the Novikov equation on the real-line, one uses the energy estimate \eqref{eq: Nov energy} and the blow-up result in Lemma \ref{lem4} as in the above proof. The key point is to construct such initial data. Fixing $p\in [1,\infty]$ and $r\in (1,\infty]$, we define \EQ{ u_0(x)=\sum_{k= 1}^K \frac{1}{k^{\frac{2}{1+r}}}\Lambda^{-2} (\phi_k)( x) } where $K$ is a large integer to be determined later and $\phi_k=2^k\phi(2^k x)$ with $\phi\in C_0^\infty ({\mathbb R})$ given by $\phi(x)=\eta(x+2)-\eta(x-200)$. Obviously $u_0$ is a real-valued $H^\infty$ function and we can verify the following properties. 1. For $y_0(x)=\Lambda^2 u_0(x)=\sum_{k= 1}^K \frac{1}{k^{\frac{2}{1+r}}}\phi_k( x)$, it is easy to see that $y_0(x)\geq 0$ for $x\leq 0$; $y_0(x)\leq 0$ for $x\geq 0$ and $y_0(0)=0$. 2. $u_0\in B^{1+1/p}_{p, r}$ and $\norm{u_0}_{B^{1+1/p}_{p, r}}\leq C_r$ uniformly in $K$. Indeed, since \[\widehat{\phi}(\xi)=\hat{\eta}(\xi)(e^{4\pi i\xi}-e^{-400 \pi i\xi}),\] we have $\widehat{\phi}(0)=0$. So $\widehat{\phi}(\xi)$ is a Schwartz function localized at $|\xi| \sim 1$. 
Rigorously, \EQ{ P_k [\Lambda^{-2}\phi_j](x)=\int (1+4\pi^2\xi^2)^{-1}\hat{\phi}(2^{-j}\xi)e^{ix\xi}\chi (2^{-k}\xi)d\xi. } Then we have $\norm{P_k [\Lambda^{-2}\phi_j]}_{L^p}\leq C 2^{-2k}2^{k/p'}2^{-|j-k|}$ and thus $\norm{u_0}_{B^{1+1/p}_{p, r}}\leq C_r$. 3. $u_0(0)>C$ uniformly in $K$ and $u_0'(0)\to -\infty$ as $K\to \infty$. Indeed, \EQ{ u_0(0)=&\sum_{k= 1}^K \frac{1}{2k^{\frac{2}{1+r}}}\int e^{-2^{-k}|x|}\phi( x)dx\\ =&\sum_{k= 1}^K \frac{1}{2 k^{\frac{2}{1+r}}}\int (e^{-2^{-k}|x-2|}-e^{-2^{-k}|x+200|})\eta( x)dx\\ \geq&\frac12\int (e^{-2^{-1}|x-2|}-e^{-2^{-1}|x+200|})\eta( x)dx\geq C. } Moreover, \EQ{ u_0'(0)=&\int 2\pi i\xi \widehat{u_0}(\xi)d\xi=\sum_{k= 1}^K \frac{1}{k^{\frac{2}{1+r}}}\int 2\pi i\xi (1+4\pi^2\xi^2)^{-1}\hat{\phi}(2^{-k}\xi)d\xi\\ =&\sum_{k= 1}^K \frac{1}{k^{\frac{2}{1+r}}}\int \frac{i\hat{\phi}(2^{-k}\xi)}{2\pi\xi} d\xi-\sum_{k= 1}^K \frac{1}{k^{\frac{2}{1+r}}}\int \frac{i\hat{\phi}(2^{-k}\xi)}{2\pi\xi (1+4\pi^2\xi^2)}d\xi\\ :=&I+II. } For the term $II$, using the fact $|\hat{\phi}(2^{-k}\xi)|\leq C 2^{-k}|\xi|$ we get $|II|\leq C$. On the other hand, \EQ{ I=&\sum_{k= 1}^K \frac{1}{k^{\frac{2}{1+r}}}\int \frac{i(e^{4\pi i\xi}-e^{-400 \pi i\xi})\hat{\eta}(\xi)}{2\pi\xi} d\xi\\ =&\sum_{k= 1}^K \frac{-1}{k^{\frac{2}{1+r}}}\int \frac{\sin (4\pi \xi)+\sin(400 \pi \xi)}{2\pi\xi}\hat{\eta}(\xi) d\xi. } Using the fact $\widehat{1_{[-A,A]}(x)}(\xi)=\frac{\sin (2\pi A\xi)}{\pi\xi}$ and Parseval's identity we get \EQ{ I=&\sum_{k= 1}^K \frac{-1}{2k^{\frac{2}{1+r}}}\brk{\int_{-2}^2\eta(x) dx+\int_{-200}^{200}\eta(x) dx}\to -\infty, \quad K\to \infty. } With the above properties we see that, taking initial data $u_{0,\varepsilon}=C^{-1}\varepsilon u_0$, we have $\norm{u_{0,\varepsilon}}_{B^{1+1/p}_{p,r}}\leq \varepsilon$ uniformly in $K$, and the maximal time of existence of the solution tends to $0$ (as $K\to \infty$) by Lemma \ref{lem4}. 
Using the energy estimate \eqref{eq: Nov energy} and the blow-up criterion we can prove Theorem \ref{thm1} for the Novikov equation on the real-line. \end{rem} \, \noindent\textbf{Acknowledgments} \ Z. Guo was partially supported by ARC DP170101060. X. Liu was partially supported by the Fundamental Research Funds for the Central Universities (No.2018QNA34 and No.2017XKZD11). Z. Yin was partially supported by NSFC (No.11671407 and No.11271382), FDCT (No.098/2013/A3), Guangdong Special Support Program (No.8-2015), and the key project of NSF of Guangdong Province (No.2016A030311004).
\section{Introduction} \label{sec:intro} One of the surprising pictures that quantum field theories provide is that the classical vacuum fluctuates: virtual particles spontaneously appear and disappear within the short periods of time allowed by the Heisenberg uncertainty principle. The phenomenon that neutral conducting plates placed parallel in a classical electromagnetic vacuum attract each other is caused by such particles and is called the Casimir effect~\cite{Casimir} (see \cite{lecture} for a review). If one moves the plates (or, more generally, the boundaries for a quantum field), the virtual particles can be converted into real ones, which is known as the dynamic Casimir effect (DCE)~\cite{Moore} (see \cite{Dalvit:2010ria} for a review). In general, in order to realize and detect the DCE experimentally by moving a boundary, the boundary has to be accelerated up to a few percent of the speed of light. Therefore, such an experiment had long been thought to be quite difficult. It was later recognized, however, that the effect of a moving boundary can be realized effectively by modulating at high frequency the electromagnetic properties of a static boundary. Based on this idea, the DCE was first observed by Wilson et al.~\cite{nature} using a superconducting circuit. Various experimental methods to realize the DCE have since been proposed~\cite{CurrentStatus, Nation:2011dka,Farina:2012qd}, and various theoretical results have been obtained~\cite{Dodonov:2001yb}. We mention that the dynamics of quantum systems undergoing a rapid change of a parameter in their Hamiltonian (note that a change of boundary conditions can be included as terms in the Hamiltonian) is called quantum quench dynamics; it is actively studied nowadays, since it poses many fundamental questions that can be studied by current-generation experiments~\cite{Mitra}. 
For example, the effect of a time-periodic boundary condition (i.e., Floquet dynamics) in a conformal field theory, which is a low-energy description of quantum critical systems, has been investigated~\cite{Berdanier:2017kmd}. The entanglement entropy of a conformal field excited by a change of boundary condition (BC) has also been studied~\cite{He:2014mwa}. While the DCE is a universal phenomenon caused by time-dependent BCs, it occupies a special position in general relativity and other gravitational theories, since effects similar or equivalent to those of time-dependent BCs are realized not artificially but naturally in dynamical spacetimes, such as the expanding universe~\cite{Parker}, the gravitational collapse of stars to black holes~\cite{Hawking:1974sw}, the creation of naked singularities~\cite{Ford:1978ip, Ishibashi:2002ac} and wormholes~\cite{Braunstein:1996aj}, and the topology change of spacetime (and string worldsheets) in quantum gravity~\cite{Anderson:1986ww, Manogue,Shapere:2012wn} (see \cite{Birrell:1982ix, Wald:1995yp} for a comprehensive study of quantum field theory in curved spacetimes). Among the above phenomena in gravitational physics, the particle creation due to the formation of naked singularities~\cite{Ishibashi:2002ac} is of fundamental importance, since it is closely related to the future predictability of the laws of quantum physics, namely, the existence of a cosmic censor~\cite{Penrose:1969pc} from the quantum physics point of view. The basic idea is as follows. A spacetime singularity, at which the predictability of the laws of physics is thought to be lost if the singularity is naked or visible, is defined by geodesic incompleteness~\cite{Hawking:1973uf}. However, such a definition of singular spacetime using the notion of particles may judge a spacetime appearing harmless (e.g., Minkowski spacetime from which a single point is taken out) to be singular. 
Therefore, it was proposed to define spacetime regularity not by geodesic completeness but by the uniqueness of propagation of classical wave fields (or the uniqueness of the self-adjoint extension of the time-translation operator)~\cite{Wald:1980jn, Horowitz:1995gi}. Such a definition using the notion of fields indeed excludes harmless-looking spacetimes from the class of singular spacetimes~\cite{Ishibashi:1999vw}. Ishibashi and Hosoya~\cite{Ishibashi:2002ac} proceeded to the next step. Namely, they investigated what happens if one quantizes a wave field in a `wave-singular' (and therefore singular also in the ordinary geodesic sense) spacetime describing the formation of a strong naked singularity, which can be modeled by an instantaneous change of BCs for the wave field. More specifically, they considered a quantized $1+1$ dimensional free massless scalar field in a cavity for which the BC suddenly changes from Neumann to Dirichlet. They showed that a diverging flux taking the form of a delta function squared emanates from the points where the BCs change and propagates along null lines. From such a result, they concluded that the backreaction of the created particles would bring about null singularities, resulting in the recovery of global hyperbolicity (i.e., the future predictability of the laws of physics). That is, the created particles play the role of a quantum version of the cosmic censor. While the idea of the quantum version of the cosmic censor is interesting and was shown to work in \cite{Ishibashi:2002ac}, an unsatisfactory point may be that the analysis was restricted to the Neumann-to-Dirichlet case. Ishibashi and Hosoya tried to examine a more general case, for which the BC changes from a Robin BC $\phi(t,x)=a \pd_x \phi$ to another Robin one $\phi(t,x)=b \pd_x \phi$ ($a$ and $b \; (\neq a)$ are constants and both sides of the equalities are evaluated at the boundary), but failed to obtain any rigorous result. 
(See \cite{Romeo:2000wt} for a systematic study on the static Casimir effect under Robin BCs and \cite{Mintz:2006jh, Mintz:2006yz, Farina:2012qd} for the DCE with time-dependent Robin BCs in a non-relativistic approximation.) Therefore, in this paper, we shall extend the analysis in Ref.~\cite{Ishibashi:2002ac} in two directions. First, we examine the instantaneous change of BC in a finite cavity from Dirichlet to Neumann. Then, we examine both the Neumann-to-Dirichlet (N-D) and Dirichlet-to-Neumann (D-N) cases in a semi-infinite cavity. For the D-N cases, both in the finite and semi-infinite cavities, we find, with a little surprise, that a diverging flux emanates from the point where the BC changes but that its property is completely different from that in the N-D case obtained in Ref.~\cite{Ishibashi:2002ac}. Furthermore, in the course of reproducing the result of the N-D case, we found that such a diverging flux appears also in the N-D case, in addition to the term of delta function squared, but was overlooked in \cite{Ishibashi:2002ac}. These results suggest that the divergence of the flux, which would be a necessary condition for the quantum version of the cosmic censor to work, is not special to the N-D case. In addition, it is also suggested that the type of divergence depends sensitively on the combination of the initial and final BCs. Here, let us give a few remarks on the analysis in this paper. The idealization of an instantaneous change of BCs, which is natural from the viewpoint of the formation of strong naked singularities, enables us to obtain all the results in analytic form. The particle creation by the rapid appearance and/or disappearance of a wall in a one-dimensional (1D) finite cavity was studied in Refs.~\cite{Rodriguez-Vazquez:2014hka, Brown:2015yma, Harada:2016kkq}. 
In particular, the system with the instantaneous appearance and disappearance of a Dirichlet wall studied in \cite{Harada:2016kkq} is more complex than, but similar to, the system in Sec.~\ref{sec:fin} of the present paper. The organization of this paper is as follows. In Sec.~\ref{sec:fin}, we investigate the particle creation due to the instantaneous change of BC in a finite 1D cavity, for the N-D case (Sec.~\ref{sec:ND_fin}) and the D-N case (Sec.~\ref{sec:DN_fin}). The origin of the discrepancy between the result in Sec.~\ref{sec:fin} and Ref.~\cite{Ishibashi:2002ac} is clarified in Sec.~\ref{sec:ih}. In Sec.~\ref{sec:inf}, the case of a semi-infinite cavity is analyzed. We conclude in Sec.~\ref{sec:conc}. The proof of consistency between different quantizations, called the unitarity relations, and some integration formulas are presented in Appendices~\ref{sec:UR} and \ref{sec:int}, respectively. The result for the semi-infinite cavity in Sec.~\ref{sec:inf} is reproduced in Appendix~\ref{sec:green} with the Green-function method, which naturally involves the regularization of the vacuum expectation value of the energy-momentum tensor. We work in natural units in which $c=\hbar=1$. \section{Finite cavity I} \label{sec:fin} \subsection{Quantization of massless scalar field} \label{sec:quant_fin} We consider a free massless scalar field in a 1D cavity of length $L$, \be (-\pd_t^2+\pd_x^2) \phi(t,x)=0, \;\;\; -\infty < t < \infty, \;\;\; 0 < x < L. \label{eq:eom} \ee At the right boundary $x=L$, we impose the homogeneous Dirichlet boundary condition at all times, \be \phi(t,L)=0, \;\;\; -\infty < t < \infty. \label{eq:bc1} \ee At the left boundary $x=0$, we consider two kinds of boundary conditions. One is the homogeneous Neumann boundary condition, \be \pd_x \phi(t,0)=0. \label{eq:bc2} \ee The other is the homogeneous Dirichlet boundary condition, \be \phi(t,0) = 0. 
\label{eq:bc3} \ee While boundary conditions \eqref{eq:bc1} and \eqref{eq:bc2} are imposed, a natural set of positive-energy mode functions $\{ f_n \} $ is given by \be f_n (t,x) = \sqrt{ \frac{2}{n\pi} } e^{-ip_n t} \cos ( p_n x ) , \;\;\; p_n := \frac{n \pi}{2L}, \;\;\; n = 1,3,5,\cdots. \label{eq:f_fin} \ee In the rest of this paper, $n$ and $n'$ always denote odd natural numbers unless otherwise stated. The above mode functions satisfy the following orthonormality conditions, \be \langle f_n,f_{n'} \rangle = - \langle f_n^\ast, f_{n'}^\ast \rangle = \delta_{nn'}, \;\;\; \langle f_n,f_{n'}^\ast \rangle = 0, \label{eq:f_ortho_fin} \ee where the asterisk denotes the complex conjugate and $\langle \; ,\; \rangle $ denotes the Klein-Gordon inner product~\cite{Birrell:1982ix}, \be \langle \phi,\psi \rangle := i\int_0^L ( \phi^\ast \pd_t \psi - \pd_t \phi^\ast \psi ) dx. \label{eq:IP} \ee While boundary conditions \eqref{eq:bc1} and \eqref{eq:bc3} are imposed, a natural set of positive-energy mode functions $\{ g_m \} $ is given by \be g_m (t,x) = \frac{1}{ \sqrt{ m\pi }} e^{-i q_m t} \sin ( q_m x ), \;\;\; q_m := \frac{m \pi}{L}, \;\;\; m =1,2,3, \cdots . \label{eq:g_fin} \ee In the rest of this paper, $m$ and $m'$ always denote natural numbers unless otherwise stated. The above mode functions satisfy the following orthonormality conditions, \be \langle g_m,g_{m'} \rangle = - \langle g_m^\ast, g_{m'}^\ast \rangle = \delta_{mm'}, \;\;\; \langle g_m,g_{m'}^\ast \rangle = 0. \label{eq:g_ortho_fin} \ee Associated with the above two sets of mode functions, $\{ f_n \}$ and $\{ g_m \}$, there are two ways to quantize the scalar field. One is to expand the scalar field by $f_n$, \be {\bm \phi} = \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty ( {\bm a}_n f_n + {\bm a}_n^\dagger f_n^\ast), \label{eq:phi_f_fin} \ee and impose the commutation relations, \be [ {\bm a}_n, {\bm a}_{n'}^\dagger ] = \delta_{nn'}, \;\;\; [ {\bm a}_n, {\bm a}_{n'} ] = 0. 
\label{eq:comm_a_fin} \ee By imposing the above commutation relations, the following equal-time canonical commutation relation is realized, \be [ {\bm \phi}(t,x), \pd_t {\bm \phi}(t,x') ] = i \delta (x-x'). \label{eq:canonical} \ee Then, ${\bm a}_n$ and ${\bm a}_{n}^\dagger$ are interpreted as the annihilation and creation operators, respectively. The vacuum state in which no particle corresponding to mode function $f_n$ exists is defined by \be {\bm a}_n | 0_f \rangle = 0, \;\;\; n = 1,3,5,\cdots, \;\;\; \langle 0_f | 0_f \rangle = 1. \label{eq:0f_fin} \ee The other is to expand the field by $ g_m$, \be {\bm \phi} = \sum_{m=1}^\infty ( {\bm b}_m g_m + {\bm b}_m^\dagger g_m^\ast), \label{eq:phi_g_fin} \ee and impose the commutation relations, \be [ {\bm b}_m, {\bm b}_{m'}^\dagger ] = \delta_{mm'}, \;\;\; [ {\bm b}_m, {\bm b}_{m'} ] = 0. \label{eq:comm_b_fin} \ee The vacuum state in which no particle corresponding to $g_m$ exists is defined by \be {\bm b}_m | 0_g \rangle = 0, \;\;\; m =1,2,3,\cdots, \;\;\; \langle 0_g | 0_g \rangle = 1. \label{eq:0g_fin} \ee Later, we will estimate the vacuum expectation value of the energy-momentum tensor for the scalar field. The energy-momentum tensor operator is written as ${\bm T}_{\mu\nu} = \pd_\mu {\bm \phi} \pd_\nu {\bm \phi} -\frac12 \eta_{\mu\nu} ( \pd {\bm \phi} )^2$, where $\eta_{\mu\nu} = {\rm Diag.} (-1,1)$ is the $1+1$ dimensional flat metric. In double null coordinates, the non-zero components of this tensor are \be {\bm T}_{\pm\pm} = ( \pd_\pm {\bm \phi} )^2, \;\;\; z_\pm := t \pm x. \label{eq:em_null} \ee Note that the energy density and momentum density in the original Cartesian coordinates are ${\bm T}^{tt}= {\bm T}_{--} + {\bm T}_{++}$ and $ {\bm T}^{tx} = {\bm T}_{--} - {\bm T}_{++}$, respectively. 
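As a numerical sanity check (not part of the original analysis), the orthonormality conditions \eqref{eq:f_ortho_fin} and \eqref{eq:g_ortho_fin} can be verified by evaluating the Klein-Gordon inner product \eqref{eq:IP} on the $t=0$ data of the modes \eqref{eq:f_fin} and \eqref{eq:g_fin}; a minimal sketch with $L=1$ (grid size and tolerances are arbitrary choices):

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 200_001)
dx = x[1] - x[0]

def trap(y):
    # trapezoidal rule on the uniform grid
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dx

def kg(u1, v1, u2, v2):
    # Klein-Gordon inner product i∫(u1* v2 - v1* u2)dx, with (u, v) = (φ, ∂_t φ) at t = 0
    return trap(1j * (np.conj(u1) * v2 - np.conj(v1) * u2))

def f_mode(n):               # Neumann(left)-Dirichlet(right) modes, n odd
    p = n * np.pi / (2 * L)
    u = np.sqrt(2 / (n * np.pi)) * np.cos(p * x)
    return u, -1j * p * u    # value and time derivative at t = 0

def g_mode(m):               # Dirichlet-Dirichlet modes
    q = m * np.pi / L
    u = np.sin(q * x) / np.sqrt(m * np.pi)
    return u, -1j * q * u

print(kg(*f_mode(1), *f_mode(1)))   # should be ≈ 1
print(kg(*f_mode(1), *f_mode(3)))   # should be ≈ 0
print(kg(*g_mode(2), *g_mode(2)))   # should be ≈ 1
```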
\subsection{Particle creation by instantaneous change of boundary condition} \label{sec:creation_fin} Given the above quantization schemes, we investigate how the vacuum is excited when the boundary condition at the left boundary $x=0$ is instantaneously, say at $t=0$, changed from Neumann to Dirichlet (Sec.~\ref{sec:ND_fin}) and in the reverse direction (Sec.~\ref{sec:DN_fin}). \subsubsection{From Neumann to Dirichlet} \label{sec:ND_fin} First, we assume that the boundary condition at $x=0$ is Neumann~\eqref{eq:bc2} for $t<0$ and Dirichlet~\eqref{eq:bc3} for $t>0$, and that the quantum field is in the vacuum $| 0_f \rangle$ in the Heisenberg picture. See Fig.~\ref{fig:ND_fin} for a schematic picture of this situation. Then, we investigate how the vacuum is excited due to the change of boundary condition by computing the spectrum and energy flux of created particles. \begin{figure} \begin{center} \begin{minipage}[c]{0.8\textwidth} \begin{center} \includegraphics[height=5cm]{01_ND_fin.eps} \caption{The boundary condition at the left end of the domain $(x=0)$ instantaneously changes at $t=0$ from Neumann (dashed) to Dirichlet (solid). Spatial configurations of mode functions $f_n$ and $ g_m $ are schematically depicted. } \label{fig:ND_fin} \end{center} \end{minipage} \end{center} \end{figure} Let us expand $f_n$ by $g_m$, \be f_n = \sum_{m=1}^\infty ( \alpha_{nm}g_m + \beta_{nm} g_m^\ast ), \label{eq:fg_fin} \ee where the expansion coefficients, called the Bogoliubov coefficients, are computed by \be \alpha_{nm} = \langle g_m , f_n \rangle, \;\;\; \beta_{nm} = - \langle g_m^\ast, f_n \rangle. \label{eq:alpha_beta_form_fin} \ee Using the explicit forms of the mode functions \eqref{eq:f_fin} and \eqref{eq:g_fin}, we obtain \be \alpha_{nm} = \frac{ 2 }{ (2m-n) \pi } \sqrt{ \frac{ 2m }{ n } }, \;\;\; \beta_{nm} = \frac{ 2 }{ (2m+n) \pi } \sqrt{ \frac{ 2m }{ n } }. 
\label{eq:alpha_beta_value_fin} \ee Substituting Eq.~\eqref{eq:fg_fin} into Eq.~\eqref{eq:phi_f_fin}, and comparing it with Eq.~\eqref{eq:phi_g_fin}, we obtain \be {\bm b}_m = \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty ( \alpha_{nm} {\bm a}_n + \beta_{nm}^\ast {\bm a}_n^\dagger ). \label{eq:ba_fin} \ee Substituting Eq.~\eqref{eq:ba_fin} into Eq.~\eqref{eq:comm_b_fin} and using Eq.~\eqref{eq:comm_a_fin}, we obtain \begin{align} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty ( \alpha_{nm} \alpha_{nm'}^\ast - \beta_{nm}^\ast \beta_{nm'} ) = \delta_{mm'}, \;\;\; \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty ( \alpha_{nm} \beta_{nm'}^\ast - \beta_{nm}^\ast \alpha_{nm'} ) = 0, \label{eq:UR_ND_fin} \end{align} which should be satisfied for the two quantizations, Eqs.~\eqref{eq:phi_f_fin} and \eqref{eq:phi_g_fin}, to be consistent. In Appendix~\ref{sec:UR_ND_fin}, these consistency conditions, which we call {\it unitarity relations}, are shown to be satisfied by Bogoliubov coefficients~\eqref{eq:alpha_beta_value_fin}. The spectrum of created particles is given by the vacuum expectation value of number operator ${\bm b}_m^\dagger {\bm b}_m$, \be \langle 0_f | {\bm b}_m^\dagger {\bm b}_m | 0_f \rangle = \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty | \beta_{nm} |^2 = \frac{8}{\pi^2} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \frac{ m }{ n ( n+2m )^2 }. \label{eq:bb_fin} \ee Note that this is finite but its summation over $m$, the total number of created particles, is divergent. This implies that the Fock-space representation associated with $ {\bm a}_n $ is unitarily inequivalent to that associated with ${\bm b}_m$~\cite{Wald:1995yp}. 
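The closed forms \eqref{eq:alpha_beta_value_fin} and the first unitarity relation in \eqref{eq:UR_ND_fin} lend themselves to a direct numerical check. A minimal sketch (with $L=1$; grid, truncation order, and tolerances are arbitrary choices), using the fact that for these stationary modes the inner product \eqref{eq:IP} at $t=0$ reduces to $\alpha_{nm}=(p_n+q_m)\int_0^L g_m f_n\,dx$ and $\beta_{nm}=(q_m-p_n)\int_0^L g_m f_n\,dx$:

```python
import numpy as np

L = 1.0
x = np.linspace(0.0, L, 200_001)
dx = x[1] - x[0]

def trap(y):
    return (np.sum(y) - 0.5 * (y[0] + y[-1])) * dx

def alpha(n, m):             # Eq. (alpha_beta_value_fin)
    return 2 / ((2 * m - n) * np.pi) * np.sqrt(2 * m / n)

def beta(n, m):
    return 2 / ((2 * m + n) * np.pi) * np.sqrt(2 * m / n)

def alpha_beta_num(n, m):
    # direct quadrature of <g_m, f_n> and -<g_m*, f_n> at t = 0
    p, q = n * np.pi / (2 * L), m * np.pi / L
    overlap = trap(np.sqrt(2 / (n * np.pi)) * np.cos(p * x)
                   * np.sin(q * x) / np.sqrt(m * np.pi))
    return (p + q) * overlap, (q - p) * overlap

def unitarity(m, mp, nmax=199_999):
    # truncated first relation in Eq. (UR_ND_fin); exact value is δ_{m m'}
    n = np.arange(1, nmax + 1, 2, dtype=float)   # odd n only
    return np.sum(alpha(n, m) * alpha(n, mp) - beta(n, m) * beta(n, mp))

a_num, b_num = alpha_beta_num(3, 2)
print(a_num, alpha(3, 2))
print(b_num, beta(3, 2))
print(unitarity(1, 1), unitarity(1, 2))
```

The truncated unitarity sums approach $1$ and $0$, respectively, consistent with the proof in Appendix~\ref{sec:UR_ND_fin}.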
The vacuum expectation value of energy-momentum tensor before the change of boundary condition at $t=0$ is computed by substituting Eq.~\eqref{eq:phi_f_fin} into Eq.~\eqref{eq:em_null}, and using Eqs.~\eqref{eq:comm_a_fin}, \eqref{eq:0f_fin}, and \eqref{eq:f_fin} as \begin{align} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t<0} = \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty | \pd_\pm f_n |^2 = \frac{\pi}{8L^2} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty n. \label{eq:ND_t<0_fin} \end{align} This represents the Casimir energy density~\cite{Casimir}, which can be made finite with standard regularization schemes~\cite{Birrell:1982ix}. The most interesting quantity is the vacuum expectation value of energy-momentum tensor after $t=0$. Substituting Eq.~\eqref{eq:phi_g_fin} into Eq.~\eqref{eq:em_null} and using Eq.~\eqref{eq:ba_fin}, we obtain \begin{gather} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \sum_{m=1}^\infty \sum_{m'=1}^\infty [ ( \alpha_{nm} \beta_{nm'} + \alpha_{nm'} \beta_{nm} ) {\rm Re} ( \pd_\pm g_m \pd_\pm g_{m'} ) \nn \\ + ( \alpha_{nm} \alpha_{nm'} + \beta_{nm} \beta_{nm'} ) {\rm Re} ( \pd_\pm g_m \pd_\pm g_{m'}^\ast ) ]. \label{eq:ND_t>0_form_fin} \end{gather} To derive Eq.~\eqref{eq:ND_t>0_form_fin}, we symmetrize it with respect to dummy indices $m$ and $m'$, and use the fact that $\alpha_{nm}$ and $\beta_{nm}$ are real. Using the explicit expressions of Bogoliubov coefficients \eqref{eq:alpha_beta_value_fin} and mode function \eqref{eq:g_fin}, we obtain \begin{gather} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{1}{2\pi L^2} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \Bigg( \frac{1}{4n} [ 4 \sum_{m=1}^\infty \cos ( q_m z_\pm ) + n^2 \sum_{m=1}^\infty \frac{ \cos ( q_m z_\pm ) }{ m^2-(n/2)^2 } ]^2 + n [ \sum_{m=1}^\infty \frac{ m \sin ( q_m z_\pm ) }{ m^2-(n/2)^2 } ]^2 \Bigg). 
\label{eq:ND_t>0_value1_fin} \end{gather} This is an even function of $z_\pm$ with period $2L$ since it is invariant under reflection $z_\pm \to - z_\pm$ and translation $z_\pm \to z_\pm + 2L$. Therefore, it is sufficient to calculate it in $0 \leq z_\pm < 2L$, and then generalize the obtained expression appropriately to one valid in the entire domain. The first and second summations over $m$ in Eq.~\eqref{eq:ND_t>0_value1_fin} can be computed to give \begin{align} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{1}{2\pi L^2} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \Bigg( \frac{1}{4n} [ 16L^2 \delta^2( z_\pm ) + n^2 \pi^2 \sin^2 ( p_n z_\pm ) ] + n [ \sum_{m=1}^\infty \frac{ m \sin ( q_m z_\pm ) }{ m^2-(n/2)^2 } ]^2 \Bigg), \label{eq:ND_t>0_value2_fin} \end{align} which is valid in $ 0 \leq z_\pm <2L $, using the following formulas, \begin{align} \sum_{k=1}^\infty \cos( \frac{ 2k \pi}{a} y ) &= -\frac12 + \frac{a}{2} \sum_{\ell = -\infty}^\infty \delta( y- \ell a ), \;\;\; ( - \infty < y < \infty ), \label{eq:sumForm1} \\ \sum_{k=1}^\infty \frac{ \cos ky }{ k^2-a^2 } &= -\frac{\pi}{2a} \cos[ a(\pi - y) ] {\rm cosec} (a \pi )+\frac{1}{2a^2}, \;\;\; (0 \leq y \leq 2\pi). \label{eq:sumForm2} \end{align} See Ref.~\cite[p.~730]{maru} for the second formula. For $z_\pm=0$, from Eq.~\eqref{eq:ND_t>0_value2_fin}, we have \be \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{2}{\pi} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \frac{ \delta^2(0) }{ n }, \;\;\; (z_\pm = 0). 
\label{eq:ND_t>0_value3_fin} \ee For $ 0 <z_\pm < 2L $, the remaining summation over $m$ in Eq.~\eqref{eq:ND_t>0_value2_fin} can be computed to give \be \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{ \pi }{ 8L^2 } \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty n, \;\;\; ( 0<z_\pm < 2L ), \label{eq:ND_t>0_value4_fin} \ee using the following formula~\cite[p.~730]{maru}, \begin{align} \sum_{k=1}^\infty \frac{ k \sin ky }{ k^2-a^2 } &= \frac{\pi}{2} \sin[ a(\pi - y) ] {\rm cosec} (a \pi ), \;\;\; (0 <y < 2\pi). \label{eq:sumForm3} \end{align} Combining Eqs.~\eqref{eq:ND_t>0_value3_fin} and \eqref{eq:ND_t>0_value4_fin}, we obtain \be \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{2}{\pi} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \frac{ \delta^2( z_\pm ) }{ n } + \begin{cases} 0 & (z_\pm =0) \\ \displaystyle \frac{ \pi }{ 8L^2 } \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty n & ( 0 < z_\pm < 2L )\\ \end{cases}. \label{eq:ND_t>0_value5_fin} \ee This is the desired expression for $ 0 \leq z_\pm < 2L $. Extending the domain of Eq.~\eqref{eq:ND_t>0_value5_fin} periodically, we obtain \begin{align} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{2}{\pi} \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \frac{1}{n} \sum_{\ell = -\infty}^\infty \delta^2(z_\pm - 2 \ell L) + \begin{cases} 0 & (z_\pm = 2\ell L, \; \ell \in {\bm Z}) \\ \displaystyle \frac{ \pi }{ 8L^2 } \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty n & (\mbox{otherwise}) \\ \end{cases}. \label{eq:ND_t>0_value6_fin} \end{align} \begin{figure} \begin{center} \begin{minipage}[c]{0.8\textwidth} \begin{center} \includegraphics[height=5cm]{02_eDens.eps} \includegraphics[height=5cm]{03_mDens.eps} \caption{Vacuum expectation values of the energy density $\langle 0_f | ( {\bm T}_{--} + {\bm T}_{++} ) | 0_f \rangle_{t>0}$ (left) and the momentum density $ \langle 0_f | ( {\bm T}_{--} - {\bm T}_{++}) | 0_f \rangle_{t>0} $ (right) with cutoff, from which the uniform Casimir contribution is subtracted. 
We set $L=1$, and the summation over modes in Eq.~\eqref{eq:ND_t>0_value1_fin} is taken up to $n = m = 13$. The exact results without cutoff are given by Eq.~\eqref{eq:ND_t>0_value6_fin}.} \label{fig:VEV} \end{center} \end{minipage} \end{center} \end{figure} Let us consider the meaning of the two terms in Eq.~\eqref{eq:ND_t>0_value6_fin}. The first term, the delta function squared multiplied by the logarithmically divergent series, represents the diverging flux emanating from the origin $(t,x)=(0,0)$ and localized on the null lines (Fig.~\ref{fig:VEV}). The dependence of the energy density on the delta function squared also implies the divergence of the total emitted energy. This component of flux is similar to that predicted in the topology change of a 1D universe \cite{Anderson:1986ww} and the same as that predicted in the formation of a strong naked singularity~\cite{Ishibashi:2002ac}. The second term, at first glance, seems to represent the ambient Casimir energy just like Eq.~\eqref{eq:ND_t<0_fin}, which is negative and finite after a regularization, and its vanishing on the null lines. As will be explicitly shown in the semi-infinite cavity case (see Sec.~\ref{sec:inf} and Appendix~\ref{sec:green}), however, this is not the case. In fact, the second term represents a {\it divergence on the null lines} after an appropriate regularization. This appearance of a divergence can be understood simply as follows. A regularization corresponds to the subtraction of the spatially uniform diverging energy density due to the zero-point oscillation. Therefore, if one subtracts such a uniform diverging quantity from Eq.~\eqref{eq:ND_t>0_value6_fin}, leading to the regularization of the ambient Casimir term, a divergence appears {\it on} the null lines $z_\pm = 2\ell L \; (\ell \in {\bm Z})$. 
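The structure of Eq.~\eqref{eq:ND_t>0_value6_fin} can also be illustrated by evaluating the truncated mode sum \eqref{eq:ND_t>0_value1_fin} directly, as done for Fig.~\ref{fig:VEV}: raising the cutoff makes the value on the null line $z_\pm=0$ grow without bound (the $\delta^2$ piece), while away from the null lines the sum stays bounded. A minimal sketch (with $L=1$; the cutoffs are arbitrary choices):

```python
import numpy as np

L = 1.0

def T_trunc(z, nmax=13, mmax=100):
    """Truncated double mode sum of Eq. (ND_t>0_value1_fin) at fixed z_pm."""
    m = np.arange(1, mmax + 1, dtype=float)
    q = m * np.pi / L
    total = 0.0
    for n in range(1, nmax + 1, 2):                 # odd n only
        d = m**2 - (n / 2.0)**2
        s1 = 4 * np.sum(np.cos(q * z)) + n**2 * np.sum(np.cos(q * z) / d)
        s2 = np.sum(m * np.sin(q * z) / d)
        total += s1**2 / (4 * n) + n * s2**2
    return total / (2 * np.pi * L**2)

# the peak on the null line z = 0 grows with the cutoff; off the line it stays bounded
print([T_trunc(0.0, mmax=mm) for mm in (100, 200, 400)])
print(T_trunc(0.5, mmax=400))
```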
As far as the present author knows, the second kind of diverging flux was first found in the particle creation due to the instantaneous appearance of a Dirichlet wall in a cavity~\cite{Harada:2016kkq}. It was confirmed in the same paper that such a divergence appears in the instantaneous limit of the smooth formation of a Dirichlet wall in a cavity analyzed in~\cite{Brown:2015yma}. It is puzzling that the second kind of flux component does not appear in the analysis of Ishibashi and Hosoya \cite{Ishibashi:2002ac}, since their system is quite similar to the present one. Thus, we will revisit their analysis in Sec.~\ref{sec:ih} and find that the component was overlooked in \cite{Ishibashi:2002ac}. \subsubsection{From Dirichlet to Neumann} \label{sec:DN_fin} \begin{figure} \begin{center} \begin{minipage}[c]{0.8\textwidth} \begin{center} \includegraphics[height=5cm]{04_DN_fin.eps} \caption{The boundary condition at the left end of the domain $(x=0)$ instantaneously changes at $t=0$ from Dirichlet (solid) to Neumann (dashed). Spatial configurations of mode functions $ g_m $ and $f_n $ are schematically depicted.} \label{fig:DN_fin} \end{center} \end{minipage} \end{center} \end{figure} We assume that the boundary condition at $x=0$ is Dirichlet~\eqref{eq:bc3} for $t<0$ and Neumann~\eqref{eq:bc2} for $t>0$, and that the quantum field is in the vacuum $| 0_g \rangle$. See Fig.~\ref{fig:DN_fin} for a schematic picture of the situation. Since this situation is a kind of time reversal of that in Sec.~\ref{sec:ND_fin}, most parts of the calculation can be reused, but the results are different. Let us expand $g_m$ by $f_n$, \be g_m = \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty ( \rho_{mn} f_n + \sigma_{mn} f_n^\ast ), \label{eq:gf_fin} \ee where the expansion coefficients are given by \be \rho_{mn} = \langle f_n,g_m \rangle = \alpha_{nm}^\ast, \;\;\; \sigma_{mn} = -\langle f_n^\ast,g_m \rangle = - \beta_{nm}. 
\label{eq:rho_sigma_form_fin} \ee Here, $\alpha_{nm}$ and $\beta_{nm}$ are given by Eq.~\eqref{eq:alpha_beta_value_fin}. Substituting Eq.~\eqref{eq:gf_fin} into Eq.~\eqref{eq:phi_g_fin}, and comparing it with Eq.~\eqref{eq:phi_f_fin}, we obtain \be {\bm a}_n = \sum_{m=1}^\infty ( \rho_{mn} {\bm b}_m + \sigma_{mn}^\ast {\bm b}_m^\dagger ). \label{eq:ab_fin} \ee Substituting Eq.~\eqref{eq:ab_fin} into Eq.~\eqref{eq:comm_a_fin}, and using Eq.~\eqref{eq:comm_b_fin}, we obtain \begin{align} \sum_{m=1}^\infty ( \rho_{mn} \rho_{mn'}^\ast - \sigma_{mn}^\ast \sigma_{mn'} ) = \delta_{nn'}, \;\;\; \sum_{m=1}^\infty ( \rho_{mn} \sigma_{mn'}^\ast - \sigma_{mn}^\ast \rho_{mn'} ) = 0, \label{eq:UR_DN_fin} \end{align} which should be satisfied again for the two quantizations, Eqs.~\eqref{eq:phi_f_fin} and \eqref{eq:phi_g_fin}, to be consistent. It is shown in Appendix~\ref{sec:UR_DN_fin} that the Bogoliubov coefficients given by Eq.~\eqref{eq:rho_sigma_form_fin} indeed satisfy the unitarity relations \eqref{eq:UR_DN_fin}. The vacuum expectation value of the number operator ${\bm a}_n^\dagger {\bm a}_n$, representing the spectrum of created particles, is computed as \be \langle 0_g | {\bm a}_n^\dagger {\bm a}_n | 0_g \rangle = \sum_{m=1}^\infty | \sigma_{mn} |^2 = \frac{8}{ \pi^2} \sum_{m=1}^\infty \frac{ m }{ n(n+2m)^2 }. \label{eq:aa_fin} \ee This is divergent for each $n$, and so is its summation over odd $n$, i.e., the total number of created particles. This implies that the Fock-space representation associated with $ {\bm b}_m $ is unitarily inequivalent to that associated with ${\bm a}_n$~\cite{Wald:1995yp}. 
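The logarithmic character of the divergence of Eq.~\eqref{eq:aa_fin} is easy to see numerically: since the summand behaves as $2/(\pi^2 n m)$ for large $m$, each decade of modes adds roughly $(2/\pi^2 n)\ln 10$ to the partial sums. A minimal sketch (the cutoffs are arbitrary choices):

```python
import numpy as np

def spectrum_partial(n, mmax):
    """Partial sum of Eq. (aa_fin): (8/pi^2) * sum_{m<=mmax} m / (n (n+2m)^2)."""
    m = np.arange(1, mmax + 1, dtype=float)
    return 8 / np.pi**2 * np.sum(m / (n * (n + 2 * m)**2))

n = 1
s3, s4, s5 = (spectrum_partial(n, 10**k) for k in (3, 4, 5))
per_decade = 2 * np.log(10) / (np.pi**2 * n)
print(s4 - s3, s5 - s4, per_decade)   # increments approach the per-decade estimate
```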
The vacuum expectation value of the energy-momentum tensor before the change of boundary condition at $t=0$ is computed by substituting Eq.~\eqref{eq:phi_g_fin} into Eq.~\eqref{eq:em_null}, and using the explicit expression of the mode function \eqref{eq:g_fin}, \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t<0} = \sum_{m=1}^\infty | \pd_\pm g_m |^2 = \frac{ \pi }{ 4L^2 } \sum_{m=1}^\infty m. \label{eq:DN_t<0_fin} \end{align} This represents the Casimir energy density, which can be made finite by standard renormalization procedures~\cite{Birrell:1982ix}. The vacuum expectation value of the energy-momentum tensor after $t=0$ is computed by substituting Eq.~\eqref{eq:phi_f_fin} into Eq.~\eqref{eq:em_null}, and using Eq.~\eqref{eq:ab_fin}, as \begin{gather} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \sum_{m=1}^\infty \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \sum_{\substack{ n'=1 \\ n':{\rm odd}}}^\infty [ ( \rho_{mn} \sigma_{mn'} + \rho_{mn'} \sigma_{mn} ) {\rm Re} ( \pd_\pm f_n \pd_\pm f_{n'} ) \nn \\ + ( \rho_{mn} \rho_{mn'} + \sigma_{mn} \sigma_{mn'} ) {\rm Re} ( \pd_\pm f_n \pd_\pm f_{n'}^\ast ) ], \label{eq:DN_t>0_form_fin} \end{gather} where we have symmetrized with respect to the dummy indices $n$ and $n'$ and used the fact that $\rho_{mn}$ and $\sigma_{mn}$ are real. Using the explicit forms of the Bogoliubov coefficients and the mode function, Eqs.~\eqref{eq:rho_sigma_form_fin}, \eqref{eq:alpha_beta_value_fin}, and \eqref{eq:f_fin}, we obtain \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \frac{ 4 }{ \pi L^2 } \sum_{m=1}^\infty \left( 4m^3 [ \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \frac{ \cos ( p_n z_\pm ) }{ n^2-(2m)^2 } ]^2 + m [ \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \frac{ n \sin ( p_n z_\pm ) }{ n^2-(2m)^2 } ]^2 \right). \label{eq:DN_t>0_value1_fin} \end{align} This is an even function of $z_\pm$ with period $2L$, since it is invariant under reflection $z_\pm \to -z_\pm$ and translation $z_\pm \to z_\pm + 2L$. 
Therefore, it is sufficient to calculate it in $0 \leq z_\pm < 2L$, and generalize it appropriately to one valid in the entire domain. The first summation over odd $n$ in Eq.~\eqref{eq:DN_t>0_value1_fin} can be computed to give \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \frac{ 4 }{ \pi L^2 } \sum_{m=1}^\infty \left( \frac{ m \pi^2 }{ 16 } \sin^2 ( q_m z_\pm ) + m [ \sum_{\substack{ n=1 \\ n:{\rm odd}}}^\infty \frac{ n \sin ( p_n z_\pm ) }{ n^2-(2m)^2 } ]^2 \right), \label{eq:DN_t>0_value2_fin} \end{align} which is valid in $ 0 \leq z_\pm < 2L $. Here, we have used the following formula~\cite[p.\ 733]{maru}, \begin{align} \sum_{k=0}^\infty \frac{ \cos [(2k+1)y] }{(2k+1)^2 -a^2} = \frac{\pi}{4a} \sin[ \frac{a}{2}(\pi - 2y) ] \sec (\frac{a\pi}{2}), \;\;\; ( 0 \leq y \leq \pi ). \label{eq:maru_typo1} \end{align} It is noted here that Ref.~\cite[p.\ 733]{maru} contains typos in formulas~\eqref{eq:maru_typo1} and \eqref{eq:maru_typo2} (see below); the corrected versions are quoted here. For $z_\pm=0$, from Eq.~\eqref{eq:DN_t>0_value2_fin}, we have \be \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = 0, \;\;\; ( z_\pm = 0 ). \label{eq:DN_t>0_value3_fin} \ee For $ 0 < z_\pm < 2L $, the remaining summation over odd $n$ in Eq.~\eqref{eq:DN_t>0_value2_fin} can be computed to give \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \frac{ \pi }{ 4 L^2 } \sum_{m=1}^\infty m \;\;\; ( 0 < z_\pm < 2L ), \label{eq:DN_t>0_value4_fin} \end{align} using the following formula~\cite[p.\ 733]{maru}, \begin{align} \sum_{k=0}^\infty \frac{ (2k+1) \sin [(2k+1)y] }{(2k+1)^2 -a^2} &= \frac{\pi}{4} \cos[ \frac{a}{2}(\pi - 2y) ] \sec (\frac{a\pi}{2}), \;\;\; ( 0 < y < \pi ). 
\label{eq:maru_typo2} \end{align} Combining Eqs.~\eqref{eq:DN_t>0_value3_fin} and \eqref{eq:DN_t>0_value4_fin}, and extending the result periodically to the entire domain, we have \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \begin{cases} 0 & (z_\pm = 2 \ell L, \; \ell \in {\bm Z}) \\ \displaystyle \frac{ \pi }{ 4L^2 } \sum_{m=1}^\infty m & ({\rm otherwise}) \end{cases} . \label{eq:DN_t>0_value5_fin} \end{align} Comparing the above result with that in the N-D case~\eqref{eq:ND_t>0_value6_fin}, one sees that there is no flux component of the delta-function-squared type in this case. As will be explicitly shown in the semi-infinite cavity case (Sec.~\ref{sec:inf} and Appendix~\ref{sec:green}), Eq.~\eqref{eq:DN_t>0_value5_fin} represents a non-renormalizable diverging flux localized on the null lines $z_\pm = 2 \ell L \; ( \ell \in {\bm Z} )$ and the ambient Casimir energy. Thus, a diverging flux emanates from the origin $(t,x)=(0,0)$ and propagates along the null lines in a similar way to Fig.~\ref{fig:VEV}. \section{Finite cavity II: Revisit Ishibashi-Hosoya~\cite{Ishibashi:2002ac}} \label{sec:ih} As seen in Sec.~\ref{sec:fin}, the vacuum expectation value of the energy-momentum tensor has two components in the N-D case, Eq.~\eqref{eq:ND_t>0_value6_fin}, and one component in the D-N case, Eq.~\eqref{eq:DN_t>0_value5_fin}. The origin of such a difference between the N-D and D-N cases will be discussed in the Conclusion. Here, let us look into the consistency between these results and a relevant past work. In Ref.~\cite{Ishibashi:2002ac}, the authors considered the instantaneous change of boundary condition at both ends of a finite cavity. The boundary conditions for $t<0$ are Neumann at both ends and those for $t>0$ are Dirichlet at both ends, which we call the NN-DD case. Since this NN-DD case resembles the N-D case, one can expect similar results. Namely, we expect that two diverging flux components appear also in the NN-DD case. 
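As an aside, since Ref.~\cite[p.\ 733]{maru} prints formulas \eqref{eq:maru_typo1} and \eqref{eq:maru_typo2} with typos, the corrected versions used in Sec.~\ref{sec:DN_fin} can be double-checked numerically; a minimal sketch (the test point $a=0.3$, $y=1.0$ and the truncation orders are arbitrary choices):

```python
import numpy as np

a, y = 0.3, 1.0                     # test point with 0 < y < pi, non-integer a

# formula (maru_typo1): sum of cos[(2k+1)y] / ((2k+1)^2 - a^2)
odd = 2 * np.arange(0, 200_000) + 1.0
lhs1 = np.sum(np.cos(odd * y) / (odd**2 - a**2))
rhs1 = np.pi / (4 * a) * np.sin(a / 2 * (np.pi - 2 * y)) / np.cos(a * np.pi / 2)

# formula (maru_typo2): sum of (2k+1) sin[(2k+1)y] / ((2k+1)^2 - a^2)
odd2 = 2 * np.arange(0, 2_000_000) + 1.0   # slower (conditional) convergence
lhs2 = np.sum(odd2 * np.sin(odd2 * y) / (odd2**2 - a**2))
rhs2 = np.pi / 4 * np.cos(a / 2 * (np.pi - 2 * y)) / np.cos(a * np.pi / 2)

print(lhs1, rhs1)
print(lhs2, rhs2)
```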
Reference~\cite{Ishibashi:2002ac}, however, concludes that the flux involves only the delta-function-squared component. Therefore, we reconsider here the system adopted in~\cite{Ishibashi:2002ac} and find that the other component was overlooked. \subsection{Quantization of massless scalar field} \begin{figure} \begin{center} \begin{minipage}[c]{0.8\textwidth} \begin{center} \includegraphics[height=5cm]{05_NNDD.eps} \caption{The boundary conditions at $x=0$ and $x=L$ change instantaneously at $t=0$ from Neumann (dashed) to Dirichlet (solid). Spatial configurations of mode functions $ h_k $ and $g_m$ are schematically depicted.} \label{fig:NNDD} \end{center} \end{minipage} \end{center} \end{figure} We consider the situation in which the Neumann boundary condition is imposed at $x=0$ and $x=L$ for $t<0$, while the Dirichlet boundary condition is imposed at $x=0$ and $x=L$ for $t>0$ (see Fig.~\ref{fig:NNDD}). In this case, a normalized positive-energy mode function for $t<0$ is given by \be h_k (t,x) = \frac{1}{\sqrt{k\pi}} e^{-i r_k t} \cos (r_k x), \;\;\; r_k := \frac{k\pi}{L}, \;\;\; k =1,2,3,\cdots. \label{eq:h_ih} \ee A normalized mode function for $t>0$ is given by Eq.~\eqref{eq:g_fin}. The scalar field is quantized by expanding it in terms of the set of mode functions $\{ h_k \}$ and an additional, spatially uniform zero-mode function ${\bm h}_0$ as \begin{align} {\bm \phi} = {\bm h_0} + \sum_{k=1}^\infty ( {\bm c}_k h_k + {\bm c}_k^\dagger h_k^\ast ), \;\;\; {\bm h_0} = \frac{1}{\sqrt{L}}( {\bm Q}+t{\bm P} ). \label{eq:phi_h_ih} \end{align} Here, ${\bm Q}$ and ${\bm P}$ are Hermitian (${\bm Q}^\dagger = {\bm Q}, \; {\bm P}^\dagger = {\bm P}$), and the following commutation relations are imposed: \be [{\bm Q}, {\bm P}] = i, \;\;\; [{\bm Q}, {\bm c}_{k}] = [{\bm P}, {\bm c}_{k}] = 0, \;\;\; [{\bm c}_k, {\bm c}_{k'}^\dagger] = \delta_{kk'}, \;\;\; [{\bm c}_k, {\bm c}_{k'}] = 0.
\label{eq:comm_c} \ee Note that the zero mode ${\bm h}_0$, which exists because the boundary conditions are Neumann at both ends, is indispensable for realizing the equal-time commutation relation~\eqref{eq:canonical} using commutation relations~\eqref{eq:comm_c}. \subsection{Particle creation by instantaneous change of boundary condition: From Neumann-Neumann to Dirichlet-Dirichlet} Let us expand $ {\bm h}_0 $ and $h_k $ in terms of $g_m$, \be {\bm h}_0 = \sum_{m=1}^\infty ( {\bm \xi}_m g_m + {\bm \xi}_m^\dagger g_m^\ast ), \;\;\; h_k = \sum_{m=1}^\infty ( \xi_{km} g_m + \zeta_{km} g_m^\ast ), \label{eq:hg} \ee where the Bogoliubov coefficients are given by \be {\bm \xi}_m = \langle g_m, {\bm h}_0 \rangle, \;\;\; \xi_{km} = \langle g_m, h_k \rangle, \;\;\; \zeta_{km} = -\langle g_m^\ast, h_k \rangle. \label{eq:xi_zeta_form_ih} \ee Using the explicit forms of mode functions~\eqref{eq:g_fin} and \eqref{eq:h_ih}, and Eq.~\eqref{eq:phi_h_ih}, Bogoliubov coefficients~\eqref{eq:xi_zeta_form_ih} are computed as \begin{gather} {\bm \xi}_m = \frac{ 2 }{ \sqrt{ m\pi L } } \left( {\bm Q}+i\frac{L}{m\pi} {\bm P} \right) \delta_{m:{\rm odd}}, \label{eq:xi_zeta_value1_ih} \\ \xi_{km} = - \frac{ 2 }{ (k-m)\pi } \sqrt{ \frac{m}{k} } \delta_{k+m:{\rm odd}}, \;\;\; \zeta_{km} = \frac{ 2 }{ (k+m)\pi } \sqrt{ \frac{m}{k} } \delta_{k+m:{\rm odd}}. \label{eq:xi_zeta_value2_ih} \end{gather} Here, we have introduced the following symbols, \be \delta_{k:{\rm odd}} := \frac{1-(-1)^k}{2}, \;\;\; \delta_{k:{\rm even}} := \frac{1+(-1)^k}{2}, \;\;\; k \in {\bm Z}. \label{eq:delta_odd_even} \ee Substituting Eq.~\eqref{eq:hg} into Eq.~\eqref{eq:phi_h_ih}, and comparing it with Eq.~\eqref{eq:phi_g_fin}, we have \be {\bm b}_m = {\bm \xi}_m + \sum_{k=1}^\infty( \xi_{km}{\bm c}_k + \zeta_{km}^\ast {\bm c}_k^\dagger ).
\label{eq:bc_ih} \ee Substituting Eq.~\eqref{eq:bc_ih} into Eq.~\eqref{eq:comm_b_fin} and using Eq.~\eqref{eq:comm_c}, we obtain the unitarity relations, \begin{gather} \begin{split} [ {\bm \xi}_m, {\bm \xi}_{m'}^\dagger ] + \sum_{k=1}^\infty ( \xi_{km} \xi_{km'}^\ast - \zeta_{km}^\ast \zeta_{km'} ) = \delta_{mm'}, \\ [ {\bm \xi}_m, {\bm \xi}_{m'} ] + \sum_{k=1}^\infty ( \xi_{km} \zeta_{km'}^\ast - \zeta_{km}^\ast \xi_{km'} ) = 0. \label{eq:UR_ih} \end{split} \end{gather} In Appendix~\ref{sec:UR_ih}, we will show that the operators given in Eqs.~\eqref{eq:xi_zeta_value1_ih} and \eqref{eq:xi_zeta_value2_ih} satisfy unitarity relations~\eqref{eq:UR_ih}. We define the vacuum in which no particle corresponding to ${\bm h}_0$ or $h_k$ exists, \be {\bm P} |0_h \rangle = {\bm c}_k |0_h \rangle = 0, \;\;\; k =1,2,3,\cdots. \ee Then, the spectrum of created particles is given by the expectation value of the number operator $ {\bm b}_m^\dagger {\bm b}_m $, \begin{align} & \langle 0_h | {\bm b}_m^\dagger {\bm b}_m | 0_h \rangle = \langle 0_h | {\bm \xi}_m^\dagger {\bm \xi}_m | 0_h \rangle + \sum_{k=1}^\infty | \zeta_{km} |^2 \nn \\ & = \frac{ 4 }{ m^2\pi^2 } \left( \frac{m\pi}{L} \langle 0_h | {\bm Q}^2 | 0_h \rangle -1 \right) \delta_{m:{\rm odd}} + \frac{ 4 }{ \pi^2 } \sum_{k=1}^\infty \frac{ m }{ k(k+m)^2 } \delta_{k+m:{\rm odd}}. \label{eq:bb} \end{align} The vacuum expectation value of the energy-momentum tensor before the change of boundary conditions at $t=0$ is computed by substituting Eq.~\eqref{eq:phi_h_ih} into Eq.~\eqref{eq:em_null}, and using the explicit form of mode function \eqref{eq:h_ih}, as \be \langle 0_h | {\bm T}_{\pm \pm} | 0_h \rangle_{t<0} = \sum_{k=1}^\infty | \pd_\pm h_k |^2 = \frac{\pi}{4L^2} \sum_{k=1}^\infty k. \ee This represents the Casimir energy density, which can be made finite by standard regularization schemes such as the $\zeta$-function regularization, the point-splitting regularization, and so on~\cite{Birrell:1982ix}.
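The closed forms in Eq.~\eqref{eq:xi_zeta_value2_ih} can be checked by direct quadrature of the overlap integrals. The sketch below is a minimal check, assuming the mode profile $g_m(0,x)=\sin(q_m x)/\sqrt{m\pi}$ with $q_m=m\pi/L$ for Eq.~\eqref{eq:g_fin} (not reproduced in this section) and reading the Klein-Gordon inner product at $t=0$ as $(r_k \pm q_m)\int_0^L g_m h_k\,dx$:

```python
import numpy as np

L = 1.0
n = 200_001
x = np.linspace(0.0, L, n)
dx = x[1] - x[0]

def trap(y):
    """Trapezoidal rule on the uniform grid x."""
    return dx * (np.sum(y) - 0.5 * (y[0] + y[-1]))

def overlaps(k, m):
    """xi_km = <g_m, h_k> and zeta_km = -<g_m*, h_k>, reduced at t = 0 to
    (r_k +/- q_m) * int_0^L g_m h_k dx for the stationary mode profiles."""
    rk, qm = k * np.pi / L, m * np.pi / L
    gm = np.sin(qm * x) / np.sqrt(m * np.pi)   # assumed profile of g_m (Dirichlet-Dirichlet)
    hk = np.cos(rk * x) / np.sqrt(k * np.pi)   # profile of h_k, eq. (h_ih) (Neumann-Neumann)
    return (rk + qm) * trap(gm * hk), -(rk - qm) * trap(gm * hk)

err = 0.0
for k in range(1, 7):
    for m in range(1, 7):
        xi, zeta = overlaps(k, m)
        if (k + m) % 2 == 1:   # k + m odd: closed forms of eq. (xi_zeta_value2_ih)
            xi_ref = -2.0 / ((k - m) * np.pi) * np.sqrt(m / k)
            zeta_ref = 2.0 / ((k + m) * np.pi) * np.sqrt(m / k)
        else:                  # k + m even: the overlap vanishes
            xi_ref = zeta_ref = 0.0
        err = max(err, abs(xi - xi_ref), abs(zeta - zeta_ref))
```

The quadrature reproduces the closed forms, including the vanishing of the overlaps for $k+m$ even.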
The vacuum expectation value of the energy-momentum tensor after $t=0$ is computed by substituting Eq.~\eqref{eq:phi_g_fin} into Eq.~\eqref{eq:em_null}, and using Eq.~\eqref{eq:bc_ih}, \begin{gather} \langle 0_h | {\bm T}_{\pm \pm} | 0_h \rangle_{t>0} = \sum_{\substack{ m=1 \\ m:{\rm odd}}}^\infty \sum_{\substack{ m'=1 \\ m':{\rm odd}}}^\infty \Big[ \frac{ 8 \langle 0_h | {\bm Q}^2 | 0_h \rangle }{ \pi L \sqrt{ mm' }} {\rm Re} ( \pd_\pm g_m \pd_\pm g_{m'} + \pd_\pm g_m \pd_\pm g^\ast_{m'} ) \nn \\ + \frac{4i}{\sqrt{ \pi^2 m^3m'^{3} }} {\rm Im} [ (m+m') \pd_\pm g_m \pd_\pm g_{m'} - (m-m') \pd_\pm g_m \pd_\pm g^\ast_{m'} ] \Big] \nn \\ + \sum_{k=1}^\infty \sum_{m=1}^\infty \sum_{m'=1}^\infty \Big[ ( \xi_{km} \zeta_{km'} + \zeta_{km}\xi_{km'} ){\rm Re}( \pd_\pm g_m \pd_\pm g_{m'}) + ( \xi_{km} \xi_{km'} + \zeta_{km}\zeta_{km'} ){\rm Re}( \pd_\pm g_m \pd_\pm g^\ast_{m'}) \Big], \label{eq:NNDD_t>0_form_ih} \end{gather} where we have symmetrized with respect to the dummy indices $m$ and $m'$ and used the fact that $\xi_{km}$ and $\zeta_{km}$ are real.
Using explicit form of mode functions~\eqref{eq:g_fin} and Bogoliubov coefficients~\eqref{eq:xi_zeta_value2_ih}, we obtain \begin{gather} \langle 0_h | {\bm T}_{\pm \pm} | 0_h \rangle_{t>0} = \frac{ 4 \langle 0_h | {\bm Q}^2 | 0_h \rangle }{ L^3 } [ \sum_{\substack{ m=1 \\ m:{\rm odd}}}^\infty \cos( q_m z_\pm ) ]^2 - \frac{ 4 i }{ \pi L^2 } \sum_{\substack{ m=1 \\ m:{\rm odd}}}^\infty \frac{ \sin( q_m z_\pm ) }{ m } \sum_{\substack{ m=1 \\ m:{\rm odd}}}^\infty \cos( q_m z_\pm ) \nn \\ + \frac{ 4 }{ \pi L^2 } \sum_{\substack{ k=1 \\ k:{\rm odd}}}^\infty \Big( \frac{1}{k} [ \sum_{\substack{ m=2 \\ m:{\rm even}}}^\infty \cos( q_m z_\pm ) + k^2 \sum_{\substack{ m=2 \\ m:{\rm even}}}^\infty \frac{ \cos( q_m z_\pm )}{m^2-k^2} ]^2 + k [ \sum_{\substack{ m=2 \\ m:{\rm even}}}^\infty \frac{m \sin( q_m z_\pm )}{m^2-k^2} ]^2 \Big) \nn \\ + \frac{ 4 }{ \pi L^2 } \sum_{\substack{ k=2 \\ k:{\rm even}}}^\infty \Big( \frac{1}{k} [ \sum_{\substack{ m=1 \\ m:{\rm odd}}}^\infty \cos( q_m z_\pm ) + k^2 \sum_{\substack{ m=1 \\ m:{\rm odd}}}^\infty \frac{ \cos( q_m z_\pm )}{m^2-k^2} ]^2 + k [ \sum_{\substack{ m=1 \\ m:{\rm odd}}}^\infty \frac{m \sin( q_m z_\pm )}{m^2-k^2} ]^2 \Big). 
\label{eq:NNDD_t>0_value1_ih} \end{gather} The summations over odd $m$ in the first two terms of Eq.~\eqref{eq:NNDD_t>0_value1_ih}, both of which are the contributions of the zero-mode, are computed using the following formulas, \begin{align} \sum_{\substack{ k=1 \\ k:{\rm odd}}}^\infty \frac{1}{k} \sin ( \frac{ 2k\pi }{ a }y ) =& \frac{\pi}{4} \sum_{\ell=-\infty}^\infty (-1)^\ell \Pi_0^{a/2} (y- \frac{a}{2}\ell), \;\;\; ( -\infty < y < \infty ), \label{eq:sumForm1_ih} \\ \sum_{\substack{ k=1 \\ k:{\rm odd}}}^\infty \cos ( \frac{2k\pi}{a} y ) =& \frac{a}{4} \sum_{\ell=-\infty}^\infty (-1)^\ell \delta (y-\frac{a}{2}\ell ), \;\;\; ( -\infty < y < \infty ), \label{eq:sumForm2_ih} \end{align} where $ \Pi_a^b (x) $ is the rectangular function defined as \be \Pi_a^b (x) := \int_a^b \delta (x-y)dy = \begin{cases} 0 & (x<a, \; b<x) \\ \frac12 & (x=a, b) \\ 1 & (a<x<b) \end{cases}. \ee The remaining summations over odd and even $m$ in Eq.~\eqref{eq:NNDD_t>0_value1_ih} are computed using formulas~\eqref{eq:sumForm1}, \eqref{eq:sumForm2}, \eqref{eq:sumForm3}, \eqref{eq:maru_typo1}, and \eqref{eq:maru_typo2} in addition to the above formulas, to obtain \begin{align} \langle 0_h | {\bm T}_{\pm \pm} | 0_h \rangle_{t>0} = \left( \frac{ \langle 0_h | {\bm Q}^2 | 0_h \rangle }{ L } + \frac{1}{\pi} \sum_{k=1}^\infty \frac{1}{k} \right) \sum_{\ell=-\infty}^\infty \delta^2 (z_\pm - \ell L ) + \begin{cases} 0 & ( z_\pm = \ell L, \ell \in {\bm Z}) \\ \displaystyle \frac{\pi}{4L^2} \sum_{k=1}^\infty k & ({\rm otherwise}) \end{cases}. \label{eq:NNDD_t>0_value2_ih} \end{align} After setting $L=\pi$ and regularizing the diverging summation as $\sum_{k=1}^\infty k = -\frac{1}{12}$ by the $\zeta$-function regularization, Eq.~\eqref{eq:NNDD_t>0_value2_ih} should be equal to Eq.~(31) of Ref.~\cite{Ishibashi:2002ac}. The vanishing of the Casimir energy on the null lines in Eq.~\eqref{eq:NNDD_t>0_value2_ih}, however, has no counterpart in Eq.~(31) of Ref.~\cite{Ishibashi:2002ac}.
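Away from the lattice points, formula~\eqref{eq:sumForm1_ih} is the Fourier series of an alternating square wave and can be checked pointwise; the distributional formula~\eqref{eq:sumForm2_ih} admits no such pointwise check. A minimal numerical sketch (numpy assumed; $a=2$ and the sample points are arbitrary):

```python
import numpy as np

a = 2.0
k = 2.0 * np.arange(1_000_000) + 1.0      # odd summation index

def lhs(y):
    return np.sum(np.sin(2.0 * k * np.pi * y / a) / k)

# The right-hand side is an alternating square wave of height pi/4:
# +pi/4 on (0, a/2), -pi/4 on (a/2, a), and 0 on the lattice y = a*l/2,
# where the two half-height rectangle edges cancel.
checks = [(0.3, np.pi / 4), (1.5, -np.pi / 4), (1.0, 0.0)]
err = max(abs(lhs(y) - ref) for y, ref in checks)
```

The truncated series matches the square wave at all three sample points to well below $10^{-4}$.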
As pointed out at the end of Sec.~\ref{sec:ND_fin}, it should be stressed again that the second term in Eq.~\eqref{eq:NNDD_t>0_value2_ih} represents both the ambient Casimir energy and {\it the divergent flux on the null lines} $( z_\pm = \ell L, \ell \in {\bm Z})$ after an appropriate regularization (see Sec.~\ref{sec:inf} and Appendix~\ref{sec:green}), rather than a constant correction to the first divergent term. While we have derived Eq.~\eqref{eq:NNDD_t>0_value2_ih} keeping the parallelism with the other analyses in the present paper, it is unclear where the discrepancy comes from. In the next subsection, therefore, we re-derive Eq.~\eqref{eq:NNDD_t>0_value2_ih} with a method similar to the one in Ref.~\cite{Ishibashi:2002ac}. \subsection{Origin of discrepancy} Substituting Eq.~\eqref{eq:phi_g_fin} into Eq.~\eqref{eq:em_null}, and using Eq.~\eqref{eq:bc_ih}, the vacuum expectation value of the energy-momentum tensor after $t=0$ is written as \begin{gather} \langle 0_h | {\bm T}_{\pm\pm} | 0_h \rangle_{t>0} = \sum_{m=1}^\infty \sum_{m'=1}^\infty \Big[ ( \langle 0_h | {\bm \xi}_m {\bm \xi}_{m'} | 0_h \rangle + \sum_{k=1}^\infty \xi_{km} \zeta_{km'}^\ast ) \pd_\pm g_m \pd_\pm g_{m'} \nn \\ + ( \langle 0_h | {\bm \xi}_m {\bm \xi}_{m'}^\dagger | 0_h \rangle + \sum_{k=1}^\infty \xi_{km} \xi_{km'}^\ast ) \pd_\pm g_m \pd_\pm g_{m'}^\ast + ( \langle 0_h | {\bm \xi}_m^\dagger {\bm \xi}_{m'} | 0_h \rangle + \sum_{k=1}^\infty \zeta_{km} \zeta_{km'}^\ast ) \pd_\pm g_m^\ast \pd_\pm g_{m'} \nn \\ + ( \langle 0_h | {\bm \xi}_m^\dagger {\bm \xi}_{m'}^\dagger | 0_h \rangle + \sum_{k=1}^\infty \zeta_{km} \xi_{km'}^\ast ) \pd_\pm g_m^\ast \pd_\pm g_{m'}^\ast \Big].
\label{eq:NNDD_t>0_value3_ih} \end{gather} Using explicit form of Bogoliubov coefficients \eqref{eq:xi_zeta_value1_ih} and \eqref{eq:xi_zeta_value2_ih}, and mode function~\eqref{eq:g_fin}, this quantity is rewritten in a compact form, \begin{gather} \langle 0_h | {\bm T}_{\pm\pm} | 0_h \rangle_{t>0} = \frac{1}{L^3} \sum_{\substack{ m=-\infty \\ m:{\rm odd}}}^\infty \left( \langle 0_h | {\bm Q}^2 | 0_h \rangle + \frac{ L }{ m\pi } \right) e^{-iq_m z_\pm } \sum_{\substack{ m'=-\infty \\ m':{\rm odd}}}^\infty e^{-iq_{m'}z_\pm } \nn \\ + \frac{1}{\pi L^2} \sum_{k=1}^\infty \frac{1}{k} \sum_{\substack{ m=-\infty \\ m:{\rm odd}}}^\infty \frac{ me^{ -i (q_m - q_k ) z_\pm } }{ m-k } \delta_{m-k:{\rm odd}} \sum_{\substack{ m'=-\infty \\ m':{\rm odd}}}^\infty \frac{ m'e^{ -i (q_{m'} + q_k ) z_\pm } }{ m'+k } \delta_{m'+k:{\rm odd}} . \label{eq:NNDD_t>0_value4_ih} \end{gather} The summations over odd $m$ and $m'$ in Eq.~\eqref{eq:NNDD_t>0_value4_ih} can be evaluated with the following formulas, \begin{align} \sum_{\substack{ k=-\infty \\ k:{\rm odd}}}^\infty \frac{1}{k} \exp( -i \frac{ 2k \pi }{ a } y ) &= -\frac{ i \pi }{2} \sum_{\ell=-\infty}^\infty (-1)^\ell \Pi_0^{a/2} (y-\frac{a}{2} \ell), \label{eq:sumForm3_ih} \\ \sum_{\substack{ k=-\infty \\ k:{\rm odd}}}^\infty \exp( -i \frac{ 2k \pi }{ a } y ) &= \frac{a}{2}\sum_{\ell=-\infty}^\infty (-1)^\ell \delta (y-\frac{a}{2} \ell), \label{eq:sumForm4_ih} \end{align} which are equivalent to Eqs.~\eqref{eq:sumForm1_ih} and \eqref{eq:sumForm2_ih}, respectively. Finally, in order to obtain the final result, it is necessary to use the following relation, \begin{gather} \sum_{\ell=-\infty}^\infty (-1)^\ell \Pi_0^L (z_\pm - \ell L) \sum_{\ell'=-\infty}^\infty (-1)^{\ell'} \Pi_0^L (z_\pm - \ell' L) = \begin{cases} 0 & (z_\pm = \ell L, \; \ell \in {\bm Z} ) \\ 1 & ({\rm otherwise}) \\ \end{cases}. \label{eq:PiSquared} \end{gather} Then, we obtain Eq.~\eqref{eq:NNDD_t>0_value2_ih}. 
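Relation~\eqref{eq:PiSquared} hinges on the edge convention $\Pi_a^b(a)=\Pi_a^b(b)=\tfrac12$: at $z_\pm=\ell L$ the two adjacent rectangles contribute half heights with opposite signs and cancel. A small Python sketch (the truncation of the sum over $\ell$ is for illustration only):

```python
L = 1.0

def rect(x, a, b):
    """Rectangular function Pi_a^b: 1 inside (a, b), 1/2 at the edges, 0 outside."""
    if x < a or x > b:
        return 0.0
    if x == a or x == b:
        return 0.5
    return 1.0

def S(z, nmax=50):
    """Truncated alternating sum  sum_l (-1)^l Pi_0^L(z - l L)."""
    return sum((-1) ** l * rect(z - l * L, 0.0, L) for l in range(-nmax, nmax + 1))

# Off the lattice S = +/-1, so S^2 = 1; on the lattice z = l L, S = 0.
off = [S(z) ** 2 for z in (0.5, 1.5, -0.25, 2.75)]
on = [S(z) ** 2 for z in (0.0, 1.0, -2.0)]
```

The squared sum is $1$ off the lattice and exactly $0$ on it, which is precisely the right-hand side of Eq.~\eqref{eq:PiSquared}.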
It seems that Ref.~\cite{Ishibashi:2002ac} overlooked the fact that the left-hand side of Eq.~\eqref{eq:PiSquared} vanishes on the null lines $ z_- = 0$ and $ z_+ = L $. This would be the origin of the discrepancy between our result and theirs. \section{Semi-infinite cavity} \label{sec:inf} In the rest of this paper, we investigate the particle creation by the instantaneous change of boundary condition in a semi-infinite cavity, which corresponds to the limit $L \to +\infty$ of the finite-cavity model in Sec.~\ref{sec:fin}. We will see that some simplifications occur in this limit: one needs only simple integral formulas rather than the non-trivial summation formulas of Sec.~\ref{sec:fin}. The analysis in the semi-infinite space $x \in [0,+\infty)$ can serve as a footing for generalizing the present analysis, for example, to higher-dimensional models by regarding the spatial coordinate $x$ as a radial coordinate of higher-dimensional spaces (see \cite{Zhou:2016hsh} for a relevant higher-dimensional consideration). While the Bogoliubov transformation will be used in this section again in order to keep the parallelism with the previous sections, the results will be re-derived in Appendix~\ref{sec:green} with an independent method using the Green functions, which naturally involves the point-splitting regularization of the vacuum expectation value of the energy-momentum tensor. \subsection{Quantization of massless scalar field} \label{sec:quant_inf} We consider a free massless scalar field in the semi-infinite cavity, whose equation of motion is given by Eq.~\eqref{eq:eom} with $L \to +\infty$. At the left boundary $x=0$, we consider two kinds of boundary conditions. One is the Neumann boundary condition~\eqref{eq:bc2}; the other is the Dirichlet boundary condition~\eqref{eq:bc3}.
While the Neumann boundary condition \eqref{eq:bc2} is satisfied, a natural set of positive-energy mode functions $\{ f_p \} $, labeled by a continuous parameter $p$, is given by \be f_p (t,x) = \frac{1}{ \sqrt{ \pi p } }e^{ -ipt } \cos ( p x ) , \;\;\; p > 0. \label{eq:f_inf} \ee This mode function satisfies the following orthonormality conditions, \be \langle f_p, f_{p'} \rangle = - \langle f_p^\ast, f_{p'}^\ast \rangle = \delta ( p-p' ), \;\;\; \langle f_p, f_{p'}^\ast \rangle = 0, \label{eq:f_ortho_inf} \ee where the integration range of the Klein-Gordon inner product, Eq.~\eqref{eq:IP}, is from $0$ to $+\infty$. While the Dirichlet boundary condition \eqref{eq:bc3} is satisfied, a natural set of positive-energy mode functions $ \{ g_q \} $ is given by \be g_q (t,x) = \frac{1}{ \sqrt{ \pi q } } e^{-i q t} \sin ( q x ), \;\;\; q > 0. \label{eq:g_inf} \ee This mode function satisfies the following orthonormality conditions, \be \langle g_q, g_{q'} \rangle = - \langle g_q^\ast, g_{q'}^\ast \rangle = \delta (q-q'), \;\;\; \langle g_q,g_{q'}^\ast \rangle = 0. \label{eq:g_ortho_inf} \ee Associated with the above two sets of mode functions, $\{ f_p \}$ and $\{ g_q \}$, there are two ways to quantize the scalar field. Namely, we can expand the scalar field in terms of either set of mode functions, \begin{align} {\bm \phi} &= \int_0^\infty dp ( {\bm a}_p f_p + {\bm a}_p^\dagger f_p^\ast), \label{eq:phi_f_inf} \\ {\bm \phi} &= \int_0^\infty dq ( {\bm b}_q g_q + {\bm b}_q^\dagger g_q^\ast), \label{eq:phi_g_inf} \end{align} where the expansion coefficients are subject to the commutation relations, \begin{align} [ {\bm a}_p, {\bm a}_{p'}^\dagger ] = \delta (p-p'), \;\;\; & [ {\bm a}_p, {\bm a}_{p'} ] = 0, \label{eq:comm_a_inf} \\ [ {\bm b}_q, {\bm b}_{q'}^\dagger ] = \delta (q-q') , \;\;\; & [ {\bm b}_q, {\bm b}_{q'} ] = 0.
\label{eq:comm_b_inf} \end{align} Operators ${\bm a}_p$ and ${\bm b}_q$ (resp.\ ${\bm a}^\dagger_p$ and ${\bm b}^\dagger_q$) are interpreted as annihilation (resp.~creation) operators. Accordingly, we can define two normalized vacuum states, \begin{align} {\bm a}_p | 0_f \rangle = 0, \;\;\; & \forall p >0, \;\;\; \langle 0_f | 0_f \rangle = 1, \label{eq:0f_inf} \\ {\bm b}_q | 0_g \rangle = 0, \;\;\; & \forall q >0, \;\;\; \langle 0_g | 0_g \rangle = 1. \label{eq:0g_inf} \end{align} Then, $| 0_f \rangle$ (resp.~$| 0_g \rangle$) is the state in which no particle corresponding to $f_p$ (resp.~$g_q$) exists. \subsection{Particle creation by instantaneous change of boundary condition} \label{sec:creation_inf} Given the above quantization of the scalar field in the semi-infinite cavity, we investigate how the vacuum is excited when the boundary condition at $x=0$ instantaneously changes from Neumann to Dirichlet (N-D) in Sec.~\ref{sec:ND_inf} and in the reverse order (D-N) in Sec.~\ref{sec:DN_inf}. \subsubsection{From Neumann to Dirichlet} \label{sec:ND_inf} \begin{figure} \begin{center} \begin{minipage}[c]{0.8\textwidth} \begin{center} \includegraphics[height=5cm]{06_ND_inf.eps} \caption{The boundary condition at the left end of the domain $(x=0)$ changes instantaneously at $t=0$ from Neumann (dashed) to Dirichlet (solid). Spatial configurations of mode functions $ f_p $ and $g_q$ are schematically depicted.} \label{fig:ND_inf} \end{center} \end{minipage} \end{center} \end{figure} We assume that the boundary condition at $x=0$ is Neumann~\eqref{eq:bc2} for $t<0$ and Dirichlet~\eqref{eq:bc3} for $t>0$, and that the quantum field is in the vacuum $| 0_f \rangle$, defined by Eq.~\eqref{eq:0f_inf}. See Fig.~\ref{fig:ND_inf} for a schematic picture of the situation.
Let us expand $f_p$ in terms of $g_q$ as \be f_p = \int_0^\infty dq ( \alpha_{pq} g_q + \beta_{pq} g_q^\ast ), \label{eq:fg_inf} \ee where the expansion coefficients are given by \be \alpha_{pq} = \langle g_q , f_p \rangle, \;\;\; \beta_{pq} = - \langle g_q^\ast, f_p \rangle. \label{eq:alpha_beta_form_inf} \ee Using Eqs.~\eqref{eq:f_inf} and \eqref{eq:g_inf}, we obtain \be \alpha_{pq} = - \frac{ 1 }{ (p-q) \pi } \sqrt{ \frac{ q }{ p } }, \;\;\; \beta_{pq} = \frac{ 1 }{ (p+q) \pi } \sqrt{ \frac{ q }{ p } }, \label{eq:alpha_beta_value_inf} \ee where we have used the integral formula $\int_0^\infty e^{iax} dx = ia^{-1} \; ( -\infty < a <\infty)$. Substituting Eq.~\eqref{eq:fg_inf} into Eq.~\eqref{eq:phi_f_inf}, and comparing it with Eq.~\eqref{eq:phi_g_inf}, we obtain \be {\bm b}_q = \int_0^\infty dp ( \alpha_{pq} {\bm a}_p + \beta_{pq}^\ast {\bm a}_p^\dagger ). \label{eq:ba_inf} \ee Substituting Eq.~\eqref{eq:ba_inf} into Eq.~\eqref{eq:comm_b_inf}, and using Eq.~\eqref{eq:comm_a_inf}, we obtain the unitarity relations, \begin{align} \int_0^\infty dp ( \alpha_{pq} \alpha_{pq'}^\ast - \beta_{pq}^\ast \beta_{pq'} ) = \delta (q-q'), \;\;\; \int_0^\infty dp ( \alpha_{pq} \beta_{pq'}^\ast - \beta_{pq}^\ast \alpha_{pq'} ) = 0. \label{eq:UR_ND_inf} \end{align} In Appendix~\ref{sec:UR_ND_inf}, we prove that the Bogoliubov coefficients \eqref{eq:alpha_beta_value_inf} satisfy Eq.~\eqref{eq:UR_ND_inf}. The spectrum of created particles is computed as \be \langle 0_f | {\bm b}_q^\dagger {\bm b}_q | 0_f \rangle = \int_0^\infty dp | \beta_{pq} |^2 = \frac{1}{\pi^2} \int_0^\infty dp \frac{ q }{ p ( p+q )^2 }. \label{eq:bb_inf} \ee Both this quantity and its integral over $q$ diverge due to the contribution from the infrared regime.
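The kernel behind Eq.~\eqref{eq:alpha_beta_value_inf} is the distributional integral $\int_0^\infty \sin(qx)\cos(px)\,dx = q/(q^2-p^2)$, understood in the Abel-regularized sense implied by $\int_0^\infty e^{iax}dx=ia^{-1}$. A minimal numerical sketch with a damping factor $e^{-\epsilon x}$ (numpy assumed; $p$, $q$, and $\epsilon$ are illustrative choices, and the $t=0$ reduction of the inner product to $(p\pm q)$ times the overlap is our reading of Eq.~\eqref{eq:IP}):

```python
import numpy as np

p, q, eps = 3.0, 2.0, 1e-2          # illustrative momenta and Abel regulator
x = np.linspace(0.0, 1000.0, 1_000_001)
dx = x[1] - x[0]

f = np.sin(q * x) * np.cos(p * x) * np.exp(-eps * x)
I = dx * (np.sum(f) - 0.5 * (f[0] + f[-1]))   # trapezoidal rule
I_ref = q / (q**2 - p**2)                     # Abel-regularized value of the integral

# Reading the Klein-Gordon inner products at t = 0 as (p +/- q) times the overlap:
alpha = (p + q) * I / (np.pi * np.sqrt(p * q))
beta = -(p - q) * I / (np.pi * np.sqrt(p * q))
alpha_ref = -np.sqrt(q / p) / ((p - q) * np.pi)
beta_ref = np.sqrt(q / p) / ((p + q) * np.pi)
```

The damped integral converges to the regularized kernel as $\epsilon \to 0$, reproducing the closed forms of $\alpha_{pq}$ and $\beta_{pq}$.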
The vacuum expectation value of energy-momentum tensor before the change of boundary condition at $t=0$ is computed by substituting Eq.~\eqref{eq:phi_f_inf} into Eq.~\eqref{eq:em_null}, and using Eqs.~\eqref{eq:comm_a_inf} and \eqref{eq:f_inf}, as \begin{align} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t<0} = \int_0^\infty dp | \pd_\pm f_p |^2 = \frac{1}{4\pi} \int_0^\infty dp p. \label{eq:ND_t<0_inf} \end{align} Unlike the finite-cavity case, there is no Casimir energy in this semi-infinite case. The above result just represents the divergent energy density due to the zero-point oscillation. Thus, the renormalized vacuum expectation value obtained by subtracting such a zero-point contribution identically vanishes everywhere as Eq.~\eqref{VEVf_ren2}. The vacuum expectation value of energy-momentum tensor after $t=0$ is computed by substituting Eq.~\eqref{eq:phi_g_inf} into Eq.~\eqref{eq:em_null}, and using Eq.~\eqref{eq:ba_inf}, as \begin{gather} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \int_0^\infty \int_0^\infty \int_0^\infty dp dq dq' [ ( \alpha_{pq} \beta_{pq'} + \alpha_{pq'} \beta_{pq} ) {\rm Re} ( \pd_\pm g_q \pd_\pm g_{q'} ) \nn \\ + ( \alpha_{pq} \alpha_{pq'} + \beta_{pq} \beta_{pq'} ) {\rm Re} ( \pd_\pm g_q \pd_\pm g_{q'}^\ast ) ]. \label{eq:ND_t>0_form_inf} \end{gather} To derive Eq.~\eqref{eq:ND_t>0_form_inf}, we symmetrize it with respect to integration variables $q$ and $q'$, and use the fact that $\alpha_{pq}$ and $\beta_{pq}$ are real. Using explicit expressions of Bogoliubov coefficients \eqref{eq:alpha_beta_value_inf} and mode function \eqref{eq:g_inf}, we obtain \begin{gather} \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{1}{\pi^3} \int_0^\infty dp \Bigg( \frac{1}{p} [ \int_0^\infty dq \cos ( qz_\pm ) + p^2 \int_0^\infty dq \frac{ \cos ( qz_\pm ) }{ q^2-p^2 } ]^2 + p [ \int_0^\infty dq \frac{ q \sin ( qz_\pm ) }{ q^2-p^2 } ]^2 \Bigg). 
\label{eq:ND_t>0_value1_inf} \end{gather} The integration over $q$ in Eq.~\eqref{eq:ND_t>0_value1_inf} can be computed to give \be \langle 0_f | {\bm T}_{\pm\pm} | 0_f \rangle_{t>0} = \frac{ \delta^2 (z_\pm) }{ \pi } \int_0^\infty \frac{dp}{p} + \frac{ {\rm sgn}^2 (z_\pm) }{ 4\pi } \int_0^\infty dp p, \label{eq:ND_t>0_value2_inf} \ee where ${\rm sgn}$ denotes the sign function, \be {\rm sgn} (a) := \begin{cases} \pm 1 & (a \gtrless 0) \\ 0 & (a=0) \\ \end{cases}. \ee Note that we have used the following integration formulas, \begin{align} \int_0^\infty \cos (ax) dx &=\pi \delta (a), \;\;\; ( -\infty < a <\infty ), \label{eq:intForm1} \\ \int_0^\infty \frac{ \cos (ax) }{ x^2-b^2 }dx &= -{\rm sgn} ( a ) \frac{\pi}{2b} \sin (ab), \;\;\; ( -\infty < a <\infty , \; b>0), \label{eq:intForm2} \\ \int_0^\infty \frac{ x \sin (ax) }{ x^2-b^2 }dx &= {\rm sgn} ( a ) \frac{\pi}{2} \cos (ab), \;\;\; ( -\infty < a <\infty, \; b>0). \label{eq:intForm3} \end{align} See Appendix~\ref{sec:int} for the derivation of the second and third formulas. Let us consider the meaning of the two terms in Eq.~\eqref{eq:ND_t>0_value2_inf}. The first term, the delta function squared multiplied by a divergent integral, represents the diverging flux emanating from the origin $(t,x)=(0,0)$ and localized on the null line $z_-=0$. The divergent factor also involves an infrared divergence, since there is no infrared cutoff introduced by a finite $L$. The dependence of the energy density on the squared delta function also implies that the total emitted energy diverges. The second term, at first glance, seems to represent an ambient divergent energy density and its vanishing on the null line emanating from the origin (note that ${\rm sgn}(0)=0$). As will be seen below, however, this is not the case. Namely, the divergence at $z_\pm \neq 0$ merely represents the energy due to the zero-point oscillation, just like Eq.~\eqref{eq:ND_t<0_inf}.
Therefore, the regularized vacuum expectation value of the energy-momentum tensor should be defined by subtracting such a diverging quantity, which is distributed uniformly in space and time. As a result of such a subtraction, the divergence appears {\it on} the null line $z_- = 0$. Such a renormalized vacuum expectation value of the energy-momentum tensor is computed in Appendix~\ref{sec:green} with the Green-function method, which naturally involves the point-splitting regularization. The result is \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{ t>0}^{\rm ren} = \frac{ \delta^2 (z_\pm) }{ \pi }\int_0^\infty \frac{dp}{p} + \begin{cases} \displaystyle \lim_{z_\pm'\to z_\pm}\frac{ 1 }{ 4\pi ( z_\pm - z_\pm' )^2 } & (z_\pm = 0) \\ 0 & (\mbox{otherwise}) \end{cases}. \label{VEVf_ren_t>0} \end{align} Here, $z_\pm$ and $z_\pm'$ are the coordinates of the two points on which the Green functions are evaluated. As explained above, the second term diverges on the null line and vanishes elsewhere. Thus, the two components of diverging flux remain even after the renormalization and propagate along the null line $z_-=0$. \subsubsection{From Dirichlet to Neumann} \label{sec:DN_inf} We assume that the boundary condition at $x=0$ is Dirichlet~\eqref{eq:bc3} for $t<0$ and Neumann~\eqref{eq:bc2} for $t>0$, and that the quantum field is in the vacuum $| 0_g \rangle$, given by Eq.~\eqref{eq:0g_inf}. See Fig.~\ref{fig:DN_inf} for a schematic picture of the physical situation. Then, we investigate how the vacuum is excited by computing the spectrum and energy flux of created particles. Let us expand $g_q$ in terms of $f_p$ as \be g_q = \int_0^\infty dp ( \rho_{qp} f_p + \sigma_{qp} f_p^\ast ).
\label{eq:gf_inf} \ee Here, the expansion coefficients are given by \be \rho_{qp} = \langle f_p,g_q \rangle = \alpha_{pq}^\ast, \;\;\; \sigma_{qp} = - \langle f_p^\ast, g_q \rangle = - \beta_{pq}, \label{eq:rho_sigma_inf} \ee where $\alpha_{pq}$ and $\beta_{pq}$ are given by Eq.~\eqref{eq:alpha_beta_value_inf}. Substituting Eq.~\eqref{eq:gf_inf} into Eq.~\eqref{eq:phi_g_inf}, and comparing it with Eq.~\eqref{eq:phi_f_inf}, we obtain \be {\bm a}_p = \int_0^\infty dq ( \rho_{qp} {\bm b}_q + \sigma_{qp}^\ast {\bm b}_q^\dagger ). \label{eq:ab_inf} \ee Substituting Eq.~\eqref{eq:ab_inf} into Eq.~\eqref{eq:comm_a_inf}, and using Eq.~\eqref{eq:comm_b_inf}, we obtain the unitarity relations, \begin{align} \int_0^\infty dq ( \rho_{qp} \rho_{qp'}^\ast - \sigma_{qp}^\ast \sigma_{qp'} ) = \delta ( p-p' ), \;\;\; \int_0^\infty dq ( \rho_{qp} \sigma_{qp'}^\ast - \sigma_{qp}^\ast \rho_{qp'} ) = 0. \label{eq:UR_DN_inf} \end{align} In Appendix \ref{sec:UR_DN_inf}, it is shown that Bogoliubov coefficients \eqref{eq:rho_sigma_inf} indeed satisfy Eq.~\eqref{eq:UR_DN_inf}. \begin{figure} \begin{center} \begin{minipage}[c]{0.8\textwidth} \begin{center} \includegraphics[height=5cm]{07_DN_inf.eps} \caption{The boundary condition at the left end of domain $(x=0)$ instantaneously changes at $t=0$ from Dirichlet (solid) to Neumann (dashed). Spatial configurations of mode functions $ f_p $ and $g_q $ are schematically depicted.} \label{fig:DN_inf} \end{center} \end{minipage} \end{center} \end{figure} The spectrum is computed as \be \langle 0_g | {\bm a}_p^\dagger {\bm a}_p | 0_g \rangle = \int_0^\infty dq | \sigma_{qp} |^2 = \frac{1}{ \pi^2 } \int_0^\infty dq \frac{ q }{ p (p+q)^2 }, \label{eq:aa_inf} \ee which is divergent. 
The expectation value of energy-momentum tensor before the change of boundary condition at $t=0$ is computed by substituting Eq.~\eqref{eq:phi_g_inf} into Eq.~\eqref{eq:em_null}, and using Eqs.~\eqref{eq:comm_b_inf} and \eqref{eq:g_inf}, as \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t<0} = \int_0^\infty dq | \pd_\pm g_q |^2 = \frac{ 1 }{ 4\pi } \int_0^\infty dq q. \label{eq:DN_t<0_inf} \end{align} This represents the divergence due to the zero-point oscillation, and the regularized value vanishes as given by Eq.~\eqref{VEVg_3}. The expectation value of energy-momentum tensor for $t>0$ is computed by substituting Eq.~\eqref{eq:phi_f_inf} into Eq.~\eqref{eq:em_null}, and using Eq.~\eqref{eq:ab_inf}, as \begin{gather} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \int_0^\infty \int_0^\infty \int_0^\infty dq dp dp' [ ( \rho_{qp} \sigma_{qp'} + \rho_{qp'} \sigma_{qp} ) {\rm Re} ( \pd_\pm f_p \pd_\pm f_{p'} ) \nn \\ + ( \rho_{qp} \rho_{qp'} + \sigma_{qp} \sigma_{qp'} ) {\rm Re} ( \pd_\pm f_p \pd_\pm f_{p'}^\ast ) ], \label{eq:DN_t>0_form_inf} \end{gather} where we symmetrize it with respect to integration variables $p$ and $p'$, and use the fact that $\rho_{qp}$ and $\sigma_{qp}$ are real. Substituting explicit form of the Bogoliubov coefficients, given by Eqs.~\eqref{eq:rho_sigma_inf} and \eqref{eq:alpha_beta_value_inf}, and mode function \eqref{eq:g_inf} into Eq.~\eqref{eq:DN_t>0_form_inf}, we have \begin{align} &\langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \frac{1}{\pi^3} \int_0^\infty dq \left( q^3 [ \int_0^\infty dp \frac{ \cos (pz_\pm) }{ p^2-q^2 } ]^2 + q [ \int_0^\infty dp \frac{ p \sin ( pz_\pm ) }{ p^2 - q^2 } ]^2 \right). \label{eq:DN_t>0_value1_inf} \end{align} The integrations over $p$ in Eq.~\eqref{eq:DN_t>0_value1_inf} are evaluated using formulas \eqref{eq:intForm2} and \eqref{eq:intForm3} to obtain \begin{align} &\langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle_{t>0} = \frac{ {\rm sgn}^2 (z_\pm) }{ 4\pi } \int_0^\infty dq q. 
\label{eq:DN_t>0_value2_inf} \end{align} Again, the result \eqref{eq:DN_t>0_value2_inf} seems to represent a diverging flux and its vanishing on the null line emanating from the origin. After subtracting the uniform contribution from the zero-point oscillation, however, the divergence appears {\it on} the null line. This is explicitly shown by adopting the Green-function method in Appendix~\ref{sec:green}. The result is given by \begin{align} \langle 0_g | {\bm T}_{\pm\pm} | 0_g \rangle^{{\rm ren}}_{t>0} = \begin{cases} \displaystyle \lim_{z_\pm' \to z_\pm} \frac{1}{4\pi (z_\pm - z_\pm')^2 } & (z_\pm =0) \\ 0 & ( \mbox{otherwise} )\\ \end{cases}. \label{VEVg_t>0_4} \end{align} Here, $z_\pm$ and $z_\pm'$ are the coordinates of the two points on which the Green functions are evaluated. The flux diverges on the null line and vanishes elsewhere. Thus, only one component of diverging flux remains after the renormalization and propagates along the null line $z_-=0$. \section{Conclusion} \label{sec:conc} We have investigated the particle creation due to the instantaneous change of boundary condition (BC) in the one-dimensional (1D) finite cavity (Secs.~\ref{sec:fin} and \ref{sec:ih}) and semi-infinite cavity (Sec.~\ref{sec:inf}) by computing the vacuum expectation value of the energy-momentum tensor for the free massless Klein-Gordon scalar field. The BC changes from Neumann to Dirichlet (N-D) in Secs.~\ref{sec:fin} and \ref{sec:inf}, from Neumann-Neumann to Dirichlet-Dirichlet (NN-DD) in Sec.~\ref{sec:ih}, and from Dirichlet to Neumann (D-N) in Secs.~\ref{sec:fin} and \ref{sec:inf}. Although any actual change of BC takes a finite interval of time, we believe that these models are capable of extracting the essence of the phenomenon when the BC changes rapidly enough compared with the typical time scales in the system.
In particular, it is plausible that such a situation is realized for gravitational phenomena such as the appearance of strong (or wave-singular) naked singularities~\cite{Ishibashi:2002ac} and topology change of spacetime (or string) in quantum gravity~\cite{Anderson:1986ww}. In addition, the choice of Dirichlet and Neumann BCs introduced no adjustable parameters into the system, which made the whole analysis simple and a good starting point for subsequent considerations. Most models of particle creation due to time-dependent BCs (i.e., the dynamical Casimir effect) would have to reproduce the results in this paper in their limit of infinitely rapid change. Thanks to the above simplifications made in our model, we could obtain almost all the results in completely analytic form. For the finite-cavity N-D (resp.\ D-N) case, the vacuum expectation value of the energy-momentum tensor was obtained as Eq.~\eqref{eq:ND_t>0_value6_fin} (resp.\ \eqref{eq:DN_t>0_value5_fin}). Our result that the fluxes in the N-D and D-N cases consist of two terms and only one term, respectively, seemed to contradict the result in Ref.~\cite{Ishibashi:2002ac}, which analyses the NN-DD case. Therefore, we revisited the NN-DD case in Sec.~\ref{sec:ih} to obtain Eq.~\eqref{eq:NNDD_t>0_value2_ih}, which is consistent with the result in Sec.~\ref{sec:fin}. The fluxes in the N-D and NN-DD cases consist of both $\delta^2(z_\pm)$ and $1/(z_\pm-z_\pm')^2$ terms, while the flux in the D-N case consists only of the $1/(z_\pm-z_\pm')^2 $ term. Although we cannot determine which term dominates at this point, not only the flux but also the total radiated energy should become large, since the integral of the flux across $z_\pm=0$ diverges.
While the results in the semi-infinite cavity for the N-D case~\eqref{eq:ND_t>0_value2_inf} and D-N case~\eqref{eq:DN_t>0_value2_inf} are quite similar to their respective counterparts in the finite cavity, the analysis for the semi-infinite cavity is much simpler than the finite-cavity case in that non-trivial mathematical formulas, such as the summation formulas of Eqs.~\eqref{eq:sumForm2}, \eqref{eq:maru_typo1}, and so on, are not necessary. This is a technical but important point for succeeding studies such as generalizations of this work (future directions are discussed below). In addition, the vacuum expectation value of the energy-momentum tensor in the semi-infinite cavity was re-derived by the Green-function method in Appendix~\ref{sec:green}. This method not only naturally incorporates the point-splitting regularization but also involves simpler calculations than the Bogoliubov method in the text. Again, this is a technical but important point. Finally, the analysis for the semi-infinite cavity confirmed that the divergence of the flux due to the change of BC is an ultraviolet effect rather than an infrared one, and that it has nothing to do with the Casimir effect, which exists only when $L$ is finite. Let us discuss the origin of the asymmetry between the N-D and D-N cases, for which a similar conjecture was proposed in a previous paper by the present author and his collaborators~\cite{Harada:2016kkq}. The $\delta^2$-term seems to stem from a temporal discontinuity of the mode functions $f_n$ and $f_p$. For instance, in the finite-cavity N-D case, the mode function $f_n$ is given by Eq.~\eqref{eq:f_fin} for $t<0$, which has a non-zero value at $x=0$, but by Eq.~\eqref{eq:fg_fin} for $t>0$, which vanishes at $x=0$. Therefore, $f_n(t,0)$ is discontinuous as a function of time at $t=0$.
On the other hand, in the finite-cavity D-N case, the mode function $g_m$ is given by Eq.~\eqref{eq:g_fin} for $t<0$ and Eq.~\eqref{eq:gf_fin} for $t>0$, both of which vanish at $x=0$. Therefore, $g_m(t,0)$ is continuous as a function of time at $t=0$. Similarly, $h_k(t,0)$ and $h_k(t,L)$ are discontinuous as functions of time at $t=0$ in the NN-DD case, and $f_p(t,0)$ (resp.\ $g_q(t,0)$) is discontinuous (resp.\ continuous) at $t=0$ in the semi-infinite N-D (resp.\ D-N) case. We conjecture that such a discontinuity, which would create a shock from the classical-mechanics point of view, is the origin of the squared delta function. Naively speaking, the results in this paper suggest that the backreaction of the created particles on the spacetime and/or the cavity cannot be ignored. However, the analysis is based on the test-field approximation, and it is therefore too early to assert such an implication of the results. As a next step, it is natural to investigate the backreaction through, say, the semi-classical Einstein equation, in which the right-hand side of the Einstein equation is replaced by the regularized vacuum expectation value of the energy-momentum tensor of the quantized fields~\cite{Birrell:1982ix}. Given the results in this paper, there are several directions to proceed besides investigating the backreaction mentioned above. Firstly, it is natural to generalize the present analysis to higher-dimensional spacetimes (see Ref.~\cite{Zhou:2016hsh} for a highly relevant study). Secondly, it would be important to generalize the BCs in the present paper (i.e., Dirichlet and Neumann) to the Robin-type BC, which takes the form $ \phi (t,x) - a \pd_x \phi (t,x)|_{x=0} = 0 $. Taking different values of the constant $a$ before and after $t=0$, one can generalize the present analysis.
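Explicitly, the Robin BC interpolates between the two BCs considered in this paper (the following limits are a standard observation, added here only for illustration):
\begin{align}
\phi(t,x) - a \pd_x \phi(t,x) \big|_{x=0} = 0
\quad\longrightarrow\quad
\begin{cases}
\phi(t,0) = 0 & ( a \to 0 , \ \mbox{Dirichlet} ) \\
\pd_x \phi(t,x)|_{x=0} = 0 & ( a \to \infty , \ \mbox{Neumann} ) \\
\end{cases}
\end{align}
so that, for example, letting $a$ jump from $\infty$ to $0$ at $t=0$ would reproduce the N-D case as a limit of a single family of BCs.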
By such a generalization, we would be able to verify the above conjecture about the origin of the asymmetry between the N-D and D-N cases, and to understand more deeply how time-dependent BCs excite the quantum vacuum in general. \subsection*{Acknowledgments} The author would like to thank T.~Harada and S.~Kinoshita for useful discussions, and the anonymous referees for various suggestions that improved earlier versions of the manuscript. This work was partially supported by JSPS KAKENHI Grant Numbers 15K05086 and 18K03652.
\section{Introduction} Abstract Meaning Representation (AMR) \cite{banarescu-EtAl:2013:LAW7-ID} is a semantic formalism that encodes the meaning of a sentence as a rooted, directed graph. Figure \ref{fig:example_amr} shows an AMR graph in which the nodes (such as ``describe-01'' and ``person'') represent the concepts, and the edges (such as ``:ARG0'' and ``:name'') represent the relations between the concepts they connect. AMR has proven helpful in other NLP tasks, such as machine translation \cite{jones2012semantics,tamchyna-quirk-galley:2015:S2MT}, question answering \cite{mitra2015addressing}, summarization \cite{takase-EtAl:2016:EMNLP2016} and event detection \cite{li-EtAl:2015:CNewsStory}. \begin{figure} \centering \includegraphics[scale=0.6]{example_amr.pdf} \caption{An example AMR graph meaning ``Ryan's description of himself: a genius.''} \label{fig:example_amr} \end{figure} The task of AMR-to-text generation is to produce a text with the same meaning as a given input AMR graph. The task is challenging, as word tenses and function words are abstracted away when constructing AMR graphs from texts. The translation from AMR nodes to text phrases can be far from literal. For example, as shown in Figure \ref{fig:example_amr}, ``Ryan'' is represented as ``(p / person :name (n / name :op1 ``Ryan''))'', and ``description of'' is represented as ``(d / describe-01 :ARG1 )''. While initial work used statistical approaches \cite{jeff2016amrgen,pourdamghani-knight-hermjakob:2016:INLG,song-EtAl:2017:Short,lampouras-vlachos:2017:SemEval,mille-EtAl:2017:SemEval,gruzitis-gosko-barzdins:2017:SemEval}, recent research has demonstrated the success of deep learning, and in particular the sequence-to-sequence model \cite{sutskever2014sequence}, which has achieved state-of-the-art results on AMR-to-text generation \cite{konstas-EtAl:2017:Long}.
One limitation of sequence-to-sequence models, however, is that they require serialization of input AMR graphs, which adds to the challenge of representing graph structure information, especially when the graph is large. In particular, closely-related nodes, such as parents, children and siblings, can be far away after serialization. It can be difficult for a linear recurrent neural network to automatically induce their original connections from bracketed string forms. To address this issue, we introduce a novel graph-to-sequence model, where a graph-state LSTM is used to encode AMR structures directly. To capture non-local information, the encoder performs graph state transitions by information exchange between connected nodes, with a graph state consisting of all node states. Multiple recurrent transition steps are taken so that information can propagate non-locally, and LSTM \cite{hochreiter1997long} is used to avoid gradient diminishing and bursting in the recurrent process. The decoder is an attention-based LSTM model with a copy mechanism \cite{gu-EtAl:2016:P16-1,gulcehre-EtAl:2016:P16-1}, which helps copy sparse tokens (such as numbers and named entities) from the input. Trained on a standard dataset (LDC2015E86), our model surpasses a strong sequence-to-sequence baseline by 2.3 BLEU points, demonstrating the advantage of graph-to-sequence models for AMR-to-text generation. Our final model achieves a BLEU score of 23.3 on the test set, which is 1.3 points higher than the existing state of the art \cite{konstas-EtAl:2017:Long} trained on the same dataset. When using gigaword sentences as additional training data, our model is consistently better than \newcite{konstas-EtAl:2017:Long} using the same amount of gigaword data, showing the effectiveness of our model on large-scale training sets. We release our code and models at \url{https://github.com/freesunshine0316/neural-graph-to-seq-mp}.
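To make the serialization problem concrete, the depth-first linearization used by the baseline (Section \ref{sec:base}) can be sketched as follows. This is a toy reimplementation, not the released code; the dictionary encoding of the Figure \ref{fig:example_amr} graph and the bracketing rule are our own simplifying assumptions.

```python
# A toy reimplementation (not the released code) of depth-first AMR
# linearization; the dictionary encoding of the example graph is our own.
AMR = {
    "describe": [(":arg0", "person1"), (":arg1", "person2"), (":arg2", "genius")],
    "person1": [(":name", "name")],
    "name": [(":op1", "ryan")],
    "person2": [],
    "genius": [],
    "ryan": [],
}

def linearize(graph, root):
    """Depth-first traversal producing a bracketed token sequence."""
    tokens = [root.rstrip("12")]          # drop disambiguating suffixes
    for label, child in graph[root]:
        sub = linearize(graph, child)
        tokens.append(label)
        tokens += ["("] + sub + [")"] if len(sub) > 1 else sub
    return tokens

seq = linearize(AMR, "describe")
print(" ".join(seq))
# -> describe :arg0 ( person :name ( name :op1 ryan ) ) :arg1 person :arg2 genius
print(seq.index("genius") - seq.index("describe"))   # -> 14
```

Although ``describe'' and ``genius'' are directly connected in the graph, they end up 14 tokens apart in the serialization, which a linear encoder must bridge.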
\section{Baseline: a seq-to-seq model} \label{sec:base} Our baseline is a sequence-to-sequence model, which follows the encoder-decoder framework of \newcite{konstas-EtAl:2017:Long}. \subsection{Input representation} \label{sec:base_inp} Given an AMR graph $G=(V,E)$, where $V$ and $E$ denote the sets of nodes and edges, respectively, we use the depth-first traversal of \newcite{konstas-EtAl:2017:Long} to linearize it and obtain a sequence of tokens $v_1, \dots, v_N$, where $N$ is the number of tokens. For example, the AMR graph in Figure \ref{fig:example_amr} is serialized as ``describe :arg0 ( person :name ( name :op1 ryan ) ) :arg1 person :arg2 genius''. We can see that the distance between ``describe'' and ``genius'', which are directly connected in the original AMR, becomes 14 in the serialization result. A simple way to calculate the representation for each token $v_j$ is to use its word embedding $e_j$: \begin{equation} x_j = W_1 e_{j} + b_1 \textrm{,} \label{eq:base_inp} \end{equation} where $W_1$ and $b_1$ are model parameters for compressing the input vector size. To alleviate the data sparsity problem and obtain better word representations as input, we also adopt a forward LSTM over the characters of the token, and concatenate the last hidden state $h_{j}^c$ with the word embedding: \begin{equation} x_j = W_1 \Big( [e_{j}; h_{j}^c] \Big) + b_1 \label{eq:base_inp_2} \end{equation} \subsection{Encoder} \label{sec:base_enc} The encoder is a bi-directional LSTM applied to the linearized graph obtained by depth-first traversal, as in \newcite{konstas-EtAl:2017:Long}.
At each step $j$, the current states $\overleftarrow{h_j}$ and $\overrightarrow{h_j}$ are generated from the previous states $\overleftarrow{h_{j+1}}$ and $\overrightarrow{h_{j-1}}$ and the current input $x_j$: \begin{align*} \overleftarrow{h_j} &= \textrm{LSTM}(\overleftarrow{h_{j+1}}, x_j) \\ \overrightarrow{h_j} &= \textrm{LSTM}(\overrightarrow{h_{j-1}}, x_j) \end{align*} \subsection{Decoder} \label{sec:base_dec} We use an attention-based LSTM decoder \cite{bahdanau2015neural}, where the attention memory ($A$) is the concatenation of the attention vectors of all input tokens. Each attention vector $a_j$ is the concatenation of the encoder states of an input token in both directions ($\overleftarrow{h_j}$ and $\overrightarrow{h_j}$) and its input vector ($x_j$): \begin{align} a_j &= [\overleftarrow{h_j}; \overrightarrow{h_j}; x_j] \\ A &= [a_1; a_2; \dots; a_N] \end{align} where $N$ is the number of input tokens. The decoder yields an output sequence $w_1, w_2, \dots, w_M$ by recurrently calculating a sequence of hidden states $s_1, s_2, \dots, s_M$. While generating the $t$-th word, the decoder considers five factors: (1) the attention memory $A$; (2) the previous hidden state of the LSTM model $s_{t-1}$; (3) the embedding of the current input (the previously generated word) $e_{t}$; (4) the previous context vector $\mu_{t-1}$, which is calculated with attention over $A$; and (5) the previous coverage vector $\gamma_{t-1}$, which is the accumulation of all attention distributions so far \cite{tu-EtAl:2016:P16-1}. When $t=1$, we initialize $\mu_{0}$ and $\gamma_{0}$ as zero vectors, set $e_{1}$ to the embedding of the start token ``$<$s$>$'', and set $s_{0}$ to the average of all encoder states. At each time-step $t$, the decoder feeds the concatenation of the embedding of the current input $e_{t}$ and the previous context vector $\mu_{t-1}$ into the LSTM model to update its hidden state.
Then the attention probability $\alpha_{t,i}$ on the attention vector $a_i \in A$ for the time-step is calculated as: \begin{align*} \epsilon_{t,i} &= v_2^T \tanh(W_a a_i + W_s s_t + W_{\gamma} \gamma_{t-1} + b_2) \\ \alpha_{t,i} &= \frac{\exp(\epsilon_{t,i})}{\sum_{j=1}^N\exp(\epsilon_{t,j})} \end{align*} where $W_a$, $W_s$, $W_{\gamma}$, $v_2$ and $b_2$ are model parameters. The coverage vector $\gamma_t$ is updated by $\gamma_t = \gamma_{t-1} + \alpha_t$, and the new context vector $\mu_t$ is calculated via $\mu_t = \sum_{i=1}^N \alpha_{t,i} a_{i}$. The output probability distribution over a vocabulary at the current state is calculated by: \begin{equation} P_{vocab} = \textrm{softmax}(V_3[s_t,\mu_t]+b_3)\textrm{,} \label{eq:pvocab} \end{equation} where $V_3$ and $b_3$ are learnable parameters, and the number of rows in $V_3$ represents the number of words in the vocabulary. \section{The graph-to-sequence model} Unlike the baseline sequence-to-sequence model, we leverage a recurrent graph encoder to represent each input AMR, which directly models the graph structure without serialization. \subsection{The graph encoder} \begin{figure} \centering \includegraphics[width=0.9\linewidth]{encoder_mp_2.pdf} \caption{Graph state LSTM.} \label{fig:encoder} \end{figure} Figure \ref{fig:encoder} shows the overall structure of our graph encoder. Formally, given a graph $G=(V, E)$, we use a hidden state vector $h^j$ to represent each node $v_j \in V$. The state of the graph can thus be represented as: \[ g = \{h^j\}|_{v_j \in V} \] In order to capture non-local interaction between nodes, we allow information exchange between nodes through a sequence of state transitions, leading to a sequence of states $g_0, g_1, \dots, g_t, \dots$, where $g_t = \{h_t^j\}|_{v_j \in V}$. The initial state $g_0$ consists of a set of initial node states $h_0^j=h_0$, where $h_0$ is a hyperparameter of the model. 
\subparagraph{State transition} A recurrent neural network is used to model the state transition process. In particular, the transition from $g_{t-1}$ to $g_t$ consists of a hidden state transition for each node, as shown in Figure \ref{fig:encoder}. At each state transition step $t$, we allow direct communication between a node and all nodes that are directly connected to the node. To avoid gradient diminishing or bursting, LSTM \cite{hochreiter1997long} is adopted, where a cell $c_t^j$ is taken to record memory for $h_t^j$. We use an input gate $i_t^j$, an output gate $o_t^j$ and a forget gate $f_t^j$ to control information flow from the inputs and to the output $h_t^j$. The inputs include representations of edges that are connected to $v_j$, where $v_j$ can be either the source or the target of the edge. We define each edge as a triple $(i,j,l)$, where $i$ and $j$ are indices of the source and target nodes, respectively, and $l$ is the edge label. $x_{i,j}^l$ is the representation of edge $(i,j,l)$, detailed in Section \ref{sec:input}. The inputs for $v_j$ are distinguished by incoming and outgoing edges, before being summed up: \begin{equation*} \begin{split} x_j^{i} &= \sum_{(i,j,l)\in E_{in}(j)} x_{i,j}^l \\ x_j^{o} &= \sum_{(j,k,l)\in E_{out}(j)} x_{j,k}^l \textrm{,} \\ \end{split} \end{equation*} where $E_{in}(j)$ and $E_{out}(j)$ denote the sets of incoming and outgoing edges of $v_j$, respectively. In addition to edge inputs, a cell also takes the hidden states of its incoming nodes and outgoing nodes during a state transition. 
In particular, the states of all incoming nodes and outgoing nodes are summed up before being passed to the cell and gate nodes: \begin{equation*} \begin{split} h_j^{i} &= \sum_{(i,j,l)\in E_{in}(j)} h_{t-1}^{i} \\ h_j^{o} &= \sum_{(j,k,l)\in E_{out}(j)} h_{t-1}^{k} \textrm{,} \\ \end{split} \end{equation*} Based on the above definitions of $x_j^{i}$, $x_j^{o}$, $h_j^{i}$ and $h_j^{o}$, the state transition from $g_{t-1}$ to $g_t$, as represented by $h_t^j$, can be defined as: \begin{equation*} \begin{split} i_t^j &= \sigma(W_i x_j^{i} + \hat{W_i} x_j^{o} + U_i h_j^{i} + \hat{U_i} h_j^{o} + b_i) \textrm{,} \\ o_t^j &= \sigma(W_o x_j^{i} + \hat{W_o} x_j^{o} + U_o h_j^{i} + \hat{U_o} h_j^{o} + b_o) \textrm{,} \\ f_t^j &= \sigma(W_f x_j^{i} + \hat{W_f} x_j^{o} + U_f h_j^{i} + \hat{U_f} h_j^{o} + b_f) \textrm{,} \\ u_t^j &= \sigma(W_u x_j^{i} + \hat{W_u} x_j^{o} + U_u h_j^{i} + \hat{U_u} h_j^{o} + b_u) \textrm{,} \\ c_t^j &= f_t^j \odot c_{t-1}^j + i_t^j \odot u_t^j \textrm{,} \\ h_t^j &= o_t^j \odot \tanh (c_t^j) \textrm{,} \\ \end{split} \end{equation*} where $i_t^j$, $o_t^j$ and $f_t^j$ are the input, output and forget gates mentioned earlier. $W_x$, $\hat{W}_x$, $U_x$, $\hat{U}_x$, $b_x$, where $x \in \{i, o, f, u\}$, are model parameters. \subsection{Recurrent steps} Using the above state transition mechanism, information from each node propagates to all its neighboring nodes after each step. Therefore, for the worst case where the input graph is a chain of nodes, the maximum number of steps necessary for information from one arbitrary node to reach another is equal to the size of the graph. We experiment with different transition steps to study the effectiveness of global encoding. Note that unlike the sequence LSTM encoder, our graph encoder allows parallelization in node-state updates, and thus can be highly efficient using a GPU\@. It is general and can be potentially applied to other tasks, including sequences, syntactic trees and cyclic structures. 
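The state transition above can be sketched in numpy as follows. This is a minimal illustration under our own simplifying assumptions: one shared weight matrix per gate over the concatenated inputs $[x_j^{i}; x_j^{o}; h_j^{i}; h_j^{o}]$ instead of the separate $W$, $\hat{W}$, $U$, $\hat{U}$ matrices, biases omitted, random toy parameters, and $\tanh$ for the candidate as in a standard LSTM.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4                                        # toy hidden/edge vector size

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy graph: edges are (source, target, label) triples.
nodes = ["describe", "person", "genius"]
edges = [("describe", "person", "ARG0"), ("describe", "genius", "ARG2")]
x_edge = {e: rng.standard_normal(d) for e in edges}   # edge representations

# One weight matrix per gate over [x_in; x_out; h_in; h_out] (a simplification
# of the separate W, W-hat, U, U-hat matrices in the text).
W = {g: 0.1 * rng.standard_normal((d, 4 * d)) for g in "iofu"}

def transition(h, c):
    """One step g_{t-1} -> g_t: each node aggregates its neighbourhood."""
    h_new, c_new = {}, {}
    for j in nodes:
        x_in = sum((x_edge[e] for e in edges if e[1] == j), np.zeros(d))
        x_out = sum((x_edge[e] for e in edges if e[0] == j), np.zeros(d))
        h_in = sum((h[e[0]] for e in edges if e[1] == j), np.zeros(d))
        h_out = sum((h[e[1]] for e in edges if e[0] == j), np.zeros(d))
        z = np.concatenate([x_in, x_out, h_in, h_out])
        i, o, f = (sigmoid(W[g] @ z) for g in "iof")
        u = np.tanh(W["u"] @ z)              # candidate (tanh, standard LSTM)
        c_new[j] = f * c[j] + i * u
        h_new[j] = o * np.tanh(c_new[j])
    return h_new, c_new

h = {j: np.zeros(d) for j in nodes}          # initial node states h_0 = 0
c = {j: np.zeros(d) for j in nodes}
for _ in range(3):                           # a few recurrent transition steps
    h, c = transition(h, c)
```

After each transition, information propagates one hop further, so the sibling nodes ``person'' and ``genius'' see each other's edge information through their shared parent within two steps.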
\subsection{Input Representation} \label{sec:input} Different from sequences, the edges of an AMR graph contain labels, which represent relations between the nodes they connect, and are thus important for modeling the graphs. Similar to Section \ref{sec:base_inp}, we adopt two different ways of calculating the representation for each edge $(i,j,l)$: \begin{align} x_{i,j}^l &= W_4 \Big( [e_l; e_i] \Big) + b_4 \\ x_{i,j}^l &= W_4 \Big( [e_l; e_i; h_i^c] \Big) + b_4 \textrm{,} \end{align} where $e_l$ and $e_i$ are the embeddings of edge label $l$ and source node $v_i$, $h_i^c$ denotes the last hidden state of the character LSTM over $v_i$, and $W_4$ and $b_4$ are trainable parameters. The equations correspond to Equations \ref{eq:base_inp} and \ref{eq:base_inp_2} in Section \ref{sec:base_inp}, respectively. \subsection{Decoder} We adopt the attention-based LSTM decoder described in Section \ref{sec:base_dec}. Since our graph encoder generates a sequence of graph states, only the last graph state is adopted in the decoder. In particular, we make the following changes to the decoder. First, each attention vector becomes $a_j=[h_T^j; x_j]$, where $h_T^j$ is the last state for node $v_j$. Second, the decoder initial state $s_{-1}$ is the average of the last states of all nodes. \subsection{Integrating the copy mechanism} \label{sec:copy} Open-class tokens, such as dates, numbers and named entities, account for a large portion of the AMR corpus. Most appear only a few times, resulting in a data sparsity problem. To address this issue, \newcite{konstas-EtAl:2017:Long} adopt anonymization.
In particular, they first replace the subgraphs that represent dates, numbers and named entities (such as ``(q / quantity :quant 3)'' and ``(p / person :name (n / name :op1 ``Ryan''))'') with predefined placeholders (such as ``num\_0'' and ``person\_name\_0'') before decoding, and then recover the corresponding surface tokens (such as ``3'' and ``Ryan'') after decoding. This method involves hand-crafted rules, which can be costly. \subparagraph{Copy} We find that most of the open-class tokens in a graph also appear in the corresponding sentence, and thus adopt the copy mechanism \cite{gulcehre-EtAl:2016:P16-1,gu-EtAl:2016:P16-1} to solve this problem. The mechanism works on top of an attention-based RNN decoder by integrating the attention distribution into the final vocabulary distribution. The final probability distribution is defined as the interpolation between two probability distributions: \begin{equation} P_{final} = \theta_t P_{vocab} + (1-\theta_t) P_{attn}\textrm{,} \end{equation} where $\theta_t$ is a switch that controls between generating a word from the vocabulary and directly copying it from the input graph. $P_{vocab}$ is the probability distribution of directly generating the word, as defined in Equation \ref{eq:pvocab}, and $P_{attn}$ is calculated based on the attention distribution $\alpha_t$ by summing the probabilities of the graph nodes that contain the same concept. Intuitively, $\theta_t$ is relevant to the current decoder input $e_{t}$, the state $s_t$, and the context vector $\mu_t$. Therefore, we define it as: \begin{equation} \theta_t = \sigma(w_\mu^T \mu_t + w_s^T s_t + w_e^T e_{t} + b_5)\textrm{,} \end{equation} where the vectors $w_\mu$, $w_s$, $w_e$ and the scalar $b_{5}$ are model parameters. The copy mechanism favors generating words that appear in the input. For AMR-to-text generation, it facilitates the generation of dates, numbers, and named entities that appear in AMR graphs.
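The interpolation above can be sketched as follows; the vocabulary, node concepts, and toy distributions are our own illustrative assumptions, not values produced by the model.

```python
# Toy sketch of the copy interpolation; all numbers here are illustrative.
vocab = ["the", "agreement", "provides", "<unk>"]
node_concepts = ["provide", "agree", "Ryan"]      # concepts of the graph nodes

p_vocab = [0.5, 0.2, 0.2, 0.1]                    # softmax over the vocabulary
alpha_t = [0.1, 0.2, 0.7]                         # attention over graph nodes
theta_t = 0.4                                     # generate-vs-copy switch

# P_attn: sum the attention mass of nodes sharing the same concept token
p_attn = {}
for concept, a in zip(node_concepts, alpha_t):
    p_attn[concept] = p_attn.get(concept, 0.0) + a

def p_final(word):
    gen = theta_t * p_vocab[vocab.index(word)] if word in vocab else 0.0
    copy = (1.0 - theta_t) * p_attn.get(word, 0.0)
    return gen + copy

# "Ryan" is out of vocabulary but can still be copied from the graph:
print(p_final("Ryan"))        # 0.6 * 0.7, i.e. about 0.42
```

This shows why the mechanism helps with sparse tokens: a named entity absent from the vocabulary still receives probability mass through the attention distribution.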
\subparagraph{Copying vs anonymization} Both copying and anonymization alleviate the data sparsity problem by handling the open-class tokens. However, the copy mechanism has the following advantages over anonymization: (1) anonymization requires significant manual work to define the placeholders and the heuristic rules both from subgraphs to placeholders and from placeholders to surface tokens, (2) the copy mechanism automatically learns what to copy, while anonymization relies on hard rules to cover all types of open-class tokens, and (3) the copy mechanism is easier to adapt to new domains and languages than anonymization. \section{Training and decoding} We train our models using the cross-entropy loss over each gold-standard output sequence $W^*=w_1^*, \dots, w_t^*, \dots, w_M^*$: \begin{equation} l = -\sum_{t=1}^M \log p(w_t^*|w_{t-1}^*,\dots,w_1^*,X;\theta)\textrm{,} \end{equation} where $X$ is the input graph, and $\theta$ denotes the model parameters. Adam \cite{kingma2014adam} with a learning rate of 0.001 is used as the optimizer, and the model that yields the best devset performance is selected for evaluation on the test set. Dropout with rate 0.1 is used during training. Beam search with beam size 5 is used for decoding. Both training and decoding use Tesla K80 GPUs. \section{Experiments} \subsection{Data} We use a standard AMR corpus (LDC2015E86) as our experimental dataset, which contains 16,833 instances for training, 1,368 for development and 1,371 for testing. Each instance contains a sentence and an AMR graph. Following \newcite{konstas-EtAl:2017:Long}, we supplement the gold data with large-scale automatic data. We take Gigaword as the external data source, sample raw sentences from it, and train our model on both the sampled data and LDC2015E86.
We adopt \newcite{konstas-EtAl:2017:Long}'s strategy for sampling sentences from Gigaword, and choose JAMR \cite{flanigan-EtAl:2016:SemEval} to parse the selected sentences into AMRs, as the AMR parser of \newcite{konstas-EtAl:2017:Long} only works on anonymized data. For training on both the sampled data and LDC2015E86, we also follow the method of \newcite{konstas-EtAl:2017:Long}: fine-tuning the model on the AMR corpus after every epoch of pretraining on the Gigaword data. \subsection{Settings} We extract a vocabulary from the training set, which is shared by both the encoder and the decoder. The word embeddings are initialized from GloVe embeddings \cite{pennington2014glove} pretrained on Common Crawl, and are not updated during training. Following existing work, we evaluate the results with the BLEU metric \cite{papineni2002bleu}. For the model hyperparameters, we set the number of graph state transitions to 9 according to development experiments. Each node takes information from at most 10 neighbors. The hidden vector sizes for both the encoder and the decoder are set to 300 (they are set to 600 for experiments using large-scale automatic data). Both the character embedding size and the hidden layer size of the character LSTMs are set to 100, and at most 20 characters are taken for each graph node or linearized token.
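For reference, a toy sentence-level version of the evaluation metric can be sketched as follows. This is a simplified illustration of BLEU \cite{papineni2002bleu}; the scores reported in this paper use the standard corpus-level computation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hyp, ref, max_n=4):
    """Sentence-level BLEU sketch: modified n-gram precisions (n <= 4),
    geometric mean, and brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        h, r = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum(min(count, r[g]) for g, count in h.items())
        precisions.append(overlap / max(sum(h.values()), 1))
    if min(precisions) == 0:
        return 0.0
    log_avg = sum(math.log(p) for p in precisions) / max_n
    bp = min(1.0, math.exp(1 - len(ref) / len(hyp)))   # brevity penalty
    return bp * math.exp(log_avg)

hyp = "we can not look over the old accounts".split()
ref = "we can not look over the old accounts of the war".split()
print(round(bleu(hyp, ref), 3))   # all n-gram precisions are 1, so only the
                                  # brevity penalty exp(1 - 11/8) remains
```

The brevity penalty is what keeps a short but precise hypothesis from scoring perfectly against a longer reference.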
\subsection{Development experiments} \label{sec:comp_sys} \begin{table} \centering \begin{tabular}{l|c|c} \hline Model & BLEU & Time \\ \hline \hline Seq2seq & 18.8 & 35.4s \\ Seq2seq+copy & 19.9 & 37.4s \\ Seq2seq+charLSTM+copy & 20.6 & 39.7s \\ \hline Graph2seq & 20.4 & 11.2s \\ Graph2seq+copy & 22.2 & 11.1s \\ Graph2seq+Anon & 22.1 & 9.2s \\ Graph2seq+charLSTM+copy & \textbf{22.8} & 16.3s \\ \hline \end{tabular} \caption{\textsc{Dev} BLEU scores and decoding times.} \label{tab:dev_res} \end{table} As shown in Table \ref{tab:dev_res}, we compare our model with a set of baselines on the AMR devset to demonstrate how the graph encoder and the copy mechanism can be useful when training instances are not sufficient. \emph{Seq2seq} is the sequence-to-sequence baseline described in Section \ref{sec:base}. \emph{Seq2seq+copy} extends \emph{Seq2seq} with the copy mechanism, and \emph{Seq2seq+charLSTM+copy} further extends \emph{Seq2seq+copy} with character LSTM\@. \emph{Graph2seq} is our graph-to-sequence model, \emph{Graph2seq+copy} extends \emph{Graph2seq} with the copy mechanism, and \emph{Graph2seq+charLSTM+copy} further extends \emph{Graph2seq+copy} with the character LSTM\@. We also try \emph{Graph2seq+Anon}, which applies our graph-to-sequence model on the anonymized data from \newcite{konstas-EtAl:2017:Long}. \subparagraph{The graph encoder} As can be seen from Table \ref{tab:dev_res}, the performance of \emph{Graph2seq} is 1.6 BLEU points higher than \emph{Seq2seq}, which shows that our graph encoder is effective when applied alone. Adding the copy mechanism (\emph{Graph2seq+copy} vs \emph{Seq2seq+copy}), the gap becomes 2.3. This shows that the graph encoder learns better node representations compared to the sequence encoder, which allows attention and copying to function better. Applying the graph encoder together with the copy mechanism gives a gain of 3.4 BLEU points over the baseline (\emph{Graph2seq+copy} vs \emph{Seq2seq}). 
The graph encoder is consistently better than the sequence encoder regardless of whether character LSTMs are used. We also list decoding times on the devset; since the decoders of the seq2seq and graph2seq models are similar, the time differences mainly reflect the efficiencies of the encoders. Our graph encoder is consistently more efficient than the sequence encoder, showing the advantage of parallelization. \subparagraph{The copy mechanism} Table \ref{tab:dev_res} shows that the copy mechanism is effective on both the graph-to-sequence and the sequence-to-sequence models. Anonymization gives overall performance gains on our graph-to-sequence model comparable to those of the copy mechanism (comparing \emph{Graph2seq+Anon} with \emph{Graph2seq+copy}). However, the copy mechanism has several advantages over anonymization, as discussed in Section \ref{sec:copy}. \subparagraph{Character LSTM} The character LSTM helps to increase the performance of both systems by roughly 0.6 BLEU points. This is largely because it further alleviates the data sparsity problem by handling unseen words, which may share common substrings with in-vocabulary words. \subsection{Effectiveness of graph state transitions} We report a set of development experiments for understanding the graph LSTM encoder. \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{iters.pdf} \vspace{-1.0em} \caption{\textsc{Dev} BLEU scores against transition steps for the graph encoder.} \label{fig:iters} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth]{amr_diam.pdf} \vspace{-1.0em} \caption{Percentage of \textsc{Dev} AMRs with different diameters.} \label{fig:depth} \vspace{-1.0em} \end{figure} \subparagraph{Number of iterations} We analyze the influence of the number of state transitions on the model performance on the devset.
Figure \ref{fig:iters} shows the BLEU scores for different numbers of state transitions, when both incoming and outgoing edges are used for calculating the next state (as shown in Figure \ref{fig:encoder}). The system is \emph{Graph2seq+charLSTM+copy}. Executing only 1 transition step results in a poor BLEU score of 14.1. In this case the state for each node only contains information about immediately adjacent nodes. The performance goes up dramatically to 21.5 when the number of steps is increased to 5. In this case, the state for each node contains information about all nodes within a distance of 5. The performance further goes up to 22.8 when the number of steps is increased from 5 to 9, where all nodes within a distance of less than 10 are incorporated in the state for each node. \subparagraph{Graph diameter} We analyze the percentage of the AMR graphs in the devset with different graph diameters and show the cumulative distribution in Figure \ref{fig:depth}. The diameter of an AMR graph is defined as the longest distance between two AMR nodes.\footnote{The diameter of single-node graphs is 0.} Even though fewer than 80\% of the AMR graphs have diameters of at most 10, our development experiments show that it is not necessary to incorporate whole-graph information for each node. Further increasing the number of state transitions may lead to additional improvement; we did not perform an exhaustive search for the optimal number. \subparagraph{Incoming and outgoing edges} As shown in Figure \ref{fig:iters}, we analyze the effectiveness of state transitions when only incoming or only outgoing edges are used. From the results, we can see that there is a huge drop when state transitions are performed with only incoming or only outgoing edges. Using edges of one direction, the node states only contain information about ancestors or descendants. On the other hand, node states contain information about ancestors, descendants, and siblings if edges of both directions are used.
From the results, we can conclude that not only the ancestors and descendants, but also the siblings, are important for modeling AMR graphs. This is similar to observations on syntactic parsing tasks \cite{mcdonald-crammer-pereira:2005:ACL}, where sibling features are adopted. We perform a similar experiment for the \emph{Seq2seq+copy} baseline by executing only a single-directional LSTM in the encoder. We observe BLEU scores of 11.8 and 12.7 using only the forward or only the backward LSTM, respectively. This is consistent with our graph model in that using only one direction leads to a huge performance drop. The contrast is also reminiscent of using the normal input versus the reversed input in neural machine translation \citep{sutskever2014sequence}. \subsection{Results} \begin{table} \centering \begin{tabular}{l|c} \hline Model & BLEU \\ \hline \hline PBMT & 26.9 \\ SNRG & 25.6 \\ Tree2Str & 23.0 \\ MSeq2seq+Anon & 22.0 \\ Graph2seq+copy & 22.7 \\ Graph2seq+charLSTM+copy & 23.3 \\ \hline MSeq2seq+Anon (200K) & 27.4 \\ MSeq2seq+Anon (2M) & 32.3 \\ MSeq2seq+Anon (20M) & \textbf{33.8} \\ \hline Seq2seq+charLSTM+copy (200K) & 27.4 \\ Seq2seq+charLSTM+copy (2M) & 31.7 \\ Graph2seq+charLSTM+copy (200K) & 28.2 \\ Graph2seq+charLSTM+copy (2M) & \textbf{33.6}\tablefootnote{It was 33.0 at submission, and has been improved.} \\ \hline \end{tabular} \caption{\textsc{Test} results. ``(200K)'', ``(2M)'' and ``(20M)'' represent training with the corresponding number of additional sentences from Gigaword.} \label{tab:global_res} \end{table} Table \ref{tab:global_res} compares our final results with existing work. \emph{MSeq2seq+Anon} \cite{konstas-EtAl:2017:Long} is an attentional multi-layer sequence-to-sequence model trained on the anonymized data.
\emph{PBMT} \cite{pourdamghani-knight-hermjakob:2016:INLG} adopts a phrase-based machine translation model \cite{koehn2003statistical} on linearized AMR graphs, \emph{SNRG} \cite{song-EtAl:2017:Short} uses a synchronous node replacement grammar to parse the AMR graph while generating the text, and \emph{Tree2Str} \cite{jeff2016amrgen} converts AMR graphs into trees by splitting re-entrancies before using a tree transducer to generate the results. \emph{Graph2seq+charLSTM+copy} achieves a BLEU score of 23.3, which is 1.3 points better than \emph{MSeq2seq+Anon} trained on the same AMR corpus. In addition, our model without the character LSTM is still 0.7 BLEU points higher than \emph{MSeq2seq+Anon}. Note that \emph{MSeq2seq+Anon} relies on anonymization, which requires additional manual work for defining mapping rules, thus limiting its usability on other languages and domains. The neural models tend to underperform statistical models when trained on the limited (16K) gold data, but perform better with scaled silver data \cite{konstas-EtAl:2017:Long}. Following \newcite{konstas-EtAl:2017:Long}, we also evaluate our model using both the AMR corpus and sampled sentences from Gigaword. Using an additional 200K or 2M Gigaword sentences, \emph{Graph2seq+charLSTM+copy} achieves BLEU scores of 28.2 and 33.0, respectively, which are 0.8 and 0.7 BLEU points better than \emph{MSeq2seq+Anon} using the same amount of data. These scores are also 5.3 and 10.1 points better, respectively, than when the model is trained only on the AMR corpus. This shows that our model can benefit from scaled data with automatically generated AMR graphs, and that it is more effective than \emph{MSeq2seq+Anon} using the same amount of data. With 2M Gigaword sentences, our model is better than all existing methods. \newcite{konstas-EtAl:2017:Long} also experimented with 20M external data, obtaining a BLEU of 33.8. We did not try this setting due to hardware limitations.
The \emph{Seq2seq+charLSTM+copy} baseline trained on the large-scale data is close to \emph{MSeq2seq+Anon} using the same amount of training data, yet is much worse than our model. \subsection{Case study} We conduct case studies to better understand the model performance. Table \ref{tab:examples} shows example outputs of the sequence-to-sequence (\emph{S2S}), graph-to-sequence (\emph{G2S}) and graph-to-sequence with copy mechanism (\emph{G2S+CP}) models. \emph{Ref} denotes the reference output sentence, and \emph{Lin} shows the serialization of the input AMR. The best hyperparameter configuration is chosen for each model. For the first example, \emph{S2S} fails to recognize the concept ``a / account'' as a noun and loses the concept ``o / old'' (both are underlined). The fact that ``a / account'' is a noun is implied by ``a~/~account :mod (o~/~old)'' in the original AMR graph. Though directly connected in the original graph, their distance in the serialization result (the input of \emph{S2S}) is 26, which may be why \emph{S2S} makes these mistakes. In contrast, \emph{G2S} handles ``a~/~account'' and ``o~/~old'' correctly. In addition, the copy mechanism helps to copy ``look-over'' from the input, which rarely appears in the training set. In this case, \emph{G2S+CP} is incorrect only in its hyphenation and its literal reference to the ``anti-japanese war'', although the meaning is fully understandable. For the second case, both \emph{G2S} and \emph{G2S+CP} correctly generate the noun ``agreement'' for ``a~/ agree'' in the input AMR, while \emph{S2S} fails to do so. The fact that ``a~/~agree'' represents a noun can be determined by the original graph segment ``p / provide :ARG0 (a / agree)'', which indicates that ``a / agree'' is the subject of ``p / provide''. In the serialization output, the two nodes are close to each other.
Nevertheless, \emph{S2S} still fails to capture this structural relation, reflecting the fact that a sequence encoder is not designed to explicitly model the hierarchical information encoded in the serialized graph. In the training instances, serialized nodes that are close to each other can originate from neighboring graph nodes or from distant graph nodes, which prevents the decoder from confidently deciding the correct relation between them. In contrast, \emph{G2S} sends the node ``p / provide'' together with the relation ``ARG0'' when calculating hidden states for ``a / agree'', which facilitates generating ``the agreement provides''. \begin{table}[t!] \small \centering \begin{tabularx}{0.5\textwidth}{X} \hline (p / possible-01 :polarity - \\ ~~~~:ARG1 (l / look-over-06 \\ ~~~~~~~~:ARG0 (w / we) \\ ~~~~~~~~:ARG1 (a / \underline{account}-01 \\ ~~~~~~~~~~~~:ARG1 (w2 / war-01 \\ ~~~~~~~~~~~~~~~~:ARG1 (c2 / country :wiki ``Japan'' \\ ~~~~~~~~~~~~~~~~~~~~:name (n2 / name :op1 ``Japan'')) \\ ~~~~~~~~~~~~~~~~:time (p2 / previous) \\ ~~~~~~~~~~~~~~~~:ARG1-of (c / call-01 \\ ~~~~~~~~~~~~~~~~~~~~:mod (s / so))) \\ ~~~~~~~~~~~~:mod (o / \underline{old})))) \\ \textbf{Lin}: possible :polarity - :arg1 ( look-over :arg0 we :arg1 ( \underline{account} :arg1 ( war :arg1 ( country :wiki japan :name ( name :op1 japan ) ) :time previous :arg1-of ( call :mod so ) ) :mod \underline{old} ) ) \\ \textbf{Ref}: we can n't look over the old accounts of the previous so-called anti-japanese war . \\ \textbf{S2S}: we can n't be able to account the past drawn out of japan 's entire war .\\ \textbf{G2S}: we can n't be able to do old accounts of the previous and so called japan war.\\ \textbf{G2S+CP}: we can n't look-over the old accounts of the previous so called war on japan .
\\ \hline (p / provide-01 \\ ~~~~:ARG0 (a / \underline{agree}-01) \\ ~~~~:ARG1 (a2 / and \\ ~~~~~~~~:op1 (s / staff \\ ~~~~~~~~~~~~:prep-for (c / center \\ ~~~~~~~~~~~~~~~~:mod (r / research-01))) \\ ~~~~~~~~:op2 (f / fund-01 \\ ~~~~~~~~~~~~:prep-for c))) \\ \textbf{Lin}: provide :arg0 \underline{agree} :arg1 ( and :op1 ( staff :prep-for ( center :mod research ) ) :op2 ( fund :prep-for center ) ) \\ \textbf{Ref}: the agreement will provide staff and funding for the research center .\\ \textbf{S2S}: agreed to provide research and institutes in the center .\\ \textbf{G2S}: the agreement provides the staff of research centers and funding . \\ \textbf{G2S+CP}: the agreement provides the staff of the research center and the funding .\\ \hline \end{tabularx} \caption{Example system outputs.} \label{tab:examples} \end{table} \section{Related work} Among early statistical methods for AMR-to-text generation, \newcite{jeff2016amrgen} convert input graphs to trees by splitting re-entrances, and then translate the trees into sentences with a tree-to-string transducer. \newcite{song-EtAl:2017:Short} use a synchronous node replacement grammar to parse input AMRs and generate sentences at the same time. \newcite{pourdamghani-knight-hermjakob:2016:INLG} linearize input graphs by breadth-first traversal, and then use a phrase-based machine translation system\footnote{http://www.statmt.org/moses/} to generate results by translating the linearized sequences. Prior work using graph neural networks for NLP includes the use of graph convolutional networks (GCNs) \cite{kipf2017semi} for semantic role labeling \cite{marcheggiani-titov:2017:EMNLP2017}, neural machine translation \cite{bastings-EtAl:2017:EMNLP2017} and graph-to-sequence learning \cite{xu2018graph2seq}. Both GCN and the graph LSTM update node states by exchanging information between neighboring nodes within each iteration.
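The two styles of neighborhood update can be contrasted in a toy numpy sketch (illustrative only; the adjacency matrix, dimensions, gate layout and initialization here are our own simplifications, not the configuration of either published model):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # hidden size (arbitrary for the demo)
A = np.array([[0, 1, 0],                 # toy 3-node undirected adjacency matrix
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = rng.standard_normal((3, d))          # current node states
W = rng.standard_normal((d, d)) * 0.1    # shared weight matrix

def gcn_update(H):
    # GCN-style: a linear transformation of aggregated neighbor states + ReLU
    return np.maximum(A @ H @ W, 0.0)

def gated_update(H, C):
    # LSTM-style: gates modulate how much neighbor information enters the
    # per-node cell C, which persists across iterations (remembers history)
    m = A @ H                            # messages aggregated from neighbors
    i = 1.0 / (1.0 + np.exp(-(m @ W)))   # input gate
    f = 1.0 / (1.0 + np.exp(-(H @ W)))   # forget gate
    C = f * C + i * np.tanh(m @ W)       # gated cell update
    return np.tanh(C), C

H_gcn = gcn_update(H)
H_gated, C = gated_update(H, np.zeros((3, d)))
```

One iteration of either update lets each node see its immediate neighbors; repeated iterations propagate information over longer graph distances.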
However, our graph state LSTM adopts gated operations for making updates, while GCN uses a linear transformation. Intuitively, the former has greater learning power than the latter. Another major difference is that our graph state LSTM keeps a cell vector for each node to remember all history. The contrast between our model and GCN is reminiscent of the contrast between an RNN and a CNN. We leave an empirical comparison of their effectiveness to future work. In this work our main goal is to show that graph LSTM encoding of AMR is superior to sequence LSTM encoding. Closest to our work, \newcite{TACL1028} modeled syntactic and discourse structures using a DAG LSTM, which can be viewed as an extension of tree LSTMs \cite{tai-socher-manning:2015:ACL-IJCNLP}. Their state update follows the sentence order for each node and thus has a sequential nature, whereas our state update is performed in parallel. In addition, \newcite{TACL1028} split input graphs into separate DAGs before their method can be used. To our knowledge, we are the first to apply an LSTM structure to encode AMR graphs. The recurrent information exchange mechanism in our state transition process is remotely related to the idea of loopy belief propagation (LBP) \cite{murphy1999loopy}. However, there are two major differences. First, messages between LSTM states are gated neural node values, rather than probabilities as in LBP\@. Second, while the goal of LBP is to estimate marginal probabilities, the goal of information exchange between graph states in our LSTM is to find neural representation features, which are directly optimized by a task objective. In addition to NMT \cite{gulcehre-EtAl:2016:P16-1}, the copy mechanism has been shown effective on tasks such as dialogue \cite{gu-EtAl:2016:P16-1}, summarization \cite{see-liu-manning:2017:Long} and question generation \citep{song-naacl-18}. We investigate the copy mechanism for AMR-to-text generation. \section{Conclusion} We introduced a novel graph-to-sequence model for AMR-to-text generation.
Compared with sequence-to-sequence models, which require linearization of the AMR before encoding, a graph LSTM is leveraged to model the full AMR structure directly. Allowing a high degree of parallelization, the graph encoder is more efficient than the sequence encoder. In our experiments, the graph model outperforms a strong sequence-to-sequence model, achieving the best performance. \paragraph{Acknowledgments} We thank the anonymous reviewers for their insightful comments, and the Center for Integrated Research Computing (CIRC) of the University of Rochester for special reservations of computation resources.
\section{Introduction} \label{sect:Intro} Exploring the dark sector of our universe is one of the major and most challenging subjects in modern cosmophysics. In particular, identifying the nature of dark matter is of great importance in the context of astrophysics and high-energy particle physics. Among the most popular candidates for dark matter are weakly coupled ultra-light bosonic fields, which are predicted to arise generically in string-theory-inspired scenarios~\cite{Arvanitaki:2009fg,Acharya:2015zfk,Goodsell:2009xc}. One way to search for such ultra-light bosonic fields in an astrophysical context is to look for the so-called {\em superradiant instability} of rotating black holes, which produces gravitational radiation and eventually spins down the black hole~\cite{Arvanitaki:2010sy}. A superradiant scattering around a rotating black hole can in general occur when the frequency $\omega$ of an impinging wave of some bosonic field satisfies the condition $0<\omega < m\Omega_H$, where $m$ denotes the azimuthal number of the impinging wave and $\Omega_H$ the horizon angular velocity of the black hole. When, in addition, there is some mechanism that reflects the superradiantly scattered, amplified wave back into the ergoregion of the black hole, the superradiant amplification can take place repeatedly, resulting in an instability. The role of the reflecting mechanism can be played by, for example, a ``mirror'' set by hand~\cite{Press:1972zz,Cardoso:2004nk,Herdeiro:DR2013,Hod:2013a,Degollado:Herdeiro:2014,LiZhao:2015}, or the spacetime curvature produced by a negative cosmological constant~\cite{Hawking:Reall:2000,Cardoso:2004hs,Cardoso:2006wa,Uchikata:2009zz,Kodama:2009rq,Cardoso:Dias:Hartnett:Lehner:Santos:2014,Green:Hollands:Ishibashi:Wald:2016}.
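The superradiance condition above is easy to evaluate numerically. A small sketch (our own, in units $G=c=1$) using the standard Kerr expressions $r_+ = M + \sqrt{M^2 - a^2}$ and $\Omega_H = a/(2Mr_+)$:

```python
import math

def horizon_angular_velocity(M, a):
    """Omega_H = a / (2 M r_+) for a Kerr black hole (units G = c = 1)."""
    assert 0 <= a <= M, "spin parameter must satisfy 0 <= a <= M"
    r_plus = M + math.sqrt(M**2 - a**2)   # outer horizon radius
    return a / (2 * M * r_plus)

def is_superradiant(omega, m_azimuthal, M, a):
    """Superradiance condition 0 < omega < m * Omega_H."""
    return 0 < omega < m_azimuthal * horizon_angular_velocity(M, a)
```

For an extremal hole ($a=M$) one gets $\Omega_H = 1/(2M)$, so an $m=1$ mode with $M=1$ is amplified for $0<\omega<0.5$.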
A superradiant instability can also be realized in more realistic astrophysical circumstances if the impinging bosonic fields possess masses~\cite{Damour:1976kh,Zouros:1979iw,Detweiler:1980uk,Dolan:2007mj,Rosa:2009ei,Hod:2011,Pani:2012vp,Pani:2012bp,Witek:2012tr,Brito:CP:2013,Yoshino:Kodama:2015,Hod:2016,Ishibashi:Pani:Gualtieri:Cardoso:2015}, and can be most efficient when the Compton wavelength of the massive field is comparable to the black hole radius. This is the case when, for example, the field masses are less than $10^{-10}$ eV for stellar-mass black holes. The existence of such ultra-light bosons has been suggested by, for example, the string axiverse scenario~\cite{Arvanitaki:2009fg}, and there have been a number of attempts to derive bounds on the masses of such ultra-light bosons by exploiting recent developments in precision black hole physics, see, e.g., Refs.~\cite{Arvanitaki:2010sy,Pani:2012vp,Pani:2012bp,Cardoso:2018tly}. \medskip It is also worth mentioning that the presence of massive bosonic fields can affect the environment of stationary black holes by endowing them with ``hair.'' In fact, for certain configurations of massive complex scalar and vector fields, the possibility of hairy rotating black holes has been pointed out~\cite{Herdeiro:Radu2014,Herdeiro:Radu2017}. The instability of such hairy black holes has also been discussed~\cite{Ganchev:Santos:2017,Degollado:Herdeiro:Radu:18}. \medskip To explore the possibility of superradiant instability in our universe and other possible roles of massive bosonic fields in astrophysics and fundamental physics, it is of considerable importance to understand precisely how such massive fields propagate in black hole spacetimes. There has been a substantial amount of relevant work along this line using both analytical and numerical methods, and the behavior of massive scalar fields is now well understood.
As for massive vector and tensor fields, however, the analyses become much more involved. For example, a massive vector field has three independent physical degrees of freedom due to the lack of gauge freedom, and the equation for massive vector fields, the Proca equation, does not appear to be immediately separable in the Kerr black hole background, let alone reducible to master equations, i.e., a set of decoupled second-order wave equations. This situation should be compared with the case of the {\em massless} vector, i.e., Maxwell, field, for which the Maxwell equations are separable and further reduce to Teukolsky's master equation. See also~\cite{Lunin:2017} for the separability of Maxwell's equations with a new ansatz for the gauge field, as well as the analysis in higher-dimensional rotating black hole backgrounds. This complexity of massive vector fields persists even in non-rotating, static black hole backgrounds. For example, in Schwarzschild spacetime, although the radial and angular parts are immediately separable thanks to the spherical symmetry, the Proca equation still does not appear to reduce to a set of decoupled master equations~\cite{Konoplya:2006,Konoplya:Zhidenko:Molina:2007,Rosa:Dolan:2012}. For these reasons, in order to study the behavior of massive vector fields, one has to appeal to a combination of approximation and numerical methods, or to full numerical computation~\cite{Witek:2012tr,Zilhao:Witek:Cardoso:2015,East:Pretorius:2017}. As an example of an approximation method, by extending Kojima's pioneering work~\cite{Kojima:92,Kojima:apj:93,Kojima:ptp:93}, the slow-rotation approximation for linear perturbations of scalar and massive vector fields on slowly rotating black holes has been formulated in~\cite{Pani:2012vp,Pani:2012bp}.
\medskip The superradiant instability is governed by the dimensionless parameter $\mu M$ (in units $G=c=1$), with $\mu$ and $M$ being the masses of the bosonic field and black hole, respectively, and is expected to be strongest when, e.g., $\mu M \sim 1$ for maximally spinning extremal black holes. There have already been several observations that indicate the existence of highly spinning, nearly extremal black holes in our universe, see e.g., Refs.~\cite{McClintock:etal:2006,Middleton:16}. It is therefore of considerable interest to develop formulas which can be applied to rapidly spinning, near-extremal or maximally rotating extremal black holes. In particular, so far little has been done toward {\em analytically} studying the dynamics of massive vector fields in extremal and near-extremal black holes. \medskip It is well known that an extremal black hole admits what is called the {\em near-horizon geometry}, which is obtained by taking a certain scaling limit around the horizon neighborhood, and which admits an enhanced isometry larger than that of the original extremal black hole geometry~\cite{Bardeen:Horowitz:1999,KLR07}. For the maximally rotating extremal Kerr black hole, the near-horizon geometry--also called NHEK--has the enhanced symmetry ${\rm SL}(2,R)\times {\rm U}(1)$, which has been exploited to formulate a type of gauge-gravity duality, the Kerr/CFT correspondence~\cite{Guica:Hartman:Song:Strominger:2009}. Further, the enhanced symmetry of the near-horizon geometry has recently been used to analytically compute radiation emission from the near-horizon region of extremal Kerr black holes~\cite{Porfyriadis:Strominer:2014,Porfyriadis:Shi:Strominger:2017}. For further study of near-horizon geometries and their classification, see e.g., Refs.~\cite{Figueras:Kunduri:Lucietti:Rangamani:2008,Kunduri:Lucietti:2009,Hollands:Ishibashi:2010,Kunduri:Lucietti:2013} and references therein.
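For orientation, the condition $\mu M \sim 1$ can be translated into physical units by equating the field's reduced Compton wavelength $\hbar/(\mu c)$ with the gravitational radius $GM/c^2$ (a rough order-of-magnitude sketch of our own, with SI constants):

```python
# SI constants
hbar  = 1.054571817e-34    # J s
c     = 2.99792458e8       # m / s
G     = 6.67430e-11        # m^3 / (kg s^2)
eV    = 1.602176634e-19    # J
M_sun = 1.98892e30         # kg

def boson_mass_eV(M_bh_kg):
    """Boson mass (in eV) whose reduced Compton wavelength equals GM/c^2."""
    r_g = G * M_bh_kg / c**2      # gravitational radius of the black hole
    return hbar * c / r_g / eV    # mu c^2 = hbar c / r_g, converted to eV

mu = boson_mass_eV(10 * M_sun)    # of order 1e-11 eV for a 10 M_sun black hole
```

This reproduces the mass scale below $10^{-10}$ eV quoted earlier for stellar-mass black holes.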
\medskip Apart from the astrophysical context, massive bosonic fields around extremal and near-extremal black holes have received attention also in some theoretical contexts. This is also the case for static extremal black holes. For instance, superradiant scattering can also occur in non-rotating, static black holes if one considers a charged scalar field coupled to the background gauge field of a charged black hole. In this case, the role of $m\Omega_H$ in the rotating case is played by $ q \Phi_H$, where $q$ denotes the charge of the field and $\Phi_H$ the electric potential at the horizon. For example, the behavior of charged scalar fields in extremal Reissner-Nordstrom black holes has been studied by using the near-horizon geometry~\cite{Zimmerman:2017}. Another interesting phenomenon is the condensation of massive scalar fields around the horizon of near-extremal black holes. The near-horizon geometry of an extremal black hole includes in part a two-dimensional anti-de Sitter (AdS$_2$) spacetime. Then, if the mass of the massive field violates the so-called Breitenlohner-Freedman bound \cite{BF1,BF2} of the near-horizon AdS$_2$ spacetime, the massive scalar field condenses and triggers an instability, which may be interpreted as a phase transition in the dual boundary field theory. Such an instability due to scalar field condensation is known to occur for near-extremal Reissner-Nordstrom-anti-de Sitter black holes, and has an interesting application to holographic superconductors~\cite{Gubser2008,Hartnoll:Herzog:Horowitz:2008}. For further study of the stability of extremal and near-extremal black holes, see, e.g., Refs.~\cite{Dias:Monteiro:Reall:Santos:2010,Aretakis:2011,Durkee:Reall:11,Hollands:Ishibashi:15} and references therein. \medskip The purpose of this paper is to develop a novel perturbation method that can be applied to study the dynamics of massive vector fields in extremal and near-extremal black hole spacetimes.
In an extremal or near-extremal black hole geometry, one can introduce a constant scaling parameter, say $\lambda$, which effectively plays the role of zooming in on the neighborhood of the horizon; taking the limit $\lambda \rightarrow 0$ yields the near-horizon geometry. Our strategy is as follows. We first view $\lambda$ as a small parameter and expand the background extremal (or near-extremal) geometry around the near-horizon geometry. Next, on this expanded geometry we consider massive vector fields as perturbations with small amplitude, parametrized again by $\lambda$. Then, we examine the Proca equations on the expanded geometry at each order of $\lambda$. Our approach may be viewed as a two-parameter perturbation in which both the amplitude of the metric expansion and that of the massive vector field are small and simultaneously parametrized by $\lambda$. It should be noted that in our formulation, we assume that the geometry is already fixed to all orders in $\lambda$ as a solution of the Einstein equations (i.e., it is not required to be solved at each order of $\lambda$), while the Proca perturbations are our dynamical variables, to be solved for at each order. \medskip In this paper, as a first step toward realizing the ideas stated above, we restrict our attention to static (near-)extremal black holes. One advantage of the static case is that the radial and angular parts of the Proca equations can immediately be separated, as we shall demonstrate explicitly below. We first show, after the separation of variables, that the Proca equation for a massive vector field can be reduced to a set of decoupled master equations (for the three independent dynamical degrees of freedom in four dimensions) at leading order in $\lambda$, i.e., on the near-horizon geometry. Next, we expand both the geometry and the massive vector field with respect to the small parameter $\lambda$, and obtain the formulas for the higher-order perturbation equations.
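The double expansion just described can be written schematically (the notation of this display is ours, for illustration) as \ben g_{\mu\nu} = \sum_{k\geqslant 0} \lambda^k \, g^{(k)}_{\mu\nu} \,, \qquad A_\mu = \sum_{k\geqslant 0} \lambda^k \, A^{(k)}_\mu \,, \een where $g^{(0)}_{\mu\nu}$ is the near-horizon geometry and the full metric is fixed a priori as a solution of the Einstein equations, while at order $\lambda^k$ the Proca equation yields wave equations for $A^{(k)}_\mu$ whose source terms involve only the lower-order fields $A^{(j)}_\mu$ with $j<k$.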
We show that at each order in $\lambda$, one can obtain a set of mutually decoupled wave equations, each governing an independent dynamical degree of freedom, and each with a source term consisting only of lower-order variables. Thus, in principle, starting from the solutions of the leading-order decoupled master equations, one can iteratively solve the massive vector perturbations to any order. As a concrete example, we present the relevant formulas in the (near-)extremal Reissner-Nordstrom background spacetime. It is worth commenting that although the focus of this paper is on the static case, in which the separation of variables can easily be performed, it was recently shown that the equations for massive vector fields are separable even in the rotating case~\cite{Frolov:Krtous:Kubiznak:Santos:2018}. We therefore expect that the method developed in this paper may be generalized to the maximally rotating black hole case. \medskip In the next section, we describe our background geometry and the Proca equation, thereby establishing our notation. Our background metric takes the warped product form of an $m$-dimensional spacetime and an $n$-dimensional Einstein space. We classify the components of the massive vector field into two types with respect to their behavior on the Einstein space: the divergence-free {\em vector (axial)}-type part and the {\em scalar (polar)}-type part. By doing so, we can deal with the vector- and scalar-type parts separately. We then introduce scalar and vector harmonics on the Einstein space and separate the variables into radial and angular parts, reducing the Proca equation to a set of wave equations on the $m$-dimensional spacetime. At this stage, for the vector-type component, we obtain a single master equation, whereas for the scalar-type components, the equations are still coupled. We also discuss the massless case, and present a master equation for the scalar-type components of the Maxwell field.
In section~\ref{sect:Extremal}, we describe extremal and near-extremal black hole spacetimes, introducing the extremality parameter as well as the near-horizon scaling parameter, whose zero limit corresponds to the near-horizon geometry. In section~\ref{sect:Expanding}, we formulate our perturbation method of expanding both the field variables and the background geometry with respect to the scaling parameter, and derive our main formulas at each order of perturbation. We first do so for the standard four-dimensional (near-)extremal Reissner-Nordstrom background, and then for a general warped product type background. We also explicitly give the general solutions to the leading-order wave equations for both the vector- and scalar-type components. We show that the same structure of massive vector field perturbations also holds in more generic extremal and near-extremal static black hole backgrounds. Section~\ref{sect:Summary} is devoted to a summary and discussion. For completeness, we also apply our perturbation method to charged massive scalar fields in the (near-)extremal Reissner-Nordstrom black hole background, and derive the relevant formulas in the Appendix. \section{Background geometry and Proca equations} \label{sect:Background} Although our main concern is the dynamics of massive vector fields in four-dimensional black hole spacetimes, taking into consideration the possibility of a wide variety of applications in fundamental physics, we shall present the relevant formulas in a rather generic setup. We first describe our warped product type background geometry and then discuss how to classify vector fields on this background. This part largely follows Refs.~\cite{Kodama:Ishibashi:2003,Ishibashi:Kodama:2003}. We then write down the Proca equations explicitly in our background spacetime.
\subsection{General warped product type geometry} Let us consider a $D=(m+n)$-dimensional spacetime whose manifold structure is given locally as a warped product ${\cal M} = {\cal N}^m \times {\cal K}^n$. We distinguish tensors living on each of the manifolds ${\cal M},{\cal N}^m,{\cal K}^n$ by using greek indices for tensors on ${\cal M}$, latin indices in the range $a,b,c,\dots$ on ${\cal N}^m$, and latin indices in the range $i,j,k,\dots$ on ${\cal K}^n$. Accordingly we introduce local coordinates on ${\cal M}$ as $x^\mu=(y^a,z^i)$ so that the metric takes the following form: \ben ds^2 =g_{\mu \nu}dx^\mu dx^\nu = {}^{m}g_{ab}(y)dy^ady^b + R^2(y) \gamma_{ij}(z)dz^i dz^j \,, \label{def:background} \een where ${}^{m}g_{ab}(y)$ and $\gamma_{ij}(z)$ denote, respectively, the Lorentzian metric on ${\cal N}^m$ and the Riemannian metric on ${\cal K}^n$. We further assume that $({\cal K}^n,\gamma_{ij})$ is an $n$-dimensional Einstein space, so that its Ricci curvature satisfies $\hat{R}_{ij} = K(n-1)\gamma_{ij}$, with $K=0, \pm 1$ denoting the sectional curvature of ${\cal K}^n$, which essentially describes the manifold of the horizon cross-section. We also denote the covariant derivatives with respect to $g_{\mu \nu}, {}^{m}g_{ab}, \gamma_{ij}$ by $\nabla_\mu ,\: D_a, \: \hat{D}_i$, respectively. The non-vanishing components of the Christoffel symbol $\Gamma^\lambda{}_{\mu \nu}$ associated with $g_{\mu \nu}$ are given explicitly as \bena \Gamma^a{}_{bc} = \tilde \Gamma^a{}_{bc} \,, \quad \Gamma^a{}_{ij} = -R(D^a R) \gamma_{ij} \,, \quad \Gamma^i{}_{aj} = \frac{D_a R}{R}\delta^i{}_j \,, \quad \Gamma^i{}_{jk} = \hat \Gamma^i{}_{jk} \,, \quad \label{def:Christoffel} \eena where $\tilde \Gamma^a{}_{bc}$ and $\hat \Gamma^i{}_{jk}$ are the components of the Christoffel symbols associated with the metrics ${}^{m}g_{ab}$ and $\gamma_{ij}$, respectively.
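The mixed components of (\ref{def:Christoffel}) can be spot-checked with a computer algebra system in the familiar case $m=n=2$, $R(y)=r$, $K=1$, for a generic metric function $f(r)$ (a verification sketch of our own, not part of the derivation):

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
f = sp.Function('f')(r)               # generic 2D metric function f(r)
# ds^2 = -f dt^2 + dr^2/f + r^2 (dtheta^2 + sin^2(theta) dphi^2)
x = [t, r, th, ph]
g = sp.diag(-f, 1/f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()

def christoffel(lam, mu, nu):
    """Gamma^lam_{mu nu} computed directly from the metric components."""
    return sp.simplify(sp.Rational(1, 2) * sum(
        ginv[lam, s] * (sp.diff(g[s, mu], x[nu]) + sp.diff(g[s, nu], x[mu])
                        - sp.diff(g[mu, nu], x[s]))
        for s in range(4)))

# Expected from the warped-product formulas:
# Gamma^r_{theta theta} = -R (D^r R) gamma_{theta theta} = -r f   (D^r R = f)
gamma_r_thth = christoffel(1, 2, 2)
# Gamma^theta_{r theta} = (D_r R)/R = 1/r
gamma_th_rth = christoffel(2, 1, 2)
```

Both outputs match the warped-product expressions, since $D^a R = {}^2g^{ab}\partial_b R$ gives $D^r R = f$ here.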
\subsection{Proca equations in the general warped product background} Let us consider a massive vector field $A_\mu$ with mass-squared $\mu^2$ in $({\cal M},g_{\mu \nu})$, which obeys the following Proca equation: \bena \nabla_\nu F^{\mu \nu} + \mu^2 A^\mu =0 \,, \label{eq:proca} \eena where the field strength $F$ is given as usual by $F_{\mu \nu }:= \nabla_\mu A_\nu - \nabla_\nu A_\mu$. In addition, the following Lorenz condition needs to be satisfied: \ben \nabla_\mu A^\mu =0 \,. \label{condi:lorenz} \een Since the Proca equation (\ref{eq:proca}) is not gauge invariant due to the mass term, a massive vector field in $D$ dimensions has $D-1$ physical degrees of freedom. Note that the Proca equation naturally arises, via Kaluza-Klein compactification, from linear gravitational perturbations of higher-dimensional black holes, see, e.g., \cite{Ishibashi:Pani:Gualtieri:Cardoso:2015}. \medskip By using the formulas (\ref{def:Christoffel}) above, we can express the projection of the Proca equation (\ref{eq:proca}) on ${\cal N}^m$ and that on ${\cal K}^n$, respectively, as \bena &{}& D_bF^{ab} + n\frac{D_bR}{R} F^{ab} + {\hat D}_jF^{aj} + \mu^2 A^a =0 \,, \label{proca:comp:a} \\ &{}& D_bF^{ib} + n\frac{D_bR}{R} F^{ib} + {\hat D}_j F^{ij} + \mu^2 A^i = 0 \,, \label{proca:comp:i} \eena and the Lorenz condition (\ref{condi:lorenz}) as \ben D_aA^a + n\frac{D_aR}{R}A^a + {\hat D}_iA^i =0 \,. \label{condi:lorenz:ai} \een \medskip Now we discuss a decomposition of $A_\mu$. Note first that any dual vector field $v_i$ on ${\cal K}^n$ can be expressed as $$v_i= V_i + {\hat D}_i S \,,$$ where $\hat D^i V_i=0$; here $V_i$ and $S$ are called, respectively, the {\em vector-} and {\em scalar-}type components of $v_i$. Note that the vector-type components are sometimes called the {\em axial-} or {\em odd-}type components, and the scalar-type the {\em polar-} or {\em even-}type components.
In the same manner, we can decompose any dual vector field $A_\mu$ in the background (\ref{def:background}) into vector-type and scalar-type parts according to their tensorial behavior on ${\cal K}^n$. Namely, we can express $A_\mu$ as \bena A_\mu dx^\mu = A^S_a dy^a + {\hat D}_i A^S dz^i + A^V_i dz^i \,, \quad {\hat D}^iA^V_i =0 \,. \eena We refer to $A^V_i$ as the vector-type and $A^S_a, \; A^S$ as the scalar-type components. \medskip Next, let us introduce scalar harmonics ${\mS}_{{\bf k}_S}$ as \ben ({\hat \triangle} + {k}_S^2 ) {\mS}_{{\bf k}_S } = 0 \,, \quad \int_{\cal K} d\sigma_n {\mS}_{{\bf k}'_S} {\mS}_{{\bf k}_S} = \delta_{ {{\bf k}'_S } {{\bf k}_S } } \,, \een where ${\hat \triangle}:= \gamma^{ij}\hat D_i \hat D_j=\hat D^i \hat D_i$, and $d\sigma_n$ denotes the volume element on ${\cal K}^n$. Note that when ${\cal K}^n$ is the unit $n$-sphere, the eigenvalue is given by ${k}_S^2= l(l+n-1) \,, \; l=0,1,2,\dots$. Similarly we introduce vector harmonics ${\mV}_{{\bf k}_V i}$ on ${\cal K}^n$ as \ben ({\hat \triangle} + {k}_V^2 ) {\mV}_{{\bf k}_V i} = 0 \,, \quad {\hat D}^i {\mV}_{{\bf k}_V i} = 0\,, \quad \int_{\cal K} d\sigma_n {\mV}_{{\bf k}'_V}^j {\mV}_{{\bf k}_V j} = \delta_{ {{\bf k}'_V } {{\bf k}_V } } \,, \een where the eigenvalue, when ${\cal K}^n$ is the unit $n$-sphere, is given by ${k}_V^2= l(l+n-1) - 1\,, \; l=1,2, \dots$. The number of independent components of $ {\mV}_{{\bf k}_V i}$ is $n-1$, so the vector-type part is non-trivial only when $n \geqslant 2$.
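For $K=+1$ and $n=2$ (the unit 2-sphere), the scalar harmonics are the usual spherical harmonics with ${k}_S^2 = l(l+1)$, which can be verified symbolically (a sanity check of our own, not part of the text):

```python
import sympy as sp

th, ph = sp.symbols('theta phi')

def sphere_laplacian(Y):
    """Laplace-Beltrami operator on the unit 2-sphere in (theta, phi)."""
    return (sp.diff(sp.sin(th) * sp.diff(Y, th), th) / sp.sin(th)
            + sp.diff(Y, ph, 2) / sp.sin(th)**2)

# Check (Laplacian + l(l+1)) Y_l^m = 0 for a couple of explicit harmonics
residuals = []
for l, m in [(1, 0), (2, 1)]:
    Y = sp.Ynm(l, m, th, ph).expand(func=True)   # explicit form of Y_l^m
    residuals.append(sp.simplify(sphere_laplacian(Y) + l * (l + 1) * Y))
```

Each residual simplifies to zero, confirming the eigenvalue $l(l+1)$ quoted for the unit $n$-sphere with $n=2$.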
\medskip We can expand the vector- and scalar-type components of $A_\mu$ in terms of the above scalar and vector harmonics: for the vector-type, \ben A^V_i = \sum_{{\bf k}_V} \phi_{{\bf k}_V} {\mV}_{{\bf k}_V i}\,, \een where $\phi_{{\bf k}_V}(y)$ is a function on ${\cal N}^m$, and for the scalar-type, \ben A^S_a = \sum_{{\bf k}_S} A_{{\bf k} a} {\mS}_{{\bf k}_S} \,, \quad A^S = \sum_{{\bf k}_S} A_{{\bf k} } {\mS}_{{\bf k}_S} \,, \een where $A_{{\bf k} a}(y)$ and $A_{{\bf k} }(y) $ are, respectively, vector and scalar fields on ${\cal N}^m$. Hereafter, we omit the indices ${\bf k}_S$, ${\bf k}_V$ for brevity. \medskip Now that we have separated the variables by introducing the scalar and vector harmonics, we can reduce the Proca equation (\ref{eq:proca}) to a set of equations for $\phi^V$ and for $(A_a, A)$ on ${\cal N}^m$. \subsubsection{Vector-type component of the Proca equation} The vector-type part consists of a single scalar function $\phi^V$ on ${\cal N}^m$, and the field strength is written as \ben F^{ab}=0 \,, \quad F^{ai} = \frac{1}{R^2} (D^a \phi^V) {\mV}^i \,, \quad F^{ij}= \frac{2}{R^4} \phi^V {\hat D}^{[i} {\mV}^{j]} \,. \een It then immediately follows that the projection (\ref{proca:comp:a}) onto ${\cal N}^m$ and the Lorenz condition (\ref{condi:lorenz:ai}) hold trivially. The only non-trivial equation comes from (\ref{proca:comp:i}), which is written explicitly as \ben {}^m \Box \phi^V + (n-2) \frac{D^aR}{R}D_a \phi^V - \left[ \frac{K(n-1) + k_V^2}{R^2}+\mu^2 \right]\phi^V = 0 \,, \label{eq:master:vect:m} \een where ${\hat D}_j {\hat D}^{i}{\mV}^{j}=K(n-1){\mV}^i$ has been used, and where here and hereafter ${}^m \Box := D^aD_a$ is the d'Alembertian on the $m$-dimensional spacetime ${\cal N}^m$. This is the master equation for the vector-type component of the massive vector field $A_\mu$.
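As a simple illustration, in four dimensions with $m=n=2$, $K=1$ and $R=r$ (a spherically symmetric black hole background), the first-derivative term drops out since $n-2=0$, and using $k_V^2=l(l+1)-1$ the master equation (\ref{eq:master:vect:m}) reduces to \ben {}^2 \Box \phi^V - \left[ \frac{l(l+1)}{r^2}+\mu^2 \right]\phi^V = 0 \,, \een i.e., a two-dimensional wave equation with the familiar centrifugal-plus-mass potential term.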
\subsubsection{Scalar-type component of the Proca equation} For the scalar-type components, the field strength is given by \bena F^{ab} = 2D^{[a} B^{b]} {\mS} \,, \quad F^{ai} = - \frac{1}{R^2}B^a {\hat D}^i{\mS} \,, \quad F^{ij} = 0\,, \eena where we have introduced \ben B^a := A^a -D^aA \,. \een Then, the projections (\ref{proca:comp:a}) on ${\cal N}^m$ and (\ref{proca:comp:i}) on ${\cal K}^n$, and the Lorenz condition (\ref{condi:lorenz}), reduce, respectively, to \bena &{}& 2D_bD^{[a}B^{b]} + 2n\frac{D_bR}{R}D^{[a}B^{b]}+ \left(\frac{k_S^2}{R^2}+\mu^2 \right)B^a + \mu^2 D^a A =0 \,, \label{eq:scal:a} \\ &{}& D_bB^b + (n-2)\frac{D_bR}{R}B^b + \mu^2 A = 0 \,, \label{eq:scal:i} \\ &{}& D_bB^b + n\frac{D_bR}{R}B^b + {}^m \Box A + n\frac{D_bR}{R} D^bA - \frac{k_S^2}{R^2}A = 0 \,. \label{eq:scal:lorenz} \eena Acting with $D^a$ on (\ref{eq:scal:i}) and then combining with (\ref{eq:scal:a}), we can obtain an equation for $B^a$ alone: \ben {}^m \Box B^a - {}^m {\cal R}^a{}_b B^b + \frac{D_bR}{R}\left(nD^bB^a -2D^aB^b \right) + (n-2)D^a\left(\frac{D_bR}{R}\right) B^b - \left(\frac{k_S^2}{R^2}+\mu^2 \right) B^a = 0 \,, \label{eq:Ba} \een where ${}^m {\cal R}_{ab}$ is the Ricci tensor on ${\cal N}^m$, while combining (\ref{eq:scal:i}) and (\ref{eq:scal:lorenz}) we have \ben {}^m \Box A + n\frac{D_cR}{R}D^cA - \left(\frac{k_S^2}{R^2}+\mu^2 \right) A + 2\frac{D_bR}{R}B^b =0 \,. \label{eq:A} \een Due to the last term in (\ref{eq:A}), the scalar variable $A$ is coupled to $B_a$. Inspecting eqs.~(\ref{eq:Ba}) and (\ref{eq:A}), we can expect to obtain a set of decoupled equations when $R=const.$, as is clearly the case for $A$ in (\ref{eq:A}). In the next section, we shall show that this is also the case for $B_a$: one can in fact derive from (\ref{eq:Ba}) a single master equation for a single component of $B_a$ when considering, as our background geometry, the near-horizon geometry of (near-)extremal black holes.
\medskip Note that the scalar-type components $(B_a, A)$ together with the Lorenz condition describe $m$ dynamical degrees of freedom, while the vector-type components (though including only a single scalar field $\phi^V$) describe $n-1$ dynamical degrees of freedom, as the vector harmonics ${\mV}_i$ themselves have $n-1$ independent components. Thus, in total, $m+n-1=D-1$ degrees of freedom for the massive vector field can be expressed by the above variables, as should be the case. \subsection{The massless vector (Maxwell) field in the warped product background} Before going further, we show that for the massless vector field case, one can in fact obtain a single master equation also for the scalar-type component. Let us consider the case $m=2$. When the mass vanishes, $\mu^2=0$, eq.~(\ref{eq:scal:i}) reduces to \ben D_b \left( R^{n-2} B^b \right) =0 \,. \een This implies that there exists a scalar field $\phi^S$ on ${\cal N}^2$ such that \ben D_a \phi^S = \epsilon_{ab} R^{n-2} B^b \,, \een where $\epsilon_{ab}$ denotes the natural volume element on $({\cal N}^2, {}^2g_{ab})$. Since $\mu^2=0$, gauge invariance is recovered and $\phi^S$ admits a gauge freedom. Now, as $\epsilon^{ca}\epsilon_{ab}= \delta^c{}_b$, it follows that \ben F^{ai} = - \frac{1}{R^n} \epsilon^{ac}(D_{c} \phi^S) {\hat D}^i {\mS} \,. \een That $F^{ai}$ itself is gauge-invariant implies that the gauge freedom of $\phi^S$ is restricted to the replacement \ben \phi^S \rightarrow \phi^S + const. \,. \label{gauge:res} \een In terms of $\phi^S$, eq.~(\ref{proca:comp:a}) is expressed as \ben \epsilon^{ab}D_b \left[R^n D^c\left(\frac{D_c \phi^S}{R^{n-2}}\right)-k_S^2 \phi^S \right]=0 \,. \een Therefore we have \ben R^n D^c\left(\frac{D_c \phi^S}{R^{n-2}}\right)-k_S^2 \phi^S = c \,, \een with $c$ being an arbitrary constant.
We can always absorb this integration constant $c$ into $\phi^S$ by using the remaining gauge freedom (\ref{gauge:res}) and thus obtain \ben R^{n-2} D^c\left(\frac{D_c \phi^S}{R^{n-2}}\right) - \frac{k_S^2}{R^2} \phi^S = 0 \,. \label{eq:master:maxwell} \een This is the master equation for the scalar-type component of the Maxwell field, which is responsible for only a single polarization degree of freedom. Note that the master equation for the vector-type component of the Maxwell field, given by eq.~(\ref{eq:master:vect:m}) with $m=2$ and vanishing mass $\mu=0$, describes $n-1$ degrees of freedom, as the vector harmonics ${\mV}_i$ have $n-1$ independent components. Thus, in total, all $n=D-2$ independent degrees of freedom for the Maxwell field in $D=2+n$ dimensions can be expressed by the two master variables $\phi^V$ and $\phi^S$ together with the vector and scalar harmonics $\mV_i$ and $\mS$. \medskip Note also that all the results obtained above hold in a fairly generic class of background spacetimes, as long as they have the warped product structure given by eq.~(\ref{def:background}). In particular, the background geometry used so far is neither required to be a solution to the Einstein equations with any specific energy-momentum tensor, nor to possess any symmetry. \section{Extremal and near-extremal black holes and their near-horizon geometries} \label{sect:Extremal} In this section, we discuss the Proca equation when our background spacetime describes extremal or near-extremal black holes. \medskip From now on we assume $m=2$. Then the metric (\ref{def:background}) includes the standard solutions to the Einstein-Maxwell-$\Lambda$ system, with $\Lambda$ being a cosmological constant, when $y^a=(t,r)$, $R(y)=r$ and \bena {}^{2}g_{ab}dy^a dy^b = -F(r)dt^2 + \frac{ dr^2}{F(r)} \,, \quad F(r):= K - \frac{2M}{r^{n-1}} + \frac{Q^2}{r^{2(n-1)}}- \frac{2\Lambda}{n(n+1)} r^2 \,, \label{def:metric:2} \eena where $M$ and $Q$ are, respectively, the mass and charge parameters.
The black hole horizon is located at $r=r_+$, for which $F(r_+)=0$. In particular, the above metric allows for two (or more) horizons and then possesses a limit wherein the horizons become degenerate. Such a black hole is called extremal. The most well-known case is the Reissner-Nordstrom metric in four dimensions, for which $n=2$, $K=1$, $\Lambda =0$, so that \ben F(r) = \frac{(r-r_+)(r-r_-)}{r^2} \,, \quad r_\pm := M \pm \sqrt{M^2-Q^2} \,. \een The extremal limit is the case $r_+=r_-=M=|Q|$. Even for the neutral case (no electric charge, $Q=0$), (\ref{def:metric:2}) admits an extremal black hole when $K=-1$, $\Lambda<0$, and $M<0$. \medskip We consider the case in which there are two horizons at $r_+$ and $r_-$. Since we are concerned with the neighborhood of the black hole (outer) horizon, instead of the Schwarzschild type coordinates used in (\ref{def:metric:2}), let us take the ingoing Eddington-Finkelstein type coordinates, which cover the black hole horizon and in which our background metric (\ref{def:background}) takes the form: \bena ds^2 &=& - F(r) dv^2 + 2dv dr + R(r)^2 \gamma_{ij}dz^idz^j \,, \non \\ &{}& \non \\ &{}& F(r) = (r-r_+)(r-r_-)g(r) \,, \label{def:metric:ef} \eena where $g(r)>0$ is an everywhere regular function, except at a true curvature singularity, and $v$ is the advanced time-coordinate. We introduce the extremality parameter as \ben \sigma := \frac{r_+ - r_-}{r_+} \,. \label{def:param:extremal} \een When $\sigma \ll 1$, we refer to the metric (\ref{def:metric:ef}) as {\em near-extremal}, and when $\sigma=0$, as {\em extremal}. For convenience, we also introduce the new radial coordinate \ben x:= \frac{r-r_+}{r_+} \,, \een so that the black hole event horizon is located at $x=0$. \medskip It is known that any extremal black hole admits a near-horizon limit.
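As a quick numerical illustration of the horizon structure just described (the mass and charge values below are arbitrary samples, not taken from the text), one can check that $r_{\pm}=M\pm\sqrt{M^{2}-Q^{2}}$ are indeed roots of the four-dimensional Reissner-Nordstrom lapse and that the extremality parameter $\sigma$ tends to zero in the limit $|Q|\rightarrow M$:

```python
import math

def F(r, M, Q):
    """Reissner-Nordstrom lapse for n = 2, K = 1, Lambda = 0."""
    return 1.0 - 2.0*M/r + (Q/r)**2

def horizons(M, Q):
    disc = math.sqrt(M*M - Q*Q)
    return M + disc, M - disc          # r_+, r_-

M, Q = 1.0, 0.8                        # sample values (illustrative only)
rp, rm = horizons(M, Q)
assert abs(F(rp, M, Q)) < 1e-12 and abs(F(rm, M, Q)) < 1e-12

# Extremality parameter sigma = (r_+ - r_-)/r_+ tends to 0 as |Q| -> M.
sigma = (rp - rm)/rp
assert 0.0 < sigma < 1.0
rp_e, rm_e = horizons(M, 0.999999*M)
assert (rp_e - rm_e)/rp_e < 1e-2
```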
Let us take the following scaling transformation: \ben x \rightarrow \lambda x \,, \quad v \rightarrow \frac{r_+}{\lambda} v \,, \quad R \rightarrow r_+ R \,, \quad \sigma \rightarrow \lambda \sigma \,, \label{def:scaling} \een with the scaling parameter $\lambda > 0$. Then the metric (\ref{def:metric:ef}) is written as \bena \frac{ds^2}{r_+^2}&=& - F(\lambda x)dv^2 + 2dvdx + R(\lambda x)^2 \gamma_{ij}dz^idz^j \,, \non \\ &{}& \qquad F(\lambda x) = x(x+ \sigma) g(\lambda x) \,. \label{def:metric:lambda} \eena The $\lambda \rightarrow 0$ limit of the metric is the {\em near-horizon geometry}, in which, in particular, $R$ becomes a constant and the isometry is in general enhanced to $O(2,1)$, as shown in \cite{KLR07}. Note that at this point, the scaling transformation (\ref{def:scaling}) is simply a change of coordinates together with a change of parameters, and the above metric (\ref{def:metric:lambda}) satisfies the same Einstein equations that the original metric satisfies. This is, however, not necessarily the case when we expand the above metric in powers of $\lambda$ and truncate at some order. \section{Expanding massive vector field and (near-) extremal black hole geometry} \label{sect:Expanding} Now we shall develop our perturbation method. We view the scaling parameter $\lambda$ as the small perturbation parameter and consider a one-parameter family of the massive vector field $A_\mu(\lambda)$. We expand it in a power series in $\lambda$ about $\lambda=0$, as $$ A_\mu(\lambda)=A^{(0)}_\mu + \lambda A^{(1)}_\mu + \lambda^2 A^{(2)}_\mu + \cdots \,. $$ We also consider our background (near-)extremal black hole metric as a one-parameter family of metrics, expanded as $$ g_{\mu \nu}(\lambda)=g^{(0)}_{\mu \nu} + \lambda g^{(1)}_{\mu \nu} + \lambda^2g^{(2)}_{\mu \nu}+\cdots \,, $$ with the leading metric $g_{\mu \nu}^{(0)}=g_{\mu \nu}|_{\lambda =0}$ being the corresponding near-horizon geometry.
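The kind of truncated power expansion employed here can be cross-checked with a small computer-algebra sketch (purely illustrative). Taking the Reissner-Nordstrom warp factor $F(\lambda x)=x(x+\sigma)/(1+\lambda x)^{2}$ of eq.~(\ref{eq:RN_F_R}) as a concrete example, each $\lambda^{n}$ coefficient of $F$ follows the closed-form pattern $(-1)^{n}(n+1)x^{n+1}(x+\sigma)$, the pattern that reappears in the expansion operators $L_V^{(n)}$ of eq.~(\ref{eq:phi_op}):

```python
import sympy as sp

x, lam, sigma = sp.symbols('x lamda sigma', positive=True)

# Reissner-Nordstrom warp factor F(lambda x) = x(x + sigma)/(1 + lambda x)^2,
# expanded in powers of lambda and truncated.
F = x*(x + sigma)/(1 + lam*x)**2
series = sp.expand(sp.series(F, lam, 0, 5).removeO())

# Each lambda^n coefficient matches (-1)^n (n+1) x^{n+1} (x + sigma).
for n in range(5):
    coeff = series.coeff(lam, n)
    expected = sp.expand((-1)**n*(n + 1)*x**(n + 1)*(x + sigma))
    assert sp.simplify(coeff - expected) == 0
```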
Note however that the background geometry is fixed from the beginning, and is in particular not required to solve the Einstein equations at each order. Then, by doubly expanding the field variables and the background metric with respect to $\lambda$ and exploiting the enhanced symmetry of the leading-order near-horizon geometry, we will examine the Proca equation at each order of $\lambda$. We will perform this analysis for the vector- and scalar-type components separately. \subsection{Proca equations in (near-)extremal Reissner-Nordstrom black hole} For concreteness, we first consider the standard four-dimensional Reissner-Nordstrom black hole background case. A generalisation of our formulas to more generic cases is given afterwards. The metric of the Reissner-Nordstrom black hole is obtained by setting $m=n=2$, $K=1$ and applying the scaling (\ref{def:scaling}): \ben \frac{ds^{2}}{r_+^2}= -F(x)dv^{2}+2dvdx+ {R^{2}} d\Omega^{2} \,, \label{eq:(vx)metric} \een where \ben F(x) = \frac{x(x+\sigma)}{(1+\lambda x)^2} \,, \quad R= (1+\lambda x) \,. \label{eq:RN_F_R} \een \subsubsection{Vector-type component of the Proca equation} We begin with the vector-type component. In the Reissner-Nordstrom background (\ref{eq:(vx)metric}), the Proca equation for the vector-type, (\ref{eq:master:vect:m}), reduces to \ben \left[F\partial_{x}^{2}+ (\partial_x F) \partial_{x}+2\partial_{v}\partial_{x}-\left\{\frac{k^{2}_{V}+1}{(1+\lambda x)^{2}}+\mu^{2} r_{+}^{2}\right\}\right]\phi^V=0 \,. \label{eq:Proca_vector2} \een \medskip Let us expand the master scalar variable $\phi^V$ in a power series in $\lambda$ as \ben \phi^V =\sum^{\infty}_{n=0}\lambda^{n}\Phi_V^{(n)} \,.
\label{eq:phi_expand} \een Plugging this into (\ref{eq:Proca_vector2}), as well as expanding $F$ and the other $\lambda$-dependent coefficients in (\ref{eq:Proca_vector2}), we obtain \ben \sum^{\infty}_{n=0}\lambda^{n}\left[\sum^{n}_{m=0}L^{(m)}_{V} \Phi_V^{(n-m)}\right]=0 \,, \label{eq:Proca_vec_expand} \een where we have introduced the following series of differential operators on ${\cal N}^2$, \bena L^{(n)}_{V} &:=& (-1)^{n} \Big[ (n+1) x^{n+1}(x+\sigma)\partial_{x}^{2} +(n+1)x^{n} [(n+2)x+(n+1)\sigma] \partial_{x} \non \\ &{}& \qquad \qquad +2\delta_{n0}\partial_{v}\partial_{x} -(n+1)(k^{2}_{V}+1)x^{n}-\delta_{n0}\mu^{2}r_{+}^{2} \Big] \,, \label{eq:phi_op} \eena where here and hereafter $\delta_{ij}$ denotes the Kronecker delta. Therefore, at each order, \ben \sum^{n}_{m=0}L^{(m)}_{V}\Phi_V^{(n-m)}=0 \,. \label{eq:Proca_vec_expand2} \een Namely, we have: \bena L^{(0)}_{V}\Phi_V^{(0)}&=&0 \,, \label{eq:v:0} \\ L^{(0)}_{V}\Phi_V^{(1)}&=&-L^{(1)}_{V}\Phi_V^{(0)} \,, \label{eq:v:1} \\ L^{(0)}_{V}\Phi_V^{(2)}&=&-L^{(1)}_{V}\Phi_V^{(1)}-L^{(2)}_{V}\Phi_V^{(0)} \,, \label{eq:v:2} \\ &\vdots&\non \\ L^{(0)}_{V}\Phi_V^{(n)}&=&-\sum^{n}_{m=1}L^{(m)}_{V} \Phi_V^{(n-m)} \,. \label{eq:v:n} \eena The leading-order master equation~(\ref{eq:v:0}) is homogeneous, while each sub-leading equation is inhomogeneous, with a source term that consists only of the lower-order variables. Thus, once the leading solution $\Phi_V^{(0)}$ has been obtained, one can successively obtain the solutions $\Phi_V^{(n)}$ at all orders. \medskip Here we give the general solution to the leading-order master equation~(\ref{eq:v:0}): $L_V^{(0)} \Phi_V^{(0)}=0$. With the time-dependence ansatz $\Phi \propto e^{-i\omega v}$, the time-derivative $\partial_v$ is replaced with $-i\omega$ and the leading-order operator becomes the second-order ordinary differential operator \ben L_V^{(0)} = x(x+\sigma) \frac{d^2}{dx^2} + (2x+ \sigma -2 i \omega) \frac{d}{dx} - (k_V^2 + 1 + \mu^2r_+^2) \,.
\een Note that the time-coordinate $v$ is scale-transformed as $v \rightarrow (r_+ /\lambda) v $ in eq.~(\ref{def:scaling}), and accordingly the frequency $\omega$ is also scale-transformed as $\omega \rightarrow (\lambda/r_+) \omega$. The general solution is then, for the near-extremal $\sigma \neq 0$ case: \bena \Phi_V^{(0)} &=& C_1\cdot {}_2F_1\left( - \nu + \frac{1}{2}, \nu+\frac{1}{2}, 1+ 2i \frac{\omega}{\sigma}; 1 + \frac{x}{\sigma} \right) \non \\ &{}& + C_2 \cdot (x+ \sigma)^{-2i \omega/\sigma} {}_2F_1\left( - \nu + \frac{1}{2}- 2i \frac{\omega}{\sigma}, \nu+\frac{1}{2}- 2i \frac{\omega}{\sigma}, 1- 2i \frac{\omega}{\sigma}; 1 + \frac{x}{\sigma} \right) \,, \eena where ${}_2F_1$ denotes the hypergeometric function and \bena \nu := \sqrt{k_V^2+1 +\mu^2r_+^2 + \frac{1}{4}} \,, \eena and where $C_1$ and $C_2$ are arbitrary constants. As for the extremal $\sigma=0$ case: \bena \Phi_V^{(0)} &=& \frac{1}{\sqrt{x}}e^{-i\omega/x} \left[ C_1\cdot I_\nu \left( i \omega/x \right) + C_2\cdot K_\nu \left( i \omega/x \right) \right] \,, \eena where $I_\nu, \:K_\nu$ denote the modified Bessel functions. Then, once boundary conditions of interest are determined, one can construct the Green's function $G_V^{(0)} = L_V^{(0)}{}^{-1}$ by a standard argument, and obtain higher-order solutions, which are formally expressed as \ben \Phi_V^{(n)} = -G_V^{(0)} \sum^{n}_{m=1}L^{(m)}_{V} \Phi_V^{(n-m)} \,. \een \subsubsection{Scalar-type component of the Proca equation} Let us turn to the scalar-type components of the Proca equation, (\ref{eq:Ba}) and (\ref{eq:A}), in the four-dimensional Reissner-Nordstrom background case.
Setting $m=n=2, K=1, \Lambda=0$, we have \ben D_{c}D^{c}A+2\left(\frac{D_{a}R}{R}\right)(D^{a}A)-\left(\frac{k^{2}_{S}}{R^{2}}+\mu^{2}\right)A+2\left(\frac{D_{a}R}{R}\right)B^{a}=0 \,, \label{eq:Proca_scalar1} \een \ben -D_{c}D^{c}B_{a}+{}^{2}{\cal R}_{a}{}^{c}B_{c}+4\left(\frac{D^{b}R}{R}\right)D_{[a}B_{b]}+\left(\frac{k^{2}_{S}}{R^{2}}+\mu^{2}\right)B_{a}=0 \,, \label{eq:Proca_scalar2} \een where ${}^2{\cal R}^a{}_b$ is the Ricci tensor on ${\cal N}^2$, given in terms of the present coordinates $y^a=(v,x)$ by \ben {}^{2}{\cal R}_{a}{}^{b}=-\frac{(\partial_{x}^{2}F)}{2r_{+}^{2}}\delta_{a}{}^{b} \,. \een In the coordinate system of (\ref{eq:(vx)metric}), the above equations, (\ref{eq:Proca_scalar1}) and (\ref{eq:Proca_scalar2}), are explicitly written as coupled equations for the three components $(A,B_x,B_v)$, \bena && \left[ F\partial_{x}^{2}+(\partial_{x}F)\partial_{x} +2\partial_{v}\partial_{x} + \frac{2\lambda}{1+\lambda x}(F\partial_{x}+\partial_{v}) -\left\{\frac{k^{2}_{S}}{(1+\lambda x)^{2}}+\mu^{2} r_{+}^{2}\right\} \right]A \non \\ && \hspace{8cm} +\frac{2\lambda}{1+\lambda x} \left( FB_{x}+B_{v} \right)=0 \,, \label{eq:Proca_scalar1_ver2} \eena \bena && \left[ F\partial_{x}^{2}+2(\partial_{x}F)\partial_{x} +(\partial_{x}^{2}F) +2\partial_{v}\partial_{x}+\frac{2\lambda}{1+\lambda x}\partial_{v} -\left\{ \frac{k^{2}_{S}}{(1+\lambda x)^{2}}+\mu^{2} r_{+}^{2} \right\} \right]B_{x} \non \\ && \hspace{10cm} -\frac{2\lambda}{1+\lambda x}\partial_{x}B_{v}=0 \,, \label{eq:Proca_scalar2_ver2-1} \eena \bena && \left[ F\partial_{x}^{2}+2\partial_{v}\partial_{x} + \frac{2\lambda}{1+\lambda x}F\partial_{x} - \left\{ \frac{k^{2}_{S}}{(1+\lambda x)^{2}} + \mu^{2} r_{+}^{2} \right\} \right]B_{v} \non \\ &{}& \hspace{7cm} +\left[ (\partial_{x}F)\partial_{v} -\frac{2\lambda}{1+\lambda x}F\partial_{v} \right]B_{x}=0 \,.
\label{eq:Proca_scalar2_ver2-2} \eena Now we expand the variables $(A,\: B_x, \:B_v)$ in powers of $\lambda$ as \ben A=\sum_{n=0}^{\infty} \lambda^{n}\Phi_{S1}^{(n)} \,, \label{eq:A_expand} \een \ben B_{x}=\sum_{n=0}^{\infty} \lambda^{n}\Phi_{S2}^{(n)} \,, \label{eq:Bx_expand} \een \ben B_{v}=\sum_{n=0}^{\infty} \lambda^{n}\Phi_{S3}^{(n)} \,. \label{eq:Bv_expand} \een We also expand each term appearing in eqs.~(\ref{eq:Proca_scalar1_ver2}), (\ref{eq:Proca_scalar2_ver2-1}), and (\ref{eq:Proca_scalar2_ver2-2}) in powers of $\lambda$. In order to express the equations (\ref{eq:Proca_scalar1_ver2}), (\ref{eq:Proca_scalar2_ver2-1}), and (\ref{eq:Proca_scalar2_ver2-2}) at each order, it is convenient to introduce the following set of differential operators: \bena L^{(n)}_{\alpha 1} &:=& (-1)^{n}[(n+1) x^{n+1}(x+\sigma)\partial_{x}^{2}+\{(n+1)x^{n}(2x+\sigma) +2\delta_{n0}\partial_{v}\}\partial_{x} \non \\ &{}& \qquad \quad +2(\delta_{n0}-1)x^{n-1}\partial_{v}-(n+1)k^{2}_{S}x^{n} -\delta_{n0}\mu^{2}r_{+}^{2}] \,, \label{eq:aA_op} \\ &{}& \non \\ L^{(n)}_{\alpha 2} &:=& (-1)^{n+1}n(n+1) x^{n}(x+\sigma) \,, \label{eq:ax_op} \\ &{}& \non \\ L^{(n)}_{\alpha 3} &:=& 2(-1)^{n}(\delta_{n0}-1)x^{n-1} \,, \label{eq:av_op} \\ &{}& \non \\ L^{(n)}_{\beta 2} &:=& (-1)^{n}[(n+1) x^{n+1}(x+\sigma)\partial_{x}^{2} \non \\ &{}& \qquad \quad +2(n+1)x^{n}\{(n+2)x+(n+1)\sigma\}\partial_{x} \non \\ &{}& \qquad \quad +2\delta_{n0}\partial_{v}\partial_{x}+(n+1)^{2}x^{n-1}\{(n+2)x+n\sigma\} \non \\ &{}& \qquad \quad +2(\delta_{n0}-1)x^{n-1}\partial_{v}-(n+1)k^{2}_{S}x^{n} -\delta_{n0}\mu^{2}r_{+}^{2}] \,, \label{eq:bx_op} \\ &{}& \non \\ L^{(n)}_{\beta 3} &:=& 2(-1)^{n}(1-\delta_{n0})x^{n-1}\partial_{x} \,, \label{eq:bv_op} \\ &{}& \non \\ L^{(n)}_{\gamma 2} &:=& (-1)^{n}(n+1)x^{n}\{2(n+1)x+(2n+1)\sigma\}\partial_{v} \,, \label{eq:cx_op} \\ &{}& \non \\ L^{(n)}_{\gamma 3} &:=& (-1)^{n}[(n+1) x^{n+1}(x+\sigma)\partial_{x}^{2}-n(n+1)x^{n}(x+\sigma)\partial_{x} \non \\ &{}& \qquad \quad
+2\delta_{n0}\partial_{v}\partial_{x}-(n+1)k^{2}_{S}x^{n} -\delta_{n0}\mu^{2}r_{+}^{2}] \,. \label{eq:cv_op} \eena In terms of these operators, eqs.~(\ref{eq:Proca_scalar1_ver2}), (\ref{eq:Proca_scalar2_ver2-1}), and (\ref{eq:Proca_scalar2_ver2-2}), are expressed as \ben \sum^{\infty}_{n=0}\lambda^{n}\left[\sum^{n}_{m=0}\left\{L^{(m)}_{\alpha 1}\Phi_{S1}^{(n-m)}+L^{(m)}_{\alpha 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\alpha 3}\Phi_{S3}^{(n-m)}\right\}\right]=0 \,, \label{eq:Proca_scalar1_ver3} \een \ben \sum^{\infty}_{n=0}\lambda^{n}\left[\sum^{n}_{m=0}\left\{L^{(m)}_{\beta 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\beta 3}\Phi_{S3}^{(n-m)}\right\}\right]=0 \,, ~~~~~~~~~~~~~~~~~~~ \label{eq:Proca_scalar2_ver3-1} \een \ben \sum^{\infty}_{n=0}\lambda^{n}\left[\sum^{n}_{m=0}\left\{L^{(m)}_{\gamma 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\gamma 3}\Phi_{S3}^{(n-m)}\right\}\right]=0 \,. ~~~~~~~~~~~~~~~~~~~ \label{eq:Proca_scalar2_ver3-2} \een Therefore we have, at each order, the following equations: \ben \sum^{n}_{m=0}\left\{L^{(m)}_{\alpha 1}\Phi_{S1}^{(n-m)}+L^{(m)}_{\alpha 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\alpha 3}\Phi_{S3}^{(n-m)}\right\}=0 \,, \label{eq:Proca_scalar1_ver4} \een \ben \sum^{n}_{m=0}\left\{L^{(m)}_{\beta 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\beta 3}\Phi_{S3}^{(n-m)}\right\}=0 \,, ~~~~~~~~~~~~~~~~~~~ \label{eq:Proca_scalar2_ver4-1} \een \ben \sum^{n}_{m=0}\left\{L^{(m)}_{\gamma 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\gamma 3}\Phi_{S3}^{(n-m)}\right\}=0 \,. ~~~~~~~~~~~~~~~~~~~ \label{eq:Proca_scalar2_ver4-2} \een These equations can be rewritten in the following manner: \begin{itemize} \item[(i)] At the leading order $\lambda =0$, the geometry is the near-horizon geometry, and we find \ben L^{(0)}_{\alpha 2}=L^{(0)}_{\alpha 3}=L^{(0)}_{\beta 3}=0 \,. 
\een Therefore we have \ben \left( \begin{tabular}{ccc} $L^{(0)}_{\alpha 1}$&$0$&$0$\\ $0$&$L^{(0)}_{\beta 2}$&$0$\\ $0$&$L^{(0)}_{\gamma 2}$&$L^{(0)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(0)}$\\ $\Phi_{S2}^{(0)}$\\ $\Phi_{S3}^{(0)}$\\ \end{tabular} \right) = \left( \begin{tabular}{c} $0$\\ $0$\\ $0$\\ \end{tabular} \right) \,. \label{eq:Proca_scalar_0} \een This shows that the first two equations are mutually decoupled, homogeneous master equations for the two master variables, $\Phi_{S1}^{(0)}, \: \Phi_{S2}^{(0)}$. These two variables describe two dynamical degrees of freedom, which the scalar-type components should be responsible for describing. (Recall that in four-dimensions, the massive vector field has in total three dynamical degrees of freedom, one of which is expressed by the vector-type component.) By using the third equation, the remaining variable $\Phi_{S3}^{(0)}$ can be determined in terms of $\Phi_{S2}^{(0)}$. \item[(ii)] Next, at the first order of $\lambda$, we have \ben \left( \begin{tabular}{ccc} $L^{(0)}_{\alpha 1}$&$0$&$0$\\ $0$&$L^{(0)}_{\beta 2}$&$0$\\ $0$&$L^{(0)}_{\gamma 2}$&$L^{(0)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(1)}$\\ $\Phi_{S2}^{(1)}$\\ $\Phi_{S3}^{(1)}$\\ \end{tabular} \right) = - \left( \begin{tabular}{ccc} $L^{(1)}_{\alpha 1}$&$L^{(1)}_{\alpha 2}$&$L^{(1)}_{\alpha 3}$\\ $0$&$L^{(1)}_{\beta 2}$&$L^{(1)}_{\beta 3}$\\ $0$&$L^{(1)}_{\gamma 2}$&$L^{(1)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(0)}$\\ $\Phi_{S2}^{(0)}$\\ $\Phi_{S3}^{(0)}$\\ \end{tabular} \right) \,. \label{eq:Proca_scalar_1} \een The first two equations are mutually decoupled, inhomogeneous wave equations for the two scalar variables, $\Phi_{S1}^{(1)}, \: \Phi_{S2}^{(1)}$, and via the third equation, the remaining variable $\Phi_{S3}^{(1)}$ can be determined. 
The source terms of the inhomogeneous wave equations consist of the leading-order solutions $\Phi_{S1}^{(0)}, \: \Phi_{S2}^{(0)}, \: \Phi_{S3}^{(0)}$. \item[(iii)] At the second order, we find \bena \left( \begin{tabular}{ccc} $L^{(0)}_{\alpha 1}$&$0$&$0$\\ $0$&$L^{(0)}_{\beta 2}$&$0$\\ $0$&$L^{(0)}_{\gamma 2}$&$L^{(0)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(2)}$\\ $\Phi_{S2}^{(2)}$\\ $\Phi_{S3}^{(2)}$\\ \end{tabular} \right) &=&- \left( \begin{tabular}{ccc} $L^{(1)}_{\alpha 1}$&$L^{(1)}_{\alpha 2}$&$L^{(1)}_{\alpha 3}$\\ $0$&$L^{(1)}_{\beta 2}$&$L^{(1)}_{\beta 3}$\\ $0$&$L^{(1)}_{\gamma 2}$&$L^{(1)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(1)}$\\ $\Phi_{S2}^{(1)}$\\ $\Phi_{S3}^{(1)}$\\ \end{tabular} \right) \non \\ &{}& - \left( \begin{tabular}{ccc} $L^{(2)}_{\alpha 1}$&$L^{(2)}_{\alpha 2}$&$L^{(2)}_{\alpha 3}$\\ $0$&$L^{(2)}_{\beta 2}$&$L^{(2)}_{\beta 3}$\\ $0$&$L^{(2)}_{\gamma 2}$&$L^{(2)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(0)}$\\ $\Phi_{S2}^{(0)}$\\ $\Phi_{S3}^{(0)}$\\ \end{tabular} \right) \,. \label{eq:Proca_scalar_2} \eena The structure of these second-order equations is the same as that of the first-order equations: The first two equations are mutually decoupled inhomogeneous wave equations for the two scalar variables $\Phi_{S1}^{(2)}, \: \Phi_{S2}^{(2)}$, and the third equation is used to determine the remaining third variable $\Phi_{S3}^{(2)}$. The source terms for the inhomogeneous equations are given only in terms of the lower-order (i.e., the first- or leading-order) variables.
\item[(iv)] In general, at the $n$-th order, we have: \ben \left( \begin{tabular}{ccc} $L^{(0)}_{\alpha 1}$&$0$&$0$\\ $0$&$L^{(0)}_{\beta 2}$&$0$\\ $0$&$L^{(0)}_{\gamma 2}$&$L^{(0)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(n)}$\\ $\Phi_{S2}^{(n)}$\\ $\Phi_{S3}^{(n)}$\\ \end{tabular} \right) = - \sum_{m=1}^{n} \left[ \left( \begin{tabular}{ccc} $L^{(m)}_{\alpha 1}$&$L^{(m)}_{\alpha 2}$&$L^{(m)}_{\alpha 3}$\\ $0$&$L^{(m)}_{\beta 2}$&$L^{(m)}_{\beta 3}$\\ $0$&$L^{(m)}_{\gamma 2}$&$L^{(m)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(n-m)}$\\ $\Phi_{S2}^{(n-m)}$\\ $\Phi_{S3}^{(n-m)}$\\ \end{tabular} \right)\right] \,. \label{eq:Proca_scalar_n} \een \end{itemize} Thus, at any order, we find the same structure; we obtain two mutually decoupled master equations for two master variables, $\Phi_{S1}^{(n)}, \: \Phi_{S2}^{(n)}$, given respectively by the operators $L^{(0)}_{\alpha 1}, \; L^{(0)}_{\beta 2}$, and the third equation is used to determine the remaining third variable $\Phi_{S3}^{(n)}$. The source terms for the inhomogeneous master equations are given by the lower-order variables. This is the set of master equations for the scalar-type components. Once the leading solutions $\Phi_{S1}^{(0)},\: \Phi_{S2}^{(0)}$ have been obtained, one can successively solve the scalar-type components of the Proca equation at any order. \medskip As in the vector-type case, one can immediately find the general solutions to the leading-order master equations, $L_{\alpha 1}^{(0)} \Phi_{S1}^{(0)}=0$ and $L_{\beta 2}^{(0)} \Phi_{S2}^{(0)}=0$.
With the time-dependence ansatz $\Phi \propto e^{-i\omega v}$, the time-derivative $\partial_v$ is replaced with $-i\omega$ and the two leading operators are, respectively, expressed as \bena L_{\alpha 1}^{(0)} &=& x(x+\sigma) \frac{d^2}{dx^2} + \left( 2x+ \sigma -2 i \omega \right) \frac{d}{dx} - (k_S^2 + \mu^2r_+^2) \,, \\ L_{\beta 2}^{(0)} &=& x(x+\sigma) \frac{d^2}{dx^2} +2 (2x+ \sigma - i \omega) \frac{d}{dx} - (k_S^2 + \mu^2r_+^2 -2 ) \,. \eena Again note that according to the time-coordinate scaling $v \rightarrow (r_+ /\lambda) v $, the frequency is also scale-transformed: $\omega \rightarrow (\lambda/r_+) \omega$. The general solutions, $\Phi_{S1}^{(0)}$ and $\Phi_{S2}^{(0)}$, are given, for the near-extremal $\sigma \neq 0$ case, by \bena \Phi_{S1}^{(0)} &=& C_1\cdot {}_2F_1\left( - \nu + \frac{1}{2}, \nu+\frac{1}{2}, 1+ 2i \frac{\omega}{\sigma}; 1 + \frac{x}{\sigma} \right) \non \\ &{}& + C_2 \cdot (x+ \sigma)^{-2i \omega/\sigma} {}_2F_1\left( - \nu + \frac{1}{2}- 2i \frac{\omega}{\sigma}, \nu+\frac{1}{2}- 2i \frac{\omega}{\sigma}, 1- 2i \frac{\omega}{\sigma}; 1 + \frac{x}{\sigma} \right) \,, \\ \Phi_{S2}^{(0)} &=& C_1\cdot {}_2F_1\left( - \nu + \frac{3}{2}, \nu+\frac{3}{2}, 2 + 2i \frac{\omega}{\sigma}; 1 + \frac{x}{\sigma} \right) \non \\ &{}& + C_2 \cdot (x+ \sigma)^{-1-2i \omega/\sigma} {}_2F_1\left( - \nu + \frac{1}{2}- 2i \frac{\omega}{\sigma}, \nu+\frac{1}{2}- 2i \frac{\omega}{\sigma}, -2i \frac{\omega}{\sigma}; 1 + \frac{x}{\sigma} \right) \,, \eena where \bena \nu := \sqrt{k_S^2 +\mu^2r_+^2 + \frac{1}{4}} \,.
\eena As for the extremal $\sigma=0$ case: \bena \Phi_{S1}^{(0)} &=& \frac{1}{\sqrt{x}}e^{-i\omega/x} \left[ C_1\cdot I_\nu \left( i \omega/x \right) + C_2\cdot K_\nu \left( i \omega/x \right) \right] \,, \eena and \bena \Phi_{S2}^{(0)} &=& C_1\cdot {x}^{-5/2}e^{-i\omega/x}\cdot \left\{ \omega I_{\nu +1} \left(-i \omega/x \right) + i \left[(\nu + 1/2) x - i\omega \right] I_{\nu} \left(- i \omega/x \right) \right\} \non \\ &{+}& C_2\cdot {x}^{-5/2}e^{-i\omega/x}\cdot \left\{ - \omega K_{\nu +1} \left(-i \omega/x \right) + i \left[(\nu + 1/2) x - i\omega \right] K_{\nu} \left(-i \omega/x \right) \right\} \,. \eena With the choice of boundary conditions of interest, one can construct the Green's functions $G_{\alpha 1}^{(0)} = L_{\alpha 1}^{(0)}{}^{-1}$ and $G_{\beta 2}^{(0)} = L_{\beta 2}^{(0)}{}^{-1}$. Then, one can obtain the $n$-th order solutions, formally expressed as \bena \Phi_{S1}^{(n)} &=& -G_{\alpha 1}^{(0)} \sum^{n}_{m=1} \left\{ L^{(m)}_{\alpha 1}\Phi_{S1}^{(n-m)}+L^{(m)}_{\alpha 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\alpha 3}\Phi_{S3}^{(n-m)} \right\} \,, \\ \Phi_{S2}^{(n)} &=& -G_{\beta 2}^{(0)} \sum^{n}_{m=1} \left\{ L^{(m)}_{\beta 2}\Phi_{S2}^{(n-m)}+L^{(m)}_{\beta 3}\Phi_{S3}^{(n-m)} \right\} \,. \eena \subsection{Proca equations in general (near-)extremal black holes} In this subsection, we provide the expansion of the Proca equation for more generic extremal and near-extremal black holes in four dimensions. Our metric ansatz is given by eq.~(\ref{def:metric:lambda}). We expand the general metric functions, $F(\lambda x)=x(x+\sigma)g(\lambda x)$ and $R(\lambda x)$, as \ben g=\sum_{n=0}^{\infty}\lambda^{n}g^{(n)} \,, \quad R=\sum_{n=0}^{\infty}\lambda^{n}R^{(n)} \,, \label{exp:g:R} \een where $g^{(0)}, \: R^{(0)}$ are assumed to be some positive constants and $g^{(n)}, \:R^{(n)} \: (n \geqslant 1)$ can be any functions of $x$ that are regular (except at a singularity).
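Before proceeding to the general case, the leading-order solutions quoted above can be sanity-checked numerically. The sketch below (illustrative only; the parameter values are arbitrary samples, not taken from the text) verifies at high precision that the near-extremal hypergeometric expression for $\Phi_{S1}^{(0)}$ is annihilated by $L_{\alpha 1}^{(0)}$; the derivatives of ${}_2F_1$ are taken exactly via the contiguous relation, and the check is performed at sample points with $-\sigma<x<0$ so that the argument $1+x/\sigma$ stays inside $(0,1)$:

```python
import mpmath as mp

mp.mp.dps = 30
sigma, omega = mp.mpf('0.5'), mp.mpf('0.3')     # sample values (illustrative)
lam_eff = mp.mpf('1.7')                         # stands in for k_S^2 + mu^2 r_+^2
nu = mp.sqrt(lam_eff + mp.mpf('0.25'))
a, b = -nu + mp.mpf('0.5'), nu + mp.mpf('0.5')
c = 1 + 2j*omega/sigma

def d2F1(k, z):
    """k-th z-derivative of 2F1(a, b; c; z) via the contiguous relation."""
    return mp.rf(a, k)*mp.rf(b, k)/mp.rf(c, k)*mp.hyp2f1(a + k, b + k, c + k, z)

# L_alpha1^(0) Phi = x(x+sigma) Phi'' + (2x+sigma-2i*omega) Phi' - lam_eff*Phi
# applied to Phi_S1^(0) = 2F1(a, b; c; 1 + x/sigma) should vanish identically;
# note d/dx = (1/sigma) d/dz for z = 1 + x/sigma.
for x in [mp.mpf('-0.4'), mp.mpf('-0.3'), mp.mpf('-0.1')]:
    z = 1 + x/sigma
    res = (x*(x + sigma)*d2F1(2, z)/sigma**2
           + (2*x + sigma - 2j*omega)*d2F1(1, z)/sigma
           - lam_eff*d2F1(0, z))
    assert abs(res) < mp.mpf('1e-20')
```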
For later use, we define the following quantities: \bena \Delta^{(n)}&:=& \sum_{m=0}^{n}\sum_{l=0}^{m}R^{(n-m)}R^{(m-l)}g^{(l)} \,, \label{eq:Delta} \\ \Delta^{(n)}_{g}&:=& \sum_{m=0}^{n}\sum_{l=0}^{m}R^{(n-m)}R^{(m-l)}(\partial_{x}g^{(l)}) \,, \label{eq:Delta_g} \\ \Delta^{(n)}_{gg}&:=& \sum_{m=0}^{n}\sum_{l=0}^{m}R^{(n-m)}R^{(m-l)}(\partial_{x}^{2}g^{(l)}) \,, \label{eq:Delta_gg} \\ \Delta^{(n)}_{R}&:=& \sum_{m=0}^{n}\sum_{l=0}^{m}R^{(n-m)}(\partial_{x}R^{(m-l)})g^{(l)} \,, \label{eq:Delta_R} \\ \Delta^{(n)}_2 &:=& \sum_{m=0}^{n}R^{(n-m)}R^{(m)} \,, \label{eq:delta} \\ \Delta^{(n)}_{2R}&:=& \sum_{m=0}^{n}R^{(n-m)}(\partial_{x}R^{(m)}) \,. \label{eq:delta_R} \eena \subsubsection{Vector-type component of the Proca equation} The master equation for the vector-type component (\ref{eq:master:vect:m}) becomes in the present case \ben \left[ R^{2}F\partial_{x}^{2}+R^{2}(\partial_{x}F)\partial_{x} +2R^{2}\partial_{v}\partial_{x} -\left( k^{2}_{V}+1+\mu^{2} r_{+}^{2} R^{2} \right) \right]\phi^V=0 \,. \label{eq:general_Proca_vector} \een As we have done in the previous subsection, we expand this equation with respect to $\lambda$. Let us introduce the operator: \bena \mathcal{L}^{(n)}_{V} &:=& \Delta^{(n)} x(x+\sigma)\partial_{x}^{2} + \left\{ \Delta^{(n)}_{g} x (x+\sigma) + \Delta^{(n)}(2x+\sigma)+2\Delta_2^{(n)}\partial_{v} \right\} \partial_{x} \non \\ &{}& \quad - \delta_{n0}(k^{2}_{V}+1) - \Delta_2^{(n)}\mu^{2}r_{+}^{2} \,. \label{eq:general_Proca_vector_op} \eena In terms of this operator, we obtain \ben \sum^{\infty}_{n=0}\lambda^{n}\left[\sum^{n}_{m=0}\mathcal{L}^{(m)}_{V}\Phi_V^{(n-m)}\right]=0 \,. \label{eq:general_Proca_vec_expand} \een Thus, at $n$-th order, we have \ben \sum^{n}_{m=0}\mathcal{L}^{(m)}_{V}\Phi_V^{(n-m)}=0 \,. 
\label{eq:general_Proca_vec_expand2} \een Thus, we obtain the set of master equations for the vector-type component, with the operators $L^{(n)}_V$'s in eqs.~(\ref{eq:v:0}) -- (\ref{eq:v:n}) replaced with the ${\cal L}^{(n)}_{V}$'s defined in eq.~(\ref{eq:general_Proca_vector_op}). Once the leading master variable $\Phi_V^{(0)}$ is obtained, one can successively obtain the solution at any order $\Phi_V^{(n)}$. \subsubsection{Scalar-type component of the Proca equation} In our present general four-dimensional background, the scalar-type components of the Proca equation, (\ref{eq:Proca_scalar1}) and (\ref{eq:Proca_scalar2}), are rewritten as \bena && \left[ R^{2}F\partial_{x}^{2}+R^{2}(\partial_{x}F)\partial_{x} + 2R^{2}\partial_{v}\partial_{x} + 2R(\partial_{x}R)(F\partial_{x}+\partial_{v}) - \left( k^{2}_{S}+\mu^{2}r_{+}^{2} R^{2} \right) \right]A \non \\ &{}& \hspace{8cm} +2R(\partial_{x}R)(FB_{x}+B_{v})=0 \,, \label{eq:general_Proca_scalar1_ver2} \eena \bena && \left[ R^{2}F\partial_{x}^{2}+2R^{2}(\partial_{x}F)\partial_{x} +R^{2}(\partial_{x}^{2}F) +2R^{2}\partial_{v}\partial_{x}+2R(\partial_{x}R)\partial_{v}-\left(k^{2}_{S}+\mu^{2} r_{+}^{2}R^2\right) \right]B_{x} \non \\ &{}& \hspace{9cm} -2R(\partial_{x}R)\partial_{x}B_{v}=0 \,, \label{eq:general_Proca_scalar2_ver2-1} \eena \bena && \left[ R^{2}F\partial_{x}^{2}+2R^{2}\partial_{v}\partial_{x} +2R(\partial_{x}R)F\partial_{x} - \left(k^{2}_{S}+\mu^{2}r_{+}^{2} R^2\right) \right]B_{v} \non \\ &{}& \hspace{6cm} + \left[ R^{2}(\partial_{x}F) -2R(\partial_{x}R) F \right] \partial_{v}B_{x}=0 \,. \label{eq:general_Proca_scalar2_ver2-2} \eena As in the vector-type case, we expand these equations in powers of $\lambda$. The three variables $A,\: B_x,\: B_v$ are expanded as (\ref{eq:A_expand}), (\ref{eq:Bx_expand}), and (\ref{eq:Bv_expand}).
We define the set of operators: \bena \mathcal{L}^{(n)}_{\alpha 1} &:=& \Delta^{(n)} x(x+\sigma)\partial_{x}^{2} +\left\{ (\Delta^{(n)}_{g} + 2\Delta^{(n)}_{R})x(x+\sigma)+\Delta^{(n)}(2x+\sigma)+2\Delta_2^{(n)}\partial_{v} \right\} \partial_{x} \non \\ && \, +2\Delta^{(n)}_{2R}\partial_{v}-\delta_{n0}k^{2}_{S} - \Delta_2^{(n)}\mu^{2}r_{+}^{2} \,, \label{eq:general_aA_op} \\ && \non \\ \mathcal{L}^{(n)}_{\alpha 2} &:=& 2\Delta^{(n)}_{R}x(x+\sigma) \,, \label{eq:general_ax_op} \\ && \non \\ \mathcal{L}^{(n)}_{\alpha 3} &:=& 2 \Delta^{(n)}_{2R} \,, \label{eq:general_av_op} \\ && \non \\ \mathcal{L}^{(n)}_{\beta 2} &:=& \Delta^{(n)} x(x+\sigma)\partial_{x}^{2}+2\{\Delta^{(n)}_{g}x(x+\sigma)+\Delta^{(n)}(2x+\sigma)+\Delta_2^{(n)}\partial_{v}\}\partial_{x} \non \\ &{}& \, +\Delta^{(n)}_{gg}x(x+\sigma)+2\Delta^{(n)}_{g}(2x+\sigma)+2\Delta^{(n)}+2\Delta^{(n)}_{2R}\partial_{v}-\delta_{n0}k^{2}_{S} -\Delta_2^{(n)}\mu^{2}r_{+}^{2} \,, \label{eq:general_bx_op} \\ && \non \\ \mathcal{L}^{(n)}_{\beta 3} &:=& -2\Delta^{(n)}_{2R}\partial_{x} \,, \label{eq:general_bv_op} \\ && \non \\ \mathcal{L}^{(n)}_{\gamma 2} &:=& \{(\Delta^{(n)}_{g}-2\Delta^{(n)}_{R})x(x+\sigma)+\Delta^{(n)}(2x+\sigma)\}\partial_{v} \,, \label{eq:general_cx_op} \\ && \non \\ \mathcal{L}^{(n)}_{\gamma 3} &:=& \Delta^{(n)} x(x+\sigma)\partial_{x}^{2}+2\{\Delta^{(n)}_{R}x(x+\sigma) +\Delta_2^{(n)}\partial_{v}\}\partial_{x}-\delta_{n0}k^{2}_{S} - \Delta_2^{(n)}\mu^{2}r_{+}^{2} \,. \label{eq:general_cv_op} \eena The analysis essentially parallels that of the Reissner-Nordstrom case.
From eqs.~(\ref{eq:general_Proca_scalar1_ver2}), (\ref{eq:general_Proca_scalar2_ver2-1}), and (\ref{eq:general_Proca_scalar2_ver2-2}), we have, at $n$-th order, the same formulas as (\ref{eq:Proca_scalar1_ver4}), (\ref{eq:Proca_scalar2_ver4-1}), and (\ref{eq:Proca_scalar2_ver4-2}) with the $L^{(n)}$'s replaced with the ${\cal L}^{(n)}$'s given above: \ben \sum^{n}_{m=0}\left\{\mathcal{L}^{(m)}_{\alpha 1}\Phi_{S1}^{(n-m)}+\mathcal{L}^{(m)}_{\alpha 2}\Phi_{S2}^{(n-m)}+\mathcal{L}^{(m)}_{\alpha 3}\Phi_{S3}^{(n-m)}\right\}=0 \,, \label{eq:general_Proca_scalar1_ver4} \een \ben \sum^{n}_{m=0}\left\{\mathcal{L}^{(m)}_{\beta 2}\Phi_{S2}^{(n-m)}+\mathcal{L}^{(m)}_{\beta 3}\Phi_{S3}^{(n-m)}\right\}=0 \,, \label{eq:general_Proca_scalar2_ver4-1} \een \ben \sum^{n}_{m=0}\left\{\mathcal{L}^{(m)}_{\gamma 2}\Phi_{S2}^{(n-m)}+\mathcal{L}^{(m)}_{\gamma 3}\Phi_{S3}^{(n-m)}\right\}=0 \,. \label{eq:general_Proca_scalar2_ver4-2} \een More explicitly, \begin{itemize} \item[(i)] At the leading order, $\lambda=0$, the background is the corresponding near-horizon geometry, and we find \ben \mathcal{L}^{(0)}_{\alpha 2}=\mathcal{L}^{(0)}_{\alpha 3}=\mathcal{L}^{(0)}_{\beta 3}=0 \,. \een Therefore we have \ben \left( \begin{tabular}{ccc} $\mathcal{L}^{(0)}_{\alpha 1}$&$0$&$0$\\ $0$&$\mathcal{L}^{(0)}_{\beta 2}$&$0$\\ $0$&$\mathcal{L}^{(0)}_{\gamma 2}$&$\mathcal{L}^{(0)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(0)}$\\ $\Phi_{S2}^{(0)}$\\ $\Phi_{S3}^{(0)}$\\ \end{tabular} \right) = \left( \begin{tabular}{c} $0$\\ $0$\\ $0$\\ \end{tabular} \right) \,. \label{eq:general_Proca_scalar_0} \een This is the set of leading-order decoupled master equations for the two master variables $\Phi_{S1}^{(0)}, \: \Phi_{S2}^{(0)}$. The remaining variable $\Phi_{S3}^{(0)}$ can be expressed in terms of $\Phi_{S2}^{(0)}$.
\item[(ii)] In general, at $n$-th order, we have \ben \left( \begin{tabular}{ccc} $\mathcal{L}^{(0)}_{\alpha 1}$&$0$&$0$\\ $0$&$\mathcal{L}^{(0)}_{\beta 2}$&$0$\\ $0$&$\mathcal{L}^{(0)}_{\gamma 2}$&$\mathcal{L}^{(0)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(n)}$\\ $\Phi_{S2}^{(n)}$\\ $\Phi_{S3}^{(n)}$\\ \end{tabular} \right) = - \sum_{m=1}^{n} \left[ \left( \begin{tabular}{ccc} $\mathcal{L}^{(m)}_{\alpha 1}$&$\mathcal{L}^{(m)}_{\alpha 2}$&$\mathcal{L}^{(m)}_{\alpha 3}$\\ $0$&$\mathcal{L}^{(m)}_{\beta 2}$&$\mathcal{L}^{(m)}_{\beta 3}$\\ $0$&$\mathcal{L}^{(m)}_{\gamma 2}$&$\mathcal{L}^{(m)}_{\gamma 3}$\\ \end{tabular} \right) \left( \begin{tabular}{c} $\Phi_{S1}^{(n-m)}$\\ $\Phi_{S2}^{(n-m)}$\\ $\Phi_{S3}^{(n-m)}$\\ \end{tabular} \right)\right] \,. \label{eq:general_Proca_scalar_n} \een \end{itemize} This is the set of master equations for the scalar-type components of the Proca equation in the general static black hole background, in which we have, at any order, formulas similar to those obtained in the Reissner-Nordstrom case with the operators $L^{(n)}$'s replaced with the ${\cal L}^{(n)}$'s. Namely, we have two mutually decoupled master equations with source terms that consist only of the lower-order variables; therefore, in principle, once the solutions to the leading-order homogeneous master equations have been obtained, one can successively obtain the solutions for the scalar-type components of the Proca equation at any order. These formulas are our main results. \section{Summary and Discussion} \label{sect:Summary} We have developed a new perturbation method to solve the Proca equation in static extremal and near-extremal black hole spacetimes, providing for the first time a set of mutually decoupled wave equations for a massive vector field at each order of perturbations. Our formulas can be a useful tool for analytically studying the behavior of massive vector fields around (near-)extremal black holes.
We have first considered a background metric which takes the warped product form of an $m$-dimensional arbitrary spacetime ${\cal N}^m$ and an $n$-dimensional Einstein space ${\cal K}^n$, which essentially describes the horizon cross-section manifold. We have classified the massive vector field variables into vector-type and scalar-type components according to their behavior on the Einstein space ${\cal K}^n$. Then, by introducing the scalar and vector harmonics on ${\cal K}^n$, we have separated the field variables and reduced the Proca equation to a set of wave equations on the generic spacetime ${\cal N}^m$. At this stage, the Proca equations for the scalar-type and vector-type variables are decoupled from each other. Furthermore, for the vector-type components we have derived the single master equation (\ref{eq:master:vect:m}) for the master variable $\Phi^V$ on the generic spacetime ${\cal N}^m$. On the other hand, at this stage, for the scalar-type components, we have obtained the set of coupled wave equations, (\ref{eq:Ba}) and (\ref{eq:A}), for the variables $(B_a, A)$ on the generic background ${\cal N}^m$. Note however that for the massless case, i.e., the Maxwell field, by exploiting the recovered gauge freedom, we have also been able to derive the single master equation, (\ref{eq:master:maxwell}), for the scalar-type components of the Maxwell field on the generic warped product background ${\cal N}^2 \times {\cal K}^n$. \medskip In order to obtain a set of decoupled wave equations for the scalar-type components of the massive vector field, we have restricted our attention to the extremal and near-extremal static black hole background with $m=2$, in which ${\cal N}^2$ is spanned by the advanced time and radial coordinates $y^a=(v,x)$. Such a (near-)extremal black hole admits the near-horizon limit $\lambda \rightarrow 0$, which is known to possess enhanced symmetry.
We have viewed the scaling parameter $\lambda$ as a small perturbation parameter, and expanded the massive vector field variables as well as the background metric components in powers of $\lambda$, with the leading-order geometry being the corresponding near-horizon geometry. Then, we have derived the set of wave equations for the massive vector field perturbations at each order in $\lambda$. At the leading order, we have found that, thanks to the enhanced symmetry of the near-horizon geometry, the scalar-type components of the Proca equation reduce to the two mutually decoupled homogeneous master equations for $(\Phi_{S1}^{(0)}, \: \Phi_{S2}^{(0)})$, and that the remaining component $\Phi_{S3}^{(0)}$ can be determined in terms of these two master variables. We have also found that at any higher (say, $n$-th) order, the scalar-type components of the Proca equation always reduce to two mutually decoupled inhomogeneous wave equations for the two master scalars $(\Phi_{S1}^{(n)}, \: \Phi_{S2}^{(n)})$ with source terms that consist only of the lower-order variables, and that the remaining component at the $n$-th order, $\Phi_{S3}^{(n)}$, can be determined in terms of the master variables at the same and lower orders. Therefore, once we solve the leading-order homogeneous master equations on the near-horizon geometry, we can, in principle, successively solve the set of inhomogeneous wave equations at any order. Taking the vector-type and scalar-type components together, at each order the triplet $(\Phi_V^{(n)}, \: \Phi_{S1}^{(n)}, \:\Phi_{S2}^{(n)})$ describes the three independent dynamical degrees of freedom of the massive vector field. We have provided the general solutions, $(\Phi_V^{(0)}, \: \Phi_{S1}^{(0)}, \:\Phi_{S2}^{(0)})$, to the leading-order master equations for the extremal and near-extremal Reissner-Nordstrom black hole case. \medskip In this paper, we have focused on the static background case.
In the astrophysical context, extremal or near-extremal rotating black holes are more relevant. In the rotating case, it is far from obvious {\it a priori} whether it is possible to separate the field variables of interest. Recently, it has been shown, by developing a new ansatz~\cite{Lunin:2017}, that massive vector field equations can be separated in Kerr-NUT-(A)dS spacetimes~\cite{Frolov:Krtous:Kubiznak:Santos:2018}. Since the class of spacetimes dealt with in~\cite{Frolov:Krtous:Kubiznak:Santos:2018} does not contain the {\em static extremal} black holes considered in the present paper, one cannot immediately compare the result of~\cite{Frolov:Krtous:Kubiznak:Santos:2018} with that of the present paper. However, it is explicitly stated in~\cite{Frolov:Krtous:Kubiznak:Santos:2018} that their solutions describe (in even $D$ dimensions) $D-2$ real modes, so that one polarization is missing. More concretely, it has been shown, by considering the four-dimensional Kerr black hole background and numerically computing quasi-normal modes~\cite{Frolov:Krtous:Kubiznak:Santos:2018}, that their separation ansatz allows one to derive a decoupled equation which correctly describes at least two of the three physical polarizations of the massive vector field, but how to obtain the remaining polarization within their ansatz remains open. In contrast, although in this paper we have restricted our attention to a class of static and (near-)extremal black hole backgrounds, we have been able to obtain, at each order of perturbations, three decoupled equations for all three polarizations of massive vector perturbations. Therefore, at the present stage, it is fair to say that the method developed in this paper and the ansatz of \cite{Frolov:Krtous:Kubiznak:Santos:2018} are complementary.
Thus, the remaining open issue is whether (and if possible how) one can derive a set of decoupled wave equations for all three (in four dimensions) physical polarizations in the rotating black hole case. It is of considerable interest to generalize our method, in which both the field variables and the background geometry are expanded with respect to the near-horizon scaling parameter and the enhanced symmetry of the near-horizon geometry is exploited to obtain the leading-order solutions, to the maximally rotating Kerr black hole case. It would also be interesting--even for the static case--to consider a generalization of the present method to systems such as Maxwell theory with the Chern-Simons term. \bigskip \noindent {\bf Acknowledgements:} We would like to thank Tomoki Minamigawa and Takashi Okumura for discussions. The work of A.I. was supported in part by JSPS KAKENHI Grants No.~15K05092 and No.~26400280.
\section{Introduction} Finding a consistent covariant theory of massive gravity has been an old dream for about eight decades, beginning with the pioneering paper of Fierz and Pauli \cite{wit1} in 1939. The main difficulty is the appearance of ghosts in the spectrum of solutions. In recent years, some hope for a consistent theory of massive gravity has arisen due to the dRGT model \cite{drgt1}. Hassan and Rosen then improved the model \cite{mmg} by replacing the flat background metric with an external metric $ f_{\mu\nu}$. The interaction term added to the Hilbert-Einstein action in this model is a polynomial in the function $ Tr\sqrt{g^{-1}f}$. In order to have a covariant model, they subsequently introduced their bi-gravity model by giving dynamics to the second metric, via the kinetic term $ \sqrt{-^{(4)}f}\mathcal{R}(f) $ in the Lagrangian. The bi-gravity model is attractive both theoretically and observationally for describing physical events. For instance, it has recently been shown \cite{YA} that doubly coupled models of bi-gravity are tightly constrained by observation in light of the neutron star merger GW170817/GRB170817A \cite{ligo}. These constraints indicate that viable bi-gravity theories should be singly coupled, in that matter couples to only one of the two available metrics. Our focus here is on theoretically consistent bi-gravity models, specifically those that are ghost-free. Such ghost-free models should enable us to adjust the corresponding couplings to matter in a physically viable manner. A popular way to investigate the existence of ghosts is to expand the metric (or the metrics, in bi-gravity) around a given background and search for the conditions under which negative kinetic terms are avoided. However, this method is not sufficiently trustworthy, since it only acquires information in the vicinity of the given background solution. The next method, which is much more trustworthy, is the Hamiltonian analysis of the dynamical structure of the model.
This approach, however, is much more complicated and requires lengthy and tedious calculations. For massive gravity, the Hamiltonian analysis given in \cite{HR1} shows that the ghost disappears. Despite some doubts raised in Refs. \cite{klu1}-\cite{klu3}, we showed in our previous paper \cite{MS} that in the full phase space of 20 variables there is no ghost in massive gravity. Concerning the case of HR bi-gravity, a crucial calculation was done by Hassan and Rosen \cite{HR5} to show that additional constraints emerge in the Hamiltonian analysis of the theory which lead to the elimination of the ghost degrees of freedom. Based on this observation, the new model of HR bi-gravity gained considerable attention in the community. Hence, the Hamiltonian analysis of HR bi-gravity, which should assure people about the additional constraints, is a very important task that may validate or invalidate hundreds of papers based on the reliability of the calculations in the few papers written on this issue \cite{HR5}-\cite{solv2}. However, we think that the deduction of the additional constraints needed to eliminate the Boulware-Deser ghost has not been carried out completely. In other words, the main reference on this issue, i.e. Ref. \cite{HR5}, contains subtleties which contradict the standard Dirac approach for constrained systems. In fact, the additional constraint $ \mathcal{C}_{2} $, which has the crucial role of eliminating the ghost, is just the Poisson bracket $ \{\mathcal{C},\mathcal{D}\} $ of two existing constraints $ \mathcal{C} $ and $\mathcal{D}$. In the context of the Dirac formalism, when $ \{\mathcal{C},\mathcal{D}\}\neq 0 $ it turns out that they are second class, while in the mentioned papers the constraint $\mathcal{D} $ is considered as a first class constraint on the basis of demanding a sufficient number of first class constraints to generate diffeomorphisms. Hence, it seems that the additional constraints needed to eliminate the ghost do not emerge naturally in the constraint structure of the model.
Our main interest in this paper is to investigate more deeply the constraint structure of bi-gravity and see how additional constraints may emerge to cancel the Boulware-Deser ghost. As we will show, the crucial point is that the dynamical behavior of a system, including the number of degrees of freedom and the symmetries, may be different in some subregions of the phase space. For example, the problem of the ghost may be solved only in some special subregion of the phase space. This may happen due to the problem of bifurcation: whenever we find multiplicative constraints, the theory may bifurcate into different branches with distinct physical properties. Our final answer to the problem of the ghost in bi-gravity is that the theory is ghost free in one branch at the bifurcation point. A second reason to study the constraint structure of the bi-gravity theory is that the original papers on the canonical analysis of HR bi-gravity performed calculations in a 24 dimensional phase space containing $ g_{ij} $ and $ f_{ij} $ (i.e. the spatial parts of the metrics) and their conjugate momenta. In this approach, the lapse and shift functions have been considered as auxiliary fields. However, we think that a Hamiltonian analysis in the 40 dimensional phase space, including the lapse and shift functions as dynamical variables, is more fundamental, since they are parts of the metrics which participate in the dynamics as well as in the gauge symmetry (i.e. diffeomorphisms) of the theory. In fact, in the Hamiltonian formulation the momenta conjugate to the lapse and shift functions should play some role in generating the gauge transformations. Although the author of Ref. \cite{jklu} has also tried to give a careful Hamiltonian analysis in the 40 dimensional phase space, he finally found two similar differential equations for the lapse functions as the result of the consistency of the constraints $ \mathcal{C}$ and $\mathcal{D}$. In his approach, no additional constraint is obtained to eliminate the ghost.
On the other hand, there are not enough first class constraints for generating the full space-time diffeomorphism. The same author analyzed, in another paper \cite{Kluson}, a bi-gravity theory in which the interaction term is a function of $ Tr(g^{-1}f) $ (rather than $ Tr\sqrt{g^{-1}f} $). He finally concluded that it is highly improbable to find a ghost free bi-gravity theory which also supports the diagonal diffeomorphism. Our Hamiltonian analysis in this paper is not limited to HR bi-gravity; we try to give a compelling Hamiltonian analysis for a general bi-gravity model with an interaction term $V$ which is a polynomial function of either $ Tr\sqrt{(g^{-1}f)^{n}} $ or $ Tr((g^{-1}f)^{n}) $. We show that, in every parametrization of the lapse and shift functions, the most decisive factor for the presence of ghosts is the matrix of second derivatives of $ V $ with respect to the lapses and shifts. As we will see, one needs, as a necessary condition, four null-vectors of this matrix to guarantee the diffeomorphism gauge symmetry and one more null-vector to eliminate the ghost. In section 2, we give a general framework for the Hamiltonian analysis of bi-gravity models and discuss the crucial role of the second derivatives of the interaction potential with respect to the lapses and shifts. In section 3, we give our main Hamiltonian analysis of HR gravity. In section 4, we analyze a model without the square root, using a different set of lapse and shift variables. We show that it is not improbable to have a ghost free model of this kind. Section 5 contains some concluding remarks and viewpoints towards future work. \section{ Hamiltonian structure of general bi-gravity} We present a general framework for analyzing a bi-gravity theory.
Consider a dynamical theory in four dimensions with two spin-2 fields $ f_{\mu\nu} $ and $ g_{\mu\nu} $ described by the following action \begin{equation} S=\int d^{4}x \left( M_{g}^{2}\sqrt{-^{(4)}g} \mathcal{R}(g)+ M_{f}^{2}\sqrt{-^{(4)}f} \mathcal{R}(f)+2m^{4} \sqrt{-^{(4)}g} V(\mathcal{Z}^{\mu}_{\ \nu})\right) , \label{m11} \end{equation} where $ \mathcal{Z}^{\mu}_{\ \nu}=g^{\mu\rho}f_{\rho \nu}$, $M_{g}$ and $ M_{f}$ are Planck masses and $ m $ is a mass parameter. Note that $g^{\mu\nu} $ is the inverse of $g_{\mu\nu} $, while we do not use $ f^{\mu\nu} $ as the inverse of $ f_{\mu\nu} $ except in the construction of the curvature $ \mathcal{R}(f) $. The interaction potential $ V(\mathcal{Z}^{\mu}_{\ \nu})$ is a scalar function of the matrix $ \mathcal{Z} $. This can include $ Tr(\mathcal{Z}) $ or, more generally, $Tr(\mathcal{Z}^{n})$. In the ADM formalism, the metrics have the following (3+1) decomposition \cite{berg}, \begin{equation} g_{\mu\nu}= \left( \begin{array}{cr} -N^{2}+N_{i}N^{i} & N_{i} \\ N_{i} & g_{ij} \\ \end{array}\right), \ \ \ \ f_{\mu\nu}= \left(\begin{array}{cr} -M^{2}+M_{i}M^{i} & M_{i} \\ M_{i} & f_{ij} \\ \label{g1} \end{array}\right), \end{equation} where $N,M$ are the lapses and $N^{i},M^{i}$ the shifts. The inverse metrics $g^{\mu\nu} $ and $f^{\mu\nu} $ can be written as \begin{equation} g^{\mu\nu}= \left( \begin{array}{cr} -N^{-2} & N^{i}N^{-2} \\ N^{i}N^{-2} & g^{ij}-N^{i}N^{j}N^{-2} \\ \end{array}\right),\ \ \ \ f^{\mu\nu}= \left(\begin{array}{cr} -M^{-2} & M^{i}M^{-2} \\ M^{i}M^{-2} & f^{ij}-M^{i}M^{j}M^{-2} \label{g2} \end{array}\right). \end{equation} Note that in the interaction term we do not need to raise the indices of $f_{\mu\nu}$, while the indices in the $g$-sector are raised and lowered with $g^{\mu\nu}$ and $g_{\mu\nu}$.
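As a quick sanity check of the block forms (\ref{g1}) and (\ref{g2}), one can verify $g^{\mu\nu}g_{\nu\rho}=\delta^{\mu}_{\ \rho}$ symbolically. The sketch below assumes a diagonal spatial metric purely to keep the symbolic inversion cheap; the general case works the same way.

```python
import sympy as sp

# ADM metric and its claimed inverse, for a diagonal spatial metric g_ij.
N, g11, g22, g33 = sp.symbols('N g11 g22 g33', positive=True)
Nu = sp.symbols('N1 N2 N3')                   # shift with upper index, N^i
gsp = sp.diag(g11, g22, g33)                  # spatial metric g_ij
Nl = [sum(gsp[i, j] * Nu[j] for j in range(3)) for i in range(3)]  # N_i = g_ij N^j

g = sp.zeros(4, 4)
g[0, 0] = -N**2 + sum(Nl[i] * Nu[i] for i in range(3))
for i in range(3):
    g[0, i + 1] = g[i + 1, 0] = Nl[i]
    for j in range(3):
        g[i + 1, j + 1] = gsp[i, j]

ginv = sp.zeros(4, 4)
ginv[0, 0] = -1 / N**2
gsp_inv = gsp.inv()
for i in range(3):
    ginv[0, i + 1] = ginv[i + 1, 0] = Nu[i] / N**2
    for j in range(3):
        ginv[i + 1, j + 1] = gsp_inv[i, j] - Nu[i] * Nu[j] / N**2

# The product simplifies to the 4x4 identity, confirming the block inverse.
assert sp.simplify(g * ginv - sp.eye(4)) == sp.zeros(4, 4)
```

The same check, with $N \to M$ and $N^i \to M^i$, verifies the $f$-sector formula.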
Since the interaction term does not depend on the derivatives of the fields, the momenta $ \Pi_{N},\Pi_{N^{i}},\Pi_{M} $ and $ \Pi_{M^{i}}$ are primary constraints and the Lagrangian density reads \begin{equation} \mathcal{L}=\pi^{ij}\dot{g_{ij}}+p^{ij}\dot{f_{ij}}-H_{c},\label{g3} \end{equation} where $ \pi^{ij} $ and $p^{ij} $ are conjugate momenta of $ g_{ij} $ and $ f_{ij} $ respectively and \begin{equation} H_{c}=\int d^{3}x \left(N^{\mu} \mathcal{R}^{(g)}_{\mu}+M^{\mu} \mathcal{R}^{(f)}_{\mu}+\mathcal{V}\right) . \label{m12} \end{equation} The expressions $\mathcal{R}^{(g)}_{0}$ , $\mathcal{R}^{(g)}_{i}$ are the Hamiltonian and momentum constraints of the corresponding Hilbert-Einstein action of the metric $g_{\mu\nu}$ as follows \begin{equation} \mathcal{R}_{0}^{(g)}= M_{g}^{2}\sqrt{g}\mathcal{R}+\dfrac{1}{ M_{g}^{2}\sqrt{g}}(\frac{1}{2}\pi^{2}-\pi^{ij}\pi_{ij}),\hspace{10mm} \mathcal{R}_{i}^{(g)}=2\sqrt{g}g_{ij} \triangledown _{k}(\frac{\pi^{jk}}{\sqrt{g}}).\label{a14} \end{equation} Similar relations should also be considered for $\mathcal{R}^{(f)}_{0}$ and $\mathcal{R}^{(f)}_{i}$ in terms of the $f$-metric. Noticing that $ \sqrt{-^{(4)}g} =N\sqrt{g} $, where $ g\equiv det(g_{ij}) $, the interaction term reads \begin{equation} \mathcal{V}= 2m^{4} N \sqrt{g} V(\mathcal{Z}^{\mu}_{\nu}).\end{equation} Let us denote the whole set of lapse and shift functions as $ L^{a},\ a=1,...,8 $ where the first four refer to $ N $ and $ N_{i} $ and the remaining ones to $ M $ and $ M_{i} $. In this way the canonical Hamiltonian (\ref{m12}) reads \begin{equation} H_{c}=\int d^{3}x \left( L^{a}\mathcal{R}_{a}+\mathcal{V}\right) , \label{m16} \end{equation} where the same notation has been used to denote $ \mathcal{R}^{(g)}_{0}, \mathcal{R}^{(g)}_{i}, \mathcal{R}^{(f)}_{0} $ and $\mathcal{R}^{(f)}_{i}$ as $ \mathcal{R}_{1},... , \mathcal{R}_{8} $. 
The total Hamiltonian reads \begin{equation} H_{T}=H_{c}+\int d^{3}x u^{a}\Pi_{a}, \label{m13} \end{equation} where the $\Pi_{a}$, as primary constraints, are the momenta conjugate to the $L^{a}$ and the $u^{a}$ are Lagrange multipliers. The primary constraints should be preserved in time. This gives the second level constraints as \begin{equation} \mathcal{\mathcal{A}}_{a}\equiv \lbrace \Pi_{a}, H_{c} \rbrace=-(\mathcal{R}_{a}+\frac{\partial \mathcal{V}}{\partial L^{a}})\approx 0.\label{m14} \end{equation} The constraints $\mathcal{\mathcal{A}}_{a}$ should also be preserved in time, i.e. \begin{equation} \lbrace \mathcal{A}_{a}, H_{T} \rbrace= \lbrace \mathcal{A}_{a}, H_{c} \rbrace- \frac{\partial^{2} \mathcal{V}}{\partial L^{a}\partial L^{b}} u^{b} \approx 0.\label{m15} \end{equation} We know that the bi-gravity theory is diffeomorphism invariant. Hence, loosely speaking, we demand that four arbitrary fields remain in the dynamical analysis of the theory. This can be achieved by demanding that at least four Lagrange multipliers $ u^{a} $ remain undetermined. In other words, the rank of the matrix $ \partial^{2} \mathcal{V}/\partial L^{a}\partial L^{b} $ should not exceed four, in order to have at least four null-vectors. If there were no interaction, we would have eight null-vectors due to the vanishing of the matrix $ \partial^{2} \mathcal{V}/\partial L^{a}\partial L^{b} $. Suppose $ \chi^{a}_{(\alpha)} $ are null-vectors of $ \partial^{2} \mathcal{V}/\partial L^{a}\partial L^{b} $. Then from Eq. (\ref{m15}) we find the following third level constraints $ \mathcal{B} _{(\alpha)} $, labeled by the index $ \alpha $, \begin{equation} \mathcal{B} _{(\alpha)} \equiv \chi^{a}_{(\alpha)} \{\mathcal{A}_{a},H_{c}\} \approx 0.\label{Vc18} \end{equation} However, some of the $ \mathcal{B} _{(\alpha)} $'s may vanish on the constraint surface.
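The role of the null-vectors in Eq. (\ref{m15}) can be made concrete with a toy potential (not the HR interaction; the variables and the potential below are chosen purely for illustration). When $\mathcal{V}$ is linear in some of the $L^a$, the Hessian $\partial^{2}\mathcal{V}/\partial L^{a}\partial L^{b}$ is degenerate, and each null direction leaves one combination of the multipliers $u^{a}$ undetermined:

```python
import sympy as sp

# Toy "interaction" linear in the lapse-like variables N, M and independent
# of M1, mimicking the degenerate-Hessian structure discussed in the text.
N, M, M1, n1 = sp.symbols('N M M1 n1')
L_vars = [N, M, M1, n1]
V = N * n1**2 + M * n1            # linear in N and M; M1 absent entirely

H = sp.hessian(V, L_vars)         # matrix d^2 V / dL^a dL^b
null_vecs = H.nullspace()

# The rank deficit counts the undetermined multipliers (candidate gauge
# directions); here the 4x4 Hessian has rank 2, hence two null-vectors.
assert H.rank() == 2
assert len(null_vecs) == 2
```

In the bi-gravity case the same computation is carried out for the full $8\times 8$ Hessian, and the requirement is at least four (for diffeomorphisms) and ideally five (for ghost removal) null-vectors.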
For the case of no interaction, we have two disjoint Hilbert-Einstein theories and the expressions $ \mathcal{B} _{(\alpha)} $ consist of Poisson brackets of $ \mathcal{R}^{(g)}_{0}, \mathcal{R}^{(g)}_{i}, \mathcal{R}^{(f)}_{0} $ and $\mathcal{R}^{(f)}_{i}$, which vanish weakly. For a generic interaction, we also expect that at least four of the third level constraints $\mathcal{B} _{(\alpha)} $ are trivial, due to our need to have at least four secondary first class constraints to generate diffeomorphisms. If more than four of the $\mathcal{B} _{(\alpha)} $ vanish, we would have extra symmetries besides diffeomorphisms, and the theory would have a smaller number of degrees of freedom compared to what we consider in the following. On the other hand, consistency of the third level constraints (if any) should not determine the Lagrange multipliers. Therefore, it is legitimate to assume that at least four of the expressions $\mathcal{B} _{(\alpha)} $ vanish weakly. We will discuss this point in more detail for two distinct examples in the following sections. To be used in the next section, let us consider the possibility of a redefinition of the second level constraints. In the framework of constrained systems, one may replace, for some reason, the constraints $ \mathcal{A}_{a} $ with $ \mathcal{\tilde{A}}_{a} $ such that \begin{equation} \mathcal{A}_{a}\approx0 \Leftrightarrow \mathcal{\tilde{A}}_{a} \approx0. \label{jj} \end{equation} Hence, equation (\ref{m15}) should be replaced by \begin{equation} \{\mathcal{\tilde{A}}_{a}, H_{T}\}= \{\tilde{\mathcal{A}}_{a}, H_{c}\}+\frac{\partial \mathcal{\tilde{A}}_{a} }{\partial L^{b}}u^{b}. \label{jj1} \end{equation} In this way, our discussion after Eq. (\ref{m15}) remains valid upon considering the null-vectors of the matrix $ \partial \mathcal{\tilde{A}}_{a}/\partial L^{b}$ instead of those of $\partial^{2} \mathcal{V}/\partial L^{a}\partial L^{b} $.
If the rank of $\partial \mathcal{\tilde{A}}_{a}/\partial L^{b}$ is four, and we have no further third level constraints $\mathcal{B} _{(\alpha)} $, this means that four of the momenta $\Pi_{\bar{a}}$ of the lapse-shift functions $L^{\bar{a}} $ and the corresponding second level constraints $ \tilde{\mathcal{A}}_{\bar{a}}$ are first class, while the remaining $ \Pi_{\tilde{a}} $ as well as the $ \tilde{\mathcal{A}}_{\tilde{a}}$ are second class. Recall that the well-known formula for the number of phase space degrees of freedom in a constrained system reads \cite{zms2} \begin{equation} DOF= \mathcal{N}-2FC-SC,\label{hy} \end{equation} where $ \mathcal{N} $ is the number of original variables, $ FC $ is the number of first class constraints and $ SC $ is the number of second class constraints. For the current case of 40 phase space variables with 8 first class and 8 second class constraints we find \begin{equation} DOF=40-2 \times 8 - 8=16, \end{equation} which corresponds to 8 degrees of freedom in the configuration space. This can be interpreted as one massive and one massless graviton accompanied by a scalar ghost field. In order to eliminate the ghost degree of freedom, we need to find at least two more second class constraints, or one more first class constraint. The latter possibility corresponds to one more gauge symmetry besides diffeomorphism, which does not sound reasonable. Moreover, a first class constraint at the second level implies one more primary first class constraint. Hence, it is not reasonable to have only one more first class constraint. So, in order to eliminate the ghost we should expect to find two more second class constraints. To reach this goal we need a fifth null-vector of the matrix $\partial \mathcal{\tilde{A}}_{a}/\partial L^{b}$, which leads to a new constraint at the third level via Eq. (\ref{Vc18}), i.e.
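The counting of Eq. (\ref{hy}) is easy to automate. The following sketch reproduces the numbers above and the alternative constraint counts discussed in the text (the polarization bookkeeping in the comments follows the interpretation given there):

```python
# Degree-of-freedom counting, DOF = N - 2*FC - SC (a phase-space count).
def dof(n_vars, first_class, second_class):
    return n_vars - 2 * first_class - second_class

# 40 phase-space variables with 8 first class and 8 second class constraints:
assert dof(40, 8, 8) == 16    # 8 configuration-space dof:
                              # massive graviton (5) + massless (2) + ghost (1)

# One extra second class constraint: an odd phase-space count of 15 fields.
assert dof(40, 8, 9) == 15

# Two extra second class constraints, as needed to remove the ghost:
assert dof(40, 8, 10) == 14   # 7 dof: massive (5) + massless (2), no ghost
```

The arithmetic makes explicit why exactly two additional second class constraints (or, less plausibly, one first class constraint) are required to reach the ghost-free count.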
\begin{equation} \mathcal{B} \equiv \chi^{a}_{(5)} \{\mathcal{A}_{a},H_{c}\} \approx 0.\label{Vc181} \end{equation} If the new constraint depends on the lapse-shift functions, one combination of the Lagrange multipliers $ u^{a} $ in Eq. (\ref{m13}) would be determined as the result of the consistency of the constraint $ \mathcal{B} $.\footnote{There is a technical point here: three variables $L^{\tilde{a}}$ (see before Eq.~(\ref{hy})) are determined at the second level of consistency in terms of the canonical variables. Moreover, the corresponding momenta $\Pi_{\tilde{a}}$ and second level constraints $ \tilde{\mathcal{A}}_{\tilde{a}} $ constitute a system of second class constraints. Therefore, for the next level of consistency one should consider Dirac brackets instead of Poisson brackets. This implies that the constraint equations $\Pi_{\tilde{a}} \approx 0$ and $ \tilde{\mathcal{A}}_{\tilde{a}}\approx 0 $ should be imposed as strong equalities, i.e. $\Pi_{\tilde{a}}= 0$ and $ \tilde{\mathcal{A}}_{\tilde{a}}= 0 $. Hence, the terms $u_{\tilde{a}} \Pi_{\tilde{a}} $ in the total Hamiltonian disappear altogether. So when we say that consistency of the third level constraints may determine Lagrange multipliers, we mean the remaining ones other than the $ u_{\tilde{a}} $'s.} Hence, the constraint analysis would stop here with just one more second class constraint. This leads to a phase space with 15 dynamical fields. It may sound undesirable to have a phase space with an odd number of dynamical degrees of freedom; however, as shown in \cite{comeli2} and \cite{Henu}, this does not mean an odd-dimensional phase space for field theories. Meanwhile, the main problem is that we need one more second class constraint to eliminate the ghost. Let us summarize the final conclusion of this section. In order to have a ghost free bi-gravity theory, we need a diffeomorphism invariant interaction with the following two properties.
i) The rank of the matrix $\partial^{2} \mathcal{V}/\partial L^{a}\partial L^{b} $, or of $\partial \mathcal{\tilde{A}}_{a}/\partial L^{b}$ in the case of a redefinition of the constraints, should be three. ii) The new constraint $ \mathcal{B} $, which emerges due to the fifth null-vector, should not contain the lapse-shift functions. We will investigate the above conditions in the two approaches given in the following sections. \section{Hamiltonian analysis of HR Bi-gravity } We start by investigating the Hamiltonian formulation of HR bi-gravity given by the following action \cite{mmg2}, \begin{equation} S=M^{2}_{g} \int d^{4}x \sqrt{-g} R(g)+M^{2}_{f}\int d^{4}x \sqrt{-f} R(f)+2m^{4} \int d^{4}x \sqrt{-g} \sum_{n=0}^{4} \beta_{n}e_{n}(\Bbbk). \label{a1} \end{equation} In Eq. (\ref{a1}), $\beta_{n}$ are free parameters, $m$ is a mass parameter, $M_{g}$ and $ M_{f}$ are Planck masses and $\Bbbk \equiv \sqrt{g^{-1}f}$, where $(g^{-1}f) ^{\mu}_{\ \nu}= g^{\mu\lambda}f_{\lambda\nu}$. The elementary symmetric polynomials $e_{n}(\Bbbk)$ are given in appendix A. In this paper we consider only the minimal model of the interaction term, where the coefficients $\beta_{n}$ are $ \beta_{0}=3, \ \beta_{1}=-1,\ \beta_{2}=\beta_{3}=0, \ \beta_{4}=1. $ By applying the following redefinition of the shift functions \cite{mmg2} \begin{equation} N^{i}=M n^{i}+M^{i}+N D^{i}_{\ j}n^{j},\label{a15} \end{equation} and choosing the $3\times3$ matrix $D^{i}_{\ j}$ appropriately (see appendix A), the interaction term, as well as the whole action, becomes linear in the lapses $N$ and $ M$ and the shifts $M^{i}$.
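The $e_{n}(\Bbbk)$ are the standard elementary symmetric polynomials of the eigenvalues of $\Bbbk$. Since appendix A is not reproduced here, the sketch below assumes the standard definition and computes them from the traces $Tr(\Bbbk^{k})$ via Newton's identities:

```python
import sympy as sp

# Elementary symmetric polynomials e_n(K) of a square matrix K, computed from
# power sums p_k = Tr(K^k) via Newton's identities:
#   n e_n = sum_{k=1}^{n} (-1)^{k-1} e_{n-k} p_k,   e_0 = 1.
def elementary_symmetric(K):
    dim = K.shape[0]
    p = [None] + [(K**k).trace() for k in range(1, dim + 1)]
    e = [sp.Integer(1)]
    for n in range(1, dim + 1):
        en = sp.Rational(1, n) * sum((-1)**(k - 1) * e[n - k] * p[k]
                                     for k in range(1, n + 1))
        e.append(sp.expand(en))
    return e

K = sp.Matrix([[2, 1], [0, 3]])       # toy matrix with eigenvalues 2 and 3
e = elementary_symmetric(K)
assert e[1] == K.trace()              # e_1 = Tr K
assert e[2] == K.det()                # top polynomial equals det K
```

With these definitions, the minimal-model choice $\beta_{0}=3,\ \beta_{1}=-1,\ \beta_{4}=1$ picks out the combination $3e_{0}-e_{1}+e_{4}$ of the $e_{n}(\Bbbk)$.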
Since the interaction does not involve derivatives of the metrics, the definitions of the momentum fields are similar to those of the Hilbert-Einstein action, \begin{eqnarray}&& \pi^{ij}=-\sqrt{g}(K^{ij}-g^{ij}K)\label{c1},\\&& p^{ij}=-\sqrt{f}(L^{ij}-f^{ij}L),\\&& P_{M^{i}}\approx 0, \ P_{M}\approx 0, \ P_{N} \approx 0, \ P_{n^{i}}\approx 0,\label{bi9} \end{eqnarray} where $K^{ij}$ and $L^{ij}$ are the three dimensional extrinsic curvatures of the $g$ and $f$ metrics respectively. As is seen from Eq. (\ref{bi9}), we have 8 primary constraints $P_{N}$, $P_{M}$, $P_{M^{i}}$ and $P_{n^{i}}$. The Lagrangian density reads \begin{eqnarray}&& \mathcal{L}=M^{2}_{g}\pi^{ij}\partial_{t}g_{ij}+M^{2}_{f}p^{ij}\partial_{t}f_{ij} -H_{c}, \label{bi4} \end{eqnarray} with the canonical Hamiltonian \begin{equation} H_{c}=M^{i} \mathcal{R}_{i} + M \mathcal{D} + N \mathcal{C},\label{bi5} \end{equation} in which \begin{eqnarray}&& \mathcal{C}=M^{2}_{g}\mathcal{R}_{0}^{g}+M^{2}_{g} D^{i}_{\ k} n^{k} \mathcal{R}_{i}^{g}-2m^{4}(\sqrt{g}\sqrt{x} D_{\ k}^{ k}-3\sqrt{g}),\\ &&\label{bi57} \mathcal{D}=M^{2}_{f}\mathcal{R}_{0}^{f}+ M^{2}_{g} n^{i}\mathcal{R}_{i}^{g}-2m^{4}(\sqrt{g}\sqrt{x}-\sqrt{f}),\\ && \label{bi7} \mathcal{R}_{i}= M^{2}_{g} \mathcal{R}_{i}^{g}+M^{2}_{f}\mathcal{R}_{i}^{f}, \label{bi6} \end{eqnarray} where $ x=1-n^{i}f_{ij}n^{j}$. As a special case of equation (\ref{m13}), the total Hamiltonian reads \begin{eqnarray} && \mathcal{H}_{T}=\mathcal{H}_{c}+uP_{N}+vP_{M}+u^{i}P_{M^{i}}+v^{i}P_{n^{i}}, \label{k111} \end{eqnarray} where $ u,v,u^{i} $ and $v^{i}$ are 8 undetermined Lagrange multipliers (8 fields, in fact).
Since $N$, $M$ and $M_{i}$ appear linearly in the canonical Hamiltonian, consistency of the primary constraints $P_{M}$, $P_{N}$, $P_{M_{i}}$ (using the fundamental Poisson brackets given in appendix A) gives 5 secondary constraints as follows \begin{eqnarray} && \lbrace P_{N},\mathcal{H}_{T} \rbrace= -\mathcal{C}\approx 0, \\&& \lbrace P_{M},\mathcal{H}_{T} \rbrace=-\mathcal{D}\approx 0, \\&& \lbrace P_{M^{i}},\mathcal{H}_{T} \rbrace=-\mathcal{R}_{i}\approx 0. \end{eqnarray} However, every term of the canonical Hamiltonian involves the variables $n_{i}$. Hence, for the consistency of $P_{n^{i}}$, we find directly \begin{eqnarray} && \lbrace P_{n^{i}},H_{c}\rbrace \equiv -\mathcal{S}_{i} = -\left(M \delta^{k}_{\ i}+N \frac{\partial(D^{k}_{\ j}n^{j})}{\partial n^{i}}\right)U_{k} \approx 0,\label{bi13} \end{eqnarray} where \begin{equation} U_{k}=M^{2}_{g}\mathcal{R}_{k}(g)-2m^{4}\sqrt{g}n^{l}f_{lj}\delta^{j}_{\ k}x^{-1/2} \approx 0.\label{bi15} \end{equation} In this way, for the current model, the secondary constraints $ \mathcal{A}_{a} $ of the previous section are $ -\mathcal{C}, -\mathcal{D}, -\mathcal{R}_{i}$ and $ -\mathcal{S}_{i} $ respectively. The matrix within the parentheses on the right hand side of Eq. (\ref{bi13}) is the Jacobian of the transformation given in Eq. (\ref{a15}), which is invertible. Hence, Eq. (\ref{bi13}) leads to the secondary constraints $U_{k}\approx 0$. So, we replace the secondary constraints with the new set $\mathcal{C},\mathcal{D},\mathcal{R}_{i}$ and $U_{i}$. This replacement is important. Notice that one may consider subregions of the phase space where the matrix $(M \delta^{k}_{\ i}+N \frac{\partial(D^{k}_{\ j}n^{j})}{\partial n^{i}})$ is not of full rank. This would imply constraints which depend on the lapses $N$ and $M$, and which would be second class with respect to the primary constraints $P_{N}$ and $ P_{M} $. Here, we choose to set such possibilities aside.
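Schematically, the choice just made can be summarized as follows (a sketch of the two branches, with $K$ denoting the Jacobian matrix above; this adds no new result):
\[
\mathcal{S}_{i}=K_{i}^{\ k}\,U_{k}\approx 0,\qquad
K_{i}^{\ k}\equiv M \delta^{k}_{\ i}+N \frac{\partial(D^{k}_{\ j}n^{j})}{\partial n^{i}},\qquad
\left\{
\begin{array}{ll}
\det K\neq 0: & U_{k}\approx 0,\\
\det K\approx 0: & \mbox{lapse-dependent constraints}.
\end{array}
\right.
\]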
Considering the problem from the opposite side, when the consistency condition leads to the equality $\mathcal{S}_{i}= K_{ij}U_{j}\approx 0 $, we have two possibilities: either assume $ |K|\neq 0 $, which implies $U_{j}\approx 0 $ instead of $\mathcal{S}_{i}\approx 0$; or assume $ |K|\approx 0 $ and $U_{j}\neq 0 $ for some nontrivial null-vectors of $ K $. From this point of view the emerging constraints $\mathcal{S}_{i}$ exhibit a bifurcation problem, and we should decide which branch to follow in the rest of the problem. However, we are not allowed to keep the original form of the constraints $\mathcal{S}_{i}$, since this would mean mixing the two distinct possibilities simultaneously. In our case, keeping the constraints $\mathcal{S}_{i}$ as given in Eq. (\ref{bi13}) leads to the non-vanishing Poisson brackets $ \{\mathcal{S}_{i},P_{M}\} $ and $ \{\mathcal{S}_{i},P_{N}\} $. Hence, when we say that the matrix $ \left(M \delta^{k}_{\ i}+N \frac{\partial(D^{k}_{\ j}n^{j})}{\partial n^{i}}\right) $ is invertible, we mean, in fact, that we choose to live in regions of phase space where this matrix is non-singular. \footnote{If the matrix elements $ K_{ij} $ were constant numbers, there would be no difference between employing the $ \mathcal{S}_{i} $'s or the $ U_{i} $'s as constraints, since the $ U_{i} $ would be just linear combinations of the $ \mathcal{S}_{i} $. However, if the $ K_{ij} $ depend on phase space variables (as in our case), then there is a difference between choosing the set $\mathcal{S}_{i} $ or $ U_{i} $; in fact, one should consider the rank of the matrix $ K $ throughout the phase space. There may exist subregions of phase space where $ K $ is not of full rank. In such cases the sets of constraints $\mathcal{S}_{i} $ and $ U_{i} $ are no longer equivalent. In fact, one should use independent combinations of the $ U_{i} $ together with the equations which determine the subregion where the rank of $ K $ is lowered.
Note that, in general, every multiplicative set of constraints (even in matrix form) should be split into its different branches of satisfaction. In other words, it is not allowed to use the original constraints $\mathcal{S}_{i} $. For an interesting case of this point see Ref. \cite{Haji}. It is well known, on the other hand, that \cite{zms2} given the constraints $ \varphi_{_{a}} $, one can redefine them as $ \varphi^{\prime}_{a}=M_{ab}\varphi_{b} $ provided that $ M_{ab} $ is nonsingular on the constraint surface. Otherwise, the constraint structure may change; for example, if we multiply second class constraints by other second class constraints, we may obtain first class constraints. } Another subtlety concerns Eq. (\ref{bi13}) viewed as an eigenvalue problem for the matrix $\frac{\partial(D^{k}_{\ j}n^{j})}{\partial n^{i}}$ with eigenvalue $\frac{-M}{N}$ and eigenvector $U_{k}$. However, since $U_{k}$ is a definite vector, given in Eq. (\ref{bi15}), this exceptional case does not arise. Therefore, we have 8 primary constraints $P_{N},P_{M},P_{M^{i}},P_{n^{i}}$ and 8 secondary constraints $\mathcal{C},\mathcal{D},\mathcal{R}_{i},U_{i}$. Now we should consider the consistency of the second level constraints. Had we not replaced the secondary constraints $\mathcal{S}_{i} $ with $U_{i}$, the matrix $ \partial^{2}V/\partial L^{a}\partial L^{b} $ in Eq. (\ref{m15}) would have rank 5, since the terms $ \partial^{2}V/\partial n^{i}\partial N$ and $ \partial^{2}V/\partial n^{i}\partial M$ do not vanish (although $ \partial^{2}V/\partial n^{i}\partial M^{j}$ does). This would contradict our expectation of having at least 4 null-vectors for $ \partial^{2}V/\partial L^{a}\partial L^{b} $.
However, with the secondary constraints $ U_{i} $, the $ 8 \times 8 $ matrix $ \partial \tilde{A}_{a}/\partial L^{b} $ is such that the first five columns and the first five rows vanish and only the elements $ \partial U_{i}/\partial n^{j} $ are non-vanishing. Putting everything together, the consistency equations for the second level constraints, i.e. Eq. (\ref{jj1}), read \begin{equation} \left( \begin{array}{cr} \{\mathcal{C}, H_{c}\} \\ \{\mathcal{D}, H_{c}\} \\ \{\mathcal{R}_{i}, H_{c}\} \\ \{U_{i}, H_{c}\} \\ \end{array} \right) + \left( \begin{array}{c|c} \textbigcircle &\textbigcircle \\ & \\ \hline &\\ \textbigcircle& \partial U_{i}/\partial n^{j} \\ \end{array} \right) \left( \begin{array}{cr} u \\ v \\ u^{i} \\ v^{i} \\ \end{array} \right)=0. \end{equation} As is seen, the null-vectors of the matrix $ \partial \tilde{A}_{a}/\partial L^{b} $ give the third level constraints $ \{ \mathcal{C},H_{c}\} $, $ \{ \mathcal{D},H_{c}\} $ and $ \{ \mathcal{R}_{i},H_{c}\} $ respectively. The constraints $ \mathcal{R}_{i}$ have vanishing Poisson brackets with all the primary as well as secondary constraints, as calculated in full detail in Refs. \cite{HR5, jklu}. Since the canonical Hamiltonian (\ref{bi5}) is composed of secondary constraints, the Poisson brackets $ \{ \mathcal{R}_{i},H_{c}\} $ also vanish. This shows that consistency of the $ \mathcal{R}_{i}$ neither determines any of the Lagrange multipliers nor leads to any further constraint. Since $ \mathcal{R}_{i}$ is the sum of the momentum constraints of the individual Einstein-Hilbert actions of $ g_{\mu \nu} $ and $ f_{\mu\nu} $, we expect the set of 6 constraints $ P_{M^{i}} $ and $ \mathcal{R}_{i}$ to act as generators of the spatial diffeomorphisms. Putting aside the 6 first class constraints $P_{M^{i}}$ and $\mathcal{R}_{i}$, there remain the constraints $P_{N},P_{M}$ and $ P_{n^{i}} $ as primary constraints and $ \mathcal{C},\mathcal{D} $ and $ U_{i} $ as secondary constraints.
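The mechanism at work in the next step can be sketched in one dimension (a toy illustration, not part of the model, with a canonical pair $(q,p)$ and hypothetical constraints $\phi_{1}=p\approx 0$, $\phi_{2}=U(q)\approx 0$ with $U'(q)\neq 0$). With $H_{T}=H_{c}+v\,p$, consistency of $\phi_{2}$ gives
\[
\dot{\phi}_{2}=\{U,H_{T}\}=\{U,H_{c}\}+v\,U'(q)\approx 0
\quad\Longrightarrow\quad
v=-\frac{\{U,H_{c}\}}{U'(q)},
\]
so a second class pair determines its Lagrange multiplier rather than generating a new constraint.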
Since the $ U_{i} $ are functions of the $n_{i}$ such that $ \lvert \frac{\partial U_{i}}{\partial n_{k}}\rvert \neq 0$, the set of six constraints $ P_{n^{i}} $ and $ U_{i} $ is second class. Hence, consistency of the $ U_{i} $ determines the Lagrange multipliers $v^{i}$ in Eq. (\ref{k111}). These second class constraints should be imposed strongly on the system in order to reach the reduced phase space. Hence, from now on, the momenta $P_{n^{i}} $ should be considered as zero and, due to $U_{i}=0 $, the variables $n_{i} $ are determined in terms of the canonical variables $ g_{ij}, \pi^{ij}, f_{ij}$ and $p^{ij} $. Now we should investigate the time evolution of $ \mathcal{C}$ and $\mathcal{D} $. Remember that the Poisson brackets of $ \mathcal{C}$ and $\mathcal{D} $ with $\mathcal{R}_{i}$ vanish, since $\mathcal{R}_{i}$ are first class. Moreover, it is directly seen that $\lbrace \mathcal{C}, P_{n^{j}} \rbrace= U_{i}\frac{\partial(D^{i}_{\ k}n^{k})}{\partial n^{j}}\approx 0 $ and $\lbrace \mathcal{D}, P_{n^{j}} \rbrace=U_{j}\approx 0$, which vanish weakly \cite{HR5}. It can also be shown that $ \lbrace \mathcal{C}(x),\mathcal{C}(y) \rbrace \approx 0 $ and $ \lbrace \mathcal{D}(x),\mathcal{D}(y) \rbrace \approx 0 $ \cite{HR5, jklu}.
Hence, consistency of the constraints $ \mathcal{C}$ and $\mathcal{D} $, using the canonical Hamiltonian (\ref{bi5}), gives the following third level constraints, \begin{eqnarray} && \lbrace \mathcal{C}(x),H_{c} \rbrace =\int d^{3}yM(y)\lbrace \mathcal{C}(x),\mathcal{D}(y) \rbrace=M(x)\Gamma(x), \label{k112} \end{eqnarray} \begin{eqnarray} && \lbrace \mathcal{D}(x),H_{c}\rbrace =\int d^{3}y N(y)\lbrace \mathcal{C}(x),\mathcal{D}(y) \rbrace=N(x)\Gamma(x),\label{k113} \end{eqnarray} where \begin{eqnarray} && \Gamma\approx \left(\frac{m^{4}}{M^{2}_{g}}(g_{mn}\pi-2\pi_{mn})U^{mn}\right)+2m^{4}\sqrt{g}g_{ni}D^{i}_{\ k}n^{k}\triangledown_{m}U^{mn}\nonumber\\&& \hspace{5mm}+\left(\mathcal{R}_{j}^{(g)}D^{i}_{\ k}n^{k}-2m^{4}\sqrt{g}g_{ik}\bar{V}^{ki}\right) \triangledown_{i}n^{j}\nonumber\\&&\hspace{5mm}+\sqrt{ g}\left(\triangledown_{i}(\mathcal{R}^{0(g)}/\sqrt{ g})+\triangledown_{i}(\mathcal{R}_{j}^{(g)}/\sqrt{ g})D^{j}_{\ k}n^{k}\right)n^{i}\nonumber\\&&\hspace{5mm}-\frac{m^{4}}{M^{2}_{f}}\frac{\sqrt{ g}}{\sqrt{ f}}\left(f_{mn}p-2p_{mn}\right)\bar{F}^{mn}, \label{gama} \end{eqnarray} in which \begin{eqnarray} && \ U^{mn}=-\sqrt{x}g^{mn},\\&& \bar{V}^{ki}=g^{kj}(-\frac{f_{jl}}{\sqrt{x}}((D^{-1})^{l}_{\ r}g^{ri})),\\&& \bar{F}^{mn}=-\frac{(D^{-1})^{m}_{\ i}g^{ni}-n^{i}n^{m}D^{n}_{\ i}}{\sqrt{x}}. \end{eqnarray} Historically, this is the most crucial point in the investigation of bi-gravity and in proving that it is ghost free. Obviously the Poisson bracket $ \{\mathcal{C}(x),\mathcal{D}(y) \} $ is nonzero, which shows that both $ \mathcal{C}$ and $ \mathcal{D} $ are second class constraints. In Ref. \cite{HR5}, which is the main reference of many papers using the HR model, it is argued that $\mathcal{D}$ is first class "since we need it to be first class in order to generate diffeomorphism".
Hence, the authors of \cite{HR5} just "assume" that $ \Gamma\equiv \{\mathcal{C}(x),\mathcal{D}(y) \} $ is a new constraint (they denote it by $ \mathcal{C}_{2} $), which constitutes a system of second class constraints together with the constraint $ \mathcal{C} $. Two important points arise here. First, there is no preference between $ \mathcal{C} $ and $\mathcal{D}$. One could choose $ \mathcal{C} $ instead of $ \mathcal{D} $ as the first class constraint which generates gauge transformations. In fact, it requires complicated calculations to find which one of $ \mathcal{C}$ or $ \mathcal{D} $, or which combination of them, is the generator of diffeomorphism. Second, with this logic one might consider $ \Omega\equiv \{\mathcal{C},\Gamma \}$ as a new constraint and claim that $ \mathcal{C} $ is also first class. This story would have no end. In fact, in the general context of constrained systems the Poisson brackets of second class constraints merely act as non-vanishing coefficients in the determination of the Lagrange multipliers \cite{BGP}, and it is not reasonable to consider them as new constraints. New constraints at each level come out only as the Poisson brackets of the existing constraints with the canonical Hamiltonian. This point about the pioneering paper \cite{HR5} was also observed by Kluson in Ref. \cite{jklu}. He performed a Hamiltonian analysis similar to the one we gave briefly in this section, up to the bottleneck of calculating $ \{\mathcal{C}(x),\mathcal{D}(y) \} $. He found that one is not able to obtain a new constraint out of the consistency of the constraints $ \mathcal{C} $ and $\mathcal{D}$. Instead, in Ref. \cite{jklu} the following differential equations are derived for the consistency of $ \mathcal{C} $ and $\mathcal{D}$ respectively.
\begin{eqnarray} && \mathcal{C}_{2}\equiv M(F-\partial_{i}V^{i})+(W^{i}-V^{i})\partial_{i}M\approx 0, \nonumber\\ && \mathcal{D}_{2}\equiv N(F-\partial_{i}V^{i})+(W^{i}-V^{i})\partial_{i}N\approx 0, \label{diff} \end{eqnarray} with certain expressions for $ F$, $V^{i} $ and $W^{i} $. Since the constraints $ \mathcal{C} $ and $\mathcal{D}$ contain spatial derivatives of the canonical variables, it is not strange to obtain derivatives of the delta function in the Poisson bracket $ \{\mathcal{C}(x),\mathcal{D}(y) \} $, which lead to differential equations for $ M $ and $ N $ respectively. Eqs. (\ref{diff}) show that the constraints $ (\mathcal{C}_{2} , \mathcal{D}_{2} )$, together with $ (P_{M}, P_{N}) $ and $ (\mathcal{C},\mathcal{D}) $, constitute a system of 6 second class constraints. In this way we have, on the one hand, a system of 6 first class and 12 second class constraints, leading to 16 phase space degrees of freedom, which include a ghost. On the other hand, the gauge symmetry is restricted to the spatial diffeomorphisms generated by $ P_{M_{i}} $ and $\mathcal{R}_{i} $. This objection concerning the existence of a ghost in the HR model remained unanswered for almost four years. In our study of this problem, for the time evolution of the constraints $ \mathcal{C}$ and $ \mathcal{D} $, we observed that both constraints $ \mathcal{C}_{2} $ and $ \mathcal{D}_{2} $ in Kluson's analysis are of the same structure, namely $ \mathcal{C}_{2}(x)=\int d^{3}y\, \Gamma(x,y) M(y) $ and $ \mathcal{D}_{2}(x)= \int d^{3}y\,\Gamma(x,y) N(y) $, where $ \Gamma(x,y)\equiv \{\mathcal{C}(x),\mathcal{D}(y) \} $ may contain derivatives of the delta function. Apart from the dependence of $ \Gamma(x,y) $ on derivatives of the delta function, one may consider the consistency equations $ \dot{\mathcal{C}}=\Gamma M=0 $ and $ \dot{\mathcal{D}}=\Gamma N=0 $ as a bifurcation problem. In other words, one may consider these equations either as equations determining $ M $ and $ N $, or as being satisfied by the single condition $ \Gamma =0 $.
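For reference, the 16 degrees of freedom quoted above follow from the standard counting (a sketch: 40 phase space variables, 6 first class and 12 second class constraints; the split into 2 massless, 5 massive and 1 ghost configuration-space modes is the standard interpretation, not derived here):
\[
40-2\times 6-12=16
\qquad\Longrightarrow\qquad
\frac{16}{2}=8=\underbrace{2}_{\mbox{\scriptsize massless}}+\underbrace{5}_{\mbox{\scriptsize massive}}+\underbrace{1}_{\mbox{\scriptsize ghost}}.
\]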
We are mostly familiar with cases where $ \Gamma(x,y) $ is proportional to $ \delta(x-y) $. However, in a formal way, one may also consider the case where $ \Gamma $ also contains $ \partial_{i}\delta(x-y) $. During the weeks we were preparing this article, a new paper by F. Hassan and A. Lundkvist \cite{Hassan18} was published which shows that the correct expressions for $ \mathcal{C}_{2} $ and $ \mathcal{D}_{2} $ do not contain derivatives of $ M $ and $ N $. Our calculations are in agreement with this result. In other words, the Poisson bracket $ \{\mathcal{C},\mathcal{D} \} $ does not contain derivatives of the delta function at all. Hence, the correct consistency conditions, $ \mathcal{C}_{2} $ and $ \mathcal{D}_{2} $, are not given by equations (\ref{diff}), but read $ \Gamma N\approx0$ and $ \Gamma M\approx0 $, where $ \Gamma(x) $ is as given in Eq. (\ref{gama}), in agreement with the result of \cite{Hassan18}. We emphasize again that the system of equations $ \Gamma N\approx0$ and $ \Gamma M\approx0 $ is, in fact, a real bifurcation problem, where one needs to make a choice in order to proceed. Here we have two choices: i) Everywhere in phase space where $ \Gamma $ does not vanish, we should impose $ M\approx N\approx0 $, which is more or less similar to the result of Ref. \cite{jklu} discussed above, i.e. 16 degrees of freedom containing a ghost, and the lack of the complete four parameter diffeomorphism of space-time. The worst consequence of the choice $ \Gamma \neq 0 $, $ N \approx M \approx 0 $ is the emergence of singular metrics, which are not physically acceptable. Note, however, that the dynamics of the theory, by itself, does not discard this possibility; it is our physical preference, imposed from outside the dynamical investigation of the model, that puts this choice aside. ii) If we restrict ourselves to the subregion $ \Gamma \approx 0 $ of the phase space, we would have no restriction on the lapses $ N $ and $ M $ up to this point, i.e.
they remain arbitrary so far, although we may encounter restrictions on the lapse functions (not here, but) in the subsequent levels of the canonical investigation of the theory, as we will see. However, as we mentioned before, $ \Gamma $ is not an ordinary constraint which comes out of the Poisson brackets of the existing constraints with the canonical Hamiltonian, as is the case, in the Dirac approach, for every constrained system. In other words, $ \Gamma=0 $ is not a natural consequence of the dynamics of the system; it is just a kind of constraint, or restriction on the canonical variables, which one imposes in order to escape the unwanted results $ M= 0 $ and $ N = 0 $. Hence, in our opinion, special care is needed to distinguish what naturally emerges from the dynamics of the theory from what we "assume" in order to have a consistent theory. In fact, constraints such as $ \Gamma $ should be viewed as a new kind of constraint, different from primary constraints (which emerge from the definition of the momenta) and secondary constraints (which emerge from the Poisson brackets of the constraints with the canonical Hamiltonian). This kind of constraint, which we denote as a "new kind" constraint, is also familiar from the canonical analysis of Chern-Simons-like theories in 3 dimensions \cite{Haji}. \footnote{The necessity of additional constraints has been observed previously in the canonical analysis of Chern-Simons-like theories in Refs. \cite{x1}-\cite{x3}. However, their special character, distinguishing them from normal Dirac constraints, was not recognized before.} Assume, anyhow, that we have accepted $ \Gamma $ as a new constraint. It is obvious that the system should not leave the surface $ \Gamma=0 $ during the time evolution. Hence, the consistency condition $ \dot{\Gamma}=0 $ should be imposed as well.
This gives the fourth level constraint \begin{eqnarray} && \Omega(x)\equiv \int d^{3}y \{\Gamma(x), H_{c}(y)\}_{*}=E(x)M(x)+F(x)N(x), \end{eqnarray} where \begin{eqnarray} && F(x)N(x)=\int d^{3}y N(y)\{\Gamma(x), \mathcal{C}(y)\}_{*},\\&& E(x)M(x)=\int d^{3}y M(y)\{\Gamma(x), \mathcal{D}(y)\}_{*}. \end{eqnarray} The symbol $\lbrace \ ,\ \rbrace_{*}$ denotes the Dirac bracket \cite{dms3}, which amounts to imposing the constraints $ P_{n^{i}} $ and $ U_{i} $ strongly (see Eq. (\ref{bi15})). The constraint $ \Omega(x) $ contains the lapse functions $ M $ and $ N $. So one combination of the Lagrange multipliers $ u $ and $v$ in the total Hamiltonian is determined from the consistency of $ \Omega $, i.e. \begin{eqnarray} && \int d^{3}z \lbrace \Omega(x),\mathcal{H}_{T} (z)\rbrace_{*} =\nonumber\\&& \int d^{3}z \lbrace \Omega(x),(\mathcal{H}_{c}+uP_{N}+vP_{M}) (z)\rbrace_{*} \approx 0. \label{op1} \end{eqnarray} The good news is that this is the end of the consistency process, and one combination of $ u $ and $ v $ remains undetermined. In addition to the Lagrange multipliers $u_{i}$ in Eq. (\ref{k111}), we have, in this way, altogether 4 arbitrary gauge fields corresponding to the diffeomorphism parameters. One may organize the whole structure of the problem in a clearer form by changing the lapse variables to $ \bar{N}, M $ such that \begin{eqnarray} && H_{c}=\bar{N}\mathcal{C}+M\mathcal{D}^{\prime}+M^{i}\mathcal{R}_{i}, \label{hc} \end{eqnarray} where \begin{eqnarray} && \bar{N}=N+\frac{E}{F}M,\\&& \mathcal{D}^{\prime}=\mathcal{D}-\frac{E}{F}\mathcal{C}. \end{eqnarray} In this system the consistency of $ \mathcal{D}^{\prime} $ is satisfied identically on the surface $ \Gamma=0 $. Meanwhile, consistency of $ \Gamma $ gives $ \Omega=\bar{N}F $, and finally consistency of $ \Omega $ determines the Lagrange multiplier of the primary constraint $ P_{\bar{N}} $ in the total Hamiltonian. The interesting point is that at the final stage the problem bifurcates once more.
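As a quick symbolic check (a sketch using sympy, with plain symbols standing in for the corresponding phase-space functions), the redefinition $\bar{N}=N+(E/F)M$, $\mathcal{D}^{\prime}=\mathcal{D}-(E/F)\mathcal{C}$ indeed leaves the canonical Hamiltonian density unchanged, $\bar{N}\mathcal{C}+M\mathcal{D}^{\prime}=N\mathcal{C}+M\mathcal{D}$:

```python
# Algebraic check of the lapse redefinition used in Eq. (hc).
# The symbols below are placeholders for the phase-space functions;
# this verifies only the pointwise algebraic identity.
import sympy as sp

N, M, E, F, C, D = sp.symbols('N M E F C D')
Nbar = N + (E / F) * M          # redefined lapse
Dprime = D - (E / F) * C        # redefined constraint

# difference between the two forms of the canonical Hamiltonian density
diff = sp.simplify(Nbar * C + M * Dprime - (N * C + M * D))
print(diff)  # 0
```

The cross terms $(E/F)M\mathcal{C}$ cancel pairwise, which is why the rewriting is legitimate wherever $F\neq 0$.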
That is, we could restrict ourselves to the surface $ F=0 $. However, we do not do so, since it would make our change of variables in Eq. (\ref{hc}) singular. Hence, our analysis is valid where $ F\neq0 $. \section{Bi-gravity without square root} To see the general construction of section 2 more clearly, let us consider a general model of the form \begin{equation} S=\int d^{4}x \left( M^{2}_{g} \sqrt{-^{(4)}g} R(g)+M^{2}_{f} \sqrt{-^{(4)}f} R(f)+2m^{4} (^{(4)}g\ ^{(4)}f)^{1/4} V(\mathcal{Z}_{1},...,\mathcal{Z}_{4})\right), \label{m111} \end{equation} where $\mathcal{Z}_{n}=Tr[(g^{-1}f)^{n}] $. Compared to HR bi-gravity, this class involves $\mathcal{Z}^{\mu}_{\ \nu}=g^{\mu \lambda}f_{\lambda\nu} $ instead of $\sqrt{\mathcal{Z}^{\mu}_{\ \nu}} $. There is also a slight difference in the coefficient of the interaction term, where $ \sqrt{-^{(4)}g} $ is replaced by $ [^{(4)}g\, ^{(4)}f]^{1/4}$, which is more symmetric with respect to the $ g$ and $f $ metrics. For this case, it is convenient to use the following variables \cite{bul} \begin{eqnarray}&& \bar{N}= \sqrt{NM} ,\hspace{3mm} n=\sqrt{\dfrac{N}{M}}, \hspace{3mm} \bar{N^{i}}=\dfrac{1}{2}(N^{i}+M^{i}),\hspace{3mm} n^{i}=\dfrac{N^{i}-M^{i}}{\sqrt{NM}}. \label{kogan1} \end{eqnarray} Considering Eqs. (\ref{g1}) and (\ref{g2}) together with Eq.
(\ref{kogan1}), one can show directly \begin{eqnarray}&& \mathcal{Z}_{1}=\mathcal{Z}^{\mu}_{\ \mu}=a+a_{i}^{i},\label{u11}\\&& \mathcal{Z}_{2}=\mathcal{Z}^{\mu}_{\ \nu}\mathcal{Z}^{\nu}_{\ \mu}=a^{2}+v_{i}w^{i}+a_{j}^{i}a_{i}^{j}, \label{u12}\\&& \mathcal{Z}_{3}=\mathcal{Z}^{\rho}_{\ \mu}\mathcal{Z}^{\mu}_{\ \nu}\mathcal{Z}^{\nu}_{\ \rho}=a^{3}+3v^{i}w_{i}a+3v_{i}a_{j}^{i}w^{j}+a_{j}^{i}a_{k}^{j}a_{i}^{k}, \label{u13}\\&&\mathcal{Z}_{4}=\mathcal{Z}^{\rho}_{\ \sigma}\mathcal{Z}^{\sigma}_{\ \mu}\mathcal{Z}^{\mu}_{\ \nu}\mathcal{Z}^{\nu}_{\ \rho} \nonumber\\&& \hspace{6mm}=a^{4}+4a^{2}v_{i}w^{i}+2(v_{i}w^{i})^{2}+4av_{i}a^{i}_{j}w^{j}+4v_{i}a_{j}^{i}a_{k}^{j}w^{k}+a_{j}^{i}a_{k}^{j}a_{l}^{k}a_{i}^{l}, \label{u1} \end{eqnarray} where \begin{eqnarray}&& v_{i}=\frac{f_{ij}n^{j}}{n^{2}},\\&& a=\frac{1}{n^{4}}-\frac{n^{i}f_{ij}n^{j}}{2n^{2}},\label{u21}\\&& a^{i}_{j}=g^{ik}f_{kj}-\frac{n^{i}f_{jk}n^{k}}{2n^{2}},\label{u31}\\&& w^{i}=n^{i}\frac{n^{m}f_{mn}n^{n}}{4n^{2}}-\frac{n^{i}}{2n^{4}}-\frac{1}{2}g^{im}f_{mk}n^{k}. \end{eqnarray} These relations show that the interaction potential $ V(\mathcal{Z}_{1},...,\mathcal{Z}_{4}) $ is fortunately independent of $\bar{N} $ and $\bar{N}^{i} $. This enables us to linearize the action with respect to $\bar{N} $ and $\bar{N}^{i} $. In Ref. \cite{Kluson} it is argued that the characteristic equation of the matrix $\mathcal{Z}_{\ \nu}^{\mu} $ is the same as that of $A^{\mu}_{\ \nu}=\mathcal{Z}_{\ \nu}^{\mu}|_{\bar{N}=1,\bar{N}^{i}=0} $. Hence, it is deduced that, in principle, there exists a similarity transformation which brings $ \mathcal{Z}_{\ \nu}^{\mu} $ to $ A_{\ \nu}^{\mu} $. However, besides the direct calculation of $\mathcal{Z}_{1}$ to $\mathcal{Z}_{4} $ in Eqs.
(\ref{u11}) to (\ref{u1}), we can simply argue that, since $ Tr(\mathcal{Z}^{\mu}_{\ \nu})^{n} $ is gauge invariant, one can in fact calculate the corresponding quantities $ \mathcal{Z}_{n} $ in the special gauge where $ \bar{N}=1 $ and $\bar{N}^{i}=0 $, which gives the same results. Including the well-known result for the Einstein-Hilbert parts of the action (\ref{m111}), the Lagrangian density reads as Eq. (\ref{m12}), where \begin{equation} H_{c}=\int d^{3}x (\bar{N}\mathcal{R}+\bar{N}^{i}\mathcal{R}_{i}),\label{hc2} \end{equation} in which \begin{equation} \mathcal{R}=n\mathcal{R}_{0}^{(g)}+\frac{1}{n}\mathcal{R}_{0}^{(f)}+\frac{1}{2}n^{i}\mathcal{R}_{i}^{(g)}-\frac{1}{2}n^{i}\mathcal{R}_{i}^{(f)}+2m^{4} (gf)^{1/4}V(\mathcal{Z}^{\mu}_{\ \nu}),\label{k1} \end{equation} \begin{equation} \mathcal{R}_{i}=\mathcal{R}_{i}^{(g)}+\mathcal{R}_{i}^{(f)}. \end{equation} As usual, the momenta conjugate to the lapse-shift variables $\bar{N},\bar{N}^{i},n$ and $n^{i}$ are primary constraints, i.e. \begin{equation} P_{\bar{N}}\approx 0, \ P_{i}\approx 0,\ p_{n}\approx 0, \ p_{i}\approx 0 . \end{equation} Hence, the total Hamiltonian is as follows \begin{eqnarray} H_{T}=\int d^{3}x (\bar{N}\mathcal{R}+\bar{N}^{i}\mathcal{R}_{i}+ uP_{\bar{N}}+u_{i}P_{i}+v^{i}p_{i}+vp_{n}).
\label{l1} \end{eqnarray} The time evolution of the primary constraints gives \begin{eqnarray} && \lbrace P_{\bar{N}},H_{T} \rbrace =-\mathcal{R}\approx 0,\label{bi110}\\ && \lbrace P_{i},H_{T}\rbrace = -\mathcal{R}_{i} \approx 0,\label{bi111}\\ && \lbrace p_{n} ,H_{T} \rbrace =\bar{N}(-\mathcal{R}_{0}^{(g)}+\frac{1}{n^{2}}\mathcal{R}_{0}^{(f)}-2m^{4} (gf)^{1/4}\frac{\delta V }{\delta n}) \equiv \bar{N}\zeta,\label{bi112}\\ && \lbrace p_{i},H_{T}\rbrace =\bar{N}( -\frac{1}{2}\mathcal{R}_{i}^{(g)}+\frac{1}{2}\mathcal{R}_{i}^{(f)}-2m^{4} (gf)^{1/4}\frac{\delta V }{\delta n^{i}})\equiv \bar{N}\zeta_{i}.\label{bi113} \end{eqnarray} Compared with the general formalism of section 2, the secondary constraints $ \mathcal{R}_{a} $ of Eq. (\ref{m14}) are $ \mathcal{R} $, $ \mathcal{R}_{i} $, $ \tilde{\zeta}\equiv \bar{N}\zeta$ and $ \tilde{\zeta}^{i}\equiv \bar{N}\zeta^{i}$ respectively. The constraints $ \mathcal{R}_{i} $ are mainly composed of the Einstein-Hilbert parts $ \mathcal{R}_{i}^{(g)} $ and $ \mathcal{R}_{i}^{(f)} $ and commute with each other. The constraint $ \mathcal{R} $ (see Eq. (\ref{k1})) is the most important part of the theory, since it includes the interaction term. Straightforward calculations given in Ref. \cite{Kluson} show that $\lbrace \mathcal{R}(x),\mathcal{R}(y)\rbrace \approx 0 $ as well as $\lbrace \mathcal{R}(x),\mathcal{R}_{i}(y)\rbrace \approx 0 $. A look at the secondary constraints $ \tilde{\zeta}$ and $\tilde{\zeta}_{i} $ shows that we have a bifurcation problem here. We are in general free to assume either of the cases $\bar{N}=0 $ or $ \bar{N}\neq 0 $. The first choice leads to a degenerate metric, which is not physical. Hence, the simplified constraints $ \zeta $ and $ \zeta_{i} $ result from the physical assumption $\bar{N}\neq 0 $. Let us note briefly that, in contrast to the approach of Ref. \cite{Kluson}, it is not necessary to add the secondary constraints to the total Hamiltonian.
In fact, as shown in \cite{zms2,BGP}, the total Hamiltonian as the generator of time evolution should only include the primary constraints.\footnote{Working with the extended Hamiltonian, which includes all the constraints, may sometimes simplify the problem, but not for the case at hand. The extended Hamiltonian does, however, give the correct time evolution for gauge invariant quantities.} Adding the secondary constraints to the total Hamiltonian, however, forces us to calculate some unnecessary Poisson brackets. Now we need to consider the consistency of the secondary constraints, using the total Hamiltonian (\ref{l1}). This should give us equations similar to Eq. (\ref{m15}), with the $u_{a}$'s as unknowns. Since the $\mathcal{R}_{i}$ include none of the lapse and shift functions, they commute with all of the primary constraints, as well as with the canonical Hamiltonian. The constraint $\mathcal{R}$, however, does depend on $n$ and $n^{i}$ (see Eq. (\ref{k1})). It is easy to see that $\{ \mathcal{R}, p_{\mu} \}=\partial \mathcal{R}/\partial n^{\mu} = -\zeta_{\mu} \approx 0$. Hence, consistency of the constraints $\mathcal{R}$ and $\mathcal{R}_{i}$ gives no new constraint and determines none of the Lagrange multipliers. Therefore, the only non-trivial part of Eq. (\ref{m15}) comes from the consistency of the constraints $ \zeta $ and $\zeta_{i} $. In this way the consistency conditions of the secondary constraints can be written in the following matrix form \begin{equation} \left( \begin{array}{cr} 0 \\ 0 \\ \{\zeta , H_{c}\} \\ \{\zeta_{i}, H_{c}\} \\ \end{array} \right) + \left( \begin{array}{c|c} \textbigcircle & \textbigcircle \\ & \\ \hline & \\ \textbigcircle & \vartriangle_{\mu\nu} \\ \end{array} \right) \left( \begin{array}{cr} u \\ u^{i} \\ v \\ v^{i} \\ \end{array} \right)=0.
\end{equation} where \begin{eqnarray} && \vartriangle_{\mu\nu}\equiv \lbrace \zeta_{\mu},p_{\nu} \rbrace=\frac{\partial^{2} \tilde{V}}{\partial n^{\mu}\partial n^{\nu}}, \label{ty} \end{eqnarray} in which \begin{eqnarray} && \tilde{V}=\frac{1}{n}\mathcal{R}_{0}^{(f)}+2m^{4} (gf)^{1/4}V. \label{1ty} \end{eqnarray} As expected, the matrix of the coefficients of the $ u $'s and $ v $'s has four null-vectors, which do not lead to any new constraint. Hence, the four variables $ u $ and $ u^{i} $ remain undetermined throughout the dynamical investigation of the theory. The only nontrivial part of the consistency procedure of the secondary constraints then reads \begin{equation} \{\zeta_{\mu} , H_{c}\}-\vartriangle_{\mu\nu}v^{\nu}=0.\label{5k} \end{equation} If $ \det(\vartriangle_{\mu\nu})\neq 0 $, the constraints $ p_{n},\zeta $ and $ p_{i},\zeta_{i} $ are second class. Hence, we have 8 first class and 8 second class constraints, which gives 16 dynamical phase space variables (see Eq. (\ref{hy})). This involves a scalar ghost. If $ \det (\vartriangle_{\mu\nu})=0 $, we have at least one null-vector of the matrix $ \vartriangle_{\mu\nu} $, denoted by $ \lambda^{\mu} $. Multiplying Eq. (\ref{5k}) by $ \lambda^{\mu} $, we find the new constraint $ \lambda^{\mu}\{\zeta_{\mu} , H_{c}\}=0 $. Since $ \{\zeta_{\mu} , \mathcal{R}_{i}\}=0, $ from Eq. (\ref{hc2}) the new constraint reads \begin{eqnarray} && \lambda^{\mu}\lbrace \zeta_{\mu}(x),H_{c}(y) \rbrace =\int d^{3}y \bar{N}(y)\left( \delta(x-y)\mathcal{F}(x)+\mathcal{W}^{i}(x)\partial_{x^{i}}\delta(x-y)\right)\approx0, \label{mei210} \end{eqnarray} for some functions $\mathcal{F}(x) $ and $ \mathcal{W}^{i}(x) $. If $ \mathcal{W}^{i}(x)\neq 0 $, equation (\ref{mei210}) gives a differential equation for the lapse function $ \bar{N} $. However, the requirement of diffeomorphism invariance needs $ \bar{N} $ to be an arbitrary field, while a differential equation restricts this arbitrariness to only its initial conditions. Ref.
\cite{Kluson} deduces from this that the case $ \det (\vartriangle_{\mu\nu})=0 $ should not happen; hence all models of the form of Eq. (\ref{m111}), including HR bi-gravity, contain a ghost mode. However, as pointed out in a footnote of the same reference, there is the possibility of vanishing $ \mathcal{W}^{i}(x) $, which changes the constraint (\ref{mei210}) to the bifurcation form $ \bar{N}(x)\mathcal{F}(x)\approx 0 $. Again, we use the physical condition $ \bar{N}\neq 0 $ to consider $ \mathcal{F}(x) $ as a new constraint. Consistency of $ \mathcal{F}(x) $ may also lead to a differential equation for $ \bar{N}$. If we are lucky enough, the coefficient of the derivative of the delta function in this new equation may also vanish. Under these circumstances, we would have two more second class constraints, which cancel the ghost. Although it seems improbable, the analysis of HR gravity with the more complicated potential (involving the square root of $ g^{-1}f $) shows that it may be possible for a specially designed interaction potential to provide the two additional constraints needed to remove the ghost. We want here to be bold enough to make a new suggestion. Consider the differential equation (\ref{mei210}) for $ \bar{N}$ as an integral equation, \begin{eqnarray} && \int d^{3}y \Upsilon(x,y)\bar{N}(y)\approx0. \label{meki210} \end{eqnarray} This can also be considered as a bifurcation problem for the two factors $ \Upsilon(x,y) $ and $ \bar{N}$. Hence, imposing $ \bar{N} \neq 0$ may lead us to consider the new constraint $ \Upsilon(x,y)\equiv\delta(x-y)\mathcal{F}(x)+\mathcal{W}^{i}(x)\partial_{x^{i}}\delta(x-y)\approx0 $. This kind of constraint deviates slightly from being a local constraint, so we call it a "semi-local constraint". Consistency of $ \Upsilon $ may again give a semi-local constraint. We think these new constraints are still strong enough to remove the ghost degree of freedom.
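To make explicit how Eq. (\ref{meki210}) acts on $\bar{N}$, integrating the two terms of $\Upsilon$ against $\bar{N}(y)$ gives (a one-line check using the defining property of the delta function):
\[
\int d^{3}y\,\Upsilon(x,y)\bar{N}(y)
=\mathcal{F}(x)\bar{N}(x)+\mathcal{W}^{i}(x)\,\partial_{i}\bar{N}(x)\approx 0,
\]
which is precisely a first order differential equation for $\bar{N}$, reducing to the bifurcation form $\mathcal{F}\bar{N}\approx 0$ when $\mathcal{W}^{i}=0$.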
However, further details require considering a specific model of the form given in Eq. (\ref{m111}); here we have only suggested the idea. To see the above arguments more concretely, consider the example in which the interaction term is $ V=\mathcal{Z}_{1} $ as given in Eq. (\ref{u11}), which is also analyzed in Ref. \cite{Klu3a}. Using Eqs. (\ref{u21}) and (\ref{u31}) we have \begin{equation} V=\frac{1}{n^4}-\frac{n^if_{ij}n^j}{n^2}+g^{ij}f_{ij}.\label{111} \end{equation} For this particular interaction the constraints $\zeta$ and $ \zeta_{i} $ and the matrix $ \vartriangle_{\mu\nu}$ read \begin{eqnarray}&& \zeta\equiv -\mathcal{R}_{0}^{(g)}+\frac{1}{n^{2}}\mathcal{R}_{0}^{(f)}-2m^4(gf)^{1/4}\left( \frac{-4}{n^{5}}+\frac{2n^{i}f_{ij}n^{j}}{n^{3}}\right),\\&& \zeta_{i} \equiv\frac{-1}{2}\mathcal{R}_{i}^{(g)}+\frac{1}{2}\mathcal{R}_{i}^{(f)}+2m^{4}(gf)^{1/4}(\frac{2n^{i}f_{ij}}{n^{2}}), \end{eqnarray} \begin{eqnarray}&& \vartriangle_{\mu\nu}=\left( \begin{array}{cr} -2\mathcal{R}_{0}^{(f)}/n^3 +\alpha(-20/n^6 +6n^i f_{ij}n^j/n^4) & -4\alpha n^if_{ij}/n^3 \\ & \\ -4\alpha n^i f_{ij}/n^3 & 2\alpha f_{ij}/n^2 \\ \end{array} \right), \label{e1} \end{eqnarray} where $ \alpha \equiv 2m^4(gf)^{1/4} $. To find the possible null-vectors of the matrix $ \vartriangle_{\mu\nu} $, first consider the last three columns, which are proportional to $ \left( \begin{array}{cr} 2n^{i}f_{ij} \\ nf_{ij} \\ \end{array} \right) $. Since $f_{ij}$ is considered to be non-singular, each null-vector $ \lambda^{\mu} $ of $ \vartriangle_{\mu\nu}$ should necessarily be of the form $ (n,2n_i) $. However, such a vector obviously has a non-vanishing product with the first column. Moreover, direct calculation shows that $ \vartriangle_{ \mu\nu}$ in Eq. (\ref{e1}) is nonsingular. This analysis indicates that a bi-gravity theory with interaction $ V=\mathcal{Z}_{1} $ contains a ghost.
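The claimed nonsingularity of $\vartriangle_{\mu\nu}$ in Eq. (\ref{e1}) can be checked numerically. The following sketch is our own illustration, not part of the paper: it makes the simplifying assumption $f_{ij}=\delta_{ij}$ and uses arbitrary sample values $\mathcal{R}_{0}^{(f)}=3$, $\alpha=1$, $n=2$, $n^{i}=(1,1,1)$, comparing a direct determinant with a Schur-complement closed form; every bracketed term in that closed form is negative for positive $\mathcal{R}_{0}^{(f)}$ and $\alpha$, so the determinant cannot vanish.

```python
from fractions import Fraction as F

def det(M):
    # Determinant by cofactor expansion along the first row (fine for 4x4).
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * a * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j, a in enumerate(M[0]))

def triangle(R, a, n, v):
    # The 4x4 matrix of Eq. (e1) with the simplifying choice f_ij = delta_ij;
    # R is R_0^(f), a is alpha, n the lapse-like variable, v the shifts n^i.
    v2 = sum(x * x for x in v)
    A = -2 * R / n**3 + a * (F(-20) / n**6 + 6 * v2 / n**4)
    B = [-4 * a * x / n**3 for x in v]
    return [[A] + B] + [[B[i]] + [2 * a * F(i == j) / n**2 for j in range(3)]
                        for i in range(3)]

R, a, n, v = F(3), F(1), F(2), [F(1), F(1), F(1)]
d = det(triangle(R, a, n, v))
# Schur-complement closed form of the same determinant:
# det = (2a/n^2)^3 * ( -2R/n^3 - 20a/n^6 - 2a|v|^2/n^4 )
v2 = sum(x * x for x in v)
closed = (2 * a / n**2) ** 3 * (-2 * R / n**3 - 20 * a / n**6 - 2 * a * v2 / n**4)
assert d == closed and d != 0   # nonzero: no null-vector exists for these values
```

Exact rational arithmetic (`fractions.Fraction`) is used so the equality check between the two determinant expressions is not blurred by rounding.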
However, one may consider more complicated interactions involving $ \mathcal{Z}_{2} $, $ \mathcal{Z}_{3} $ and $ \mathcal{Z}_{4} $ in Eqs. (\ref{u11}-\ref{u1}). Theoretically it is not impossible to have an interaction for which the matrix $\vartriangle_{\mu\nu} $ is singular and the subsequent conditions for a ghost-free theory of bi-gravity are satisfied. However, finding such a model seems to be a second realization of the old dream of having a ghost-free bi-gravity theory (after the HR model). \section{Conclusions } We performed the Hamiltonian analysis of four dimensional bi-gravity theories in the context of the ADM formalism. First, we worked in the framework of the original lapse and shift variables. In order to generate the gauge symmetry, i.e. the diffeomorphism, in the 40 dimensional phase space, we need to have 8 first class constraints at the first and second levels of consistency of the constraints. Hence, the matrix of the second derivatives of the interaction term with respect to the lapse and shift variables should have at least 4 null-vectors. However, if we demand removal of the ghost, we need one more null-vector. This structure is preserved under every reparametrization of the lapse and shift functions. In fact, the main work done in reference \cite{HR5} is to find a suitable change of variables, so as to show that for HR bi-gravity the $8 \times 8$ matrix of second derivatives of the potential term has rank three with respect to the new variables. Note, however, that nobody has claimed this characteristic is exclusive to HR bi-gravity. Although difficult, it is not impossible for future model builders to introduce new models with the same property. Suppose that the first condition is fulfilled and we have five constraints at the second level which are not second class so far.
If consistency of these constraints gives no third level constraint, then we would have 6 second class and 10 first class constraints, which corresponds to a ghost-free model with 14 degrees of freedom. However, such a model would have one more gauge symmetry besides diffeomorphism. Theoretically it does not seem impossible to have a model of this kind, but there is no known model of this category. It is more or less known that 6 first class constraints, which generate the spatial diffeomorphism, can easily be found in every covariant model of bi-gravity. Hence, the only way to have a ghost-free theory of bi-gravity is to find two more second class constraints after the second level. Unfortunately, our demand is not satisfied in a straightforward manner. It seems that we usually find equations determining the lapse functions from consistency of the remaining second level constraints. In other words, one can by no means find in this procedure ordinary constraints which do not depend on the lapse functions. Our important observation in this paper is that at this stage we in fact have a bifurcation problem. The theory, as it stands, may have dynamical sectors in which the lapse functions are constrained. This is in contradiction with our physical expectation that the lapse functions should act as part of the gauge parameters of diffeomorphism. On the other hand, if we restrict ourselves to a limited subregion of the phase space described by additional constraints, the consistency conditions of the remaining constraints may have a different solution. In other words, if we assume that in the physical sector of the theory the lapse functions should not vanish (or be rigidly determined), then the only consistent subregion is achieved by imposing an additional constraint. As we found, in HR bi-gravity consistency of this constraint gives a fourth level constraint, whose consistency in turn determines a special combination of the lapse functions.
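The counting used above and in the previous section follows the standard Dirac recipe: in phase space, each first class constraint removes two variables (the constraint itself plus a gauge direction) and each second class constraint removes one. A minimal sketch of this bookkeeping (ours, not from the paper):

```python
def phase_space_dof(dim_phase, n_first_class, n_second_class):
    """Dirac counting of remaining phase-space variables:
    each first-class constraint removes two, each second-class one."""
    return dim_phase - 2 * n_first_class - n_second_class

# Generic bi-gravity case discussed earlier: 8 first class + 8 second class
# constraints in the 40-dimensional phase space leave 16 variables,
# i.e. the 2 + 5 graviton modes plus one scalar ghost.
assert phase_space_dof(40, 8, 8) == 16

# The hypothetical ghost-free case with an extra gauge symmetry:
# 10 first class + 6 second class leave 14 variables,
# i.e. the 2 + 5 modes of a massless plus a massive graviton.
assert phase_space_dof(40, 10, 6) == 14
```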
We argued that even in the case where the consistency condition of the remaining second level constraint leads to differential equations, the bifurcation character of the problem remains unchanged. In such cases we introduced the notion of semi-local constraints, which contain a limited number of derivatives of the delta function. The interesting point is that at the bifurcation point the original model may go through the branch which fixes the lapse functions. If so, the theory does not enjoy the full capacity of four dimensional diffeomorphism; i.e. the gauge symmetry is limited to spatial diffeomorphism. This shows that in the Hamiltonian framework we have additional situations which may not occur in the Lagrangian formulation. However, along the physical branch, in addition to the two second class constraints needed for removing the ghost, we have also found the two more first class constraints needed to generate the full four dimensional diffeomorphism. Unfortunately, this analysis relies only on counting the number of first class constraints. A difficult problem concerns how the variations of the dynamical variables due to diffeomorphism are generated by these first class constraints. This may be the subject of our future work. As mentioned in the introduction, bi-gravity models may be employed in describing the observations concerning the neutron star merger event GW170817. As stated in \cite{YA}, this event puts constraints on the physical parameters of bi-gravity coupled to matter. However, our investigation in this paper concerns only pure gravity and the problem of the ghost. It is obvious that, having a consistent theory of bi-gravity at hand, we are able to adjust its coupling to matter so as to fulfill the constraints imposed by the observations. \vspace{8mm} {\bf{Acknowledgements:}} The authors would like to thank Claudia de Rham for helpful discussions. Z.M. thanks IPM for hospitality during the progress of this work. \vskip .3cm \vspace{8mm}
\section{Introduction} \label{intro} \medskip Systems of interacting particles are ubiquitous and can be found in many problems of physics, chemistry, biology, economics, and social science. A wide class of such systems can be presented as follows. Consider $N$ interacting particles described by a coupled system of $N$ stochastic differential equations: \begin{eqnarray} \text{d}X_i(t)= S(X_i)\text{d}t+\sqrt{2 D}\,\text{d}W_i(t)+\frac{1}{N} \sum_{j=1}^N u(X_i, X_j) \text{d}t,\quad \text{for $i=1,..,N$.} \label{IBM} \end{eqnarray} Here $X_i(t)$ is the position of the $i$th particle at time $t$, $S$ is either self-propulsion, internal frequency (as in the Kuramoto model, see Subsection \ref{subsec:kuramoto}), or a conservative force field (e.g., gravity), $W_i(t)$ denotes a Wiener process, and $u(x,y)$ is the interaction force between two particles at positions $x$ and $y$. Oftentimes the force is represented by a function of the directed distance between particles, so $u$ can be written as follows \begin{eqnarray}\label{dist_cond} u(X_i, X_j)=\hat u(X_j- X_i). \end{eqnarray} System \eqref{IBM} has to be supplied with initial conditions. For a large number of particles $N$, finding the initial position for each particle is not practical. Instead, it is reasonable to assume that the initial positions $X_1$,$\dots$,$X_N$ are random, independent and identically distributed (i.i.d.). Thus, instead of determining a massive tuple of $N$ initial conditions, a single continuous probability distribution function is introduced. Note that system \eqref{IBM} represents first order dynamics in which the net force is proportional to velocity, i.e. $F_i\sim V_i=\dot{X}_i$, as opposed to second order dynamics, usually obtained from Newton's Law, in which the net force is proportional to acceleration $F_i\sim a_i=\ddot{X}_i$.
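A minimal Monte Carlo sketch of one realization of system \eqref{IBM} may help fix ideas. It is our own illustration, using the Euler-Maruyama discretization adopted later in the paper, a periodic unit interval, and an attractive kernel $\hat u(r)=10r\,e^{-10r^2}$ as an example; wrapping $X_j-X_i$ to $(-1/2,1/2]$ is our convention for the periodic directed distance, not something specified in the text.

```python
import math
import random

def simulate(N=10, D=0.005, dt=5e-3, T=1.0,
             u_hat=lambda r: 10 * r * math.exp(-10 * r * r),
             S=lambda x: 0.0, seed=0):
    """Euler-Maruyama for dX_i = S(X_i)dt + (1/N) sum_j u_hat(X_j - X_i) dt
    + sqrt(2D) dW_i on [0,1) with periodic boundary conditions."""
    rng = random.Random(seed)
    X = [rng.random() for _ in range(N)]    # i.i.d. uniform initial positions
    for _ in range(int(T / dt)):
        F = []
        for i in range(N):
            # periodic directed distance mapped to (-1/2, 1/2]
            f = sum(u_hat(((X[j] - X[i] + 0.5) % 1.0) - 0.5)
                    for j in range(N)) / N
            F.append(f + S(X[i]))
        X = [(X[i] + F[i] * dt + math.sqrt(2 * D * dt) * rng.gauss(0, 1)) % 1.0
             for i in range(N)]
    return X

X = simulate()
assert len(X) == 10 and all(0.0 <= x < 1.0 for x in X)
```

In practice one would run many such realizations and histogram the positions, which is exactly the role the direct simulations play later in the paper.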
First order dynamics are commonly used for models such as those of ants marching \cite{MorCapOel2005}, bacteria swimming \cite{RyaBerHai2013,RyaBerHai2011}, hierarchies in pigeons \cite{NagVásPet2013}, opinion dynamics \cite{MotTad2014}, point vortices \cite{GooHou1991}, etc. To solve \eqref{IBM} with random initial conditions means to find a joint probability distribution function (or $N$-particle pdf): \begin{equation*} f_N(t,x_1,x_2,\dots,x_N). \end{equation*} Then the probability of finding a tuple $(X_1,\dots,X_N)$ in a given domain $\Omega$ at time $t$ is \begin{equation*} \int\limits_{\Omega}f_N(t,x_1,\dots,x_N)\,\text{d}x_1\dots\text{d}x_N. \end{equation*} The function $f_N$ can be found as a solution of the Liouville equation \cite{Spo1991}. However, finding a function of $N$ arguments, such as $f_N$, numerically means computing an $N$-dimen\-sional array, which is prohibitively computationally expensive even for moderately large $N$. Therefore a simplification of the problem for $f_N$ is required. A classical approach is the Mean Field Approximation (MFA) \cite{Ald1999,Jab2014,Spo1991}, which relies on the assumption that initially uncorrelated particles remain uncorrelated as time evolves. Then the joint probability distribution function $f_N$ is determined by a function of two variables, $f_1(t,x)$: \begin{equation}\label{mean_field_formula} f_N(t,x_1,x_2,...,x_N)\approx\prod\limits_{i=1}^{N}f_1(t,x_i). \end{equation} One can substitute \eqref{mean_field_formula} into the Liouville equation for $f_N$ to obtain a partial differential equation (PDE) for $f_1$ (the Vlasov equation). In the limit $N\to \infty$ (the mean field limit), formula \eqref{mean_field_formula} holds exactly \cite{BraHep1977,Dob1979}. The function $f_1$ has the meaning of a one-particle pdf. Alternatively, in the limit $N\to\infty$ one can describe the set of all particles as a continuum with density $f_1(t,x)$.
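To make the claimed cost concrete: tabulating $f_k$ on a grid with $M$ points per coordinate requires $M^k$ entries, so storage for $f_N$ blows up exponentially with $N$. A quick back-of-the-envelope check (ours; the value $M=100$ is illustrative):

```python
# Storage needed to tabulate the k-particle pdf f_k on a grid with M points
# per coordinate grows as M**k (one double per entry).
M = 100                # grid points per dimension (illustrative)
bytes_per_entry = 8    # one 64-bit float
for k in (1, 2, 3, 10):
    print(f"f_{k}: {bytes_per_entry * M**k:.3e} bytes")
```

Already at $k=10$ the table exceeds $10^{18}$ bytes, which is why the hierarchy must be closed at a low level.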
Though MFA is useful in many applications, it is generally not as accurate for moderate or small $N$ \cite{MidFleGri2014}. MFA is also not applicable when the impact of correlations (which are neglected in MFA) is investigated. An example is collective behavior in bacterial suspensions \cite{SokAra2012,SokAraKes2007}: the density of bacteria $-$ or equivalently the one-particle pdf (since the number of bacteria is $N=10^{10}$ per $\text{cm}^3$) $-$ remains uniform, while the correlation length increases, so that the two-particle pdf changes due to the emergence of correlations. One set of approaches to account for correlations is based on using closures of the Bogoliubov-Born-Green-Kirkwood-Yvon (BBGKY) hierarchy. This hierarchy is the system of $N$ PDEs: one for the one-particle pdf $f_1$, one for the two-particle pdf $f_2$, ..., and one for the $N$-particle pdf $f_N$. The PDE for $f_k$ ($k=1,...,N-1$) in the BBGKY hierarchy is obtained by integration of the Liouville equation with respect to $x_{k+1},...,x_N$. The equation for $f_N$ is the Liouville equation itself. Solving the BBGKY hierarchy is equivalent to solving the Liouville equation, which is computationally prohibitive as explained above. On the other hand, the PDEs in the BBGKY hierarchy are coupled as follows: the PDE for $f_k$ depends on $f_{k+1}$. Therefore, one can obtain a closed system for $f_1,...,f_k$ by introducing a closure approximation for $f_{k+1}$ in terms of $f_1,...,f_k$. For example, MFA can be considered as a closure of the BBGKY hierarchy at level $k=1$ using the closure approximation: \begin{equation} \label{MF_assumption} f_2(t,x_1,x_2)=f_1(t,x_1)f_1(t,x_2). \end{equation} The closure approximation \eqref{MF_assumption} means that MFA relies on the assumption that correlations in the system of interacting particles are negligible.
{\it To account for correlations one needs a closure approximation at least at level $k=2$.} The Kirkwood Superposition Approximation (KSA), developed in \cite{Kir1935}, is the most widely used closure of the BBGKY hierarchy at level $k=2$ and was applied, for example, in gas dynamics \cite{Leu1982} and simple liquids \cite{EgePagHea1971}, and recently employed in biology \cite{BakSim2010,MidFleGri2014}. Following the general idea of closure approximations of the BBGKY hierarchy described above, in KSA a single ansatz for $f_3$ in terms of $f_1$ and $f_2$ is substituted in the equation for $f_2$. This ansatz is presented in Section \ref{sec:kinetic} and may be formulated in words as follows: the probability of finding a particle triple in a given configuration equals the probability of finding each pair independently of the third particle \cite{Col1968}. Though KSA is a phenomenological ansatz, formal justification and further improvements are available \cite{BugGorKar1991,RicLek1965}. However, we note that, to the best of our knowledge, there is no rigorous asymptotic approach to derive a closure of the BBGKY hierarchy that takes into account correlations. Recently, a closure at level $k=2$, alternative to KSA, has been introduced in \cite{BerJabPot2016}. The main difference between KSA and the closure from \cite{BerJabPot2016} $-$ referred to below as the Truncation Approximation (TA) $-$ is that instead of a single ansatz for $f_3$, TA introduces an individual representation for each of the two terms in the equation for $f_2$ where $f_3$ appears. The choices in TA are made so that key properties of the pdfs $f_1$ and $f_2$ are preserved (the properties are listed in Section \ref{sec:kinetic}). It was also proven that there is no single representation ansatz for $f_3$ that preserves all the key properties. Moreover, TA is less computationally expensive than KSA. In this paper we consider system \eqref{IBM} with various types of interactions $u$.
We compare the closures obtained from MFA, KSA and TA with each other and with Monte Carlo simulations of \eqref{IBM}. We show that TA is at least as accurate as KSA (when compared with Monte Carlo simulations). Moreover, we observe that TA is less computationally expensive and more numerically stable than KSA. Finally, for each type of interaction considered in this paper we describe the effect of correlations by comparing MFA, which neglects correlations, with the other methods. Here we consider values of $N$ that are not very large, for the following two reasons. First, one- and two-particle histograms $\hat{f}_1$ and $\hat{f}_2$ obtained from Monte Carlo simulations do not require excessive computations for such $N$. Note that $\hat{f}_1$ and $\hat{f}_2$ converge to the true $f_1$ and $f_2$ (that is, solutions of the original, untruncated BBGKY hierarchy) as the sample size, the number of realizations, grows to infinity. The second reason for choosing $N$ not large is to have an observable impact of correlations (correlations vanish as $N\to \infty$). The paper is organized as follows. In Section \ref{sec:kinetic} we formulate the main problem and review and discuss the application of the BBGKY hierarchy and its truncations, such as MFA, KSA and TA. Results of numerical simulations are presented in Section \ref{sec:numerics} and discussed in Section \ref{sec:conclusion}. \section{Description of continuum approximations} \label{sec:kinetic} In this section we review the continuum approximations that are used in this work. First, we describe the BBGKY hierarchy and then discuss its closures such as MFA, KSA, and TA. Next, we compare the three approximations, noting some key differences as well as similarities between them. Numerical integration of the corresponding PDEs is presented in Section \ref{sec:numerics}. \paragraph{BBGKY Hierarchy.} Consider system \eqref{IBM} with $u$ satisfying \eqref{dist_cond}.
The one-particle pdf $f_1(t,x_1)$ solves the following evolution equation \begin{equation}\label{liouville_f_1} \partial_{t} f_{1}(t,x_1)+\nabla_{x_{1}}\cdot \left((S(x_1)+\mathcal{F}(t,x_1)) f_{1}(t,x_1)\right)=D\Delta_{x_1} f_1(t,x_1), \end{equation} where $\mathcal{F}$ is the conditional expectation of the force exerted on the first particle, which occupies the position $X_1(t)=x_1$, by all other particles: \begin{equation}\label{def_of_F} \mathcal{F}(t,x_{1})=\mathbb{E} \Bigg\{ \frac{1}{N} \sum\limits_{j=2}^N u(X_{1}(t),X_{j}(t)) \Bigg\| X_{1}(t)=x_{1}\Bigg\}. \end{equation} Equation \eqref{liouville_f_1} is an advection-diffusion equation for $f_1$; alternatively, it can be derived from the Liouville equation by direct integration with respect to all variables except $t$ and $x_1$. An explicit formula for $\mathcal{F}$ in terms of $f_1$ and $f_2$ follows from the definition of conditional expectation: \begin{equation}\label{edef_of_F} \mathcal{F}(t,x_{1})= \frac{N-1}{N} \int u(x_{1},y) \frac{f_2(t,x_1,y)}{f_1(t,x_1)}\text{d}y . \end{equation} In view of formula \eqref{edef_of_F}, we note that equation \eqref{liouville_f_1} depends on the two-particle pdf $f_2(t,x_1,x_2)$.
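A discretized version of \eqref{edef_of_F} is immediate once $f_1$ and $f_2$ are tabulated on a grid. The sketch below is our own illustration (the odd periodic kernel and the uncorrelated uniform state used as test data are assumptions): it approximates the integral by a Riemann sum on a uniform periodic grid.

```python
import math

def mean_force(u, f1, f2, xs, i, N):
    """Riemann-sum approximation of Eq. (edef_of_F) on a uniform periodic grid:
    F(t, x_i) ≈ (N-1)/N * h * sum_j u(x_i, x_j) f2[i][j] / f1[i]."""
    h = xs[1] - xs[0]
    return (N - 1) / N * h * sum(u(xs[i], y) * f2[i][j]
                                 for j, y in enumerate(xs)) / f1[i]

M, N = 100, 10
xs = [j / M for j in range(M)]
f1 = [1.0] * M                        # uniform one-particle pdf
f2 = [[1.0] * M for _ in range(M)]    # independent particles: f2 = f1 * f1
u = lambda x, y: math.sin(2 * math.pi * (y - x))   # smooth odd periodic kernel
F0 = mean_force(u, f1, f2, xs, 0, N)
assert abs(F0) < 1e-12   # symmetric uniform state: the expected force vanishes
```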
In order to find $f_2$ we need to consider an equation for $f_2$, analogous to \eqref{liouville_f_1} for $f_1$: \begin{eqnarray} \label{liouville_f_2} \partial_{t} f_{2}(t,x_1,x_2) &+& \nabla_{x_{1}} \cdot (\mathcal{F}_{1}f_{2}(t,x_1,x_2))+\nabla_{x_{2}}\cdot (\mathcal{F}_{2}f_{2}(t,x_1,x_2)) \nonumber \\ &+& \nabla_{x_{1}} \cdot (S(x_1)f_2(t,x_1,x_2))+\nabla_{x_{2}}\cdot (S(x_2)f_2(t,x_1,x_2)) \nonumber \\ &=& D(\Delta_{x_1} f_2(t,x_1,x_2)+\Delta_{x_2} f_2(t,x_1,x_2)), \end{eqnarray} where $\mathcal{F}_{i}(t,x_1,x_2)$ ($i=1,2$) is the conditional expectation of the force exerted on the $i$th particle by the other particles, given that $X_1(t)=x_1$ and $X_2(t)=x_2$: \begin{equation} \label{eforce} \mathcal{F}_{i}(t,x_{1}, x_{2})=\mathbb{E} \Bigg\{ \frac{1}{N} \sum\limits_{j\neq i}^N u(X_{i}(t),X_{j}(t)) \Bigg\| \begin{array}{c}X_{1}(t)=x_{1}\\X_{2}(t)=x_{2}\end{array}\Bigg\}. \end{equation} Using the fact that all particles are identical and substituting the conditions $X_{1}(t)=x_{1}$ and $X_{2}(t)=x_{2}$ into the sum on the right-hand side of \eqref{eforce}, we simplify the formula for $\mathcal{F}_i$: \begin{eqnarray} \label{eforce1} \mathcal{F}_{1}(t,x_{1}, x_{2})=\frac{1}{N}u(x_1,x_2)+\frac{N-2}{N} \int\frac{ u(x_{1},y)f_{3}(t,x_{1},x_{2},y)}{ f_{2}(t,x_{1},x_{2})}\text{d}y, \\ \label{eforce2} \mathcal{F}_{2}(t,x_{1}, x_{2})=\frac{1}{N}u(x_2,x_1)+\frac{N-2}{N} \int\frac{ u(x_{2},y)f_{3}(t,x_{1},x_{2},y)}{ f_{2}(t,x_{1},x_{2})}\text{d}y. \end{eqnarray} It is clear from \eqref{eforce1}-\eqref{eforce2} that to solve \eqref{liouville_f_2} one needs $f_3$, the three-particle pdf. One can write the equation for $f_3$ similarly to \eqref{liouville_f_1} and \eqref{liouville_f_2}, and this equation will depend on $f_4$. We can continue in this manner to obtain a system of $N$ coupled partial differential equations for $f_1,f_2, ... ,f_N$. The resulting system is the BBGKY hierarchy described in Section~\ref{intro}. This system is prohibitively computationally expensive to solve.
Instead we look at various truncations of the BBGKY hierarchy which are computationally feasible and which, unlike MFA (a truncation in the equation for $f_1$, at level $k=1$), do not rely on the assumption that correlations are negligible. Specifically, we focus on truncations in the equations for $f_1$ and $f_2$ (at level $k=2$). One can consider truncations at higher levels, but the computational expense increases greatly with the level of the truncation. As a result, we only consider truncations in the equations for $f_1$ and $f_2$. \medskip \paragraph{Mean Field Approximation.} MFA is a truncation of the BBGKY hierarchy at the equation for $f_1$ using the assumption \begin{eqnarray} \label{particles_are_uncorrelated} f_2(t,x_1,x_2)=f_1(t,x_1)f_1(t,x_2). \end{eqnarray} Substituting this assumption into \eqref{liouville_f_1} results in the following PDE \begin{eqnarray} \partial_t f_1(t,x_1) &+& \frac{(N-1)}{N}\nabla_{x_1} \cdot (\int u(x_1,y) f_1(t,y) \text{d}y f_1(t,x_1) ) + \nabla_{x_{1}}\cdot (S(x_1)f_1(t,x_1)) \nonumber \\ &=& D \Delta_{x_1} f_1(t,x_1). \end{eqnarray} Notice that the assumption \eqref{particles_are_uncorrelated} is equivalent to the particles being uncorrelated. In other words, MFA does not take into account the effects of correlations. Taking the limit as $N \rightarrow \infty$ results in the coefficient $\frac{(N-1)}{N}$ being dropped and yields the Vlasov equation, \begin{eqnarray} \partial_t f_1(t,x_1) &+& \nabla_{x_1} \cdot (\int u(x_1,y) f_1(t,y) \text{d}y f_1(t,x_1) )+\nabla_{x_{1}}\cdot (S(x_1)f_1(t,x_1)) \nonumber \\ &=&D \Delta_{x_1} f_1(t,x_1). \label{Vlasov1} \end{eqnarray} It was shown in \cite{Dob1979} that equation \eqref{Vlasov1} is well-posed for smooth and bounded $u(x,y)$. The Vlasov equation \eqref{Vlasov1} can also be understood as follows. Write the BBGKY hierarchy for $N=\infty$; the hierarchy is then an infinite system of coupled PDEs for $f_1,f_2,\ldots$\,.
Assume in addition that all particles are initially independent: \begin{eqnarray} f_n(0,x_1, ... ,x_n)=\prod\limits_{i=1}^{n} f_1(0,x_i), \quad n\geq 1. \end{eqnarray} Then one can show that \begin{eqnarray} f_n(t,x_1, ... ,x_n)=\prod\limits_{i=1}^{n} f_1(t,x_i) \text{ for all }t\geq 0 \text{ and } n\geq 1. \end{eqnarray} This is the so-called {\it propagation of chaos}: if particles are initially independent (no correlations, ``chaotic"), then they stay independent as time evolves. Propagation of chaos was shown to hold as $N \rightarrow \infty$ in \cite{BraHep1977}. Therefore, the MFA assumption holds exactly in the limit $N\to \infty$, and it implies that correlations in the system \eqref{IBM} are negligible. Since we are interested in capturing how correlations affect the evolution of $f_1$, we must go beyond MFA. \paragraph{Kirkwood Superposition Approximation.} KSA is a truncation of the BBGKY hierarchy at the equation for $f_2$. KSA is based on the following representation ansatz for $f_3$ in terms of $f_1$ and $f_2$: \begin{eqnarray} f_3(t,x_1,x_2,x_3)=\frac{f_2(t,x_1,x_2)f_2(t,x_2,x_3)f_2(t,x_1,x_3)}{f_1(t,x_1)f_1(t,x_2)f_1(t,x_3)}. \label{KSA} \end{eqnarray} Substituting this approximation into \eqref{liouville_f_2}, we obtain the following equation for $f_2$: \begin{eqnarray} \label{KSA_PDE} \partial_t f_2(t,x_1,x_2)& +&\frac{1}{N}\nabla_{x_1} \cdot \left(u(x_1,x_2)f_2(t,x_1,x_2)\right)+\frac{1}{N}\nabla_{x_2}\cdot \left(u(x_2,x_1)f_2(t,x_1,x_2)\right)\nonumber\\ &+& \frac{N-2}{N}\nabla_{x_1} \cdot \int u(x_1,y) \frac{f_2(t,x_1,x_2) f_2(t,x_1,y)f_2(t,x_2,y)}{f_1(t,x_1) f_1(t,x_2)f_1(t,y)}\text{d}y\nonumber\\ &+&\frac{N-2}{N}\nabla_{x_2} \cdot \int u(x_2,y) \frac{f_2(t,x_1,x_2) f_2(t,x_1,y)f_2(t,x_2,y)}{f_1(t,x_1) f_1(t,x_2)f_1(t,y)}\text{d}y \nonumber \\ &+& \nabla_{x_{1}}\cdot (S(x_1)f_2(t,x_1,x_2)) + \nabla_{x_{2}} \cdot (S(x_2)f_2(t,x_1,x_2)) \nonumber \\ &=& D(\Delta_{x_1} f_2(t,x_1,x_2)+\Delta_{x_2} f_2(t,x_1,x_2)).
\end{eqnarray} Note that the KSA representation ansatz \eqref{KSA} can be formally derived from the maximization of a truncated entropy functional \cite{Sin2004}. This method can be applied to find similar approximations for $f_n$, $n>3$; however, the numerical cost of solving the associated PDEs becomes prohibitive. \paragraph{Truncation Approximation.} TA is obtained from the following observation. Consider $i=1$ and rewrite the first term in the sum \eqref{eforce} by substituting the conditions $X_1(t)=x_1$ and $X_2(t)=x_2$: \begin{equation} \label{eforce_prime} \mathcal{F}_1(t,x_1,x_2)=\dfrac{1}{N}u(x_1,x_2)+\mathbb{E} \Bigg\{ \frac{1}{N} \sum\limits_{j\neq 1,2}^N u(X_{1}(t),X_{j}(t)) \Bigg\| \begin{array}{c}X_{1}(t)=x_{1}\\X_{2}(t)=x_{2}\end{array}\Bigg\} \end{equation} Next observe that the sum in \eqref{eforce_prime} does not have a term depending on $X_2(t)$, and thus it is natural to assume that the dependence of the expected value in \eqref{eforce_prime} on the condition $X_2(t)=x_2$ is weak and can therefore be ignored. This observation leads to the following approximation for $\mathcal{F}_1$: \begin{equation} \mathcal{F}_{1}(t,x_{1}, x_{2})=\frac{1}{N}u(x_1,x_2)+\mathbb{E} \Bigg\{ \frac{1}{N} \sum\limits_{j\neq 1,2}^N u(X_{1}(t),X_{j}(t)) \Bigg\| X_{1}(t)=x_{1}\Bigg\}. \label{TruncF1} \end{equation} A similar approximation can be written for $\mathcal{F}_2$: \begin{equation} \mathcal{F}_{2}(t,x_{1}, x_{2})=\frac{1}{N}u(x_2,x_1)+\mathbb{E} \Bigg\{ \frac{1}{N} \sum\limits_{j\neq 1,2}^N u(X_{2}(t),X_{j}(t)) \Bigg\| X_{2}(t)=x_{2}\Bigg\}.
\label{TruncF2} \end{equation} Next we use the definition of conditional probability to rewrite \eqref{TruncF1} and \eqref{TruncF2}: \begin{eqnarray} \label{F_trunc1} \mathcal{F}_{1}(t,x_{1}, x_{2})=\frac{1}{N}u(x_1,x_2)+\frac{N-2}{N} \int u(x_{1},y)\frac{f_2(t,x_1,y)}{ f_{1}(t,x_1)}\text{d}y, \\ \label{F_trunc2} \mathcal{F}_{2}(t,x_{1}, x_{2})=\frac{1}{N}u(x_2,x_1)+\frac{N-2}{N} \int u(x_{2},y)\frac{f_2(t,x_2,y)}{f_{1}(t,x_2)}\text{d}y. \end{eqnarray} Substituting \eqref{F_trunc1}-\eqref{F_trunc2} into \eqref{liouville_f_2} yields the following PDE for $f_2$, without $f_3$: \begin{eqnarray} \label{Trunc_PDE} \partial_t f_2(t,x_1,x_2)& +&\frac{1}{N}\nabla_{x_1} \cdot \left(u(x_1,x_2)f_2(t,x_1,x_2)\right)+\frac{1}{N}\nabla_{x_2} \cdot \left(u(x_2,x_1)f_2(t,x_1,x_2)\right)\nonumber\\ &+& \frac{N-2}{N}\nabla_{x_1}\cdot \int u(x_1,y) \frac{f_2(t,x_1,x_2) f_2(t,x_1,y)}{f_1(t,x_1)}\text{d}y\nonumber\\&+&\frac{N-2}{N}\nabla_{x_2} \cdot \int u(x_2,y) \frac{f_2(t,x_1,x_2 )f_2(t,x_2,y)}{ f_1(t,x_2)}\text{d}y \nonumber \\ &+& \nabla_{x_{1}}\cdot(S(x_1)f_2(t,x_1,x_2)) + \nabla_{x_{2}} \cdot (S(x_2)f_2(t,x_1,x_2)) \nonumber \\ &=&D (\Delta_{x_1} f_2(t,x_1,x_2)+\Delta_{x_2} f_2(t,x_1,x_2)). \end{eqnarray} Solutions $f_1$ and $f_2$ of the system \eqref{liouville_f_1}-\eqref{Trunc_PDE} satisfy the following key properties of probability distribution functions \cite{BerJabPot2016}: \begin{enumerate} \item $f_2$ is symmetric with respect to $x_1$ and $x_2$: \begin{eqnarray}\label{symmetry} f_2(t,x_1,x_2)=f_2(t,x_2,x_1). \end{eqnarray} \item $f_2$ conserves its mass and positivity as time evolves: \begin{eqnarray}\label{mass_preservation} \int f_2(t,x_1,x_2) \text{d}x_1 \text{d}x_2=\int f_2(0,x_1,x_2) \text{d}x_1 \text{d}x_2, \end{eqnarray} and \begin{eqnarray} f_1(t,x_1) \geq 0, f_2(t,x_1,x_2) \geq 0 ~\text{ if } ~ f_1(0,x_1) \geq 0, f_2(0,x_1,x_2) \geq 0 \label{positivity}. 
\end{eqnarray} \item $f_1$ and $f_2$ are consistent: \begin{eqnarray} \label{consistency} f_1(t,x_1)=\int f_2(t,x_1,x_2) \text{d}x_2. \end{eqnarray} \item Propagation of chaos: $f_2(t,x_1,x_2)=f_1(t,x_1)f_1(t,x_2)$, where $f_1$ solves the Vlasov equation \eqref{Vlasov1}, is a solution of \eqref{Trunc_PDE} in the limit $N\to \infty$. \end{enumerate} It was also shown in \cite{BerJabPot2016} that no single representation for $f_3$ is able to satisfy all four of these properties. For example, solutions of KSA, which is derived from a single representation \eqref{KSA}, do not satisfy the property of consistency \eqref{consistency}. The fact that solutions of TA satisfy \eqref{consistency} implies that we can substitute \eqref{consistency} into \eqref{Trunc_PDE} to obtain a closed-form equation for $f_2$. \paragraph{Comparison between approximations.} Here we focus on the comparison between TA and KSA, since we are most interested in the effect of correlations, which MFA neglects. First we present heuristics on how TA and KSA can be derived in a simple way. Consider a triplet of particles 1, 2, and 3 with positions $x_1$, $x_2$, and $x_3$, respectively. Assume that one studies how particles 2 and 3 affect particle 1. If correlations in the system are not low, then we need to take into account all correlations, including the correlation between particles 2 and 3. On the other hand, if the overall correlations are not high, then one would expect that the contribution from the correlation between particles 2 and 3 appears for particle 1 only at a lower order, compared to the correlations between particles 1 and 2 as well as 1 and 3. Therefore, as an approximation, assume that particles 2 and 3 are almost independent: \begin{eqnarray}\label{independence_23} 1\approx\frac{f_2(t,x_2,x_3)}{f_1(t,x_2)f_1(t,x_3)}.
\end{eqnarray} Furthermore, using Bayes' Theorem and again the independence of particles 2 and 3, one obtains that \begin{eqnarray} f_3(t,x_1,x_2,x_3)&=&f_3(t,x_3 |x_1,x_2) f_2(t,x_1,x_2) \nonumber \\ &\approx& f_2(t,x_3|x_1)f_2(t,x_1,x_2) \nonumber \\ &=&\frac{f_2(t,x_1,x_2)f_2(t,x_1,x_3)}{f_1(t,x_1)}. \label{prob_step_1} \end{eqnarray} Here $f_3(t,x_3 |x_1,x_2)$ and $f_2(t,x_3|x_1)$ denote conditional pdfs. The formula \eqref{prob_step_1} can serve as an approximation for $f_3$ under the specific assumption that particles 2 and 3 are almost independent. To extend the formula to the case when any pair from the three particles (not specifically particles 2 and 3) is almost independent, multiply \eqref{prob_step_1} by \eqref{independence_23}. By doing this we get a representation for $f_3$ which is symmetric with respect to $x_1$, $x_2$, and $x_3$, and it exactly coincides with the KSA representation \eqref{KSA}. However, multiplication by \eqref{independence_23} introduces an additional approximation error. Instead, TA uses exactly \eqref{prob_step_1} in the term of the equation for $f_2$ where $f_3$ appears in $\mathcal{F}_1$, and \eqref{prob_step_1} with the assumption that particles 1 and 3 are almost independent in the term where $f_3$ appears in $\mathcal{F}_2$. From these observations it follows that TA is more accurate than KSA, and both KSA and TA are more accurate than MFA, since they are derived from less restrictive assumptions. Finally, we compare the computational complexity of solving KSA and TA. To this end, note that the equations for $f_2$ in KSA and TA differ only in the integral terms. For example, the first integral term in these equations looks as follows: \begin{eqnarray} \text{KSA:}&& \int u(x_1,y) \frac{ f_2(t,x_1,y)f_2(t,x_2,y)}{f_1(t,y)}\text{d}y\,\frac{f_2(t,x_1,x_2)}{f_1(t,x_1) f_1(t,x_2)}, \\ \text{TA:} &&\int u(x_{1},y){f_2(t,x_1,y)}\text{d}y\,\frac{f_{2}(t,x_{1},x_{2})}{f_1(t,x_1)}.
\end{eqnarray} We see that the integral for KSA involves the variables $x_1,$ $x_2$, and $y$, whereas for TA there are only $x_1$ and $y$. This allows for a reduction of computational complexity for TA as compared to KSA, since at each time step the following integral can be pre-computed: \begin{equation} c(x):=\int u(x,y){f_2(t,x,y)}\text{d}y, \end{equation} and used in both integral terms of the TA equation for $f_2$. Thus, TA is less computationally expensive than KSA. \section{Results of numerical simulations} \label{sec:numerics} In this section we compare numerical solutions of the continuum approximations MFA, KSA, and TA with direct simulations for various examples of interaction forces $u(X_i,X_j)=\hat{u}(X_j-X_i)$. Throughout this section, by direct simulations we mean Monte Carlo simulations of the individual based system \eqref{IBM}. First, we present our results for smooth interaction forces, including positive, attractive, repulsive and attractive-repulsive interactions. Next, we consider these continuum approximations for the Morse interaction force and the Kuramoto model. All the interaction forces are introduced below. In all cases, we consider the dynamics of the system of interacting particles for $0<t<T$ with one-dimensional positions $X_i(t)$ and periodic boundary conditions in $0\leq x \leq 1$. For the direct simulations of system \eqref{IBM} we use the Euler-Maruyama scheme in order to capture the stochastic term. The time step is $\Delta t =5\cdot 10^{-3}$ and the number of realizations is $10^6$. To compute solutions for the continuum approximations, a finite difference scheme was used with the spatial and time steps $\Delta x = 10^{-2}$ and $\Delta t=10^{-5}$, respectively. We note here that KSA has a specific drawback: it does not satisfy the consistency relation between $f_1$ and $f_2$, that is, $f_1\neq \int f_2$. In order to find $f_1$ for KSA we use equation \eqref{liouville_f_1}.
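The precomputation of $c(x)$ described above can be sketched as follows (our own illustration; the grid, kernel, and uniform test state are placeholders): the kernel integral is computed once per time step in $O(M^2)$ operations and reused for every pair $(x_1,x_2)$, whereas the analogous KSA term couples $x_1$, $x_2$ and $y$ and therefore costs $O(M^3)$.

```python
import math

def ta_first_integral_term(u, f1, f2, xs):
    """Riemann-sum sketch of the first TA integral term. The kernel integral
    c(x1) = ∫ u(x1,y) f2(x1,y) dy is precomputed once (O(M^2) work) and then
    reused for every pair (x1, x2), giving O(M^2) total cost per time step."""
    h = xs[1] - xs[0]
    M = len(xs)
    c = [h * sum(u(xs[i], xs[j]) * f2[i][j] for j in range(M))
         for i in range(M)]
    # integral factor of the first TA term at each grid pair (x1, x2):
    return [[c[i] * f2[i][k] / f1[i] for k in range(M)] for i in range(M)]

M = 50
xs = [j / M for j in range(M)]
f1 = [1.0] * M
f2 = [[1.0] * M for _ in range(M)]      # uncorrelated uniform state
u = lambda x, y: math.sin(2 * math.pi * (y - x))
T = ta_first_integral_term(u, f1, f2, xs)
# in the uniform uncorrelated state the term vanishes (odd periodic kernel)
assert max(abs(v) for row in T for v in row) < 1e-12
```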
The number of particles $N$ and the magnitude of diffusion $D$ were chosen so that the difference between the continuum approximations is visible and one can draw conclusions on how each approximation captures properties of the system. For large $N$, the approximations are nearly indistinguishable from each other as well as from the direct simulations, which is consistent with the mean field limit. Thus, we used small $N$, which in addition allowed us to keep the computational time for the direct simulations reasonable, since they require many realizations and the computational cost of each realization depends quadratically on $N$ if \eqref{IBM} is approximated by a direct explicit method and linearly on $N$ if a more powerful particle method, such as the Fast Multipole Method \cite{GreRok1987,GreRok1988,GreRok1989}, is applied. We compare the probability distribution functions $f_1$ and $f_2$ obtained from the continuum approximations with histograms of the positions of particles and pairs of particles obtained from the direct simulations. Here our focus is on qualitative comparison, such as the description of peak formation or convergence to uniform distributions, rather than on quantitative comparison such as, for example, $L^p$ errors, since the latter are not informative about the effects of correlations. \subsection{Smooth Interaction Forces} \label{subsec:smooth} \indent We consider cases of positive, attractive and repulsive interaction forces, as well as one which combines short range repulsion and long range attraction. These forces are defined by \eqref{dist_cond} with $\hat{u}$ given by: \begin{eqnarray} \hat{u}_{\text{pos}}(x)&=&2\,e^{-10 x^2}, \label{def_pos}\\ \hat{u}_{\text{att}}(x)&=&10x \,e^{-10x^2}, \label{def_att}\\ \hat{u}_{\text{rep}}(x)&=&-10x \,e^{-10x^2}, \label{def_rep}\\ \hat{u}_{\text{att-rep}}(x)&=&-100x(0.1^2-x^2)\, e^{-10x^2}. 
\label{att-rep} \end{eqnarray} \begin{figure}[t] \centering \includegraphics[width=0.5\linewidth]{smoothfig.png} \caption{Plot of the smooth interaction forces given by \eqref{def_pos}-\eqref{att-rep}. Note that the sign of $x \cdot \hat{u}(x)$ determines attraction and repulsion, with a positive sign implying attraction and a negative sign signifying repulsion. \label{interaction_plot}} \end{figure} Note that all these interaction forces are smooth functions. In particular, they are continuous at 0, unlike, for example, the Morse force considered in the next subsection. Initial conditions are chosen as follows: \begin{eqnarray} \label{intcondsin}f_1(0,x)&=&1.0 + 0.4 \sin (2\pi x) \quad 0\leq x \leq 1, \\ \label{intcondind}f_2(0,x,y)&=&f_1(0,x)f_1(0,y) \quad 0\leq x,y\leq 1. \end{eqnarray} In other words, we take the initial one-particle distribution function $f_1$ to be a perturbation of the uniform distribution $f_1\equiv 1$, and the condition \eqref{intcondind} means that the particles are initially independent. Throughout this subsection we set $N=10$ and $D=0.005$. \smallskip \noindent{\it Positive interaction force $\hat{u}_{\text{pos}}$ given by \eqref{def_pos}.} This force acts so that particles exert forces on each other in the positive direction only; that is, particles in front pull particles behind, and those behind push those in front. One way to think of this system is unidirectional swimming, for example fish swimming in a narrow channel. The fish in front lower the resistance for those behind, causing them to swim faster, while the fish behind push the water around them forward, helping those in front to move faster. Cannibalistic locusts oriented in the same direction in a one-dimensional tunnel would also follow this type of interaction: a locust chases the locusts in front of it, trying to eat them, while running away from those trying to eat it from behind \cite{Bazazi2008,Romanczuk2009}. 
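A minimal Euler-Maruyama sketch of one realization of the direct simulations can be written as follows, assuming the pairwise-sum drift of the form used later in \eqref{Morse1}, the positive force \eqref{def_pos}, and the parameters $N=10$, $D=0.005$, $\Delta t = 5\cdot 10^{-3}$ quoted in the text; the number of steps and the random seed are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def u_pos(x):
    """Positive force \\hat{u}_pos of \\eqref{def_pos}, evaluated on the
    periodic domain [0, 1) via the minimal-image separation."""
    x = (x + 0.5) % 1.0 - 0.5
    return 2.0 * np.exp(-10.0 * x ** 2)

N, D, dt, n_steps = 10, 0.005, 5e-3, 100
X = rng.random(N)                        # initial positions in [0, 1)

for _ in range(n_steps):
    sep = X[None, :] - X[:, None]        # sep[i, j] = X_j - X_i
    force = u_pos(sep)
    np.fill_diagonal(force, 0.0)         # exclude self-interaction
    drift = force.sum(axis=1)
    # Euler-Maruyama step with periodic wrap-around
    X = (X + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal(N)) % 1.0
```

The histograms of $f_1$ and $f_2$ discussed in the text would then be accumulated over many independent realizations of this loop.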
Direct simulations of this system showed that it exhibits interesting qualitative behavior for large times. Namely, particles interacting via the positive force $\hat{u}_{\text{pos}}$ tend to form a cluster which moves with speed close to $\hat{u}_{\text{pos}}(0)$. Nevertheless, the one-particle probability distribution function $f_1$ converges to a uniform value for large times, that is, $f_1\approx 1$ as $t\to\infty$. This does not contradict cluster formation, since as $t\to \infty$ the many-particle system is in a highly correlated regime; therefore $f_2$ concentrates around the diagonal $x_1=x_2$ and $f_1$ is essentially the probability distribution function of the cluster location. \begin{figure}[!htb] \centering \includegraphics[width=.45\linewidth]{sinintcond.png} \includegraphics[width=.45\linewidth]{positivef1.png} \caption{Left figure: the initial distribution at $t=0$ for all approximations. Right figure: $f_1(0.5,x)$ for the various approximations to \eqref{IBM} with the positive interaction force $\hat{u}_{\text{pos}}$ given by \eqref{def_pos}, with $N=10$ and initial conditions given by \eqref{intcondsin}.} \label{Pos1fig} \end{figure} In direct simulations we observe that the peak of $f_1(t,x)$ moves to the right and grows slightly as time increases. The motion to the right occurs because all the particles' velocities in this case are positive, that is, $\dot{X}_i(t)>0$ (if diffusion is disregarded). The growth of the peak is due to the particles clustering. Among the continuum approximations, the solution $f_1$ for both KSA and TA moves to the right at almost the same pace as in the direct simulations and also captures the growth of the peak, while for MFA it moves slower and is unable to capture the growth of the peak, see fig. \ref{Pos1fig}. This is due to the particles clustering and the fact that particles move faster when they are part of a cluster. 
Since these effects come from correlations, methods such as TA and KSA, which take correlations into account, capture the speed and the peak growth better than MFA. In fig. \ref{posfigf2}, one can see that, as in the direct simulations, the two-particle distribution function $f_2$ computed by TA and KSA has a single non-round peak (the yellow spot), while $f_2$ in MFA has a smaller, wider and round peak. \begin{figure}[!htb] \begin{center} \includegraphics[width=0.45\textwidth]{f2posds.png} \includegraphics[width=0.45\textwidth]{f2posmf.png}\\ \includegraphics[width=0.45\textwidth]{f2posta.png} \includegraphics[width=0.45\textwidth]{f2posksa.png} \caption{Approximations for $f_2$ at $t=0.5$ with the positive interaction force. Top left: Direct Simulations, top right: Mean Field Approximation, bottom left: Truncation Approximation, bottom right: Kirkwood Superposition Approximation.} \label{posfigf2} \end{center} \end{figure} \smallskip \noindent{\it Attracting interaction force $\hat u_{\text{att}}$ given by \eqref{def_att}}. This force results in particles approaching one another, and as time evolves the particles tend to concentrate at a single location determined by the initial conditions. As in the case of the positive interaction force, for $N<\infty$ one should distinguish between the concentration of many interacting particles and the one-particle probability distribution function $f_1$. While the particles tend to cluster at a single location, the one-particle probability distribution function does not become a $\delta$-function. Moreover, if initially the distribution $f_1$ is close to uniform, it stays nearly uniform for all $t>0$, even though the particles will almost surely form a point cluster. This is because the particles tend to concentrate, but the point of concentration is random and almost uniformly distributed. 
On the other hand, for fixed initial $f_1$, if $N$ increases, then the one-particle distribution function $f_1$ eventually (i.e., as $t\to \infty$) exhibits larger peaks (unless it is initially uniform), and in the limit $N\to \infty$ it becomes a $\delta$-function. This is consistent with the mean field limit: as $N\to\infty$ correlations vanish, and the one-particle probability distribution function $f_1$ coincides with the particle concentration. We also note that the two-particle distribution function $f_2(t,x_1,x_2)$, for all $N$, concentrates along the diagonal $x_1=x_2$ as $t\to \infty$. \begin{figure}[!htb] \begin{center} \includegraphics[width=.45\textwidth]{smooth1t15.png} \includegraphics[width=.45\textwidth]{smooth1t5.png} \caption{Approximations of $f_1$ for $\hat u_{\text{att}}(x)=10x\,e^{-10x^2}$ with $N=10$ and initial conditions given by \eqref{intcondsin}. Left: $t=0.15$, right: $t=0.5$.}\label{Att1fig} \end{center} \end{figure} All three approximations capture the growth of the peak of $f_1$ at $t=0.15$, see fig. \ref{Att1fig} (left). MFA overestimates the peak, but KSA and TA both capture it well, with TA being slightly more accurate. Both KSA and TA, unlike MFA, capture the large values of $f_2$ near the diagonal $x_1=x_2$ (see fig.~\ref{Attfigf21}: the peak represented by the yellow spot is elongated along the diagonal for KSA and TA, while for MFA it is round). Recall that concentration of $f_2$ near the diagonal $x_1=x_2$ means that any two particles are located close to each other. All approximations underestimate the maximum value of $f_2$ obtained from the direct simulations, with KSA being the closest. 
\begin{figure}[!htb] \begin{center} \includegraphics[width=.45\textwidth]{f2smooth1t015ds.png} \includegraphics[width=.45\textwidth]{f2smooth1t015MF.png} \\ \includegraphics[width=.45\textwidth]{f2smooth1t015TA.png} \includegraphics[width=.45\textwidth]{f2smooth1t015KSA.png} \caption{Approximations for $f_2$ at $t=0.15$ with the attraction interaction force. Top left: Direct Simulations, top right: Mean Field Approximation, bottom left: Truncation Approximation, bottom right: Kirkwood Superposition Approximation.} \label{Attfigf21} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=.45\textwidth]{f2smooth1t15ds.png} \label{Att2figf21} \\ \includegraphics[width=.45\textwidth]{f2smooth1t5MF.png} \includegraphics[width=.45\textwidth]{f2smooth1t5TA.png} \caption{Approximations for $f_2$ at $t=0.5$ with the attraction interaction force. Top: Direct Simulations, bottom left: Mean Field Approximation, bottom right: Truncation Approximation.} \label{Attfigf22} \end{center} \end{figure} At $t=0.5$, MFA greatly overestimates the growth of the peak of $f_1$; TA also overestimates it but is much closer to the direct simulations than MFA, see fig.~\ref{Att1fig} (right). Moreover, of the two truncations at level $k=2$ considered in this work, TA was capable of producing results with the explicit numerical scheme, while the numerical simulations for KSA became unstable and are not presented. For $f_2$, TA approximates the values obtained from direct simulations near the diagonal better than MFA, see fig. \ref{Attfigf22}. The maximal value of $f_2$ in the direct simulations is also closer to TA than to MFA. \noindent{\it Repulsion interaction force $\hat u_{\text{rep}}$ given by \eqref{def_rep}.} This force results in particles pushing away from one another. As time evolves, particles form an evenly spaced lattice, where the final locations are determined by the initial conditions. 
The one-particle probability distribution function $f_1$ becomes uniform as $t\to\infty$. Repulsion between particles causes the values of the two-particle probability distribution function $f_2$ near the diagonal $x_1=x_2$ to decrease with time, and $f_2$ concentrates on the lines $|x_1-x_2|=\frac{k}{N}$, $k=1,...,N$ as $t\to \infty$ (these lines are parallel to the diagonal $x_1=x_2$, but the diagonal is not one of them). All three approximations capture the tendency of $f_1$ to become uniform, see fig.~\ref{repfig}. KSA and TA, unlike MFA, capture the fact that the values of $f_2$ along the diagonal $x_1=x_2$ decrease in time, see fig.~\ref{Repfigf21}. \begin{figure} \begin{center} \includegraphics[width=.5\textwidth]{repulsion.png} \caption{Approximations of $f_1(0.5,x_1)$ for the repulsion interaction force $\hat u_{\text{rep}}$ with $N=10$, where initial conditions are given by \eqref{intcondsin}.} \label{repfig} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=.45\textwidth]{f2smooth22ds.png} \includegraphics[width=.45\textwidth]{f2smooth22MF.png} \\ \includegraphics[width=.45\textwidth]{f2smooth22TA.png} \includegraphics[width=.45\textwidth]{f2smooth22KSA.png} \caption{Approximations of $f_2$ at $t=0.5$ for $\hat{u}_{\text{rep}}$. Top left: Direct Simulations, top right: Mean Field Approximation, bottom left: Truncation Approximation, bottom right: Kirkwood Superposition Approximation.} \label{Repfigf21} \end{center} \end{figure} \smallskip \noindent{\it Interaction force with repulsion at short range and attraction at long range, $\hat u_{\text{att-rep}}$, given by \eqref{att-rep}}. This interaction force results in particles pushing away from one another when the distance between them is less than $0.1$ and attracting each other otherwise. Interaction forces which are repulsive at short range and attractive at long range are very common in physics. 
For example, in order to describe forces between atoms, a variety of such interaction functions introduced via potentials is used, among them the Morse, Yukawa, and Lennard-Jones potentials. The Morse potential will be considered in Section~\ref{subsec:morse}. The main difference between $\hat u_{\text{att-rep}}$ and these potential forces is that $\hat u_{\text{att-rep}}$ is smooth at $0$, which means that its repulsion component is weaker than for the potential forces; hence it cannot be considered a good choice for modeling when interactions between particles at short range are steric, that is, when particles have a finite size and do not penetrate each other. However, since the equations for $f_1$ and $f_2$ contain derivatives of terms with $\hat{u}$, smooth interaction forces are the most convenient for numerical simulations among all short-range-repelling/long-range-attracting interaction forces. \begin{figure} \begin{center} \includegraphics[width=.5\linewidth]{smooth3t5.png} \caption{Approximations of $f_1(0.5,x)$ for $\hat u_{\text{att-rep}}$ with $N=10$, where initial conditions are given by \eqref{intcondsin}.} \label{smooth3fig} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=.45\textwidth]{f2smooth3ds.png} \includegraphics[width=.45\textwidth]{f2smooth3MF.png}\\ \includegraphics[width=.45\textwidth]{f2smooth3TA.png} \includegraphics[width=.45\textwidth]{f2smooth3KSA.png} \caption{Approximations of $f_2$ at $t=0.5$ for $\hat{u}_{\text{att-rep}}$. Top left: Direct Simulations, top right: Mean Field Approximation, bottom left: Truncation Approximation, bottom right: Kirkwood Superposition Approximation.} \label{S3figf21} \end{center} \end{figure} Results of the numerical simulations of the one-particle probability distribution function $f_1$ for all approximations with the interaction force $\hat{u}_{\text{att-rep}}$ are depicted in fig.~\ref{smooth3fig}. 
MFA overestimates the peak of $f_1$ obtained from direct simulations. Note that this observation is similar to the one for the attraction interaction force, see fig.~\ref{Att1fig}; this is because the attraction component of the interactions dominates due to the specific form of $\hat{u}_{\text{att-rep}}$, see fig.~\ref{interaction_plot}. However, TA and KSA underestimate the peak; this is presumably because these approximations overestimate the repulsion at the peak, as in fig.~\ref{repfig}. TA, unlike MFA and KSA, captures the fact that $f_2$ tends to decrease at the diagonal $x_1=x_2$. \subsection{Morse interaction force} \label{subsec:morse} The Morse interaction force was originally introduced in physics and chemistry to model inter-atomic forces, see e.g. \cite{Morse1929,Schiff1968}, and was further used in other disciplines such as, for example, mathematical biology, see \cite{Newman2005,MidFleGri2014}. As in the case of $\hat u_{\text{att-rep}}$ from Subsection~\ref{subsec:smooth}, particles which interact through the Morse interaction force repel each other if they are close and attract otherwise. On the other hand, unlike $\hat u_{\text{att-rep}}$, the repulsion of the Morse interaction force does not vanish as particles approach each other. The growth of repulsion as the inter-particle distance goes to zero is relevant if, for instance, the repulsion component serves to model flexible volume constraints (that is, particles push each other away if they share the same place). Specifically, the system of many particles interacting through the Morse interaction force is \begin{eqnarray} \label{Morse1} \text{d}X_i&=&\sum_{j \neq i} \hat u(X_j-X_i)\,\text{d}t +\sqrt{2D} \,\text{d}W_t, \text{ where} \\ &&~~\hat u_{M}(x)=\left\{\begin{array}{ll}120\left[e^{-2(|x|-r_e)}-e^{-(|x|-r_e)}\right]\dfrac{x}{|x|}, & |x| \leq c, \\ 0,&|x|>c. \end{array} \right.\label{Morse2} \end{eqnarray} The Morse interaction force is defined in \eqref{Morse2}, see fig.~\ref{interaction_plot1}. 
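A short sketch of the truncated Morse force \eqref{Morse2} as a reusable function may be useful; note that $x/|x|$ is undefined at $x=0$, and returning $0$ there is our own convention, not something specified in \eqref{Morse2}.

```python
import numpy as np

def u_morse(x, r_e=0.1, c=0.2):
    """Truncated Morse interaction force of \\eqref{Morse2}.

    r_e is the equilibrium distance (force vanishes at |x| = r_e) and
    c is the truncation radius.  np.sign(0) = 0, so the (formally
    undefined) value at x = 0 is set to 0 by convention here.
    """
    x = np.asarray(x, dtype=float)
    r = np.abs(x)
    mag = 120.0 * (np.exp(-2.0 * (r - r_e)) - np.exp(-(r - r_e)))
    return np.where(r <= c, mag * np.sign(x), 0.0)
```

The function is odd in $x$ and vanishes identically beyond the truncation radius, matching the piecewise definition above.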
The parameter $r_e=0.1$ is the equilibrium distance, that is, the distance at which the force vanishes, and $c$ is the radius of truncation of the Morse force, in other words, the range of interactions. \begin{figure}[!htb] \centering \includegraphics[width=\linewidth]{morsepot.png} \caption{The Morse interaction force with truncations at $c=0.2$ and $c=0.3$. Note that the sign of $x \cdot \hat{u}(x)$ determines attraction and repulsion, with a positive sign implying attraction and a negative sign signifying repulsion. The dashed line depicts the Morse force for $c=\infty$ (rescaled for better visibility).} \label{interaction_plot1} \end{figure} For the numerical simulations of system \eqref{Morse1}-\eqref{Morse2}, the following initial conditions were chosen: \begin{equation} f_1(t=0,x)=\dfrac{5}{6} (\tanh(30(x-0.2))+\tanh(30(0.8-x))). \label{tanIntCond} \end{equation} We note here that we chose to present results of the numerical simulations for initial conditions \eqref{tanIntCond} instead of \eqref{intcondsin} from Subsection \ref{subsec:smooth}, since the system \eqref{Morse1}-\eqref{Morse2} with the latter conditions exhibited very slow dynamics of the probability distribution function $f_1$. Visible changes in $f_1$ with initial condition \eqref{tanIntCond} as time evolves are due to large gradients of $f_1$ at $x\approx 0.2$ and $x\approx 0.8$. First we consider the system \eqref{Morse1}-\eqref{Morse2} with $N=5$, $D=0.045$ and $c=0.2$. Two distinct peaks in the plot of $f_1$ obtained by direct simulations are observed, see fig.~\ref{altmorsecomp}. Both TA and KSA capture these peaks, with TA approximating them more accurately; MFA does not exhibit any peaks. For $f_2$, both TA and KSA capture the low values along the diagonal lines $x_1=x_2$ and $x_1 \approx x_2 \pm 0.2$, whereas MFA does not, see fig.~\ref{Mfigf21}. Additionally, TA captures the maximum values of $f_2$ better than KSA, as well as the narrowness of the peaks. 
\begin{figure} \begin{center} \includegraphics[width=.5\textwidth]{altmorsecomp.png} \end{center} \caption{Approximations for $f_1(0.01,x)$ for the Morse interaction force with $c=0.2$, $D=0.045$, and initial conditions \eqref{tanIntCond}.} \label{altmorsecomp} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=.45\textwidth]{f2morse1ds.png} \includegraphics[width=.45\textwidth]{f2morse1MF.png}\\ \includegraphics[width=.45\textwidth]{f2morse1TA.png} \includegraphics[width=.45\textwidth]{f2morse1KSA.png} \caption{Approximations for $f_2$ at $t=0.01$ for the Morse interaction force with $c=0.2$. Top left: Direct Simulations, top right: Mean Field Approximation, bottom left: Truncation Approximation, bottom right: Kirkwood Superposition Approximation. \label{Mfigf21}} \end{figure} Next, consider the system \eqref{Morse1}-\eqref{Morse2} with a larger range of interactions, specifically $c=0.3$, with all other parameters remaining the same: $N=5$ and $D=0.045$. Note that increasing the range of interactions effectively increases the strength of attraction between particles in the system. In this case the one-particle probability distribution function $f_1$ has a single peak at the center, see fig.~\ref{M2figf1}. TA and KSA both capture the peak, while MFA does not. Comparing the approximations of $f_2$ depicted in fig.~\ref{M2figf24}, we see that both TA and KSA capture the low values along the diagonal lines $x_1=x_2$ and $x_1 \approx x_2 \pm 0.3$, whereas MFA does not. \begin{figure}\centering \includegraphics[width=.5\textwidth]{morse_final.png} \caption{This figure shows the various approximations for $f_1(0.01,x)$ for the system \eqref{IBM} with the Morse interaction force with $c=0.3$ and $D=0.045$ and initial conditions \eqref{tanIntCond}. 
} \label{M2figf1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=.45\textwidth]{f2morseds.png} \includegraphics[width=.45\textwidth]{f2morseMF.png} \\ \includegraphics[width=.45\textwidth]{f2morseTA.png} \includegraphics[width=.45\textwidth]{f2morseKSA.png} \caption{Approximations for $f_2$ at $t=0.01$ for the Morse interaction force with $c=0.3$. Top left: Direct Simulations, top right: Mean Field Approximation, bottom left: Truncation Approximation, bottom right: Kirkwood Superposition Approximation.} \label{M2figf24} \end{figure} \subsection{Kuramoto interaction force} \label{subsec:kuramoto} Among all models used for the description of synchronization phenomena, the Kuramoto model is the most popular, and it has been successfully used in various branches of science such as chemistry, physics, neuroscience, biology and even social science \cite{Kuramoto1975,Pik2003,Ace2005}. In general, this model considers $N$ oscillators, each oscillator having phase $X_i(t)$ at time $t$, and the oscillators are coupled by the following attracting interaction force (which we call here the Kuramoto interaction force): \begin{equation}\label{kuramoto_force} \hat u_{\text{K}}(x)=K\sin(2\pi x). \end{equation} The subscript $\text{K}$ in the left hand side of \eqref{kuramoto_force} stands for ``Kuramoto" and the parameter $K=2.0$ in the right hand side is the strength of interactions. A distinguishing feature of the Kuramoto model is that, in addition to pairwise interactions, each oscillator has a given intrinsic frequency $w_i$. The resulting individual based system is \begin{equation} \text{d}X_i(t)= w_i\text{d}t+\frac{1}{N} \sum_{j=1}^N \hat{u}_K(X_i-X_j) \text{d}t +\sqrt{2 D}\,\text{d}W_i(t),\quad \text{for $i=1,..,N$.} \label{IBM_w} \end{equation} The interactions are purely attractive; therefore the particles tend to occupy a single location at each moment of time, moving with the same frequency/velocity. 
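A minimal Euler-Maruyama sketch of one realization of \eqref{IBM_w} on the periodic domain $[0,1)$ might look as follows, using the parameter values quoted in the text ($K=2.0$, and $N=5$, $D=0.045$ as chosen below) and the $X_i-X_j$ argument ordering of \eqref{IBM_w}; the time step, horizon, and random seed are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

K, N, D, dt, n_steps = 2.0, 5, 0.045, 1e-3, 500
u_K = lambda x: K * np.sin(2 * np.pi * x)     # \eqref{kuramoto_force}
w = rng.uniform(-1.0, 1.0, size=N)            # i.i.d. intrinsic frequencies
X = rng.random(N)                             # initial phases in [0, 1)

for _ in range(n_steps):
    sep = X[:, None] - X[None, :]             # sep[i, j] = X_i - X_j
    drift = w + u_K(sep).sum(axis=1) / N      # frequency + coupling terms
    # Euler-Maruyama step with periodic wrap-around of the phase
    X = (X + drift * dt + np.sqrt(2 * D * dt) * rng.standard_normal(N)) % 1.0
```

As in the previous examples, distribution functions would be estimated by histogramming over many independent realizations.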
However, if the values of the frequencies $w_i$ are large, they dominate the attractive interactions, and in this case the one-particle probability distribution function becomes uniform. In the numerical simulations we choose $N=5$ and $D=0.045$. The frequencies $w_i$ are independent, identically distributed random variables with the uniform distribution on $(-1,1)$. \begin{figure} \center{ \includegraphics[width=.5\textwidth]{f1morse2.png} } \caption{This figure shows the various approximations for $f_1(0.025,x)$ for the system \eqref{IBM_w} with the Kuramoto interaction force and initial conditions given by \eqref{intcondsin}.} \label{kura1} \end{figure} \begin{figure}[!htb] \centering \includegraphics[width=.45\textwidth]{f2kurads.png} \\ \includegraphics[width=.45\textwidth]{f2kuraMF.png} \includegraphics[width=.45\textwidth]{f2kuraTA.png} \caption{\label{Kfigf23} The figures show the various approximations for $f_2$ at $t=0.025$ with the Kuramoto interaction force: top: Direct Simulations, bottom left: Mean Field Approximation, bottom right: Truncation Approximation.} \end{figure} From fig. \ref{kura1} we see that both TA and MFA exhibit a peak in $f_1$, with TA approximating the direct simulations more accurately than MFA. In comparison with the direct simulations, TA captures the increase of $f_2$ near the diagonal $x_1=x_2$ whereas MFA does not. KSA was not used here since the introduction of intrinsic frequencies significantly increases its computational complexity. \section{Conclusions} \label{sec:conclusion} In this paper, three continuum approximations of a system of interacting particles $-$ the Mean Field Approximation, the Kirkwood Superposition Approximation, and the Truncation Approximation $-$ were tested and compared to direct simulations of the individual based system for various types of interactions. It was shown that in all the considered cases TA and KSA performed noticeably better than MFA. 
When comparing TA and KSA, TA performed significantly better for all tested interactions except the repulsive one. For attractive interactions TA was stable while KSA was not. The major advantage of TA over KSA is its computational complexity: due to the form of the integral terms, TA has a significantly shorter computational time. This advantage becomes more important as the dimension of the problem increases. In comparison to direct simulations, continuum approximations are faster, as they do not depend on the number of particles $N$ and do not require many realizations to take into account the randomness of the initial particle locations. \begin{acknowledgements} PEJ was partially supported by NSF Grant 1614537, and NSF Grant RNMS (Ki-Net) 1107444. LB and MP were supported by NSF DMREF Grant DMS-1628411. \end{acknowledgements} %
\section{Introduction} Multi-factor gradients are essential components of many biological phenomena. They are responsible for coordinating with one another to bring about cell-, time- and location-specific responses in living systems \cite{albert}. They are recognized as an important, evolutionarily conserved signalling mechanism for guiding the growth, migration, and differentiation of cells within tissue \cite{Thomas}. They serve essential roles in inflammation, wound healing, cancer metastasis, etc. Insight into the behavior of such systems is of fundamental importance in a wide spectrum of settings, ranging from biological cells, where transport occurs in varying environments, to shear flow of biomolecules (including bacteria). The presence of shear flow, arising due to a velocity gradient, affects the transport and dispersion of biomolecules at the macroscale \cite{Bird, Doi}. Theoretically, de Gennes showed long ago that the coil-stretch transition in polymers is highly dependent on the type of flow \cite{degenes}. Later, Smith and coworkers \cite{smith} monitored the motion of individual DNA molecules under shear flow. They observed tumbling motion of the polymer chain under shear flow, {\it i.e.}, the DNA undergoes cyclic stretching and collapse dynamics with a characteristic frequency which depends on the shear rate and the chain's internal relaxation time. As a result, the dynamics of flexible polymers in shear flow has drawn considerable interest in recent years \cite{schroeder, Buscalioni, Doyle, Winkler, victor, sanjib}. A single polymer chain in solution undergoes a transition from the coil (high temperature) state to the globule/folded (low temperature) state \cite{degenes1} as the temperature is lowered. It is also possible to study the coil-globule transition by changing the solvent quality, {\it i.e.}, by varying the interaction among monomers. 
A solvent is called a good solvent if the polymer is found in the coil state, whereas it is referred to as a poor solvent if the polymer is in the globule state \cite{degenes1}. In addition to a velocity gradient, one may think of a system having a gradient arising due to a varying environment (solvent quality). In fact, chemotaxis is one such process, arising due to a change in solvent quality (a chemical or concentration gradient), which leads to directional motion of cells, bacteria and biomolecules towards or away from a source \cite{pnas}. Notably, in experimental set-ups as well as in theories, one manipulates the shear rate while keeping the quality of the solvent constant, so that the transport of biomolecules under shear flow is mainly explored through the velocity gradient \cite{smith, schroeder, Buscalioni, Doyle, Winkler, victor, sanjib}. The aim of this letter is to study the effect of a net gradient field, arising due to the competition between velocity and chemical potential gradients, on the dynamics of polymers, which still remains an unexplored territory. Since the inclusion of gradient interactions to vary the solvent quality in a model system under shear flow hampers analytical treatment, we resort to computer simulations to shed light on the rich dynamical behaviour of such systems. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{fig1_schematic.pdf} \caption{Schematic diagram of the flow profile; (a) with a positive gradient of the interaction energy $\varepsilon$, both the flow velocity $v$ and the interaction energy $\varepsilon$ increase in the same direction; (b) with a negative gradient of $\varepsilon$.} \label{fig1} \end{figure} We have considered a composite system consisting of a polymer of $N (= 50)$ beads (\cref{sec:supplementary}) and a fluid (implicit solvent) confined between two walls, such that one is stationary and the other moves with velocity $V_x$, resulting in a velocity gradient in the flow in the direction perpendicular to the walls. 
The flow velocity experienced by each monomer is $\dot{\gamma}y_i$, where $\dot{\gamma}=\frac{dV_x}{dy}$ is the velocity gradient (shear rate) along the $y-$ direction. In order to exclude the effect of confinement, we have taken the width of the channel to be greater than four times the radius of gyration of the polymer. To confine the system in a channel of width $L (=20)$, we have taken the particle-wall interaction (at $y=0$ and $y=L$) in the form of the soft repulsion of the Weeks-Chandler-Andersen potential \cite{deb}. The simulation is carried out in reduced units (\cref{sec:supplementary}). A solvent quality gradient has been incorporated in the model system by linearly varying the interaction energy ($y\Delta\varepsilon$) associated with non-bonded monomers along the $y-$ direction. Fig.\ref{fig1} shows the schematic of the flow profile with positive and negative gradients of $\varepsilon$, where the value of $\varepsilon$ increases (Fig.\ref{fig1}(a)) and decreases (Fig.\ref{fig1}(b)) in the direction of the positive gradient of the flow velocity, respectively. Here, we have taken $\Delta \varepsilon=0.05$ in the simulation. A change in color from red to blue (or {\it vice versa}) represents a gradient arising due to the change in solvent quality (or temperature). The dynamics of the $i^{th}$ bead of the polymer chain (\cref{sec:supplementary}) in shear flow is described by the generalized Langevin equation (GLE), which explicitly takes into account the effect of coupling to a thermostat to maintain constant temperature (\cite{mcphie, dobson1, dobson2} and \cref{sec:supplementary}). \begin{figure} \includegraphics[width=0.5\textwidth]{fig2_time_his_ex-eps-converted-to.pdf} \caption{Time history of the extension of the polymer chain in the flow direction at different Weissenberg numbers, $Wi$. 
(a), (c) and (e) at temperature $T=1.2$ (poor solvent); (b), (d) and (f) at $T=3.8$ (good solvent), for both cases $\varepsilon=const$ (red) and $\varepsilon=y \Delta \varepsilon$ (blue).} \label{fig2} \end{figure} In the absence of flow, a decrease in temperature leads to the coil-globule transition. The ${\theta}-$temperature, $T_{\theta}$, at which this transition takes place, is estimated from the measurement of $\frac{<R_G^2>}{N}$ at different temperatures, where $R_G$ is the radius of gyration. The $\theta$ temperature ($\varepsilon=1.0$) has been estimated to be $T_{\theta}=2.5 \pm 0.1$; at this temperature $\frac{<R_G^2>}{N}$ is found to be independent of $N$ \cite{murat}. We have performed simulations in both regimes, {\it i.e.} $T > T_{\theta}$ (good solvent) and $T<T_{\theta}$ (poor solvent). The dimensionless flow strength is characterized by the Weissenberg number, $Wi = \dot{\gamma}\tau_0$. Here, $\tau_0$ is the longest relaxation time of the polymer, which has been determined by fitting the time decay of the autocorrelation function of the end-to-end distance of the polymer chain in the absence of solvent flow. In Fig.\ref{fig2}, we compare the time series of the extension of a single chain at different $Wi$ for uniform solvent quality ($\varepsilon= constant$) and varying solvent quality ($\varepsilon=y \Delta \varepsilon$). Fig.\ref{fig2}(a) and (b) show the equilibrium extension ($\dot\gamma =0$) of the polymer chain in poor ($T= 1.2 < T_{\theta}$) and good ($T=3.8 > T_{\theta}$) solvents, respectively. It is evident from Fig.\ref{fig2}(c) that at low shear rate the polymer chain with the varying interaction remains in the globule state for a longer time than with the constant interaction ($T < T_{\theta}$). However, at high temperature ($T > T_{\theta}$) there are rapid fluctuations in the extension (Fig.\ref{fig2}(d)) for both types of solvent. 
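The extraction of $\tau_0$ and $Wi$ described above can be sketched schematically, assuming an ideal single-exponential decay of the end-to-end autocorrelation; the relaxation time and shear rate below are made-up placeholders, not values from these simulations.

```python
import numpy as np

# Synthetic ACF data: an ideal single-exponential decay with a
# hypothetical relaxation time (placeholder, not simulation output).
t = np.linspace(0.0, 10.0, 200)
tau_true = 2.5
acf = np.exp(-t / tau_true)

# Fit log(ACF) = -t / tau_0 by linear least squares to recover tau_0.
slope = np.polyfit(t, np.log(acf), 1)[0]
tau_0 = -1.0 / slope

# Weissenberg number Wi = gamma_dot * tau_0 (illustrative shear rate).
gamma_dot = 4.0
Wi = gamma_dot * tau_0
```

In practice the ACF comes from the zero-shear trajectory and is noisy, so the fit would be restricted to the range where the decay is well resolved.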
The maximum extension of the polymer depends on the shear rate irrespective of solvent quality. At high shear rate, Fig.\ref{fig2}(e) and (f) show a significant increase in the number of tumbling events. \begin{figure}[] \includegraphics[width=0.5\textwidth]{fig3_autocorr-eps-converted-to.pdf} \caption{Normalised autocorrelation functions (ACFs) $C_x$, $C_y$ of the components of the end-to-end vector $\vec{R}$ in the flow and gradient directions for both cases $\varepsilon=const$ and $\varepsilon=y \Delta \varepsilon$. (a) and (c) at different $Wi$ at temperature $T=1.2$ (poor solvent condition); (b) and (d) at $T=3.8$ (good solvent condition).} \label{fig3} \end{figure} To study the tumbling dynamics, we calculated the autocorrelation functions (ACFs) $C_x$, $C_y$ \cite{Schroeder1,Buscalioni,Florencio} of the components $x$ and $y$ of the end-to-end vector $\vec{R_e}$ in the flow and gradient directions, respectively, defined as $C_{\alpha}(t) = \langle \delta{R_{\alpha}(t)}\, \delta{R_{\alpha}(0)}\rangle / \langle \delta{R_{\alpha}(0)}\, \delta{R_{\alpha}(0)}\rangle$, where $\alpha = x, y$, $\delta{R_{\alpha}(t)}=R_{\alpha}(t)-\langle{R_{\alpha}}\rangle$ and $\langle{\cdot}\rangle$ denotes the time average. Fig.\ref{fig3} shows the ACFs $C_x$, $C_y$ for both cases $\varepsilon=constant$ and $\varepsilon=y \Delta \varepsilon$ at temperatures $T=1.2$ and $T=3.8$. One remarkable feature of these plots is that the tumbling dynamics of the polymer chain remains insensitive to the sign (positive or negative) of the interaction gradient. Furthermore, the effect of the interaction gradient is visible only in the case of a poor solvent at low $Wi$. At higher $Wi$, this effect vanishes and both ACFs behave similarly to the case of $\varepsilon=constant$ (Fig.\ref{fig3}(c)). In a good solvent ($T>T_{\theta}$), there is no effect of the interaction gradient on the ACFs even at lower values of $Wi$ (Fig.\ref{fig3}(b) and (d)).
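The ACF defined above is straightforward to estimate from a trajectory. A minimal NumPy sketch follows; the damped-cosine signal standing in for $R_x(t)$ is purely illustrative, not simulation data:

```python
import numpy as np

def acf(r):
    """Normalized ACF: C(t) = <dR(t) dR(0)> / <dR(0) dR(0)>,
    with dR(t) = R(t) - <R> and <.> a time average over origins."""
    dr = r - r.mean()
    n = len(dr)
    # average the lagged product over all available time origins
    c = np.array([np.mean(dr[:n - t] * dr[t:]) for t in range(n)])
    return c / c[0]

# illustrative stand-in for the end-to-end component R_x(t)
t = np.arange(2000)
rx = np.cos(0.05 * t) * np.exp(-0.002 * t)
Cx = acf(rx)  # Cx[0] == 1 by construction
```

In practice one would feed `acf` the sampled $R_x(t)$ and $R_y(t)$ series from the sheared-polymer trajectory to obtain $C_x$ and $C_y$.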
The dynamics of the chain under shear flow is characterised by well-defined tumbling events. The motion is dissipative in nature and arises due to external forcing, similar to that of a randomly excited damped oscillator. This analogy may be used to calculate the characteristic times involved in tumbling. The generic form of a damped harmonic oscillator, $F(t)=A^2 \cos(w_d t+ \psi)\exp(-\Gamma t)$, was used to fit the ACFs $C_x$ and $C_y$ \cite{Florencio}. The damping rate $\Gamma$, the natural frequency $w_0$ ($w_0^2=w_d^2+\Gamma^2$) and the phase constant $\psi$ are the three fit parameters. The phase lag $\Delta \psi = \psi_y - \psi_x$ is always positive and gives a new characteristic time $\tau_{lag} = \Delta \psi/w_d$, which indicates how fast the chain extension $X$ in the shear direction responds to a drag force arising due to the fluctuation in the extension $Y$ in the gradient direction. It is evident from Fig.\ref{fig3}(a) that there is an increase in $\tau_{lag}$ at low $Wi$ due to the interaction gradient at low temperature. Values of $w_d$, $\Gamma$ and $w_0$ obtained from the fits of the ACFs ($C_x$ and $C_y$) for $T=1.2$ (poor solvent) are shown in Fig.\ref{fig4} \cite{good}. There are significant differences between the parameter values of the two ACFs. Furthermore, the dynamics of the chain in the flow direction is underdamped ($w_0 \simeq w_d$). However, for low values of $Wi$ ($< 30$), one observes $w_0 \simeq \Gamma$ in the gradient direction. This difference becomes more prominent in the presence of an interaction gradient, {\it i.e.} $w_0 \simeq \Gamma$ and $w_d \simeq 0$, while the dynamics in the flow direction remains unaffected. In all cases, the motion of the chain becomes underdamped as $Wi$ increases ($w_0 \simeq w_d$ and $\Gamma \ll w_0$). The natural frequency $w_0$ is related to the tumbling time of the tumbling process, a succession of coil-stretch cycles, via $\tau _{tumb}=\pi /w_0$ \cite{Florencio}.
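This fitting procedure can be sketched with SciPy's `curve_fit`. The noise-free synthetic data below merely stand in for a measured ACF, and the parameter values are illustrative assumptions, not the authors' results:

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_osc(t, A, wd, psi, gamma):
    """Generic damped-oscillator form F(t) = A^2 cos(wd t + psi) exp(-Gamma t)."""
    return A**2 * np.cos(wd * t + psi) * np.exp(-gamma * t)

t = np.linspace(0.0, 50.0, 500)
data = damped_osc(t, 2.0, 0.8, 0.3, 0.1)   # synthetic stand-in for C_x(t)

popt, _ = curve_fit(damped_osc, t, data, p0=(1.5, 1.0, 0.2, 0.05))
A, wd, psi, gamma = popt
w0 = np.hypot(wd, gamma)   # natural frequency: w0^2 = wd^2 + Gamma^2
tau_tumb = np.pi / w0      # tumbling time
# with psi_x, psi_y from fits of C_x and C_y: tau_lag = (psi_y - psi_x) / wd
```

Fitting both $C_x$ and $C_y$ this way yields the per-direction $(w_d, \Gamma, \psi)$ triples from which $w_0$, $\tau_{tumb}$ and $\tau_{lag}$ follow.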
In all cases, irrespective of direction, good or poor solvent quality, and absence or presence of an interaction gradient, the natural frequency is found to scale as $Wi^{2/3}$ for high $Wi$, a robust feature of tumbling dynamics found in previous theoretical and experimental studies \cite{schroeder,victor,Florencio}. \begin{figure}[] \includegraphics[width=0.5\textwidth]{fig4_time_scale_T12-eps-converted-to.pdf} \caption{Values of $w_d$, $\Gamma$ and $w_0$ obtained from fits to the ACFs ($C_x$ and $C_y$) for $T=1.2$ (poor solvent).} \label{fig4} \end{figure} Tumbling of a chain in shear flow is a stochastic process \cite{smith,Teixeria,Schroeder1}. The tumbling time, i.e. the time interval between subsequent flips of the polymer, is a random variable with relatively large fluctuations. From now on, we focus on the distribution of the angular tumbling time $P(\tau)$, where $\tau$ is the time interval between two subsequent zero crossings of the end-to-end distance $R_x=x_n-x_1$ in the flow direction. For sufficiently large time intervals, it follows an exponential distribution, $P(\tau) \approx \exp(-\nu \tau)$ (Fig.\ref{fig5}), where $\nu$ is the decay rate, whose inverse gives $\tau_{tumb}$. This is in agreement with previous studies \cite{victor,celani, puliafito, sanjib, Florencio}. Fig.\ref{fig5} shows the distribution of the angular tumbling time for different shear rates at low temperature. One can notice a change in slope at a shear rate of $\approx 0.02$ ($\varepsilon$ = constant) and $0.06$ ($\varepsilon = y \Delta \varepsilon$) in Fig.\ref{fig5}(a) and (b), respectively. We identify these values as the critical shear rates at which the polymer undergoes a shear-induced coil-stretch transition \cite{katz}. The most interesting feature of Fig.\ref{fig5}(b) is the presence of two time scales at intermediate shear rates for $\varepsilon = y \Delta \varepsilon$, which is absent for $\varepsilon$ = constant.
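Extracting $\tau$ as the interval between successive zero crossings of $R_x$, and estimating the decay rate from the resulting intervals, can be sketched as follows. The random-flip signal is a synthetic stand-in (flip rate $0.1$ per step, chosen arbitrarily), not trajectory data:

```python
import numpy as np

def tumbling_intervals(rx, dt=1.0):
    """Time intervals between successive sign changes of R_x(t)."""
    crossings = np.where(np.diff(np.sign(rx)) != 0)[0]
    return np.diff(crossings) * dt

# synthetic R_x: the sign flips with probability nu = 0.1 per step,
# so the intervals are geometric with mean 1/nu (illustrative only)
rng = np.random.default_rng(1)
steps = np.where(rng.random(200_000) < 0.1, -1.0, 1.0)
rx = np.cumprod(steps)

tau = tumbling_intervals(rx)
nu_est = 1.0 / tau.mean()   # decay rate; its inverse estimates tau_tumb
```

On real data one would histogram `tau` and fit the exponential tail $P(\tau)\approx\exp(-\nu\tau)$ rather than rely on the mean alone.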
\begin{figure}[t] \includegraphics[width=0.4\textwidth]{fig5_prob1-eps-converted-to.pdf}\\ \includegraphics[width=0.4\textwidth]{fig5_prob2-eps-converted-to.pdf} \caption{Cumulative distribution of the characteristic tumbling time $\tau$ at different $Wi$ for both cases (a) $\varepsilon=const$ and (b) $\varepsilon=y \Delta \varepsilon$, at temperature $T=1.2$ (poor solvent condition). The arrow demarcates the emergence of a new scaling, which is absent in (a).} \label{fig5} \end{figure} Fig.\ref{fig6}(a) and (b) show the limiting decay rate $\nu$ of the tumbling time distribution, scaled by the relaxation time $\tau_0$, as a function of $Wi$ in poor and good solvents, respectively. $\tau_{tumb}$ is nearly independent of $Wi$ at low values of $Wi$ and proportional to $\tau_0$. At higher $Wi$, $\nu \tau _{0} \approx Wi^{0.75 \pm 0.05}$ and $\nu \tau _{0} \approx Wi^{0.85 \pm 0.05}$ at high and low temperature, respectively. These exponents are steeper and show the non-Poissonian behaviour of the tumbling process, in agreement with the value in \cite{Florencio}. Surprisingly, at intermediate shear rates (below the critical shear rate), the decay rate decreases with $Wi$. Moreover, for $\varepsilon = y \Delta \varepsilon$, the decrease in the decay rate is steeper. \begin{figure}[t] \includegraphics[width=0.5\textwidth]{fig6_exp.pdf} \caption{The limiting decay rate $\nu$ of the tumbling time distribution scaled with the relaxation time $\tau_0$.} \label{fig6} \end{figure} In this letter, we have studied for the first time the effects of multi-factor gradients on the tumbling dynamics of a polymer chain in the free-draining limit. We have incorporated a gradient of the non-bonded interaction energy parameter $\varepsilon$ associated with the LJ potential, as well as shear flow (a velocity gradient), in the model system. In the presence of shear flow, two interesting cases, namely a positive and a negative gradient of $\varepsilon$ with respect to the velocity gradient, were studied.
Surprisingly, there are no notable differences in the tumbling dynamics of the chain between the two cases. Another interesting observation is that the tumbling dynamics remains invariant between the cases $\varepsilon$ = constant and $\varepsilon = y \Delta \varepsilon$ for poor and good solvents in the high-shear regime. This is because the energy scale associated with tumbling due to rotation at higher shear rates is much higher than the energy needed to stabilize the polymer conformation. At lower shear rates in a poor solvent, where both energy scales are comparable, one observes a significant difference in the ACFs ($C_x$, $C_y$), whereas in a good solvent such a difference is found to be absent. The tumbling frequency is found to scale as $Wi^{2/3}$ (Fig.4), consistent with previous studies. A similar scaling is reported if one considers hydrodynamic interactions \cite{Florencio, schroeder}. Here, we show that it is a robust feature of the tumbling process irrespective of solvent quality, direction and gradient of chemical potential. The long-tail behavior of the tumbling time distribution is found to be exponential with a non-Poissonian exponent (Fig.6). The exponent for the poor solvent is found to be steeper than that for the good solvent. At low shear rates, the tumbling time $\tau_{tumb}$ is nearly independent of the shear rate. At very low shear rates, the polymer conformation will be in the collapsed state and its shape is nearly that of a sphere of radius $R_G$. In this regime, the sphere will roll and the work done by the shear force is negligible compared to the interaction energy of the polymer beads. As a result, $\tau_{tumb}$ is independent of the shear rate. Above the critical shear rate, the work done by the shear force is dominant and governs the tumbling dynamics of the polymer. The increase in the rotational component of the shear flow causes more frequent tumbling of the polymer, accounting for the decrease in $\tau_{tumb}$ with increasing shear rate.
The most striking feature of the present study is the change in the trend of the limiting decay rate below the critical shear rate at low temperature: $\nu$ decreases with increasing shear rate. At intermediate shear rates below the critical value, the work done by the shear force is comparable to the interaction energy, and thus the polymer deforms from a sphere-like shape to an ellipsoid pointing along the flow direction, which inhibits tumbling. At this point, our work calls for further investigation by including hydrodynamic interactions and the effects of chain size on the tumbling dynamics \cite{katz}. It may be difficult to implement a varying solvent quality in terms of an interaction gradient {\it in vitro} similar to the one seen {\it in vivo}. However, if one maintains the two surfaces of the channel at two different temperatures, it is possible to achieve a steady-state temperature profile, which may be thought of as similar to the interaction gradient. Another way to realize such a gradient {\it in vitro} could be through differences in the affinity of a salt for the top and bottom surfaces of the channel, which will give rise to a salt gradient. One may also consider the case of an ionic salt, where an electric field gives rise to such a gradient. Rheological studies involving isotropic molecules (spherical in shape) and anisotropic molecules (e.g. liquid crystals) may confirm our findings \cite{maren1, Cates, Foglino}. Therefore, the present studies open several new issues, which warrant further experimental investigations to explore such hitherto unknown scalings related to the tumbling dynamics of polymers under multi-gradient fields, and which may have potential applications in understanding the dynamics of active particles. We thank Garima Mishra and Apratim Chaterjee for many helpful discussions on the subject. We are grateful to P.J. Daivis and Matthew Dobson for their discussions on the generalized Langevin equation for this study.
The financial assistance from SERB and the INSPIRE program of DST, New Delhi, India is gratefully acknowledged.
\section{Introduction} After its hype finally receded about half a decade ago, rather few advances in Semantic Desktop (SemDesk) research have been reported. An overview of (modern) SemDesks can be found in \cite{DraganD12}: Existing implementations are, for example, reproached for being rather complicated to use and for not scaling well (thus draining lots of system resources), and there is still no real "killer app" available. Concerning SemDesk applications, two categories could be observed: newly created semantic applications and plug-ins to enhance traditional, non-semantic ones \cite{DraganD12}. As a successor to the \textit{NEPOMUK Semantic Desktop}\footnote{\url{www.semanticdesktop.org}}, DFKI's Smart Data \& Knowledge Services department developed its own prototype\footnote{ meanwhile spanning over six years of permanent usage in the department and a group knowledge graph having approx. 2.6 million triple statements } \cite{maus2013weaving} making SemDesk technology ready for 24/7 usage in practice, covering both private and corporate scenarios. After lessons learned in past\footnote{ e.g. \textit{ForgetIT} (\url{www.forgetit-project.eu}) and \textit{supSpaces} (\url{www.supspaces.de}) } and still ongoing projects\footnote{ e.g. \textit{Managed Forgetting} (\url{www.spp1921.de/projekte/p4.html.de}) }, we now propose \textit{Context Spaces} as an extension of this prototype addressing the issues mentioned before. \section{Approach} \noindent \textbf{Context Spaces.} One of SemDesk's cornerstones is the Personal Information Model (PIMO) \cite{sauermann2007pimo}, which tries to represent a user's mental model as well as possible. Information items (files, mails, bookmarks, ...) that are related to each other in a person's mind, but are separated on their computer (file system, mail client, web browser, ...), can thus be interlinked.
With \textit{Context Spaces} (or \textit{cSpaces} for short) we extend this idea by explicitly (and additionally) associating items with contexts of the user (see lower left of Figure \ref{fig_cspaces}). \vspace{-0.4cm} \begin{figure} \centering \includegraphics[width=1\textwidth]{img/cspaces.png} \caption{Conceptual overview of the cSpaces Semantic Desktop} \label{fig_cspaces} \end{figure} \vspace{-0.4cm} This is based on the intuition that every activity is performed in a certain context. Hence, each information item stored on a person's computing device can be associated with one or more contexts (association strength may vary depending on the user's current context awareness). We therefore assume that users are explicitly aware of the concept of context \cite{GomezPerez2009} and that they are also aware of their current context (at least most of the time). Examples of such contexts are: \textit{Spain holiday 2017}, \textit{prepare ESWC18 paper}, or \textit{my childhood memories}. We do not enforce a certain definition of context: users should be able to stick to their own conceptualization as much as possible. However, we do assume that contexts express a certain relatedness of their elements. Besides being a kind of container for things, they may also be strongly related to (calendar) events, tasks or cases. Context hierarchies are also possible. More details about our context model, which is an extension of \cite{SchwarzContextModel}, will be presented in another paper. Instead of just having context as passive metadata, we additionally see it as an accessible element that users can interact with (create new (sub)contexts, split or merge them, add/remove elements, etc.). SemDesk user studies \cite{SauermannEval} revealed that people omitted rather specific relations in favor of basic ones (like \textit{isRelatedTo} or \textit{isPartOf}), whereas the system is formal where possible, e.g. representing calendar events or address book contacts.
This matches our idea of providing a low-effort opportunity to already keep things a bit more tidied up when simply associating them with a certain context (or multiple). Additionally, some of these associations may also be inferred by the system, reducing manual effort even more, e.g. a received email reply can automatically be associated with the original mail's context. More advanced features supporting the user will be discussed in the section after next. \noindent \textbf{Transparent Integration.} Using contexts as an explicit interaction element only makes sense if applications also respect them. As illustrated in Figure \ref{fig_cspaces}, we therefore integrate cSpaces into the rest of the system using standard protocols like \textit{Server Message Block (SMB)} for files, \textit{IMAP} for mails, \textit{CalDAV} for calendar entries, and \textit{CardDAV} for contacts. For web browsers, we use \textit{Web\-Extensions}\footnote{\url{https://wiki.mozilla.org/WebExtensions}}, which provide cross-browser functionality and an integration level similar to having an underlying protocol. Applications are thus able to transparently operate on the knowledge graph (PIMO) managed by our app. Especially in corporate scenarios, it is very convenient if users can simply work with the resources in their contexts without caring whether they are actually spread across various sources like intranet shares, for example. Utilizing only standard protocols has certain limitations due to their rather basic, low-level character. Some activities, like writing a note or comment about a resource, can become inconvenient or non-intuitive. To avoid this, we provide an additional sidebar as a single interaction point for using advanced features. Users therefore do not need to learn a new (plugged-in) interface for each of their applications. They can just keep using them the usual way, having only the sidebar as a new UI to familiarize with.
From the development point of view, the effort of creating and maintaining the plug-ins needed for higher-level functionality is low compared to that of earlier SemDesks. They can be realized as \textit{headless plug-ins} having very little functionality, often just the capability of \textit{sending out} in-app events to the sidebar (that is why we also shortly call them "plug-outs"). In addition, their corresponding UI elements and logic are located in the sidebar, where they can be easily reused. Plug-outs for different mail clients could, for example, share the same tagging UI. \noindent \textbf{Self-Reorganization.} Features discussed so far primarily aim at our system's ease of use. The other aspects mentioned in the beginning (scalability, missing "killer app") will be addressed using \textit{Managed Forgetting}, by which we understand an escalating set of measures: temporal hiding, condensation, adaptive synchronization, archiving and deletion of resources and parts of the knowledge graph \cite{forgetitbook}. By having users work on cSpaces, we gather rich contextual information about all of their resources, which allows the system to semi-automatically help them in organizing their stuff. Thus, cSpaces are continuously spawned, retracted, merged, condensed, or forgotten. As an example, let us assume we do a consulting job for company XY. The contract involves five meetings about different topics. Our system could represent this by having an overall cSpace containing general information about XY, e.g. contact and contract information. For each meeting, there could be an individual sub-cSpace about its respective topic. Several months after the job has been completed, the system starts to remove details, e.g. the train schedule to get to the meeting or auxiliary material for doing the presentation. After some years have passed, the sub-cSpaces could be merged with their parent, since the separation into different meetings is not relevant anymore.
Only the most important items, e.g. individual reports or an overall final report, are kept. All other items are either condensed, moved to an archive or deleted completely (which can be adjusted by the user on a general level). An item's current and estimated future value for the user are therefore continuously assessed, resulting in different forgetting measures like temporal hiding (e.g. some items during one of the meetings), deletion, etc. This means that the system is able to reorganize itself to a certain extent, which especially includes a kind of tidying-up-itself functionality. Some of the described features have already been implemented and successfully used in our research and industry prototypes \cite{manaforge, pimodiary}, however most of them are still under heavy development.\\[-0.3cm] \noindent \textbf{Demo.} In an early proof-of-concept implementation based on \cite{maus2013weaving}, we already realized some of the file system, browser and calendar parts. The screenshots in Figure \ref{fig_scr} show a typical feature of our system: \vspace{-0.45cm} \begin{figure} \centering\includegraphics[width=1\textwidth]{img/screenshot.png} \caption{ Screenshot showing sidebar, file explorer and browser before (left half) and after a context switch (right half), illustrating the effects of a dynamic reorganization of the system. (Note: windows were rearranged for easier comparison.) } \label{fig_scr} \end{figure} \vspace{-0.45cm} the user selects a different context using the sidebar. As a consequence, the \textit{current context}, available as a folder in the file system as well as the browser, is dynamically reorganized by our app. Note that the system tries to present meaningful views on the current context in each app: e.g., the view in the browser only contains web links.
To get a real impression of what the interaction with the system looks like, we kindly refer the reader to our online demo video\footnote{\url{https://pimo.opendfki.de/cSpaces/}}, which also shows some additional features. \section{Conclusion \& Outlook} In this paper, we presented a new SemDesk prototype based on context spaces that users directly interact with and work on. The system is transparently integrated using mostly standard protocols, complemented by a sidebar for advanced features. Users may thus stick to their favorite applications, which should strongly contribute to the overall ease of use. Learning efforts are presumably low due to the sidebar being the only new UI that is introduced. By exploiting its collected context information and applying features of Managed Forgetting, the system is able to dynamically reorganize itself, which also includes a kind of tidying-up-itself functionality. We therefore expect it to be more scalable than its predecessors while providing new levels of user support. Nevertheless, a lot of functionality still needs to be fully implemented and evaluated. We plan to do extensive user studies once the system matures.\\ \noindent \textbf{Acknowledgements.} This work was funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) -- DE 420/19-1. \bibliographystyle{splncs03}
\section{\label{sec:intro}Introduction} \noindent Quantum computing has gathered significant attention by solving certain problems much faster than any known classical algorithm. In contrast to Boolean logic, quantum bits~(qubits) not only represent the classical 0 and 1 states but also any complex combination or \textit{superposition} of both, leading to a significant speed-up in computing. The Deutsch-Jozsa algorithm~\cite{deutsch1992rapid} and Shor's factorization algorithm~\cite{shor1994algorithms} are well-known examples demonstrating the power of quantum computing. This capability gives rise to the bounded-error quantum polynomial time~(BQP) complexity class, with an open quest among computer scientists and mathematicians to establish the exact relation between BQP and other complexity classes. In order to accelerate scientific computing using the capabilities of a quantum computer, efficient quantum circuits for basic mathematical functions are needed. {The efficiency of a quantum circuit is measured in terms of computational space~(number of qubits) and computational time~(logical depth). For fault-tolerant, error-protected quantum circuits to implement the quantum algorithms, it is projected that a large number of physical qubits are required for every logical qubit~\cite{campbell2017roads}. Naturally, potential solutions to reduce the number of logical qubits contribute to the overall efficiency of the quantum~circuit.} Multiplication is one of the elementary mathematical operations of arithmetic. Fast long integer arithmetic is at the very core of many computer algebra systems. In quantum computing, apart from being used as a block in itself, integer multiplication is used as a sub-routine in many applications such as Shor's integer factorization algorithm and in Newton iterations for calculating many functions like the inverse~\cite{soeken2017design}.
In this paper, we present a quantum implementation of the Toom-Cook multiplication algorithm~\cite{toom1963complexity,cook1969minimum}, which can attain better asymptotic complexity than simple schoolbook multiplication and Karatsuba-based integer multiplication~\cite{parent2017improved}. We further improve these bounds by analyzing pebble games on complete trees. \section{\label{sec:prior}Prior Works} \noindent The problem of multiplication in the quantum domain has been explored previously. For small numbers, the na\"ive schoolbook multiplication works best, with a runtime complexity of $\mathcal{O}(n^2)$ that also translates to the logical depth in a quantum circuit realization. Karatsuba multiplication, implemented as a quantum circuit in~\cite{parent2017improved}, is usually faster when the multiplicands are longer than $320-640$~bits, and it also provides an asymptotic improvement in Toffoli cost and Toffoli depth over schoolbook multiplication. However, the number of qubits required for the Karatsuba-based quantum implementation is higher than for schoolbook multiplication. In the realm of quantum circuits, the Sch\"onhage-Strassen method~(using the Fast Fourier Transform) and the Toom-Cook multiplication algorithm have so far not been reported, even though it is known from classical implementations that these algorithms result in better run times when the operand size is much larger. We primarily focus on Toom-Cook multiplication in this work. As reported in our results, this leads to significant savings in all the performance metrics of an efficient quantum circuit, clearly outperforming the prior works. The family of Toom-Cook methods is an infinite set of polynomial algorithms~(Toom-$2.5$, Toom-3, Toom-4, \textit{etc.})~\cite{knuth1997art}. Instead of using the more common Toom-3 implementation, we present the work with Toom-$2.5$ to avoid a division by 3 required by the former.
This leads to a reduction in overall circuit costs, as quantum division is costlier in terms of Toffoli count and Toffoli depth than simple addition or shift operations. Most of the higher Toom implementations require a similar division by constants that are not multiples of 2. The implementation of such divisions incurs higher quantum costs, and therefore we avoid them. When moving to the quantum domain, gate sets need to go beyond classical ones to create the \emph{superposition} effect of the inputs. The standard universal quantum gate library that efficiently implements fault-tolerant quantum error correction codes is the Clifford+$T$ library~\cite{amy2014polynomial,fowler2009high}. In this library, the cost of implementing a $T$-gate is sufficiently high that the cost of the other Clifford group gates is customarily neglected when determining the total cost of the quantum circuit. Therefore, the number of $T$-gates is a metric to judge the cost of a quantum circuit. The number of qubits used in a quantum circuit is another important standard, since current quantum technologies still struggle to achieve error-free computation for large qubit counts. A study of the space-time trade-off can be performed~\cite{wille2014trading} using these two metrics. Another metric of importance is the \emph{T-depth}. \emph{T-depth} is defined as the number of \emph{T-stages} in a quantum circuit, where each such stage consists of one or more \emph{T} or $\emph{T}^{\dagger}$ gates performed concurrently on separate qubits. {It is important to note that an input circuit with continuous parameter gates (e.g. the $z$-rotation gate $R_z(\theta)$) is decomposed using a set of discrete basis gates, typically from the Clifford+T library. The exact number of Clifford+T gates needed for such a continuous parameter gate depends on the desired accuracy, and the discrete gate set provides only an approximation.
In the context of the current work, we consider the T-count and T-depth of the circuit to be proportional to the Toffoli count and Toffoli depth, respectively, following the Toffoli decomposition proposed in~\cite{abdessaied2016technology}}. \section{\label{sec:method}Toom-Cook Multiplier} \noindent Given two large integers $n_1$ and $n_2$, the Toom-Cook algorithm splits them into $k$ smaller parts of length $l$. The multiplication sub-operations are then computed recursively using Toom-Cook multiplication again, until another algorithm can be applied in the last stage of the recursion, or until the desired multiplier is reached. The input numbers are divided into limbs of a given size, each expressed as a polynomial, with the limb size used as the radix. Instead of multiplying the obtained polynomials directly, they are evaluated at a set of points and the values multiplied together at those points. From the products obtained at those points, the product polynomial is computed by interpolation. The final result is then obtained by substituting the radix. In general, Toom-$k$ runs in $\Theta(c(k)n^e)$, where $n$ denotes the input size, $k$ is the number of parts the input operand is decomposed into, and \mbox{$e = \log_{k}{(2k-1)}$}; $c(k)$ is the time spent on auxiliary additions and multiplications by small constants. The Karatsuba algorithm~\cite{karatsuba1963multiplication} is a special case of Toom-Cook multiplication~(Toom-2), where the input operand is split into two smaller ones. It reduces 4 multiplications to 3 and so operates in $\Theta(n^{\log_{2}{3}})$. In general, Toom-$k$ reduces $k^2$~multiplications to $2k-1$, while ordinary long multiplication~(equivalent to Toom-$1$) has complexity~$\Theta(n^2)$. \subsection{Implementation Details} \noindent Let $x$ and $y$ be two $n$~bit numbers. To proceed with the Toom-$2.5$ algorithm, we first decompose $x$ and $y$ into two and three parts, respectively.
Express $x = x_12^i + x_0$ and $y = y_22^{2i} + y_12^i + y_0$ with $i \geq 1$. Typically $i$ is chosen as $\max\Big\{ \left\lfloor \frac{\lceil\log_{2}{x}\rceil}{k}\right\rfloor,\left\lfloor \frac{\lceil\log_{2}{y}\rceil}{k}\right\rfloor\Big\}$, where $k=2.5$ in our case. We define the following four product terms: \begin{align} P &= x_0y_0,\\ Q &= (x_0+x_1)(y_0+y_1+y_2),\\ R &= (x_0-x_1)(y_0-y_1+y_2),\\ S &= x_1y_2 \end{align} Then, the product~$xy$ is evaluated as: \begin{align} xy &= A2^{3i} + B2^{2i} + C2^i + D\\ A &= S \\ B &= -P + \frac{1}{2}Q + \frac{1}{2}R\\ C &= \frac{1}{2}Q - \frac{1}{2}R - S\\ D &= P \end{align} Note that only 4 multiplications are required for computation of the product. Also, each of these multiplications involves numbers of size smaller than the original problem size~(bit-width). Each smaller multiplication is between one number of bit-width $n/2$ and another of bit-width $n/3$. Since the method is applied recursively, for our analysis we consider that the limbs formed from the number originally split into 3 parts are next split into 2 parts, and vice-versa. So after two steps, we get 16~smaller problems of size~$n/6$ each. Thus, we obtain the basic recurrence for the number of steps~$T(n)$. \begin{equation}\label{eq:recursion} T(n) = 16 T(\frac{n}{6}) \end{equation} All additions~(the intermediate ones as well as the final ones) are performed by separate adders which have bounded cost. \subsection{Gate count Analysis} \noindent For the gate count analysis, we consider only the Toffoli count required by the quantum circuit or sub-circuit. This is because the other gates used~(NOTs, CNOTs) do not contribute to the $T$-count of the circuit, considering the Clifford+$T$ library. The designed circuit maps \mbox{$(x,y,0,0) \mapsto (x,y,g,xy)$}, where $g$ denotes some garbage output resulting from the computation of $A, B, C$ and $D$.
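As a classical cross-check of the splitting and recombination above, the identity $xy = S\,2^{3i} + B\,2^{2i} + C\,2^{i} + P$ can be exercised with a short reference implementation (our sketch; the base-case threshold of 256 is an arbitrary assumption):

```python
def toom25(x, y):
    """Classical reference for Toom-2.5: split x into 2 limbs and y into
    3 limbs of i bits, form P, Q, R, S, and recombine as in the text."""
    if x < 0 or y < 0:  # signs can appear in the (x0 - x1) branch
        s = -1 if (x < 0) != (y < 0) else 1
        return s * toom25(abs(x), abs(y))
    if x < 256 or y < 256:          # base case: plain multiplication
        return x * y
    i = max(x.bit_length() // 2, (y.bit_length() + 2) // 3)
    mask = (1 << i) - 1
    x0, x1 = x & mask, x >> i
    y0, y1, y2 = y & mask, (y >> i) & mask, y >> (2 * i)
    P = toom25(x0, y0)                    # x0*y0
    Q = toom25(x0 + x1, y0 + y1 + y2)     # evaluation at 1
    R = toom25(x0 - x1, y0 - y1 + y2)     # evaluation at -1
    S = toom25(x1, y2)                    # product of leading limbs
    B = (Q + R) // 2 - P                  # -P + Q/2 + R/2 (exact: Q+R is even)
    C = (Q - R) // 2 - S                  # Q/2 - R/2 - S (exact: Q-R is even)
    return (S << 3 * i) + (B << 2 * i) + (C << i) + P
```

The recombination identity holds for any choice of limb size $i$, so the function agrees with Python's built-in multiplication; the quantum circuit additionally has to uncompute the intermediate sums, which this classical sketch does not model.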
The product is copied after the computation and the circuit is then run backwards~({\em uncomputed}) to set the garbage outputs back to~$0$. In our circuit implementation, the Cuccaro adder is used~\cite{cuccaro2004new}. For the addition of two $n$~bit numbers, the Cuccaro adder requires $2n-1$~Toffoli gates. Hence the cost $A_n$, for an in-place adder adding two $n$~bit numbers, is bounded by $2n$~Toffoli gates. Let $T_{n,n}$ denote the multiplication call to the Toom-$2.5$ circuit for calculating the product of two $n$~bit numbers and let $TC_{n}$ denote the number of Toffoli gates required for implementing $T_{n,n}$. First, we need to calculate $P, Q, R$ and $S$. This requires 4 recursive calls to $T_{\frac{n}{2},\frac{n}{3}}$. For calculating the intermediate sums required as input for $T_{\frac{n}{2},\frac{n}{3}}$, we need four $n/2$~bit adders and six $n/3$~bit adders. This also includes the uncomputation of the intermediate garbage results, i.e., the qubits used for storage of intermediate results are returned to their initial states. The output of each $T_{\frac{n}{2},\frac{n}{3}}$ is a $5n/6$~bit number. Finally, for computing $A, B, C$ and $D$, four $5n/6$~bit adders are required. As stated earlier, in the evaluation of $T_{\frac{n}{2},\frac{n}{3}}$ we assume that the $n/2$~bit number is split into 3 parts and vice versa. Performing a similar analysis yields an evaluation in terms of $T_{\frac{n}{6},\frac{n}{6}}$, giving the recursive relation \begin{align} TC_n &= 16TC_{n/6} + 40A_{n/6} + 22A_{n/3} + 4A_{n/2} + 4A_{5n/6} \\ &= 16^{\log_6{n}}TC_1 + 40(A_{\frac{n}{6}} + 16A_{\frac{n}{36}}+\dots) \nonumber\\ &+ 22(A_{\frac{n}{3}} + 16A_{\frac{n}{18}}+\dots) + 4(A_{\frac{n}{2}} + 16A_{\frac{n}{12}}+\dots) \nonumber\\ &+ 4(A_{\frac{5n}{6}} + 16A_{\frac{5n}{36}}+\dots) \end{align} The base case is the multiplication of two $1$~bit numbers, which can be done by a single Toffoli gate. Therefore, \mbox{$TC_{1} = 1$}.
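The recurrence can also be unrolled numerically as a sanity check; a minimal sketch, assuming the stated in-place adder bound $A_m = 2m$, the base case $TC_1 = 1$, and a factor of two for the final uncomputation (the closed form it is compared against is the one obtained next by summing the geometric series):

```python
import math

# Numerical check of the Toffoli-count recurrence above, assuming the
# in-place adder bound A_m = 2m and the base case TC_1 = 1; the factor of
# two accounts for the final uncomputation of the garbage outputs.

def A(m):
    return 2 * m  # assumed in-place (Cuccaro-style) adder bound

def tc(n):
    if n <= 1:
        return 1.0
    return (16 * tc(n / 6) + 40 * A(n / 6) + 22 * A(n / 3)
            + 4 * A(n / 2) + 4 * A(5 * n / 6))

def tc_closed(n):
    N = math.log(n, 6)  # recursion depth
    return 2 * (16 ** N + 23.2 * n * ((16 / 6) ** N - 1))
```

For powers of 6 (where the recursion bottoms out exactly), the doubled recurrence $2\,TC_n$ and the geometric-series closed form agree to floating-point accuracy.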
Each of the summations of adder gate counts has $\log_6{n}$ terms. On evaluating the summations as geometric progressions and doubling the cost to account for the aforementioned uncomputation, we get \begin{align} TC_n &= 2\big( 16^{\log_{6}{n}} + 23.2n\big[\big( \frac{16}{6}\big)^{\log_6{n}}-1\big]\big)\\ &= 2n^{\log_{6}16} + 46.4n( n^{\log_{6}(16/6)}-1) \leq 49n^{\log_{6}16} \end{align} Note that all operations used in the circuit design are implemented using only adders and shifts, without any separate multiplication/division blocks. \begin{figure}[t] \centering \includegraphics[height = 5cm]{tree.pdf} \caption{\em Recursion tree structure of the Toom-$2.5$~implementation.} \label{fig:tree} \end{figure} \subsection{Space-Time Trade-offs} \noindent The recursive nature of the problem gives rise to an inherent tree structure as shown in Fig.~\ref{fig:tree}. The size of a node is representative of the problem size at that level. For example, the root level denotes the complete problem~($n$~bit multiplication). According to the recursion presented in Equation~(\ref{eq:recursion}), each node has $16$~children denoting smaller problems~($n/6$~bit multiplications). For the Toom-$2.5$ circuit with an input of size $n$, at any level~$x$ in the tree there are $16^{x}$ nodes of size $n6^{-x}$ each, for a total cost of $n\big(\frac{16}{6}\big)^{x}$ at level $x$~(level numbering starting at $0$ from the root). So, the space~cost~$Q_{orig}$ of the complete tree is \begin{align} Q_{orig} &= n\sum_{x=0}^{N-1} \Big(\frac{16}{6}\Big)^{x} \\ &= n\frac{(16/6)^{\log_{6}{n}}-1}{(16/6)-1} \\ &= \mathcal{O}(n(8/3)^{\log_{6}{n}}) = \mathcal{O}(n^{1+\log_{6}{(8/3)}}) \\ &\approx \mathcal{O}(n^{1.547}) \end{align} where the tree height is $N=\log_{6}{n}$. \begin{figure*}[ht] \centering \includegraphics[height = 8cm]{circuit.pdf} \caption{\label{fig:circuit} \em{The quantum circuit for computing the integer multiplication result using the Toom-$2.5$ algorithm.
The compute blocks are then run backwards~(uncomputed) to set the garbage outputs~($g$) back to 0~(not shown in the figure).}} \end{figure*} The reversible pebble game~\cite{bennett1989time} is a combinatorial game played on rooted Directed Acyclic Graphs~(DAGs). Each pebble represents some amount of space. The rules are similar to those used in the pebble game modeling irreversible computation, except that pebbles cannot be freely removed, owing to the reversibility constraint: every computation has a corresponding reverse computation, so pebbles may still be removed during the game, but removal is subject to the same conditions as placement. We use this reversible game to obtain better asymptotic bounds on the number of qubits~(space) needed to implement the Toom-$2.5$~algorithm. We want to find a level in the recursion tree such that the size of each node's sub-tree is approximately equal to the sum of the sizes of all nodes at and above the chosen level. Once all the nodes in the chosen level have been computed, we uncompute all the sub-trees below it. This is performed to minimize space --- the size of these sub-trees is chosen to be approximately equal to the remaining size of the tree above them. Let the required height be $k$ from the leaves of the tree. The cost of all height-$k$ sub-trees is $$ n\sum_{x=N-k}^{N-1} \Big(\frac{16}{6}\Big)^{x}$$ Therefore, the cost of a single height-$k$ sub-tree is $$ \frac{n}{16^{N-k}}\sum_{x=N-k}^{N-1} \Big(\frac{16}{6}\Big)^{x} =\frac{n}{6^{N-k}}\sum_{x=0}^{k-1} \Big(\frac{16}{6}\Big)^{x} $$ We want this to equal the cost of all nodes above the $k^{th}$~level: \begin{equation} n \sum_{x=0}^{N-k-1} \Big(\frac{16}{6}\Big)^{x} = \frac{n}{6^{N-k}}\sum_{x=0}^{k-1} \Big(\frac{16}{6}\Big)^{x} \end{equation} Simplifying, we obtain the bound $k \leq \frac{N}{2-\log_{16}{6}}$, which follows since $k \leq N$ and $ \Big(\frac{16}{6}\Big)^{N-k} \geq \frac{16^k}{6^N}$.
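The balancing condition can also be solved numerically; a quick sketch (the bisection routine and the choice $N=40$ are ours) confirming that the solution essentially attains the bound $k = N/(2-\log_{16}6)$:

```python
import math

# Solve the sub-tree balancing equation above,
#   sum_{x=0}^{N-k-1} (16/6)^x = 6^{-(N-k)} * sum_{x=0}^{k-1} (16/6)^x,
# for k by bisection, and compare with the bound k = N / (2 - log_16 6).

def residual(k, N):
    r = 16 / 6
    lhs = (r ** (N - k) - 1) / (r - 1)
    rhs = (r ** k - 1) / (r - 1) / 6 ** (N - k)
    return lhs - rhs  # positive for k too small, negative for k too large

def solve_k(N):
    lo, hi = 0.0, N
    for _ in range(100):  # the residual is strictly decreasing in k
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if residual(mid, N) > 0 else (lo, mid)
    return (lo + hi) / 2

def k_bound(N):
    return N / (2 - math.log(6, 16))
```

For moderate tree heights the numerical solution sits just below the bound, the gap vanishing as $N$ grows.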
Using the above technique, the qubit count is now optimized and bounded by $Q_{opti}$: \begin{equation} Q_{opti} = \mathcal{O}\Big( n\Big(\frac{8}{3}\Big)^{ \frac{1}{2-\log_{16}{6}} \log_{6}{n}}\Big) \approx \mathcal{O}(n^{1.404}) \end{equation} The time complexity of a quantum circuit is effectively equal to the depth of the circuit in terms of Toffoli gates. The nodes of the computation tree shown in Fig.~\ref{fig:tree} at the chosen level must be computed sequentially. At the $k^{th}$ level, the number of sub-trees $ST_k$ and the corresponding depth $D_k$ are given by \begin{align} ST_k &= 16^{\big(1-\frac{\log{16}}{2\log{16}-\log{6}}\big)\log_{6}{n}} \\ D_k &= \frac{n}{6^{\big(1-\frac{\log{16}}{2\log{16}-\log{6}}\big)\log_{6}{n}}}\\ ST_k\cdot D_k &= n\Big(\frac{8}{3}\Big)^{ \big(1- \frac{\log{16}}{2\log{16}-\log{6}}\big) \log_{6}{n}}\approx n^{1.143} \end{align} The product $ST_k\cdot D_k$ gives the overall depth for computing the entire $k^{th}$ level of the recursion tree. The method proposed above is most efficient if both numbers to be multiplied are of approximately the same bit-width. In case one of them is much bigger than the other, it is better to repeatedly divide the bigger number into 3 parts at each step, until the smaller parts of both numbers are roughly the same size. Following this method, the asymptotic computational complexity can be shown to be more efficient than that of the \textit{alternating} Toom-$2.5$ method adopted here. The circuit of the described implementation is shown in Fig.~\ref{fig:circuit}. It describes the circuit for $T_{n,n}$ that multiplies $x$~(decomposed into $x_0,x_1$) and $y$~(decomposed into $y_0,y_1,y_2$). All symbols and variables hold the same meanings as in the analysis above. The adder, subtractor and shifting blocks are represented as `Adder', `Sub' and `Shift' respectively. The $T_{\frac{n}{2},\frac{n}{3}}$~blocks denote Toom-$2.5$ sub-circuits of smaller bit-width.
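The exponents quoted in this section can be reproduced directly from the expressions above; a short numeric check (all variable names are ours):

```python
import math

# Reproduce the exponents quoted in the space-time analysis above.
log6_83 = math.log(8 / 3, 6)          # log_6(8/3)
frac = 1 / (2 - math.log(6, 16))      # fraction of the tree height kept pebbled

q_orig = 1 + log6_83                  # unoptimized qubit-cost exponent
q_opti = 1 + frac * log6_83           # pebbled (optimized) qubit-cost exponent
depth = 1 + (1 - frac) * log6_83      # Toffoli-depth exponent
tcount = math.log(16, 6)              # Toffoli-count exponent
```

Note that $\log_6 16 = 1 + \log_6(8/3)$, so the Toffoli-count exponent and the unoptimized qubit-cost exponent coincide at $\approx 1.547$.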
\begin{table}[ht] \centering \caption{\em{Asymptotic performance analysis of the quantum implementation of various multiplication methods.}} { \begin{tabular}{ l l l l } \hline\hline \textbf{Method} & \multicolumn{1}{l}{\textbf{QC}} & \multicolumn{1}{l}{\textbf{TC}} & \multicolumn{1}{l}{\textbf{TD}} \\ \hline Na\"ive~\cite{parent2017improved} & ${\mathcal{O}(n)}$ &$\mathcal{O}(n^2)$ & $\mathcal{O}(n^2)$ \\ Na\"ive Improved~\cite{draper2004logarithmic} & $\mathcal{O}(n)$ & ${\mathcal{O}(n^2)}$ & ${\mathcal{O}(n\log n)}$ \\ Karatsuba~\cite{parent2017improved} & $\mathcal{O}(n^{1.427})$ &$\mathcal{O}(n^{\log_23})$ & $\mathcal{O}(n^{1.158})$ \\ \textit{Toom-$2.5$} & $\mathbf{\mathcal{O}(n^{1.404})}$ &$\mathbf{\mathcal{O}(n^{\log_616})}$ & $\mathbf{\mathcal{O}(n^{1.143})}$ \\ Const. Mult.~\cite{pavlidis2012fast} & $\mathcal{O}(n)$ & $\mathcal{O}(n^{2})$ & $\mathcal{O}(n)$ \\ \hline\hline \multicolumn{4}{l}{QC: Qubit count, TC: Toffoli count, TD: Toffoli depth} \\ \end{tabular} } \label{table:1} \caption{\em{Cost of quantum implementation of multiplication.}} \centering {\scriptsize \begin{tabular}{ l l l l } \hline\hline \textbf{Method} & \textbf{QC} & \textbf{TC} & \textbf{TD} \\ \hline Na\"ive~\cite{parent2017improved} & $4n+1$ &$4n^2-3n$ & $4n^2-4n+1$ \\ Karatsuba~\cite{parent2017improved} & $ n\Big(\frac{3}{2}\Big)^{ \frac{ \log_{2}{n}}{2-\log_{3}{2}}}$ &$ 42n^{\log_{2}3}$ & $n\Big(\frac{3}{2}\Big)^{ \big(1- \frac{1}{2-\log_{3}2}\big) \log_{2}{n}}$ \\ \textit{Toom-$2.5$} & $ n\Big(\frac{8}{3}\Big)^{ \frac{ \log_{6}{n}}{2-\log_{16}{6}}}$ &$ 49n^{\log_{6}16}$ & $n\Big(\frac{8}{3}\Big)^{ \big(1- \frac{1}{2-\log_{16}6}\big) \log_{6}{n}}$ \\ Const.
Mult.~\cite{pavlidis2012fast} & $3n+1$ & $ 4n(n+1)$ & $8n$ \\ \hline\hline \end{tabular} } \label{table:impl} \end{table} \begin{figure*}[ht] \centering \begin{subfigure}[t]{5.5cm} \centering \includegraphics[width=\textwidth]{tcount.pdf} \caption{} \label{fig:tofgraph} \end{subfigure} \begin{subfigure}[t]{5.5cm} \centering \includegraphics[width=\textwidth]{depth.pdf} \caption{} \label{fig:depgraph} \end{subfigure} \begin{subfigure}[t]{5.5cm} \centering \includegraphics[width=\textwidth]{qubit.pdf} \caption{} \label{fig:qubitgraph} \end{subfigure} \caption{\em{Comparison of the quantum multiplier implementations based on: \subref{fig:tofgraph}~Toffoli Count, \subref{fig:depgraph}~Toffoli Depth and \subref{fig:qubitgraph}~\#Qubits.}} \label{fig} \end{figure*} \section{Results and discussions}\label{sec:exp} \noindent Table~I presents the asymptotic costs of the quantum implementations of various multiplication methods, while Table~II provides the exact constants involved. The na\"ive multiplication method suggested in~\cite{parent2017improved} allows implementation with the lowest number of qubits asymptotically but fares badly in terms of Toffoli count and depth. In~\cite{draper2004logarithmic}, implementations of logarithmic-depth adders are provided. The na\"ive~(shift-and-add) multiplier can be improved in depth by using the logarithmic-depth adder as a submodule: since the $n$-bit adder has a depth of order $\mathcal{O}(\log{n})$, the resulting multiplier has a depth of $\mathcal{O}(n\log{n})$. However, for both the `in place' and `out of place' adders described in~\cite{draper2004logarithmic}, extra ancilla qubits are required for intermediate computation, and the Toffoli count is greater than that of the Cuccaro adder. Thus, the multiplier obtained by this extension, though optimized in depth, has a greater asymptotic Toffoli count, equal to $\mathcal{O}(n^2)$, and a higher qubit count. In Table~I, we provide the asymptotic complexity of such a multiplier.
However, in the absence of an explicit design, we are unable to provide the exact constants involved in the cost metrics; hence the Na\"ive Improved method mentioned in Table~I is excluded from Table~II. Toom-$2.5$ requires fewer qubits than Karatsuba~\cite{parent2017improved}. Toom-$2.5$ outperforms both the na\"ive and Karatsuba methods in terms of Toffoli count as well as Toffoli depth, highlighting the efficiency of the proposed method. Pavlidis et al.~\cite{pavlidis2012fast} presented a depth-optimized multiplier, for multiplication by a constant only; a direct comparison with our implementation and with the Karatsuba implementation of~\cite{parent2017improved} is therefore not entirely fair. It has a Toffoli depth of $8n$, a Toffoli count of $4n(n+1)$ and a qubit count of $3n+1$. {The Clifford+$T$ quantum gate library has garnered much interest in the implementation of fault-tolerant quantum circuits~\cite{weinstein2013non}. As mentioned in~\cite{maslov2016optimal,shende2008cnot}, the cost of the Toffoli gate is higher than that of the NOT and CNOT gates. The Toffoli gate may be decomposed using Clifford+$T$ gates, which makes cost metrics associated with Toffoli gates important. Therefore, Toffoli count and Toffoli depth are used as the primary performance metrics. The cost of mapping a Toffoli gate to the Clifford+$T$ fault-tolerant library is upper bounded by {$7\times$ Toffoli count} and {$3\times$ Toffoli depth}~\cite{abdessaied2016technology}. Therefore, a fault-tolerant implementation of the proposed multiplication method would have at most $7\times$ the Toffoli count and $3\times$ the Toffoli depth of the values mentioned in Table~I. It is further possible to improve these values by the optimization techniques proposed in~\cite{abdessaied2016technology,amy2014polynomial}.} Fig.~\ref{fig:tofgraph} presents a comparison of the Toffoli count required by the various methods as the bit-width of the inputs varies.
The na\"ive multiplication method performs better in terms of total Toffoli cost at smaller input sizes~($<300$ bits), but is outperformed by the Karatsuba and Toom algorithms at higher bit-widths. Fig.~\ref{fig:qubitgraph} shows the variation in the qubit requirements by the different implementations across a range of input sizes. In this case the shift and add method~(na\"ive) outperforms both the recursive algorithms as it increases linearly. However, this low space requirement leads to a higher depth as demonstrated, in Fig.~\ref{fig:depgraph} in a logarithmic scale. Both Toom-$2.5$ and the Karatsuba implementations perform much better in this respect. We also present a bound on the CNOT counts of the considered implementations. In the proposed Toom-$2.5$ circuit shown in Fig.~\ref{fig:circuit}, CNOT gates are present in the Cuccaro adders and copy blocks. It can be seen from ~\citep{cuccaro2004new} that the number of CNOT gates in a $n$~bit adder can be bounded by $5n$. Proceeding similarly as the Toffoli count analysis, we get an exactly similar recurrence relation as presented in Gate Count Analysis in Section~\ref{sec:method}. Let $CC_n$ denote the number of CNOT gates in $T_{n,n}$. Also, let $Ac_n$ denote the number of CNOT for an in-place $n$~bit adder. \begin{align} CC_n &= 16CC_{n/6} + 40Ac_{n/6} + 22Ac_{n/3} + 4Ac_{n/2} + 4Ac_{5n/6} \\ &= 16^{log_6n}CC_1 + 40(Ac_{\frac{n}{6}} + 16Ac_{\frac{n}{36}}+\dots) \nonumber \\ &+ 22(Ac_{\frac{n}{3}} + 16Ac_{\frac{n}{18}}+\dots) + 4(Ac_{\frac{n}{2}} + 16Ac_{\frac{n}{12}}+\dots) \nonumber \\ &+ 4(Ac_{\frac{5n}{6}} + 16Ac_{\frac{5n}{36}}+\dots) + COPY_{cnot} \end{align} where $COPY_{cnot}$ denotes the number of CNOTs used in the two copy blocks. However, the number of CNOT gates arising out of the Copy blocks are of the order $\mathcal{O}(n)$ and is dominated by the terms of order $n^{\log_{6}16}$. $CC_1 = 0$ because $1$-bit multiplier just consists of 1~Toffoli gate. 
\begin{align} CC_n &\approx 2\big( 58n\big[\big( \frac{16}{6}\big)^{\log_6{n}}-1\big]\big)\\ &=116n( n^{\log_{6}(16/6)}-1) \leq 116n^{\log_{6}16} \end{align} By a similar analysis, the CNOT count of the Karatsuba multiplier can be bounded by $100n^{\log_{2}3}$. For the na\"ive method, controlled adders are considered as described in~\cite{parent2017improved}. Each such adder has $2n$~CNOTs and the multiplier uses~$n-1$ such adders, so the total CNOT count is $2n^2-2n$. For the constant multiplier of~\cite{pavlidis2012fast}, $2n$~CNOT gates are employed. These observations are summarized in Fig.~\ref{fig:cnotcount}. In~\cite{shende2008cnot}, it has been established that the $T$-gate is at least 6 times costlier than the CNOT gate, which emphasizes the importance of T-count and T-depth. However, with increasing circuit size, the cost of CNOTs may take a dominant role if we follow the analysis in terms of upper/lower bounds~\cite{maslov2016optimal}. From that perspective, the study of overall cost is important. As we found for the case of multipliers, the CNOT count of the Toom-$2.5$ multiplier grows at a slightly lower rate than that of the Karatsuba multiplier with increasing input size. Considering that the Toffoli count of the Toom-$2.5$ multiplier already outperforms Karatsuba's at large input sizes, the proposed design is clearly more efficient. \begin{figure}[t] \centering \includegraphics[height = 4cm]{ccount.pdf} \caption{\label{fig:cnotcount} \em{Variation in CNOT counts across different implementations with increasing input size.}} \end{figure} \section{\label{sec:conc}Conclusion} \noindent Designing an efficient quantum circuit with low resource requirements and fast run time is an important challenge with significant repercussions across several domains, such as scientific computing and security. In this work, we reported an efficient quantum circuit for integer multiplication based on the Toom-Cook algorithm.
We provide design results and techniques for lowering the resource requirements. In terms of asymptotic complexity, the presented implementation outperforms the state-of-the-art across multiple performance metrics.
\section{Introduction} \label{sec:intro} The increasing amount of relatively inflexible, volatile, and unpredictable distributed generation from renewable energy resources calls for more flexibility at each level of the electrical energy chain. This flexibility is mandatory for maintaining power balance and for an efficient operation of power systems. Energy Storage Systems (ESSs) are seen as a particularly promising source of flexibility. ESSs could be used, for example, to compensate for volatile generation and consumption, thus enabling the dispatch of power output from inflexible generation and/or demand according to a pre-computed schedule \cite{Sossan16a}. Following the notion introduced in \cite{Sossan16a}, we employ the term \textit{dispatchable feeder} to refer to a grid-connected power system composed of an ESS and inflexible generation/demand whose power output is regulated according to a pre-computed schedule. We denote this schedule as \textit{Dispatch Schedule} (DiS). Several scheduling and control schemes have been proposed in the literature to operate dispatchable feeders, e.g. \cite{Sossan16a,Stai17,Lampropoulos15}. The computation of the DiS constitutes one of the major challenges in all these works. In fact, the exact upcoming inflexible generation/demand is unknown at the moment when the DiS has to be computed. Even if forecasts for this inflexible generation/demand are available, they are still prone to errors. Therefore, the future power outputs are known, at best, in terms of random variables \cite{Zhang14}. These variables are often not normally distributed \cite{Salameh95} and it is generally difficult to describe their inherent dependency in an explicit mathematical form. For these reasons, most of the well-known techniques used to deal with random disturbances, e.g. \cite{Farina16}, are inapplicable. Therefore, two-stage stochastic programming with scenario selection is often applied in the computation of DiSs, cf.
\cite{Olivares15,Ding12,Garcia08,Vrakopoulou13,Stai17}. Alternatively, multi-stage robust optimization can be used \cite{Fabietti16}, but might be conservative. To overcome this issue, our preceding paper \cite{Appino18a} proposes stochastic robust optimization ensuring trackability of the DiS with a given probability. The underlying idea is the use of probabilistic forecasts of the energy profile, thus implicitly including the temporal correlations of the future power outputs. The present paper extends this approach. The contributions are as follows: First, we formulate the scheduling problem as a two-stage decision process. Then, taking this model as a starting point, we extend the stochastic robust formulation presented in \cite{Appino18a}, including a better exploitation of the probabilistic forecasts; i.e., while \cite{Appino18a} considered quantile-based energy forecasts, herein we work directly with Cumulative Distribution Functions (CDFs). Moreover, we investigate constraint softening with the aim of avoiding infeasibility. Finally, we compare the proposed approach to a scenario-based scheduling method similar to \cite{Stai17,Fabietti16}, relying on scenarios generated via probabilistic forecasts. Our results show that the proposed scheduling based on probabilistic forecasts of the energy profile outperforms scheduling based on both deterministic and scenario forecasts. The remainder of the present paper is organized as follows: Section \ref{sec:problem_state} covers the problem setup; Section \ref{sec:security_level} presents the main contribution, i.e. the proposed methodology to tackle the stochastic scheduling problem using the CDFs of the forecast quantities; Section \ref{sec:minimal_cost} describes scheduling using scenario-based optimization; Section \ref{sec:results} reports simulation results.
\section{Problem Statement} \label{sec:problem_state} \subsection{System Description} Similar to \cite{Appino18a}, the present paper addresses the optimal operation of a dispatchable feeder, cf. Figure \ref{fig:Scheme}. The dispatchable feeder is operated such that the active power exchange with the utility grid, $p_g$, follows a pre-computed \textit{Dispatch Schedule} (DiS). The time window covered by the DiS is called the \textit{scheduling horizon}. In the following, we adopt a discrete-time notation and divide the scheduling horizon into $N_d \in \mathbb{N}$ \textit{dispatch intervals} of equal duration $\Delta t$. We enumerate the dispatch intervals with the index $k \in \mathcal{K}= [k_{s}, k_{s}+N_d] \subset \mathbb{N}$ and indicate with $\Pgsch{k}$ the DiS at interval $k$. We refer to the sequence of $\Pgsch{k}$ over the scheduling horizon, i.e. the DiS, as $\{\Pgsch{k}\}_{k \in \mathcal{K}}$. The DiS is computed at $k_0 < k_{s}$, before the scheduling horizon. The operation of the dispatchable feeder can be described as a two-stage decision process. At the first stage, $\{\Pgsch{k}\}_{k \in \mathcal{K}}$ is computed. At the second stage, the ESS power output at $k$, $p_s(k)$, is determined. In the following, we describe the details of these two stages, starting from the second one. The second stage of the decision process takes place \textit{after} the aggregated power output of the inflexible elements is known. The notation $p_l(k)$ indicates the value of this power output at $k$, with negative values representing generation. The ESS is used to compensate for the volatility of the inflexible elements, as in \cite{Citro11}. Its power output, $p_s(k)$, is therefore chosen to meet the scheduled power exchange with the grid \begin{equation} p_s(k) = \Pgsch{k} - p_l(k).
\label{eq:real_pow_bal} \end{equation} However, $p_s(k)$ cannot assume arbitrary values, being limited by the capacity and capability constraints of the ESS \begin{subequations} \label{eq:det_ess_limit} \begin{align} \underline{p}_s \leq {p}_s(k) \leq \overline{p}_s, \label{eq:det_pow_limit}\\ \underline{e}_s \leq {e}_s(k) \leq \overline{e}_s, \label{eq:det_en_limit} \end{align} \end{subequations} with \begin{subequations} \label{eq:complem_ess_limit} \begin{multline} e_s(k+1) = e_s(k) + \left( (1-\mu_n)p_s^+(k) + (1+\mu_n)p_s^-(k) \right) \Delta t, \\e_s(k_0) = e_s^{k_0},\label{eq:storage_dinamic_det} \end{multline} \begin{equation} [p_s^+(k),p_s^-(k)] \in \mathcal{F}_{d}(p_s(k)). \end{equation} \end{subequations} Here, $\underline{p}_s$ and $\overline{p}_s$ denote the minimum and maximum power output, while $\underline{e}_s$ and $\overline{e}_s$ denote the minimum and maximum stored energy. Finally, $e_s(k)$ is the stored energy at $k$, and $e_s^{k_0}$ is the initial state of charge of the storage. The conversion losses are modeled by $\mu_n \in [0,1]$, together with a discrimination between the two directions of $p_s(k)$, i.e. $p_s^+(k)$ and $p_s^-(k)$. To describe this discrimination, we introduce the set \begin{align*} \mathcal{F}_{d}(p) := \Big\{ [p^+,p^-]^\top \in \mathbb{R}^{2} \,|&\,\, p^- \cdot p^+ = 0, \, p^+ \geq p, \, p^- \leq p, \\& p^+ \geq 0, \, p^- \leq 0 \Big\}. \end{align*} Notice that, given a certain $\Pgsch{k}$, the value of $p_s(k)$ computed using \eqref{eq:real_pow_bal} might violate constraints \eqref{eq:det_ess_limit} for some values of $p_l(k)$. We assume that, when this is the case, the power balance is maintained by letting the power exchange with the utility grid deviate from the DiS. Such deviations are often referred to as \textit{imbalances} \cite{Morales13}, and we denote them by $\Delta p_g(k)$. The total power exchange with the grid at instant $k$ is therefore \begin{equation} p_g(k) = \Pgsch{k} + \Delta p_g(k).
\end{equation} Thus, the active power balance at the second stage is \begin{equation} p_s(k) = \Pgsch{k} + \Delta p_g(k) - p_l(k). \label{eq:real_pow_bal_dev} \end{equation} According to this model, $\Delta p_g(k)$ is also determined at the second stage, once the value of $p_l(k)$ is known. Specifically, $p_s(k)$ and $\Delta p_g(k)$ follow from the optimization \begin{align} \label{eq:delta_pg_opt_problem} \min_{p_s(k),\Delta p_g(k)} \Delta p_g(k) \quad \text{s.t. \,\,} \eqref{eq:det_ess_limit}, \eqref{eq:complem_ess_limit}, \eqref{eq:real_pow_bal_dev}. \end{align} \begin{figure}[t] \vspace{-0.2cm} \centering \includegraphics[width=0.46\textwidth]{figure1_dispatchable_feeder_scheme2} \caption{Schematic diagram of a generic \textit{dispatchable feeder} (modified from \cite{Appino18a}). \label{fig:Scheme} } \vspace{-0.4cm} \end{figure} Recall that $\Pgsch{k}$ is computed at $k_0$, \textit{before} the value of $p_l(k)$ is known. This computation is the first stage of the decision process. At time $k_0$, a probabilistic forecast is the only available information on the inflexible power output. This forecast is described by a random variable, $\rv{P}_l(k)$, whose realization is $p_l(k)$. Thus, the power balance at the \textit{first stage} is \begin{equation} \rv{P}_s(k) = \Pgsch{k} + \rv{\Delta P}_g(k) - \rv{P}_l(k). \label{eq:gen_power_bal} \end{equation} Note that $\rv{\Delta P}_g(k)$ and $\rv{P}_s(k)$ are random variables, as their realizations depend on $p_l(k)$, which is unknown at this stage. As the ESS power output $\rv{P}_s(k)$ is a stochastic quantity, so is the energy stored at time instant $k$, $\rv{E}_s(k)$. In the following, we employ a model with approximated ESS conversion losses to describe the dynamics of $\rv{E}_s(k)$, similar to \cite{Appino18a}.
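Before detailing that model, note that for box-type limits the second-stage problem \eqref{eq:delta_pg_opt_problem} reduces to a simple clipping rule; the following Python sketch illustrates it over a single interval (all names are ours, and reading the objective as minimizing $|\Delta p_g(k)|$ is our assumption):

```python
# Sketch of the second-stage dispatch: given the schedule and the realized
# inflexible power p_l, pick the ESS power p_s closest to the zero-imbalance
# target while respecting the ESS power and energy limits; mu is the loss
# coefficient mu_n of the storage dynamics. All variable names are illustrative.

def second_stage(p_sched, p_l, e_s, p_min, p_max, e_min, e_max, mu, dt):
    desired = p_sched - p_l                            # zero-imbalance ESS power
    # The energy limits induce additional bounds on p_s via the storage dynamics:
    hi = min(p_max, (e_max - e_s) / ((1 - mu) * dt))   # bound on p_s^+
    lo = max(p_min, (e_min - e_s) / ((1 + mu) * dt))   # bound on p_s^-
    p_s = min(max(desired, lo), hi)                    # clip to the feasible box
    dp_g = p_s - desired                               # resulting imbalance
    e_next = e_s + ((1 - mu) * max(p_s, 0.0)
                    + (1 + mu) * min(p_s, 0.0)) * dt   # storage dynamics
    return p_s, dp_g, e_next
```

With generous limits the imbalance is zero; as soon as a limit binds, the residual is pushed onto $\Delta p_g(k)$.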
Specifically, we decouple deterministic and stochastic variables via \begin{subequations} \label{eq:exp_value_notation} \begin{align} \expPl{k} &= \mathbb{E}[\rv{P}_l(k)], \\ \rv{P}_l(k) &= \expPl{k} + \rv{\Delta P}_l(k),\\ \rv{P}_s(k) &= \expPs{k} + \rv{\Delta P}_s(k), \\ \rv{E}_s(k) &= \expEs{k} + \rv{\Delta E}_s(k). \end{align} \end{subequations} Requiring \begin{equation} \label{eq:exp_ess_power} \expPs{k} = \Pgsch{k} - \expPl{k}, \end{equation} the equality \begin{equation} \rv{\Delta P}_s(k) = \rv{\Delta P}_g(k) - \rv{\Delta P}_l(k), \label{eq:deviation_p_bal} \end{equation} follows from \eqref{eq:gen_power_bal}. Relying on this notation, we describe the dynamics of the expected value of $\rv{E}_s(k)$ using \eqref{eq:storage_dinamic_det}, i.e. \begin{subequations} \label{eq:exp_ess_energy} \begin{multline} \expEs{k+1} = \expEs{k} + \left( (1-\mu_n)\exPsp{k} + (1+\mu_n)\exPsm{k} \right) \Delta t, \\ \expEs{k_0} = e_s^{k_0}, \end{multline} \begin{equation} [\exPsp{k},\exPsm{k}] \in \mathcal{F}_{d}(\expPs{k}). \end{equation} \end{subequations} For the stochastic variable $\rv{\Delta E}_s(k)$ we employ an approximated loss-less model\footnote{This choice leads to a tractable stochastic model while introducing a negligible error, see \cite{Appino18a} for details.} ($\mu_n=0$) \begin{align*} \rv{\Delta E}_s(k+1) &= \rv{\Delta E}_s(k) + \rv{\Delta P}_s(k)\Delta t \nonumber \\ &= \rv{\Delta E}_s(k) + \left(\rv{\Delta P}_g(k) - \rv{\Delta P}_l(k)\right)\Delta t \nonumber \\ &= \sum_{i=k_0}^{k} \left(\rv{\Delta P}_g(i) - \rv{\Delta P}_l(i)\right)\Delta t, \end{align*} with initial condition $\rv{\Delta E}_s(k_0) = 0$ (the state of charge at $k=k_0$ is known). Using \begin{align*} \rv{\Delta E}_g(k) = \sum_{i=k_0}^{k-1} \rv{\Delta P}_g(i) \Delta t,\\ \rv{\Delta E}_l(k) = \sum_{i=k_0}^{k-1} \rv{\Delta P}_l(i) \Delta t, \end{align*} we obtain \begin{equation} \rv{\Delta E}_s(k) = \rv{\Delta E}_g(k) - \rv{\Delta E}_l(k) \label{eq:deviation_e_bal}.
\end{equation} Note that $\rv{\Delta P}_l(k)$ and $\rv{\Delta P}_l(j)$ with $k\not= j$ are correlated, i.e. they are \textit{not} independent random variables. Therefore, it is in general difficult to compute $\rv{\Delta E}_l(k)$. In the following, we address this issue in two different ways: we utilize probabilistic forecasts of this variable in Section \ref{sec:security_level}, and we consider scenario forecasting (i.e. ensemble forecasts) in Section \ref{sec:minimal_cost}. Furthermore, the capacity and capability limits of the storage \eqref{eq:det_ess_limit} at the first decision stage are, with a slight abuse of notation, \begin{subequations} \label{eq:stoch_inex_ess_limit} \begin{align} \underline{p}_s \leq \rv{P}_s(k) \leq \overline{p}_s, \label{eq:inex_pow_limit}\\ \underline{e}_s \leq \rv{E}_s(k) \leq \overline{e}_s, \label{eq:inex_en_limit} \end{align} meaning that \eqref{eq:stoch_inex_ess_limit} should hold for each realization of $\rv{P}_s(k)$ and $\rv{E}_s(k)$. \end{subequations} Given \eqref{eq:exp_value_notation}, \eqref{eq:deviation_p_bal}, and \eqref{eq:deviation_e_bal}, the first stage constraint \eqref{eq:stoch_inex_ess_limit} can be described with \begin{subequations} \label{eq:inex_ess_limit_pl} {\begin{gather} \underline{p}_s \leq \Pgsch{k} - \expPl{k} + \rv{\Delta P}_g(k) - \rv{\Delta P}_l(k) \leq \overline{p}_s,\label{eq:inex_pow_limit_pl}\\ \underline{e}_s \leq \expEs{k} + \rv{\Delta E}_g(k) - \rv{\Delta E}_l(k) \leq \overline{e}_s.\label{eq:inex_en_limit_pl} \end{gather}} \end{subequations} \subsection{Scheduling Requirements} The requirements of DiS computation and optimization can be split into two major categories: \begin{itemize} \item \textit{Operational requirements}, i.e. the explicit requirements of the DiS such as peak shaving, load leveling and price-dependent load shifting. \item \textit{Tracking requirements}, i.e. the implicit requirement of tracking the DiS by means of an underlying intra-schedule controller. 
\end{itemize} The presence of these two, possibly conflicting, categories of requirements makes the computation of the DiS particularly challenging. In the following, we discuss how these requirements are reflected in the proposed two-stage decision process. We translate the operational requirements into an appropriate cost function, where a lower cost indicates improved satisfaction of the requirements. As the DiS is deterministic at the first stage, its associated cost is also \textit{deterministic}. Here, we consider a quadratic cost function of the power exchange scheduled via the DiS \begin{align} \label{eq:cost_f} \begin{split} C(\Pgp{k},\Pgm{k}) = &c_{1}^+(k)(\Pgp{k})^2+c_{2}^+(k) \Pgp{k}\\+&c_{1}^-(k)(\Pgm{k})^2+c_{2}^-(k) \Pgm{k}, \end{split} \end{align} where $c_{1}^+(k)$, $c_{1}^-(k)$, $c_{2}^+(k)$, $c_{2}^-(k)$ are time-varying cost coefficients and $\Pgp{k}$ and $\Pgm{k}$ represent the different directions of $\Pgsch{k}$, subject to \begin{equation} \label{eq:Pg_dir} [\Pgp{k},\Pgm{k}] \in \mathcal{F}_{d}(\Pgsch{k}). \end{equation} By an appropriate choice of the cost coefficients, cost function \eqref{eq:cost_f} can be used to achieve load leveling and/or price-based load shifting. The tracking requirements, instead, involve minimizing and/or constraining the imbalances $\Delta p_g(k)$. We remark that, at the first stage, $\Delta p_g(k)$ is unknown and represented by the random variable $\rv{\Delta P}_g(k)$. Thus, the tracking requirements involve dealing with uncertainty. In this paper, we consider two separate tracking requirements: (i) limiting the number of imbalances, and (ii) minimizing the cost of imbalances. We approach these requirements individually, with different techniques: we tackle the first by applying joint chance constraints to the scheduling problem (Section \ref{sec:security_level}), and the second by using scenario-based optimization (Section \ref{sec:minimal_cost}).
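The role of the temporal correlation discussed above, which motivates forecasting the energy deviation $\rv{\Delta E}_l(k)$ directly, can be illustrated with a small Monte Carlo experiment; the AR(1) error model, its parameters, and all names below are illustrative assumptions, not part of the proposed method:

```python
import math, random

# Accumulate temporally correlated power-forecast deviations into energy
# deviations, and compare their spread with what an (incorrect) assumption
# of independence across time steps would predict.

random.seed(0)
K, dt, rho, runs = 48, 0.25, 0.8, 2000   # steps, step length, AR(1) coefficient
final_dev = []
for _ in range(runs):
    dp, de = 0.0, 0.0
    for _ in range(K):
        dp = rho * dp + random.gauss(0.0, 1.0)  # correlated power deviation
        de += dp * dt                           # running energy deviation
    final_dev.append(de)

std_emp = (sum(d * d for d in final_dev) / runs) ** 0.5
sigma_dp = 1.0 / math.sqrt(1 - rho ** 2)        # stationary AR(1) std
std_indep = dt * sigma_dp * math.sqrt(K)        # spread if steps were independent
```

With positively correlated deviations, the empirical spread of the accumulated energy deviation is roughly three times larger than the independence-based prediction, which is why the energy profile has to be forecast (or its distribution described) directly.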
\section{Enforcing a Given Security Level} \label{sec:security_level} Now, we formulate a scheduling problem to minimize the cost of the DiS while considering the requirement of limiting the number of imbalances. Specifically, we consider a DiS that can be tracked with a given probability at each time instant. We refer to this probability, $(1 - \varepsilon)$, as \textit{security level}. Given the model of the system described in the previous section, the absence of deviations corresponds to $\Delta p_g(k)=0$ and, therefore, to $\text{P}[\rv{\Delta P}_g(k)=0]=1$ and $\text{P}[\rv{\Delta E}_g(k)=0]=1$. Consequently, ensuring that the DiS at $k$, $\Pgsch{k}$, is met with at least security level $(1 - \varepsilon)$ is equivalent to \begin{align} &\text{P}[ \mathcal{P}_k \cap \mathcal{E}_k] \geq (1 - \varepsilon),\label{eq:joint_ch_constr_both} \end{align} where the events $\mathcal{P}_k$ and $\mathcal{E}_k$ are related to satisfying the ESS constraints \eqref{eq:inex_ess_limit_pl} without any deviation from the schedule, i.e. \begin{align*} \mathcal{P}_k &= \left \lbrace \underline{p}_s \leq \Pgsch{k} - \expPl{k} - \rv{\Delta P}_l(k) \leq \overline{p}_s \right \rbrace, \nonumber \\ \mathcal{E}_k &= \left \lbrace \underline{e}_s \leq \expEs{k} - \rv{\Delta E}_l(k) \leq \overline{e}_s \right\rbrace.
\end{align*} Instead of tackling constraint \eqref{eq:joint_ch_constr_both} directly, we first formulate and analyze separate joint chance constraints for power and energy \cite{Appino18a}: \begin{subequations} \label{eq:joint_ch_constr_P_and_E} \begin{align} &\text{P}[\mathcal{P}_k ] \geq (1 - \varepsilon_P)\text{,} \label{eq:joint_ch_constr_P}\\ &\text{P}[\mathcal{E}_k] \geq (1 - \varepsilon_E)\text{.} \label{eq:joint_ch_constr_E} \end{align} \end{subequations} Then, we require the DiS to be robust against the worst-case realization of the uncontrolled power output $\rv{P}_l(k)$,\footnote{Throughout all simulations reported in Section \ref{sec:results}, this worst-case choice for $\varepsilon_P$ has not led to infeasibility of the scheduling problem.} i.e. $(1 -\varepsilon_P) \simeq 1$, which implies $\text{P}[\mathcal{P}_k] \simeq 1$ and $\text{P}[\mathcal{P}_k \cup \mathcal{E}_k] \simeq 1$. Thus $ \text{P}[ \mathcal{P}_k \cap \mathcal{E}_k] = \text{P}[\mathcal{P}_k] + \text{P}[\mathcal{E}_k] - \text{P}[\mathcal{P}_k \cup \mathcal{E}_k] \simeq \text{P}[\mathcal{E}_k], $ and $ \text{P}[ \mathcal{P}_k \cap \mathcal{E}_k] \geq (1 - \varepsilon_E), $ meaning that \eqref{eq:joint_ch_constr_both} is respected if \eqref{eq:joint_ch_constr_P_and_E} is satisfied with $(1 -\varepsilon_P) \simeq 1$ and $(1 -\varepsilon_E) \geq (1 -\varepsilon)$. Numerical tractability of the chance constraint \eqref{eq:joint_ch_constr_P} is achieved as in \cite{Appino18a}.
Specifically, rewriting the event $\mathcal{P}_k$ as \begin{align*} \mathcal{P}_k &= \left\lbrace \Pgsch{k} - \overline{p}_s \leq \rv{P}_l(k) \leq \Pgsch{k} - \underline{p}_s \right\rbrace, \end{align*} and given the probabilistic forecast for $\rv{P}_l(k)$ \begin{equation} \text{P} \Big[ \rv{P}_l(k) \in [\underline{p}_{l,\pi^{P}}(k),\overline{p}_{l,\pi^{P}}(k)] \Big] = \pi^{P} = (1 - \varepsilon_P) \simeq 1, \label{eq:interval_Pl} \end{equation} constraint \eqref{eq:joint_ch_constr_P} is satisfied with $(1 - \varepsilon_P) \simeq 1$ if \begin{equation} \Pgsch{k} - \overline{p}_s \leq \underline{p}_{l,\pi^{P}}(k) , \quad \overline{p}_{l,\pi^{P}}(k) \leq \Pgsch{k} - \underline{p}_s, \label{eq:p_costr_bot} \end{equation} i.e. if $\mathcal{P}_k$ is verified for (approximately) the entire support of $\rv{P}_l(k)$. For the energy constraint \eqref{eq:joint_ch_constr_E}, instead, we use a different, improved technique compared to \cite{Appino18a}. Therein, the event $\mathcal{E}_k$ is restated as \begin{align*} \mathcal{E}_k = \{ \expEs{k} - \overline{e}_s \leq \rv{\Delta E}_l(k) \leq \expEs{k} - \underline{e}_s \}, \end{align*} and \eqref{eq:joint_ch_constr_E} is achieved using a percentile approach \begin{align} \label{eq:old_constraint_ref} \expEs{k} - \overline{e}_s \leq \Delta \underline{e}_{l,\pi^{E}}(k) , \quad \Delta \overline{e}_{l,\pi^{E}}(k) \leq \expEs{k} - \underline{e}_s, \end{align} with interval forecasts built around the median \begin{align*} &\text{P} \Big[ \rv{\Delta E}_l(k) \in [\Delta \underline{e}_{l,\pi^{E}}(k),\Delta \overline{e}_{l,\pi^{E}}(k)] \Big] = \pi^{E} = (1 - \varepsilon_E). \end{align*} However, this method presents several limitations. First, a different---possibly larger---interval with the same probability of containing the realizations of $\rv{\Delta E}_l(k)$ might exist that leads to a lower cost of the DiS.
Second, the scheduling problem might be infeasible for large values of $(1 - \varepsilon_E)$. This second issue is particularly relevant in the case of long-term scheduling, as the support of $\rv{\Delta E}_l(k)$, growing with $k$, might become very large in comparison to the available storage capacity. Recalling that \eqref{eq:joint_ch_constr_E} is equivalent to \begin{align} &F_{\rv{\Delta E}_l(k)}\left( \expEs{k} - \underline{e}_s \right) - F_{\rv{\Delta E}_l(k)}\left(\expEs{k} - \overline{e}_s \right)\geq (1 - \varepsilon_E), \label{eq:e_costr_bot} \end{align} we propose in the present paper to use probabilistic forecasts for the CDF of $\rv{\Delta E}_l(k)$, $F_{\rv{\Delta E}_l(k)}(\Delta e_l)$, to alleviate the issues of \eqref{eq:old_constraint_ref}.\footnote{We recall that, given a random variable $\rv{Z} \in \mathrm{L}^2(\Omega, \mu; \mathbb{R})$, the CDF is a function $F_{\rv{Z}}: \mathbb{R} \rightarrow [0,1]$ such that $F_{\rv{Z}}(z):=\text{P}[\rv{Z} \leq z]$. If $F_{\rv{Z}}(z)$ is known, the inequalities $F_{\rv{Z}}(a) - F_{\rv{Z}}(b) \geq (1 - \varepsilon)$, $\overline{z} \geq a$, and $\underline{z} \leq b$ are a deterministic reformulation of the joint chance constraint $\text{P}[\underline{z} \leq \rv{Z} \leq \overline{z}] \geq (1 - \varepsilon)$, \cite{Miller65}. As $F_{\rv{Z}}(z)$ is increasing, all these inequalities are simultaneously satisfied if $F_{\rv{Z}}(\overline{z}) - F_{\rv{Z}}(\underline{z}) \geq (1 - \varepsilon)$ holds. } The proposed constraint formulation is similar, yet not equivalent, to the concept of stochastic robust optimization presented in \cite{King12}. In fact, similar to \cite{King12}, the satisfaction of the joint chance constraints \eqref{eq:joint_ch_constr_P_and_E} is achieved by enforcing problem feasibility for convex compact subsets of the support of each random variable.
However, contrary to \cite{King12}, these subsets are chosen using full information on the distribution of the random variables and do not have to be symmetric w.r.t. the expected value. In this sense, the proposed constraint formulation extends the one based on interval forecasts presented in \cite{Appino18a}, avoiding the use of fixed intervals. Moreover, the reformulation of the energy constraint \eqref{eq:e_costr_bot} makes it possible to overcome the infeasibility problem via constraint softening \cite{Kerrigan00}. Specifically, \eqref{eq:e_costr_bot} can be replaced by \begin{equation} \label{eq:e_costr_bot_soft} F_{\rv{\Delta E}_l(k)}\left(\hat{e}_s(k) - \overline{e}_s \right) - F_{\rv{\Delta E}_l(k)}\left( \hat{e}_s(k) - \underline{e}_s \right) + (1 - \varepsilon_E) \leq \epsilon(k), \end{equation} adding the penalty term $\alpha \cdot \epsilon(k)$ to the cost function of the scheduling problem, introduced later, with a sufficiently large value of $\alpha$ \cite{Kerrigan00}. This technique maximizes the probability of satisfying the energy constraint when it is not possible to guarantee the desired security level, making this approach particularly interesting for robust optimization. To sum up, we propose to compute the DiS by solving \begin{align} \label{eq:pfs_opt_problem} \min_{ \begin{subarray}{c} \{\mathbf{p}_a(k)\}_{k\in \mathcal{K}'}, \\ \{\epsilon(k)\}_{k\in \mathcal{K}'} \end{subarray}} \sum^{k_{s} + N_d + N_f}_{k = k_{s}} &\left( C\left(\Pgsch{k}\right) + \alpha \epsilon(k) \right)\\ \text{s.t. \,\,} & \eqref{eq:exp_ess_power},\eqref{eq:exp_ess_energy}, \eqref{eq:Pg_dir}, \eqref{eq:p_costr_bot}, \eqref{eq:e_costr_bot_soft} \nonumber \end{align} where the cost function $C$ is from \eqref{eq:cost_f} and $\mathcal{K}'= [k_{s}, k_{s}+N_d+N_f] \subset \mathbb{N}$. For each time step $k$, the stacked vector of decision variables is \begin{align*} \bold{p}_a(k) := [\Pgp{k}, \Pgm{k}, \exPsp{k}, \exPsm{k}]^\top \in \mathbb{R}^4.
\end{align*} Observe that Problem \eqref{eq:pfs_opt_problem} is non-convex, that there is no cost associated with the value of $\expEs{k_{s}+N_{d}}$, and that a non-zero value for $\Delta e_g(k_s)$, i.e. a non-zero realization of $\rv{\Delta E}_g(k_s)$, might be included in \eqref{eq:e_costr_bot_soft}. For a detailed description of how one can address these issues and how to choose the extended horizon $N_d + N_f$, we refer the reader to \cite[Remarks 1-3]{Appino18a}. Finally, notice that no restrictive assumption on the distribution of either $\rv{P}_l(k)$ or $\rv{\Delta E}_l(k)$ is required in the reformulation of the joint chance constraints \eqref{eq:joint_ch_constr_P_and_E} into \eqref{eq:p_costr_bot} and \eqref{eq:e_costr_bot_soft}. \section{Minimizing the Cost of Deviations} \label{sec:minimal_cost} In this section, we propose a scheduling formulation alternative to Problem \eqref{eq:pfs_opt_problem}, aiming to minimize the total expected operating cost, including the cost of the DiS and the expected cost of imbalances. This cost can be expressed as \begin{equation} C_{\text{tot}}(\Pgsch{k},\rv{\Delta P}_g(k)) = C(\Pgsch{k}) + \mathbb{E}[C_{\text{dev}}(\rv{\Delta P}_g(k))]. \end{equation} As for the DiS cost, the cost of imbalances $C_{\text{dev}}(\Delta p_g(k))$ might reflect a real monetary cost in a two-stage market, cf. \cite{Morales13}, or model different requirements on the imbalances. In contrast to Section \ref{sec:security_level}, in this case the uncertainty not only affects the constraints but also enters the cost function. Thus, what matters is not merely \textit{whether} an imbalance will occur, but \textit{its consequences} on the final cost. Likewise, it is important to know \textit{when} and \textit{with which magnitude} the imbalance will occur.
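The expectation in $C_{\text{tot}}$ can be approximated by a weighted sum over scenarios, which is the core of the scenario approach. A minimal sketch with hypothetical helper names (`c_dev` plays the role of $C_{\text{dev}}$):

```python
def expected_total_cost(c_dis, weights, deviations, c_dev):
    """Scenario approximation of C_tot: the deterministic DiS cost
    plus the probability-weighted cost of per-scenario imbalances."""
    assert abs(sum(weights) - 1.0) < 1e-9, "scenario weights must sum to one"
    return c_dis + sum(w * c_dev(d) for w, d in zip(weights, deviations))

# Example: imbalances priced at twice the (unit) schedule tariff.
c_dev = lambda dp: 2.0 * abs(dp)
print(expected_total_cost(3.0, [0.5, 0.3, 0.2], [0.0, 1.0, -2.0], c_dev))  # ~= 4.4
```

Each entry of `deviations` stands for one scenario's imbalance at a given time step, weighted by its scenario weight $\omega_i$.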
To this end, given the difficulty of explicitly describing the correlation between $\rv{\Delta P}_l(k)$ at different intervals and given the non-linearity of the ESS model, we adopt a scenario approach similar to \cite{Stai17,Fabietti16}. Each scenario $i$ describes a possible profile of the inflexible power output, i.e. $\{p_l^i(k)\}_{k\in \mathcal{K}}$, and has a certain weight $\omega_i$. Differently from \cite{Stai17,Fabietti16}, however, we generate the scenarios starting from probabilistic forecasts of $\rv{P}_l$, as described in Section \ref{sec:results}. For the sake of concise notation, we introduce \begin{align*} \bold{p}_s^i(k) &:= [p_s^{+,i}(k), p_s^{-,i}(k), \Delta p_g^{+,i}(k),\Delta p_g^{-,i}(k)]^\top \in \mathbb{R}^{4}, \\ \bold{p}_b(k) &:= [p_g^+(k), p_g^-(k), \bold{p}_s^1(k)^\top, \hdots, \bold{p}_s^{N_s}(k)^\top]^\top \in \mathbb{R}^{2+4N_s}, \end{align*} where $N_s$ is the number of scenarios, and the set \begin{align*} \mathcal{F}_{s}\Big(e_s(k)\Big) := \{ p_s(k) \in \mathbb{R} \,|& \, \eqref{eq:det_pow_limit}, \eqref{eq:complem_ess_limit}\text{, and}\\ & \underline{e}_s \leq {e}_s(k+1) \leq \overline{e}_s \text{ hold}\}. \end{align*} The optimization problem based on scenario forecasts is \begin{align} \label{eq:opt_problem} \min_{\{\mathbf{p}_b(k)\}_{k\in \mathcal{K}'}} &\sum^{k_{s} + N_d + N_f}_{k = k_{s}} \left( C\left(\Pgsch{k}\right) + \sum^{N_s}_{i = 1} \omega_i C_{\text{dev}} \left( \Delta p_g^i(k) \right) \right)\\ \text{s.t. \,\, } & p_s^i(k) = \Pgsch{k} + \Delta p_g^i(k) - p_l^i(k) \,\, \forall k \in \mathcal{K}', \nonumber \\ & {p}_s^i(k) \in \mathcal{F}_{s}\Big(e_s(k)\Big) \,\, \forall k \in \mathcal{K}', \nonumber \\ & [\Pgp{k},\Pgm{k}] \in \mathcal{F}_{d}(\Pgsch{k}) \,\, \forall k \in \mathcal{K}', \nonumber \\ & [\Delta p_g^{+,i}(k),\Delta p_g^{-,i}(k)] \in \mathcal{F}_{d}(\Delta p_g^i(k)) \,\, \forall k \in \mathcal{K}'.
\nonumber \end{align} As described in Section \ref{sec:problem_state}, $\{\Pgsch{k}\}_{k \in \mathcal{K}}$ are first-stage decision variables, independent of the realization of the random variables, i.e. of $\{p_l^i(k)\}_{k\in \mathcal{K}}$. On the other hand, the imbalance profile $\{\Delta p_g^i(k)\}_{k\in \mathcal{K}}$ is a sequence of second-stage scenario-based decision variables that depend both on the first-stage decision, $\{\Pgsch{k}\}_{k \in \mathcal{K}}$, and on the inflexible power output of the specific scenario, $\{p_l^i(k)\}_{k\in \mathcal{K}}$. In Problem \eqref{eq:opt_problem}, $\{\Delta p_g^i(k)\}_{k\in \mathcal{K}}$ is optimized for each scenario, with the implicit assumption that, once the DiS is applied, the imbalances can be planned with perfect knowledge of the profile $\{p_l(k)\}_{k\in \mathcal{K}}$. As remarked in \cite{Fabietti16}, this is an approximation, since the imbalances are actually computed with the sole knowledge of the present values of $p_l(k)$ and $e_s(k)$, cf. Problem \eqref{eq:delta_pg_opt_problem}. Extending the horizon of Problem \eqref{eq:delta_pg_opt_problem} using short-term deterministic forecasts, i.e. adding an additional model predictive control level to optimize the imbalances as in \cite{Appino18a}, might reduce the severity of this approximation. We include this additional controller in the following simulations. \section{Simulations and Results} \label{sec:results} \begin{table*}[!t] \small \renewcommand{\arraystretch}{1.1} \caption{Simulation results.
Bold numbers highlight lowest cost.\label{tab:simulation_results}} \vspace{-0.2cm} \begin{center} \begin{tabular}{ l || c | c c | c c c c c c | c c} \hline &&&&&&&&&&& \vspace{-0.3cm}\\ & DFS &\multicolumn{2}{c}{PFS (from \cite{Appino18a})} &&& \multicolumn{2}{c}{PFS} &&& \multicolumn{2}{c}{SFS} \\ \hline &&&&&&&&&&& \vspace{-0.3cm} \\ \hline &&&&&&&&&&& \vspace{-0.3cm} \\ $(1 - \varepsilon_E)$ or $C1$/$C2$ & - &$\phantom{0}0.42$ &$\phantom{0}0.48$ &$\phantom{0}0.42$ &$\phantom{0}0.48$ &$\phantom{0}0.54$ &$\phantom{0}0.60$ &$\phantom{0}0.66$ &$\phantom{0}0.72$ & \textit{C1} & \textit{C2} \\ \hline &&&&&&&&& \vspace{-0.3cm} \\ computation time (s) & $\phantom{0}0.07$ & $\phantom{0}0.14$ & $\phantom{0}0.15$ & $\phantom{0}0.41$ & $\phantom{0}0.43$ & $\phantom{0}0.41$ & $\phantom{0}0.42$ & $\phantom{0}0.43$ & $\phantom{0}0.43$ & $\phantom{0}5.33$ & $\phantom{0}3.81$ \\ \hline &&&&&&&&& \vspace{-0.3cm} \\ $R^{\delta}(\{\Pgsch{k}\}_{k \in \mathcal{K}})$ & $\phantom{0}0.45$ & $\phantom{0}0.70$ & $\phantom{0}0.73$ & $\phantom{0}0.60$ & $\phantom{0}0.68$ & $\phantom{0}0.71$ & $\phantom{0}0.75$ & $\phantom{0}0.75$ & $\phantom{0}0.78$ & $\phantom{0}0.61$ & $\phantom{0}0.71$\\ balancing energy (kWh) & $\phantom{0}5.82$ & $\phantom{0}3.61$ & $\phantom{0}3.47$ & $\phantom{0}4.39$ & $\phantom{0}3.56$ & $\phantom{0}3.10$ & $\phantom{0}2.86$ & $\phantom{0}2.79$ & $\phantom{0}2.66$ & $\phantom{0}4.81$ & $\phantom{0}3.56$\\ \hline &&&&&&&&& \vspace{-0.3cm} \\ cost $\{\Pgsch{k}\}_{k \in \mathcal{K}}$ (\euro) & $\bold{\phantom{0}4.86}$ & $\phantom{0}6.40$ & $\phantom{0}6.73$ & $\phantom{0}5.48$ & $\phantom{0}5.87$ & $\phantom{0}6.29$ & $\phantom{0}6.68$ & $\phantom{0}6.84$ & $\phantom{0}6.96$ & $\phantom{0}5.40$ & $\phantom{0}6.24$\\ cost $\{\Delta p_g(k)\}_{k \in \mathcal{K}}$ \textit{C1} (\euro) & $\phantom{0}4.06$ & $\phantom{0}2.57$ & $\phantom{0}2.49$ & $\phantom{0}3.07$ & $\phantom{0}2.51$ & $\phantom{0}2.19$ & $\phantom{0}2.02$ & $\phantom{0}1.98$ & $\bold{\phantom{0}1.90}$ & 
$\phantom{0}3.41$ & - \\ cost total \textit{C1} (\euro) & $\phantom{0}8.92$ & $\phantom{0}8.97$ & $\phantom{0}9.22$ & $\phantom{0}8.55$ & $\bold{\phantom{0}8.38}$ & $\phantom{0}8.48$ & $\phantom{0}8.70$ & $\phantom{0}8.83$ & $\phantom{0}8.86$ & $\phantom{0}8.81$ & - \\ cost $\{\Delta p_g(k)\}_{k \in \mathcal{K}}$ \textit{C2} (\euro) & $27.77$ & - & - & $19.86$ & $16.56$ & $14.46$ & $13.53$ & $13.43$ & $\bold{13.10}$ & - & $17.51$ \\ cost total \textit{C2} (\euro) & $30.36$ & - & - & $25.34$ & $22.42$ & $20.75$ & $20.21$ & $20.27$ & $\bold{20.06}$ & - & $23.39$ \\ \hline \end{tabular} \end{center} \vspace{-0.6cm} \end{table*} A household provided with a PV generator and a domestic battery is selected as a test case, similar to \cite{Appino18a}. The PV production and load consumption data are retrieved from the freely accessible dataset provided by Ausgrid \cite{Ratnam15a}.\footnote{The dataset offers the time series of the load and PV generation profile of 300 Australian households with installed rooftop PV systems for the time frame of 01/07/10 to 30/06/13. The utilized data refers to household $109$. Notice that we employ for $\{p_l(k)\}_{k\in \mathcal{K}}$ the hourly-averaged profile of the real production/consumption, considering that the storage compensates for the zero-mean intra-hour variability.} The technical specifications of the battery come from the catalog of a commercial producer.\footnote{www.tesla.com/powerwall} Accounting only for the usable capacity, these are: $\underline{e}_s = 0$ kWh, $\overline{e}_s = 13.5$ kWh, $\underline{p}_s = -5$ kW, $\overline{p}_s = 5$ kW, $\mu_n = 5\%$. In the present paper, we consider day-ahead scheduling with a scheduling horizon spanning midnight to midnight. The scheduling horizon is divided into $N_d = 24$ dispatch intervals and the dispatch schedule has to be computed before midday, i.e. $\Delta t = 1\text{h}$, $k_{s} = 12 + k_0$.
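For reference, the storage parameters above can be collected in a small configuration and checked against the capability and capacity limits at each time step; this is an illustrative sketch with our own naming, not part of the simulation code:

```python
# Battery parameters of the test case (usable capacity only).
BATTERY = {"e_min_kwh": 0.0, "e_max_kwh": 13.5,
           "p_min_kw": -5.0, "p_max_kw": 5.0, "mu_n": 0.05}

def within_limits(p_s, e_s_next, b=BATTERY):
    """True if a storage power set-point and the resulting state of
    charge respect both the capability and the capacity limits."""
    return (b["p_min_kw"] <= p_s <= b["p_max_kw"]
            and b["e_min_kwh"] <= e_s_next <= b["e_max_kwh"])
```

For example, a 4 kW set-point at 10 kWh is feasible, while a 6 kW set-point violates the power limit and 14 kWh violates the capacity limit.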
For the sake of simplicity, we use time-invariant coefficients in cost function \eqref{eq:cost_f}: $c_{2}^+ = 0.05 \frac{\text{\euro} \cdot \text{h}}{\text{kW}}$, $c_{2}^-= 0.05 \frac{\text{\euro} \cdot \text{h}}{\text{kW}}$, $c_{1}^+= 0.3 \frac{\text{\euro} \cdot \text{h}}{\text{kW}}$, $c_{1}^-=0.15 \frac{\text{\euro} \cdot \text{h}}{\text{kW}}$. These values represent a pricing policy incentivizing self-consumption and load leveling. The simulations are carried out in MATLAB, employing standard open-source optimization tools developed in the systems and control community to solve the scheduling problems. Specifically, we use CasADi \cite{Andersson13b} with IPOPT \cite{Wachter06}. All the computations have been performed using a PC with an Intel\textsuperscript{\textregistered} Core\textsuperscript{TM} i5-6400 CPU at 2.70 GHz and 8.00 GB RAM. We simulate and compare the effect of three different scheduling techniques: (i) the Deterministic Forecast Scheduling (DFS), where the schedule is computed applying deterministic forecasts, i.e. $\rv{\Delta P}_l \equiv 0$; (ii) the Probabilistic Forecast Scheduling (PFS), cf. Problem \eqref{eq:pfs_opt_problem}; (iii) the Scenario Forecast Scheduling (SFS), cf. Problem \eqref{eq:opt_problem}. The performance of the PFS is assessed using different values for the security level ranging from 0.42 to 0.72. The simulations cover five different weeks in the time frame between 01/02/13 and 30/06/13. To cover the effects of seasonal changes, these weeks are selected from different months. We consider two different pricing policies of the imbalances, $\textit{C1}$ and $\textit{C2}$, to examine the effects of the various scheduling techniques on the final operating cost. Specifically, the imbalance tariff is twice as high as the DiS tariff in \textit{C1} and ten times as high in \textit{C2}. In both cases, power excess and shortage count as purchased power.
Table \ref{tab:simulation_results} provides an overview of the results obtained with the different schemes. To facilitate comparison, we also restate the results reported in \cite{Appino18a}. The required probabilistic forecasts for both power and energy are created using quantile regressions \cite{Fahrmeir13,Koenker05} based on a k-nearest-neighbor data-driven approach \cite{GonzalezOrdiano16}.\footnote{The forecasting models are generated with the open-source MATLAB toolbox SciXMiner \cite{Mikut17}. The data from 01/07/10 to 01/12/12 is utilized for training the model. Notice that all forecasts are based only on historical power time series, since the Ausgrid dataset does not contain weather forecasts.} These quantile regressions predict the quantiles of $\rv{P}_l(k)$ and $\rv{\Delta E}_l(k)$. In the PFS, the quantiles of $\rv{P}_l(k)$ are used to determine the interval $[\underline{p}_{l,\pi^{P}}(k),\overline{p}_{l,\pi^{P}}(k)]$ according to \eqref{eq:interval_Pl}, whereas the quantiles of $\rv{\Delta E}_l(k)$ are used to fit the parameters of two logistic functions whose sum is utilized as a description of $F_{\rv{\Delta E}_l(k)}(\Delta e_l)$.\footnote{The fitting is done using least squares. Extensive numerical studies have shown that the sum of two logistic functions with six parameters $[a_1 ... a_6]$, i.e. $F_{\rv{\Delta E}_l(k)}(\Delta e_l)=\frac{a_1}{1+e^{-a_2(\Delta e_l-a_3)}}+\frac{a_4}{1+e^{-a_5(\Delta e_l-a_6)}}$, is able to reproduce the skewness of the quantiles. Other choices, e.g. hyperbolic tangent, arctangent, or specific algebraic functions, have shown poor results.} While the power quantile regressions take only past generated power data as input, the energy forecasting models receive as input the integrated values of the powers' median regression \cite{Appino18a}. In addition, quantile regressions are also applied in the SFS to generate the various power forecast scenarios.
In this case, the regressions predict the quantiles of the power value an hour into the future. Each scenario is created (i) by randomly selecting one of the predicted quantiles, (ii) by using it as input of the one-hour-ahead quantile regressions, and (iii) by repeating the first two steps for the length of the extended scheduling horizon. Furthermore, we apply the algorithm presented in \cite{Conejo10} to reduce the number of scenarios to $N_s = 30$ and assign a weight $\omega_i$ to each of them. The average computation times required to solve the considered scheduling problems are reported in Table \ref{tab:simulation_results}. One can see that all three variants are solved within fractions of a second (DFS and PFS) or within a few seconds (SFS). Thus, the computational load does not appear to be an implementation barrier for any of the scheduling formulations. To evaluate how well the DiS is met during the operation of the DF, we define the scheduling tracking ratio \begin{equation*} R^{\delta}(\{\Pgsch{k}\}_{k \in \mathcal{K}})=\frac{\#\left\{ k \in \mathcal{K} \mid \left | p_g(k) - \Pgsch{k} \right |\leq \delta \right\}}{\#\mathcal{K}}, \end{equation*} where $\#$ denotes the cardinality of the set and $\delta = 10^{-4}$. The average values of $R^{\delta}(\{\Pgsch{k}\}_{k \in \mathcal{K}})$ resulting from the different scheduling schemes are listed in Table \ref{tab:simulation_results}, as well as the average amount of energy required daily from the grid to compensate for the imbalances. This energy has to be considered as the total daily energy request, regardless of whether it was absorbed from or injected into the grid. The detailed imbalance profiles for three different cases over one week are depicted in Figure \ref{fig:soc_imb_compare}. From both Table \ref{tab:simulation_results} and Figure \ref{fig:soc_imb_compare}, it can be seen that the DFS has the worst tracking performance.
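The tracking ratio $R^{\delta}$ defined above admits a direct implementation; a minimal sketch:

```python
def tracking_ratio(p_g, p_sched, delta=1e-4):
    """Fraction of time steps at which the realized exchange p_g(k)
    matches the dispatch schedule within the tolerance delta."""
    assert len(p_g) == len(p_sched) and p_sched
    hits = sum(1 for p, s in zip(p_g, p_sched) if abs(p - s) <= delta)
    return hits / len(p_sched)

print(tracking_ratio([1.0, 2.0, 3.0, 4.0], [1.0, 2.5, 3.0, 4.0]))  # 0.75
```

With $\delta = 10^{-4}$, only time steps with an essentially exact match count as tracked, so the ratio is a strict measure of schedule adherence.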
The PFS, instead, always achieves the desired outcome of meeting the security level, i.e. $R^{\delta}(\{\Pgsch{k}\}_{k \in \mathcal{K}}) \geq (1-\varepsilon_E)$. In the SFS, the tracking ratio depends on the pricing policy of the imbalances. This is aligned with the motivations behind the SFS, aiming at the best trade-off between the DiS cost and the expected cost of imbalances. In fact, as shown in Table \ref{tab:simulation_results}, better tracking performance is associated with an increase in the cost of the DiS. These opposing tendencies are also visible between the different scheduling techniques. Figure \ref{fig:profile_all} reports the power output profiles over the same week for the different scheduling procedures. The green plot represents the baseline profile $\left\lbrace p_l(k)\right\rbrace_{k\in\mathcal{K}}$, the blue one represents the DiS $\{\Pgsch{k}\}_{k \in \mathcal{K}}$, and the red one the profile $\left\lbrace p_g(k)\right\rbrace_{k\in\mathcal{K}}$ resulting from the actual $\left\lbrace p_l(k)\right\rbrace_{k\in\mathcal{K}}$. While the DFS leads to the DiS with minimum cost, this DiS cannot be tracked as efficiently as the one computed using the PFS. The SFS tries to balance these two aspects. However, while the SFS achieves a lower total cost compared to the DFS, the minimum actual total cost surprisingly follows from applying the PFS with an appropriate security level under both pricing policies \textit{C1} and \textit{C2}. This phenomenon can be explained by considering that the SFS has two limitations \cite{Fabietti16}: it optimizes the problem for a finite set of scenarios and it considers an unrealistic optimization of the imbalances. These limitations do not affect the PFS, where the reserves to compensate for possible imbalances are allocated on the basis of probabilistic forecasts for the energy profile.
In this way, infinitely many possible realizations of $\{p_l(k)\}_{k\in \mathcal{K}}$ are considered, without any assumption of an optimized redistribution of imbalances. The improved allocation of reserves is also visible in Figure \ref{fig:soc_imb_compare}, which describes the State Of Charge (SOC) profiles for the different cases. It can be seen that the PFS leads to a complete exploitation of the storage capacity. However, notice that the PFS does not outperform the SFS in minimizing the total operating cost for all the values of $(1 - \varepsilon_E)$. Furthermore, the optimal value of $(1 - \varepsilon_E)$ to minimize the total cost depends on the pricing policy of the imbalances. Still, a good value for $(1 - \varepsilon_E)$ can be computed by means of simulations. Therefore, the PFS can be efficiently applied to satisfy both tracking requirements: limiting the number of imbalances and minimizing the total operating cost. Finally, we remark that the PFS described in the present paper leads to better performance than the one presented in \cite{Appino18a}, because of its reduced conservativeness (cf. Section \ref{sec:security_level}). In particular: (i) higher security levels can be considered, (ii) the actual tracking is more aligned with the desired security level, and (iii) the total operation cost is reduced. \begin{figure}[t] \centering \includegraphics[width=0.495\textwidth]{Wee3_cost1_comparison} \caption{Comparison of results over a simulated week: state of charge and imbalances. Cost case \textit{C1}, PFS with $(1-\varepsilon) = 0.54$.} \label{fig:soc_imb_compare} \vspace{-0.5cm} \end{figure} \begin{figure}[t] \centering \includegraphics[width=0.495\textwidth]{Wee3_cost1_comparison_profiles3} \label{fig:profile_dfs} \vspace{-0.3cm} \caption{Comparison of results over a simulated week: power profiles.
Cost case \textit{C1}, PFS with $(1-\varepsilon) = 0.54$.} \label{fig:profile_all} \vspace{-0.55cm} \end{figure} \section{Conclusions} \label{sec:conclusion} The present paper investigated different techniques to compute an efficient schedule of the power exchange between a \textit{dispatchable feeder} and the utility grid. In particular, we proposed a formulation of the scheduling problem that exploits probabilistic forecasts in terms of cumulative distribution functions. The result is a \textit{dispatch schedule} that can be tracked in operation with at least a given probability, called the \textit{security level}. We compared the proposed method to scheduling based on deterministic forecasts and to scenario-based optimization. The simulation results show that the proposed method achieves an efficient computation of a dispatch schedule that not only ensures the desired security level, but also leads to a lower total operational cost than a scenario-based scheduling designed for total cost minimization. Future work will consider the scheduling of populations of storages and their coupling through distribution networks, the extension of the proposed approach to deferrable loads, and a proof-of-concept implementation. \bibliographystyle{IEEEtran}
\section{Introduction} Recently, more and more attention from both academia and industry is being paid to building non-task-oriented chatbots that can naturally converse with humans on open-domain topics. Existing approaches can be categorized into generation-based methods \cite{DBLP:conf/acl/ShangLL15,vinyals2015neural,serban2015building,sordoni2015neural,xing2017topic,serban2017hierarchical, xing2017hierarchical} which synthesize a response with natural language generation techniques, and retrieval-based methods \cite{hu2014convolutional,lowe2015ubuntu,DBLP:conf/sigir/YanSW16,zhou2016multi,wu2017sequential} which select a response from a pre-built index. In this work, we study response selection for retrieval-based chatbots, not only because retrieval-based methods can return fluent and informative responses, but also because they have been successfully applied to many real products such as the social-bot XiaoIce from Microsoft \cite{shum2018eliza} and the E-commerce assistant AliMe Assist from Alibaba Group \cite{li2017alime}. A key step in response selection is measuring the matching degree between a response candidate and an input which is either a single message \cite{hu2014convolutional} or a conversational context consisting of multiple utterances \cite{wu2017sequential}. While existing research focuses on how to define a matching model with neural networks, little attention has been paid to how to learn such a model when few labeled data are available. In practice, because human labeling is expensive and exhausting, one cannot have large-scale labeled data for model training. Thus, a common practice is to transform the matching problem into a classification problem with human responses as positive examples and randomly sampled ones as negative examples.
This strategy, however, oversimplifies the learning problem, as most of the randomly sampled responses are either far from the semantics of the messages or the contexts, or they are false negatives which pollute the training data as noise. As a result, there often exists a significant gap between the performance of a model in training and the same model in practice \cite{wang2015syntax,wu2017sequential}.\footnote{The model performs well on randomly sampled data, but badly on human labeled data.} We propose a new method that can effectively leverage unlabeled data for learning matching models. To simulate the real scenario of a retrieval-based chatbot, we construct an unlabeled data set by retrieving response candidates from an index. Then, we employ a weak annotator to provide matching signals for the unlabeled input-response pairs, and leverage the signals to supervise the learning of matching models. The weak annotator is pre-trained from large-scale human-human conversations without any annotations, and thus a Seq2Seq model becomes a natural choice. Our approach is compatible with any matching model, and falls into a teacher-student framework \cite{hinton2015distilling} where the Seq2Seq model transfers the knowledge from human-human conversations to the learning process of the matching models. Broadly speaking, both \cite{hinton2015distilling} and our work let a neural network supervise the learning of another network. An advantage of our method is that it turns the hard zero-one labels in the existing learning paradigm into soft (weak) matching scores. Hence, the model can learn a large margin between a true response and a true negative example, while keeping the semantic distance between a true response and a false negative example short. Furthermore, due to the simulation of the real scenario, harder examples are seen in the training phase, which makes the model more robust at test time.
We conduct experiments on two public data sets, and experimental results on both data sets indicate that models learned with our method can significantly outperform their counterparts learned with the random sampling strategy. Our contributions include: (1) proposal of a new method that can leverage unlabeled data to learn matching models for retrieval-based chatbots; and (2) empirical verification of the effectiveness of the method on public data sets. \section{Approach} \subsection{The Existing Learning Approach} Given a data set $\mathcal{D} = \{x_i,(y_{i,1},\ldots, y_{i,n})\}_{i=1}^N$ with $x_i$ a message or a conversational context and $y_{i,j}$ a response candidate of $x_i$, we aim to learn a matching model $\mathcal{M}(\cdot, \cdot)$ from $\mathcal{D}$. Thus, for any new pair $(x,y)$, $\mathcal{M}(x, y)$ measures the matching degree between $x$ and $y$. To obtain a matching model, one has to deal with two problems: (1) how to define $\mathcal{M}(\cdot, \cdot)$; and (2) how to perform learning. Existing work focuses on Problem (1) where state-of-the-art methods include dual LSTM \cite{lowe2015ubuntu}, Multi-View LSTM \cite{zhou2016multi}, CNN \cite{DBLP:conf/sigir/YanSW16}, and Sequential Matching Network \cite{wu2017sequential}, but adopts a simple strategy for Problem (2): $\forall x_i$, a human response is designated as $y_{i,1}$ with a label $1$, and some randomly sampled responses are treated as $(y_{i,2},\ldots,y_{i,n})$ with labels $0$. $\mathcal{M}(\cdot,\cdot)$ is then learned by maximizing the following objective: \begin{equation}\label{oriobj} \small \resizebox{1\hsize}{!}{$ \sum_{i=1}^{N} \sum_{j=1}^n \left[r_{i,j} \log(\mathcal{M}(x_i,y_{i,j})) + (1-r_{i,j})\log(1-\mathcal{M}(x_i,y_{i,j}))\right],$} \end{equation} where $r_{i,j}\in \{0,1\}$ is a label. 
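As a concrete illustration, the objective above can be written out in a few lines of plain Python; the scores and labels below are hypothetical stand-ins for $\mathcal{M}(x_i,y_{i,j})$ and $r_{i,j}$, not values from our experiments.

```python
import math

def bce_objective(match_scores, labels):
    """Sum of log-likelihood terms with hard 0/1 labels.

    match_scores[i][j] stands in for M(x_i, y_{i,j}) in (0, 1);
    labels[i][j] stands in for r_{i,j}.
    """
    total = 0.0
    for scores_i, labels_i in zip(match_scores, labels):
        for m, r in zip(scores_i, labels_i):
            total += r * math.log(m) + (1 - r) * math.log(1 - m)
    return total

# Toy example: one input with a human response (label 1) and two
# randomly sampled responses (label 0).
obj = bce_objective([[0.9, 0.2, 0.4]], [[1, 0, 0]])
```

In practice this sum is maximized by gradient-based optimization over the parameters of $\mathcal{M}(\cdot,\cdot)$.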
While matching accuracy can be improved by carefully designing $\mathcal{M}(\cdot,\cdot)$ \cite{wu2017sequential}, the bottleneck becomes the learning approach, which suffers from obvious problems: most of the randomly sampled $y_{i,j}$ are semantically far from $x_i$, which may cause an undesired decision boundary at the end of optimization; some $y_{i,j}$ are false negatives. As hard zero-one labels are adopted in Equation (\ref{oriobj}), these false negatives may mislead the learning algorithm. The problems remind us that besides good architectures of matching models, we also need a good approach to learn such models from data. \subsection{A New Learning Method} As human labeling at the scale required by complicated neural networks is infeasible, we propose a new method that can leverage unlabeled data to learn a matching model. Specifically, instead of random sampling, we construct $\mathcal{D}$ by retrieving $(y_{i,2},\ldots,y_{i,n})$ from an index ($y_{i,1}$ is the human response of $x_i$). By this means, some $y_{i,j}$ are true positives, and some are negatives but semantically close to $x_i$. After that, we employ a weak annotator $G(\cdot,\cdot)$ to indicate the matching degree of every $(x_i, y_{i,j})$ in $\mathcal{D}$ as weak supervision signals. Let $s_{i,j} = G(x_i,y_{i,j})$; the learning approach can then be formulated as: \begin{equation} \small \underset{\mathcal{M}(\cdot,\cdot)}{\arg\min} \sum_{i=1}^N \sum_{j=1}^n \max(0, \mathcal{M}(x_i,y_{i,j}) - \mathcal{M}(x_i,y_{i,1}) + s'_{i,j}), \label{loss} \end{equation} where $s'_{i,j}$ is a normalized weak signal defined as $\max(0,\frac{s_{i,j}}{s_{i,1}}-1)$. The normalization here eliminates bias from different $x_{i}$. Objective (\ref{loss}) encourages a large margin between the matching of an input and its human response and the matching of the input and a negative response judged by $G(\cdot,\cdot)$ (as will be seen later, $\frac{s_{i,j}}{s_{i,1}}>1$ for such responses).
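A minimal sketch of this objective for a single input follows; the match scores and annotator scores are hypothetical stand-ins for $\mathcal{M}(x_i,y_{i,j})$ and $s_{i,j}$.

```python
def weak_margin_loss(match_scores, weak_scores):
    """Hinge loss with weak-supervision margins for one input x_i.

    match_scores[j] stands in for M(x_i, y_{i,j}); weak_scores[j] for
    s_{i,j}, with index 0 the human response y_{i,1}.
    """
    loss = 0.0
    for j in range(len(match_scores)):
        # Normalized weak signal s'_{i,j} = max(0, s_{i,j}/s_{i,1} - 1);
        # the s are log-likelihoods, so worse candidates give ratios > 1.
        margin = max(0.0, weak_scores[j] / weak_scores[0] - 1.0)
        loss += max(0.0, match_scores[j] - match_scores[0] + margin)
    return loss

# A candidate judged poor by the annotator (more negative log-likelihood)
# must be separated from the human response by a larger margin.
loss = weak_margin_loss([0.9, 0.5, 0.3], [-5.0, -10.0, -20.0])
```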
The learning approach simulates how we build a matching model in a retrieval-based chatbot: given $\{x_i\}$, some response candidates are first retrieved from an index. Then human annotators are hired to judge the matching degree of each pair. Finally, both the data and the human labels are fed to an optimization program for model training. Here, we replace the expensive human labels with cheap judgments from $G(\cdot,\cdot)$. We define $G(\cdot,\cdot)$ as a sequence-to-sequence architecture \cite{vinyals2015neural} with an attention mechanism \cite{bahdanau2014neural}, and pre-train it with large amounts of human-human conversation data. The Seq2Seq model can capture the semantic correspondence between an input and a response, and then transfer the knowledge to the learning of a matching model in the optimization of (\ref{loss}). $s_{i,j}$ is then defined as the log-likelihood of generating $y_{i,j}$ from $x_i$: \begin{equation} \small s_{i,j} = \sum_{k} \log[p(w_{y_{i,j}, k} | x_i, w_{y_{i,j}, l < k})], \end{equation} where $w_{y_{i,j}, k}$ is the $k$-th word of $y_{i,j}$ and $w_{y_{i,j}, l < k}$ is the word sequence before $w_{y_{i,j}, k}$. Since negative examples are retrieved by a search engine, the oversimplification problem of the negative sampling approach can be partially mitigated. We leverage a weak annotator to assign a score to each example to distinguish false negative examples from true negative examples. Equation (\ref{loss}) turns the hard zero-one labels in Equation (\ref{oriobj}) into soft matching degrees, and thus our method encourages the model to be more confident in classifying a response with a high $s'_{i,j}$ as a negative one. In this way, we avoid treating false negative examples and true negative examples equally during training, and update the model in a correct direction.
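The weak score is simply a sum of token-level log-probabilities read off the decoder's softmax; a sketch with hypothetical probabilities:

```python
import math

def weak_score(token_probs):
    """s_{i,j}: sum of the log-probabilities a Seq2Seq model assigns to
    each word of y_{i,j} given x_i and the preceding response words.

    token_probs[k] stands in for p(w_{y_{i,j},k} | x_i, w_{y_{i,j},l<k}).
    """
    return sum(math.log(p) for p in token_probs)

# A likely response receives a higher (less negative) score than an
# unlikely one.
s_good = weak_score([0.5, 0.4, 0.6])
s_bad = weak_score([0.1, 0.05, 0.2])
```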
It is noteworthy that although our approach also involves an interaction between a generator and a discriminator, it is different from GANs \cite{goodfellow2014generative} in principle. GANs try to learn a better generator via an adversarial process, while our approach aims to improve the discriminator with supervision from the generator, which also differentiates it from the recent work on transferring knowledge from a discriminator to a generative visual dialog model \cite{lu2017best}. Our approach is also different from the semi-supervised approaches in the teacher-student framework \cite{dehghani2017fidelity,dehghani2017avoiding}, as there are no labeled data in our learning. \section{Experiment} We conduct experiments on two public data sets: the STC data set \cite{wang2013dataset} for single-turn response selection and the Douban Conversation Corpus \cite{wu2017sequential} for multi-turn response selection. Note that we do not test the proposed approach on the Ubuntu Corpus \cite{lowe2015ubuntu}, because both the training and test data in that corpus are constructed by random sampling. \subsection{Implementation Details} We implement our approach with TensorFlow. In both experiments, the same Seq2Seq model is used, trained with $3.3$ million input-response pairs extracted from the training set of the Douban data. Each input is a concatenation of consecutive utterances in a context, and the response is the next turn ($\{u_{<i}\},u_i$). We set the vocabulary size as $30,000$, the hidden vector size as $1024$, and the embedding size as $620$. Optimization is conducted with stochastic gradient descent \cite{bottou2010large}, and is terminated when perplexity on a validation set ($170$k pairs) does not decrease in $3$ consecutive epochs. In the optimization of Objective (\ref{loss}), we initialize $\mathcal{M}(\cdot,\cdot)$ with a model trained under Objective (\ref{oriobj}) with the (random) negative sampling strategy, and fix word embeddings throughout training.
This can stabilize the learning process. The learning rate is fixed as $0.1$. \subsection{Single-turn Response Selection} \textbf{Experiment settings}: in the STC (short for Short Text Conversation) data set, the task is to select a proper response for a post in Weibo\footnote{\url{http://weibo.sina.com}}. The training set contains $4.8$ million post-response (true response) pairs. The test set consists of $422$ posts, each associated with around $30$ responses labeled by human annotators as ``good'' or ``bad''. In total, there are $12,402$ labeled pairs in the test data. Following \cite{wang2013dataset, wang2015syntax}, we combine the score from a matching model with TF-IDF based cosine similarity using RankSVM, whose parameters are chosen by $5$-fold cross validation. Precision at position 1 (P@1) is employed as an evaluation metric. In addition to the models compared on this data in the existing literature, we also implement dual LSTM \cite{lowe2015ubuntu} as a baseline. As case studies, we learn a dual LSTM and a CNN \cite{hu2014convolutional} with the proposed approach, and denote them as LSTM+WS (Weak Supervision) and CNN+WS, respectively. When constructing $\mathcal{D}$, we build an index with the training data using Lucene\footnote{\url{https://lucenenet.apache.org/}} and retrieve $9$ candidates (i.e., $\{y_{i,2},\ldots,y_{i,n}\}$) for each post with the built-in retrieval algorithm of the index. We form a validation set by randomly sampling $10$ thousand posts associated with the responses from $\mathcal{D}$ (the human response is positive and the others are treated as negative).
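For reference, the ranking metrics used in our experiments (P@1 here, and MAP and MRR for the Douban experiments) can be sketched as follows; each input contributes a list of binary relevance labels sorted by model score, with position 0 the top-ranked candidate.

```python
def precision_at_1(ranked_labels):
    """P@1: fraction of inputs whose top-ranked candidate is good."""
    return sum(labels[0] for labels in ranked_labels) / len(ranked_labels)

def mean_reciprocal_rank(ranked_labels):
    """MRR: mean of 1/rank of the first good candidate per input."""
    total = 0.0
    for labels in ranked_labels:
        for rank, r in enumerate(labels, start=1):
            if r:
                total += 1.0 / rank
                break
    return total / len(ranked_labels)

def average_precision(labels):
    """AP for one input: precision averaged over good candidates."""
    hits, score = 0, 0.0
    for rank, r in enumerate(labels, start=1):
        if r:
            hits += 1
            score += hits / rank
    return score / hits if hits else 0.0

def mean_average_precision(ranked_labels):
    """MAP: mean of the per-input average precisions."""
    return sum(average_precision(l) for l in ranked_labels) / len(ranked_labels)
```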
\begin{table}[h] \small \centering \begin{tabular}{l|c} \noalign{\hrule height 1pt} & P@1 \\ \hline TFIDF \cite{wang2013dataset} & 0.574\\ +Translation \cite{wang2013dataset} & 0.587\\ +WordEmbedding & 0.579\\ +DeepMatch$_{topic}$ \cite{lu2013deep} & 0.587 \\ +DeepMatch$_{tree}$ \cite{wang2015syntax} & 0.608\\ \hline +LSTM \cite{lowe2015ubuntu} & 0.592 \\ +LSTM+WS & 0.616 \\ \hline +CNN \cite{hu2014convolutional} & 0.585\\ +CNN+WS & 0.604\\ \noalign{\hrule height 1pt} \end{tabular} \caption{Results on STC \label{exp:single}} \end{table} \textbf{Results}: Table \ref{exp:single} reports the results. We can see that CNN and LSTM are consistently improved when learned with the proposed approach, and the improvements over the models learned with random sampling are statistically significant (t-test with $p$-value $< 0.01$). LSTM+WS even surpasses the best-performing model, DeepMatch$_{tree}$, reported on this data. These results indicate the usefulness of the proposed approach in practice. One can expect improvements to models like DeepMatch$_{tree}$ with the new learning method. We leave the verification as future work. \subsection{Multi-turn Response Selection} \textbf{Experiment settings}: the Douban Conversation Corpus contains $0.5$ million context-response (true response) pairs for training and $1,000$ contexts for test. In the test set, every context has $10$ response candidates, and each response has a label ``good'' or ``bad'' judged by human annotators. Mean average precision (MAP) \cite{baeza1999modern}, mean reciprocal rank (MRR) \cite{voorhees1999trec}, and precision at position 1 (P@1) are employed as evaluation metrics. We copy the numbers reported in \cite{wu2017sequential} for the baseline models, and learn LSTM, Multi-View, and SMN with the proposed approach. We build an index with the training data, and retrieve $9$ candidates with the method in \cite{wu2017sequential} for each context when constructing $\mathcal{D}$.
$10$ thousand pairs are sampled from $\mathcal{D}$ as a validation set. \textbf{Results}: Table \ref{exp:multi} reports the results. Consistent with the results on the STC data, every model (the +WS rows) is improved by the new learning approach, and the improvements are statistically significant (t-test with $p$-value $< 0.01$). \begin{table}[h] \small \centering \begin{tabular}{l|c|c|c} \noalign{\hrule height 1pt} &MAP&MRR& P@1 \\ \hline TFIDF & 0.331 &0.359 &0.180\\ RNN & 0.390 &0.422 &0.208\\ CNN & 0.417 &0.440 &0.226\\ BiLSTM &0.479&0.514&0.313\\ DL2R \cite{DBLP:conf/sigir/YanSW16} &0.488&0.527&0.330 \\ \hline LSTM \cite{lowe2015ubuntu} & 0.485 & 0.527 &0.320 \\ LSTM+WS & 0.519 & 0.559 &0.359 \\ \hline Multi-View \cite{zhou2016multi} &0.505&0.543&0.342 \\ Multi-View+WS &0.534&0.575&0.378 \\ \hline SMN \cite{wu2017sequential} &0.526&0.571&0.393\\ SMN+WS &0.565&0.609&0.421\\ \noalign{\hrule height 1pt} \end{tabular} \caption{Results on Douban Conversation Corpus \label{exp:multi}} \end{table} \subsection{Discussion} \textbf{Ablation studies}: we first replace the weak supervision $s_{i,j}'$ in Equation (\ref{loss}) with a constant $\epsilon$ selected from $\{0.1,0.2, \ldots, 0.9\}$ on the validation set, and denote the models as model+const. Then, we keep everything the same as our approach but replace $\mathcal{D}$ with a set constructed by random sampling, denoted as model+WSrand. Table \ref{exp:abl} reports the results. We can conclude that both the weak supervision and the strategy of training data construction are important to the success of the proposed learning approach. Training data construction plays a more crucial role, because it brings into learning more true positives and more negatives at varying semantic distances from the positives.
\begin{table}[t] \small \centering \begin{tabular}{l|c|c|c|c} \noalign{\hrule height 1pt} & \multicolumn{1}{c|}{STC} & \multicolumn{3}{c}{Douban}\\ \hline & P@1 &MAP&MRR& P@1 \\ \hline CNN+WSrand &0.590&-&-&- \\ CNN+const &0.598&-&-&- \\ CNN+WS &0.604&-&-&- \\ \hline LSTM+WSrand &0.598& 0.501& 0.532&0.323 \\ LSTM+const &0.607 & 0.510 & 0.545 &0.331 \\ LSTM+WS&0.616 & 0.519 & 0.559 &0.359 \\ \hline Multi-View+WSrand &-&0.515&0.549&0.357 \\ Multi-View+const &- &0.528&0.564&0.370 \\ Multi-View+WS&- &0.534&0.575&0.378 \\ \hline SMN+WSrand &- & 0.536 & 0.574 &0.377 \\ SMN+const &- &0.558&0.603&0.417\\ SMN+WS &-&0.565&0.609&0.421\\ \noalign{\hrule height 1pt} \end{tabular} \caption{Ablation results. \label{exp:abl}} \end{table} \textbf{Does updating the Seq2Seq model help?} It is well known that Seq2Seq models suffer from the ``safe response'' \cite{li2015diversity} problem, which may bias the weak supervision signals to high-frequency responses. Therefore, we attempt to iteratively optimize the Seq2Seq model and the matching model and check if the matching model can be further improved. Specifically, we update the Seq2Seq model every $20$ mini-batches with the policy-based reinforcement learning approach proposed in \cite{li2016deep}. The reward is defined as the matching score of a context and a response given by the matching model. Unfortunately, we do not observe significant improvement in the matching model. The result is attributed to two factors: (1) it is difficult to significantly improve the Seq2Seq model with a policy gradient based method; and (2) eliminating ``safe responses'' for the Seq2Seq model cannot help a matching model to learn a better decision boundary. \textbf{How the number of response candidates affects learning}: we vary the number of $\{y_{i,j}\}_{j=1}^n$ in $\mathcal{D}$ within $\{2,5,10,20\}$ and study how the hyper-parameter influences learning. We experiment with LSTM on the STC data and SMN on the Douban data.
Table \ref{exp:instance_number} reports the results. We can see that as the number of candidates increases, the performance of the learned models improves. Even with $2$ candidates (one from a human and the other from retrieval), our approach can still improve the performance of matching models. \begin{table}[h] \small \centering \begin{tabular}{l|c|c|c|c} \noalign{\hrule height 1pt} &LSTM$_2$ &LSTM$_5$&LSTM$_{10}$& LSTM$_{20}$ \\ \hline P@1&0.603&0.608&0.615&0.616 \\ \hline &SMN$_2$ &SMN$_5$&SMN$_{10}$& SMN$_{20}$ \\ \hline MAP&0.542&0.556&0.565&0.567\\\hline MRR&0.588&0.594&0.609&0.609\\\hline P@1&0.408&0.412&0.421&0.423\\\hline \noalign{\hrule height 1pt} \end{tabular} \caption{The effect of instance number \label{exp:instance_number}} \end{table} \section{Conclusion and Future Work} Previous studies focus on architecture design for retrieval-based chatbots, but neglect the problems brought by random negative sampling in the learning process. In this paper, we propose leveraging a Seq2Seq model as a weak annotator on unlabeled data to learn a matching model for response selection. By this means, we can mine hard instances for the matching model and give them scores with a weak annotator. Experimental results on public data sets verify the effectiveness of the new learning approach. In the future, we will investigate how to remove bias from the weak supervisors, and further improve the matching model performance with a semi-supervised approach. \section*{Acknowledgment} Yu Wu is supported by a Microsoft Fellowship Scholarship and an AdeptMind Scholarship. This work is supported by the National Natural Science Foundation of China (Grant Nos. 61672081, U1636211, 61370126) and the Beijing Advanced Innovation Center for Imaging Technology (No. BAICIT-2016001).
\section{Introduction} Since the formulation in 1987 of the Lugiato-Lefever (LL) model describing light propagation in nonlinear optical Kerr cavities \cite{lugiato_spatial_1987}, the existence and origin of spatially extended patterned solutions have been widely studied in both temporal and spatial systems \cite{Firth_patterns1,Firth_patterns2,Scroggie_Tlidi,Tlidi_96, Haelterman,Gomila_hexa1}. In the LL model, it was shown that patterns arise through a Turing instability, usually referred to as a modulational instability (MI) in the optics context \cite{Kapral,Turing,Castets,Cross}. In this type of instability, a homogeneous steady state (HSS) becomes unstable to perturbations with a given wavelength, which then develop into an ordered modulated structure: a {\it pattern}. In recent years, dissipative structures arising in the one-dimensional LL model have been studied extensively because of their intimate connection to frequency combs in microresonators driven by a continuous wave laser \cite{Haelterman,coen_modeling_2013,chembo_spatiotemporal_2013}. Such frequency combs correspond to the frequency spectrum of localized or extended light patterns that circulate inside the cavity \cite{leo_nature,herr_universal_2012,Leo_OE_2013,Parra-Rivas_PRA_KFCs,godey_stability_2014}, and can be used for a wide variety of applications \cite{kippenberg_microresonator}. In this work, we study the stability and bifurcation structure of extended patterns in the LL model, \begin{equation}\label{LLE} \partial_tA=-(1+i\theta)A+i\nu\partial_x^2A+i|A|^2A+\rho, \end{equation} where $\rho$ and $\theta$ are real control parameters representing normalized energy injection and frequency detuning, respectively. We focus here on the anomalous group velocity dispersion (GVD) regime and therefore set $\nu=1$ throughout this work. We study patterns with the critical wave number $k_c$ introduced below, originating from the modulational instability.
For the parameter values for which the patterns are subcritical, this bifurcation also leads to the formation of localized structures. For a detailed study of the bifurcation structure of such localized states in the LL model, we refer to \cite{Parra_Rivas_P1}. This paper is organized as follows. In Section~\ref{sec:1}, we perform the linear stability analysis of the HSS solution with respect to spatially periodic perturbations. This not only reveals the modulational instability, but more generally indicates which perturbation wave numbers lead to instabilities and pattern formation. Next, in Section~\ref{sec:2}, we show how analytical expressions for weakly nonlinear pattern solutions can be found near certain bifurcations. Later, in Section~\ref{sec:3}, we numerically track those analytical solutions to values of the pump parameter $\rho$ away from those bifurcation points, thus revealing the bifurcation structure of the patterns for a fixed value of the detuning. In Section~\ref{sec:4} we study how this bifurcation structure changes as the parameter space defined by the cavity detuning $\theta$ and the pump $\rho$ is traversed, and present phase diagrams showing parameter regimes with distinct pattern behavior. In Section ~\ref{sec:5} a linear stability analysis of the pattern solutions is performed, and the different secondary instabilities that these states undergo are discussed. Finally, in Section~\ref{sec:6} we give some concluding remarks. \section{Linear stability analysis of the homogeneous steady states} \label{sec:1} The HSS solutions $A_0$ can be found by solving the classic cubic equation of dispersive optical bistability, namely \begin{equation}\label{HSS} I_0^3-2\theta I_0^2+(1+\theta^2)I_0=\rho^2, \end{equation} where $I_0\equiv|A_0|^2$. 
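As a numerical check (not part of the analysis), the real positive roots of Eq.~(\ref{HSS}) can be computed with NumPy; the function name is ours:

```python
import numpy as np

def hss_intensities(theta, rho):
    """Real positive roots I_0 of I^3 - 2*theta*I^2 + (1+theta^2)*I = rho^2.
    Returns one intensity in the monostable regime (theta < sqrt(3)) and
    up to three coexisting intensities in the bistable regime."""
    roots = np.roots([1.0, -2.0 * theta, 1.0 + theta ** 2, -rho ** 2])
    return sorted(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
```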
The solutions in real variables ($U_0 =$ Re$[A_0], V_0 =$ Im$[A_0]$) are given by \begin{equation}\label{hom_real} \left[\begin{array}{c} U_0 \\ V_0\end{array}\right]=\left[\begin{array}{c} \displaystyle\frac{\rho}{1+(I_0-\theta)^2} \\ \displaystyle\frac{(I_0-\theta)\rho}{1+(I_0-\theta)^2}\end{array}\right]. \end{equation} For $\theta<\sqrt{3}$, Eq.~(\ref{HSS}) is single-valued and hence the system is monostable. In contrast, for $\theta>\sqrt{3}$, Eq.~(\ref{HSS}) is triple-valued. The transition between the three different solutions occurs via a pair of saddle-node bifurcations SN$_{b}$ and SN$_{t}$ located at \begin{equation} I_{t,b}\equiv|A_{t,b}|^2=\frac{2\theta}{3}\pm\frac{1}{3}\sqrt{\theta^2-3}, \end{equation} and these arise from a cusp or hysteresis bifurcation at $\theta=\sqrt{3}$. In what follows, we denote the bottom solution branch (from $I_0=0$ to $I_b$) by $A_0^b$, the middle branch between $I_b$ and $I_t$ by $A_0^m$, and the top branch by $A_0^t$ ($I_0>I_t$). A linear stability analysis of the HSS solution with respect to spatially periodic perturbations of the form \begin{equation} \left[\begin{array}{c} U\\V \end{array}\right]=\left[\begin{array}{c} U_0\\V_0 \end{array}\right]+\epsilon\left[\begin{array}{c} u_1(x,t)\\v_1(x,t) \end{array}\right]+\mathcal{O}(\epsilon^2), \end{equation} where $|\epsilon|\ll1$ and \begin{equation}\label{perturbations} \left[\begin{array}{c} u_1\\v_1 \end{array}\right] =\left[\begin{array}{c} a_k\\b_k \end{array}\right]e^{ikx+\Omega t}+c.c., \end{equation} leads to the dispersion relation \begin{equation} \Omega(k)=-1\pm\sqrt{4I_0\theta-3I_0^2-\theta^2+(4I_0-2\theta) k^2- k^4}. \end{equation} Here $\Omega(k)$ is the linear growth rate of a perturbation with wave number $k$.
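The dispersion relation can be evaluated numerically; a sketch (complex arithmetic handles a negative radicand, and only the real part of $\Omega$ matters for growth):

```python
import numpy as np

def growth_rate(k, I0, theta):
    """Re Omega(k) for the HSS of intensity I0, taking the + branch of
    the dispersion relation. The radicand may be negative, so the square
    root is taken in the complex plane and the real part returned."""
    disc = 4*I0*theta - 3*I0**2 - theta**2 + (4*I0 - 2*theta)*k**2 - k**4
    return (-1 + np.sqrt(disc + 0j)).real

# At the modulational instability (I0 = 1, k = k_c = sqrt(2 - theta))
# the growth rate vanishes; just above it, the mode k_u grows.
omega_mi = growth_rate(np.sqrt(2 - 1.5), 1.0, 1.5)
omega_above = growth_rate(np.sqrt(2*1.1 - 1.5), 1.1, 1.5)
```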
In the linear approximation, the superposition principle applies and therefore any pattern solution of the problem can be written as the linear combination \begin{equation} \left[\begin{array}{c} u_1\\v_1 \end{array}\right]_{(x,t)}=\displaystyle\sum_{k}\left[\begin{array}{c} a_k\\b_k \end{array}\right]e^{ikx+\Omega t}+c.c., \end{equation} where the mode amplitudes $a_k$, $b_k$ depend on the parameters $\theta$ and $\rho$. The growth rate $\Omega(k)$ will in general be positive for wave numbers within an interval $[k^-,k^+]$, where the wave numbers $k^-$ and $k^+$ depend on $I_0$ and solve the biquadratic equation \begin{equation}\label{conditon1} k^4-(4I_0-2\theta)k^2+3I_0^2+\theta^2-4I_0\theta-1=0. \end{equation} Any mode within this interval will grow, and the profile of the pattern arising from random noise will be dominated by the most unstable mode $k_u$ defined by the condition $\Omega'(k_u)\equiv\frac{d\Omega}{dk}\big|_{k_u}=0$, giving \begin{equation}\label{condition2} k_u=\sqrt{2I_0-\theta}. \end{equation} The loss of stability occurs at a critical wave number $k_c$ where the growth rate first reaches zero, i.e., when conditions (\ref{conditon1}) and (\ref{condition2}) are satisfied simultaneously. This transition is called a Turing \cite{Kapral,Turing,Castets,Cross} or modulational instability (MI), and occurs at $I_0=I_c$, $k=k_c$, where \begin{equation} I_c=1,\qquad k_c=\sqrt{2-\theta}. \end{equation} Evidently, this transition is only found when $\theta<2$. The condition $I_0=I_c$ defines a line in the parameter space $(\theta,\rho)$ given by \begin{equation} \rho_c=\sqrt{1+(1-\theta)^2}. \end{equation} \begin{figure} \centering \includegraphics[scale=1]{diagram_paper.pdf} \caption{(Color online) The stable HSS (black solid line) is destabilized at the modulational instability MI. Close to MI (i), the unstable HSS evolves to the pattern branch P$_1$ (red) consisting of stationary patterns with wave number $k_1 = 8.889 \approx k_c = 8.886$.
Further away from MI (ii), the unstable HSS evolves into a different pattern branch P$_2$ (green), now characterized by patterns with wave number $k_2 = 7.620 \approx k_u = 7.510$. Stable (unstable) solutions are denoted by solid (dashed) lines. Here $\theta = 1.5$ and $L=160$.} \label{MI_patterns_intro} \end{figure} Figure~\ref{MI_patterns_intro} illustrates how the HSS destabilizes when the pump parameter $\rho$ exceeds $\rho=\rho_c$ and how the pattern state is subsequently reached. The wave number of this pattern changes with the pump parameter as does the most unstable wave number [see Eq.\ (\ref{condition2})]. Close to the MI the HSS develops into a pattern that lies on a branch of pattern solutions with wave number close to $k_c$, originating near MI. For larger values of the pump, however, the selected pattern belongs to a pattern branch corresponding to a wave number close to the fastest growing wave number $k_u$. This observation highlights the fact that the pattern branches form a continuum, parametrized by the wavenumber $k\in[k^-,k^+]$, with the wave number selected by nonlinear processes that depend on the system parameters. In this work we restrict attention to pattern branches corresponding to the critical wave number $k_c$ and its harmonics, and describe their bifurcation structure in some detail. The study of patterns with other wave numbers is left for future work. Before turning to the bifurcation structure of pattern solutions, we start our analysis by studying the set of points $k^-$ and $k^+$ satisfying Eq.~(\ref{conditon1}). These points define the so-called {\it marginal stability curve} defined by \begin{equation}\label{marginal} {\rm I}^{\pm}_k(\theta)=\frac{2}{3}(\theta+k^2)\pm\frac{1}{3}\sqrt{\theta^2+k^4+2\theta k^2-3}. \end{equation} The marginal stability curves are shown in the panels on the left of Fig.~\ref{marginal1} for increasing values of the detuning $\theta$. 
The HSS solutions at the corresponding values of $\theta$ are shown in the panels on the right, with solid (dashed) lines representing the HSS solutions that are stable (unstable) against perturbations of the form (\ref{perturbations}). For a fixed value of $\theta$, and for a given wave number $k'$, the HSS solution is unstable if ${\rm I}^{-}_{k'}(\theta)<{\rm I}_0< {\rm I}^{+}_{k'}(\theta)$ and stable otherwise. Thus, for a given wave number $k=k_c$ a pattern P$_{k_c}$ bifurcates from the points I$^{\pm}_{k_c}(\theta)$ indicated in Fig.~\ref{marginal1}, and similarly for patterns with wave numbers $2k_c$, $4k_c$, etc. \begin{figure} \centering \includegraphics[scale=1]{Marginal.pdf} \caption{(Color online) Left: Marginal stability curves for (a) $\theta=1.1$, (b) $\theta=1.5$, (c) $\theta=1.8$ and (d) $\theta=2.0$. Right: The HSS solutions corresponding to the same values of $\theta$. Solid (dashed) lines represent stable (unstable) HSSs with respect to perturbations of the form (\ref{perturbations}). The locations I$^{\pm}_{k}$ corresponding to instabilities with wave number $k$ are indicated using solid circles. The dashed line inside the marginal stability curves in the left panels represents the most unstable mode $k=k_u$.} \label{marginal1} \end{figure} In Fig.~\ref{marginal1}(a), for $\theta=1.1$, the HSS is always stable against perturbations with $k=0$. Furthermore, a pattern with wave number $k_c$ bifurcates from the MI at I$^-_{k_c}={\rm I}_c$ and then reconnects with the HSS again at I$^+_{k_c}>{\rm I}^-_{k_c}$. Similarly, a pattern with $2k_c$ arises initially from I$^-_{2k_c}$ and reconnects to the HSS at I$^+_{2k_c}$. The situation for all subsequent harmonics is similar. As the detuning $\theta$ increases, the different instability points for modes with $k=k_c$ and its harmonics approach each other as the whole tongue of unstable modes shifts to lower values of $k$ [see Fig.~\ref{marginal1}(b)].
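The marginal stability curve (\ref{marginal}) is straightforward to evaluate; the following sketch also reproduces the fact that a mode $k$ is stable at every intensity when the discriminant is negative (e.g.\ $k=0$ for $\theta<\sqrt{3}$):

```python
import numpy as np

def marginal_intensity(k, theta):
    """I_k^±(theta) of the marginal stability curve; returns the pair
    (I_minus, I_plus), or None when the mode k is stable for every
    intensity (negative discriminant)."""
    disc = theta**2 + k**4 + 2*theta*k**2 - 3
    if disc < 0:
        return None
    root = np.sqrt(disc) / 3.0
    base = 2.0 * (theta + k**2) / 3.0
    return base - root, base + root

# At k = k_c = sqrt(2 - theta) the lower branch touches I_c = 1.
lo, hi = marginal_intensity(np.sqrt(2 - 1.5), 1.5)
```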
This behavior can also be seen in Fig.~\ref{parameterb} where we plot the instability boundaries in the parameter space $(\theta,I_0)$ and $(\theta, \rho)$, respectively, together with the location of the saddle-node bifurcations SN$_b$ and SN$_t$ of the HSS solution. For $\theta<\sqrt{3}$, $A_0$ is always stable against spatially uniform perturbations with $k=0$. In contrast, when $\sqrt{3}<\theta<2$, the response of the HSSs as a function of the pump parameter $\rho$ becomes bistable. In this case, the bottom $A_0^b$ and top $A^t_0$ branches are stable with respect to $k=0$ perturbations, while the middle branch $A_0^m$ is unstable to such perturbations. However, $A^t_0$ and $A^m_0$ are always unstable with respect to $k>0$ perturbations, while $A^b_0$ is only destabilized above $I_0=I_c$. This situation is depicted in Fig.~\ref{marginal1}(c) for $\theta=1.8$, where the tongue of unstable wavenumbers now starts at $k=0$. Finally, when the detuning increases to $\theta = 2$ from below, the instability points I$^{\pm}_{nk_c}$, $n=1,2,\dots$, approach one another until they all collapse at $k=0$ and the MI disappears [see Fig.~\ref{marginal1}(d)]. A similar collapse can be seen in Fig.~\ref{parameterb}, where I$^+_{k_c}$ and I$^-_{2k_c}$, and I$^+_{2k_c}$ and I$^-_{4k_c}$ collide pairwise at the codimension-two bifurcations X$_1$ and X$_2$ located at $(\theta_{{\rm X}_1},\rho_{{\rm X}_1})=(1.1111,1.4768)$, and $(\theta_{{\rm X}_2},\rho_{{\rm X}_2})=(1.4286,4.468)$, respectively. The results presented in Fig.~\ref{marginal1} and Fig.~\ref{parameterb} are limited to $\theta<2$ for which the MI exists and takes place at $I_0=I_c$. When approaching $\theta=2$ from below, the critical wave number approaches zero ($k_c\rightarrow0$), implying that the wavelength of the nascent pattern diverges. Since a pattern with infinite wavelength corresponds to a single peak in the domain, the distinction between patterns and localized structures becomes blurred in this limit.
A detailed analysis of how the bifurcation structure of such localized structures changes as one approaches this critical point $\theta=2$ can be found in Ref.~\cite{Parra_Rivas_P1}. \begin{figure} \centering \includegraphics[scale=0.97]{parameter_A.pdf} \caption{(Color online) (a) The instability lines I$^{\pm}_{k_c}$ and the location of the saddle-node bifurcations of the HSSs in the parameter space $(\theta,I_0)$. (b) Same as (a) but in the parameter space $(\theta,\rho)$. (c) Zoom of (b) showing the main regions with distinct bifurcation behavior (see text). The labels X$_1$ and X$_2$ indicate codimension-two points. In both (a) and (c) the gray area represents region IV where the system has a bistable response in the HSS solutions.} \label{parameterb} \end{figure} At this point we can already identify several distinct solution regimes based on the existence of patterns and the stability of $A_0$: \begin{itemize} \item Region I: The HSS solution $A_0$ is stable. This region spans the parameter space $\rho<\rho_{c}$. \item Region II: The pattern P$_{k_c}$ exists between MI and I$^+_{k_c}$, and $A_0$ is unstable. \item Region III: The pattern P$_{2k_c}$ exists between I$^-_{2k_c}$ and I$^+_{2k_c}$, and $A_0$ is unstable. \item Region IV: The pattern P$_{4k_c}$ exists between I$^-_{4k_c}$ and I$^+_{4k_c}$, and $A_0$ is unstable. \item Region V: Multistability of the HSS $A_0$. $A_0^b$ is stable, while $A_0^t$ and $A_0^m$ are unstable. This region spans the parameter region between SN$_b$ and SN$_t$. The patterns P$_{k_c}$ and P$_{2k_c}$ also exist in this region since they appear subcritically. \end{itemize} In the following sections we study how the different patterns reconnect as parameters are varied, and identify the different instabilities these patterns undergo.
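The instability boundaries of Fig.~\ref{parameterb}(b) follow from mapping intensities to pump values through Eq.~(\ref{HSS}); a sketch (function name ours):

```python
import numpy as np

def pump_from_intensity(I0, theta):
    """Invert the HSS cubic: the pump rho sustaining a HSS of intensity I0."""
    return np.sqrt(I0**3 - 2*theta*I0**2 + (1 + theta**2)*I0)

# The MI line rho_c(theta) is this map evaluated at the critical
# intensity I_c = 1, reproducing rho_c = sqrt(1 + (1 - theta)^2).
rho_c = pump_from_intensity(1.0, 1.5)
```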
\section{Weakly nonlinear pattern solutions}\label{sec:2} Weakly nonlinear patterns are present in the vicinity of the MI bifurcation at $I_0=I_c$ and can be computed using multiscale perturbation analysis. At leading order in the expansion parameter $\epsilon$, defined by the relation $\rho=\rho_c+\epsilon^2\mu$, the pattern solution is given by \begin{equation}\label{pattern_asymp} \left[\begin{array}{c} U\\V \end{array} \right]= \left[\begin{array}{c} U_c\\V_c \end{array} \right]+\epsilon\left[\begin{array}{c} u_1\\v_1 \end{array} \right]+\epsilon^2\left[\begin{array}{c} U_2\\V_2 \end{array} \right], \end{equation} where $U_c$ and $V_c$ correspond to the HSS solution (\ref{hom_real}) at $\rho=\rho_c$, and $U_2$ and $V_2$ represent the leading-order correction to this HSS, given by \begin{equation} \left[\begin{array}{c} U_2\\V_2 \end{array} \right]=\frac{\mu}{{\left(\theta^{2} - 2 \, \theta + 2\right)} {\left(\theta - 2\right)}}\left[\begin{array}{c} \theta^{2} \\-\theta^{2} - \theta + 2\end{array}\right], \end{equation} and the space-dependent correction is given by \begin{equation} \left[\begin{array}{c} u_1\\v_1 \end{array} \right]=2 \left[\begin{array}{c} a\\1 \end{array} \right]B \cos(k_c x+\varphi), \end{equation} where $\varphi$ is an arbitrary phase, and \begin{equation} a=\frac{\theta}{2-\theta}. \end{equation} The amplitude $B$ of the pattern state corresponds to the constant solution of the amplitude equation \begin{equation} C_1 B_{XX}+\mu C_2B+C_3B^3=0, \end{equation} i.e., \begin{equation} B=\sqrt{-\mu C_2/ C_3}. \end{equation} Here \begin{equation} C_1= -\frac{2 \, {\left(\theta^{2} - 2 \, \theta + 2\right)}}{\theta - 2}, \end{equation} \begin{equation} C_2= \frac{2 \, {\left(\theta^{2} - 2 \, \theta + 2\right)}^{\frac{3}{2}}}{{\left(\theta - 2\right)}^{4}}, \end{equation} \begin{equation} C_3=\frac{4 \, {\left(\theta^{2} - 2 \, \theta + 2\right)}^{2} {\left(30 \, \theta - 41\right)}}{9 \, {\left(\theta - 2\right)}^{6}}.
\end{equation} It follows that the pattern is supercritical for $\theta<41/30$ but subcritical for $\theta>41/30$, as already predicted in Refs.~\cite{lugiato_spatial_1987,Perinet_Eckhaus}. In the following we refer to this pattern as P$_{k_c}$. Details of the above calculation can be found in Ref.~\cite{Parra_Rivas_P1}. \begin{figure*}[!t] \centering \includegraphics[scale=1]{dia_motiv.pdf} \caption{(Color online) Bifurcation diagrams for patterns with wave numbers $k_c$, $2k_c$, and $4k_c$ for $\theta=1.5$. Solution profiles along the different branches are shown in panels (i)-(xii).} \label{motiv} \end{figure*} \section{Bifurcation structure of patterns}\label{sec:3} We now present the main features of the bifurcation structure of the pattern states for a fixed value of the detuning, choosing $\theta=1.5$ as a representative value, leaving the study of how this structure is modified as $\theta$ varies to the following section. Starting from the analytical solution (\ref{pattern_asymp}), valid close to the MI bifurcation, we use a numerical continuation algorithm to construct the bifurcation diagram shown in Fig.~\ref{motiv}, showing the intensity $||A||^2$ as a function of the parameter $\rho$. As in Fig.~\ref{MI_patterns_intro}, the black lines represent HSSs, while red, blue and green lines correspond to patterned states with wave number $k_c$, $2k_c$ and $4k_c$, respectively. Furthermore, solid lines denote stable solutions, while dashed lines indicate unstable ones. Different profiles along these branches are shown in panels (i)-(xii). As shown in Fig.~\ref{MI_patterns_intro}, the pattern P$_{k_c}$ with wave number $k_c$ originates at the MI bifurcation. While the MI bifurcation corresponds to the point where the HSSs lose stability to temporal perturbations, it is also possible to study this transition in the context of spatial dynamics.
Here, the HSS is interpreted as a fixed point in a four-dimensional phase space \cite{Parra_Rivas_P1}, and the MI corresponds to a Hamiltonian-Hopf (HH) bifurcation with eigenvalues $\lambda=\pm ik_c$ of double multiplicity. In this formulation the pattern state corresponds to a periodic orbit, and this orbit bifurcates from the HSS at $\rho_c$ (for $\theta<2$) with initial period (wavelength) $2\pi/k_c$. Together with this critical pattern there is a continuous family of patterns with $k\in[k^-,k^+]$ that bifurcates from the HSS solution for $\rho>\rho_c$. Within the spatial dynamics framework the HSS points for $\rho>\rho_c$ are nonhyperbolic and the bifurcations to P$_{2k_c}$, P$_{4k_c}$, $\dots$ have no particular signature. However, linear stability theory in the time domain shows that bifurcations occur whenever the spatial eigenvalues on the imaginary axis are in resonance, $k=nk_c$, where $n$ is an integer. Theory also shows that the primary bifurcation to periodic orbits at $\rho_c$ is accompanied by the simultaneous appearance of a pair of branches of spatially localized structures, provided only that the periodic states bifurcate subcritically. As a result the localized states can be interpreted as portions of the pattern state embedded in a uniform background. The bifurcation structure of such localized structures is studied in detail in Ref.~\cite{Parra_Rivas_P1}. Since the detuning $\theta$ in Fig.~\ref{motiv} is larger than $41/30$, the pattern P$_{k_c}$ is created subcritically and is therefore initially temporally unstable [see profile (i)]. Following this branch away from MI, the pattern grows in amplitude and gains stability at a saddle-node bifurcation SN$_{1}$ [profiles (ii)-(iv)], but loses stability at a second saddle-node SN$_{2}$ [profiles (v)-(vi)]. Once SN$_{2}$ is passed, spatial oscillations (SOs) start to appear between the peaks in the pattern profile as seen most clearly in profile (v).
These SOs correspond to the growth of the second harmonic $2k_c$ of the pattern wave number, and these grow in amplitude with increasing $\rho$ [profile (vi)] until P$_{k_c}$ merges with the pattern P$_{2k_c}$, a state with wave number $2k_c$ (plus harmonics). The merging of these two periodic orbits occurs in a 2:1 spatial resonance \cite{Armbruster,Porter,Proctor}, which in the context of patterns corresponds to a finite wavelength (FW) instability of P$_{2k_c}$ that doubles its wavelength, i.e., to a (spatial) subharmonic instability. The pattern P$_{2k_c}$ itself bifurcates supercritically from HSS at I$^-_{2k_c}$. Since this branch inherits the unstable eigenvalue of HSS the P$_{2k_c}$ branch is initially unstable. The resulting pattern likewise grows in amplitude as $\rho$ increases [profiles (vii)-(viii)], but at SN$_4$ it folds back and, just as for P$_{k_c}$, SOs appear between successive peaks in the profile; the pattern terminates at a FW$'$ point on the P$_{4k_c}$ branch, with characteristic wave number $4k_c$, once the amplitude of the SOs reaches that of the original peaks. This new pattern again bifurcates supercritically from the HSS, this time at I$^-_{4k_c}$ [profile (xi)], and is likewise initially unstable before terminating in yet another 2:1 spatial resonance [profile (xii)]. We have identified a whole cascade of such bifurcations involving ever higher harmonics of $k_c$. Bifurcation theory sheds light on the bifurcation sequence described above. We imagine that the bifurcations to P$_{k_c}$ and P$_{2k_c}$ occur in close succession and so look for solutions in the form $(U,V)\propto z_1\exp ik_cx+z_2\exp 2ik_cx+{\rm c.c.}+{\rm h.o.t.}$
The complex amplitudes $z_1$, $z_2$ then satisfy the equations \cite{Armbruster,Proctor,Porter} \begin{equation}\label{sr_eq} \begin{array}{l} \dot{z}_1=\mu z_1+c_1{\bar z}_1z_2+(e_{11}|z_1|^2+e_{12}|z_2|^2)z_1+\dots\\\\ \dot{z}_2=(\mu-\nu) z_2+c_2z_1^2+(e_{21}|z_1|^2+e_{22}|z_2|^2)z_2+\dots \end{array} \end{equation} We see that for fixed $\nu>0$ the HSS solution $(z_1,z_2)=(0, 0)$ loses stability in succession to modes with wave numbers $k_c$, $2k_c$ as $\mu$ increases. We also see that the equations admit a pure P$_{2k_c}$ solution $(0,z_2)$ but that the P$_{k_c}$ state acquires a contribution with wave number $2k_c$ as soon as $\mu>0$, exactly as observed in the figure, i.e., the mode starting out as $(z_1,0)$ is in fact a mixed mode $(z_1,z_2)$ as soon as $\mu>0$. Moreover, as $\mu$ increases the contribution from the amplitude $z_2$ grows and the mixed mode terminates on the $(0,z_2)$ branch of pure wave number $2k_c$ states, also as observed. The latter is a 2:1 resonance since at this bifurcation a pure mode with wave number $2k_c$ bifurcates into a mixed mode with a contribution from wave number $k_c$. We can therefore think of this bifurcation as a subharmonic instability in space. In the next section, we explore how the bifurcation structure connecting P$_{k_c}$ with all its harmonics is modified when the cavity detuning $\theta$ varies. \begin{figure}[!t] \centering \includegraphics[scale=1]{pattern_parameter_reducedA.pdf} \caption{(Color online) Phase diagram in the $(\theta,\rho)$ parameter space showing the main bifurcations of the HSS and the pattern states. The region of bistability between $A^b_0$ and P$_{k_c}$ is indicated in dark gray, while the wider region of stability of P$_{k_c}$ is colored in light gray.
The symbol $\bullet$ represents the codimension-two points X$_1$, D$_1$, D$_2$, and D$_3$.} \label{parameter} \end{figure} \section{Patterns in the $(\theta,\rho)$ plane}\label{sec:4} Figure~\ref{parameter} shows the different bifurcation lines and dynamical regions introduced in the previous sections in the $(\theta,\rho)$ parameter space. As this phase diagram is quite dense and therefore difficult to interpret, we show the changes of the bifurcation structure as a function of the pump $\rho$ for increasing values of the detuning $\theta$ in Fig.~\ref{bif_dia_several}. For small values of $\theta$ [Fig.~\ref{bif_dia_several}(a), $\theta=1.1<41/30$], the pattern P$_{k_c}$ (red line) bifurcates supercritically from MI at $I_0=I_c$ and connects back to the HSS at I$^+_{k_c}$; P$_{2k_c}$ (blue line) is disconnected from P$_{k_c}$ and bifurcates from I$^-_{2k_c}$ and then extends to higher values of $\rho$ before connecting with HSS at I$^+_{2k_c}$. When $\theta$ increases, I$^+_{k_c}$ and I$^-_{2k_c}$ collide at a codimension-two bifurcation labeled X$_1$, after which the P$_{k_c}$ and P$_{2k_c}$ branches connect to one another with a FW instability originating in X$_1$. This is the 2:1 spatial resonance mentioned in the previous section. This situation is shown in Fig.~\ref{bif_dia_several}(b). Here both patterns emerge supercritically from the HSS state, with P$_{k_c}$ stable and P$_{2k_c}$ initially unstable. However, the latter can change stability through subsequent Eckhaus (EC) and finite-wavelength-Hopf (FWH) instabilities (see Fig.~\ref{parameter}), resulting in more complex scenarios studied in Section~\ref{sec:5}. \begin{figure}[t!] \centering \includegraphics[scale=0.91]{Pattern_diagram_theta.pdf} \caption{(Color online) Bifurcation diagrams corresponding to (a) $\theta=1.1$, (b) $\theta=1.3$, (c) $\theta=1.4$, (d) $\theta=1.5$, (e) $\theta=1.6$ and (f) $\theta=1.8$. Red lines correspond to P$_{k_c}$ and the blue lines to P$_{2k_c}$.
Panels (a) and (b) show the situation before and after the codimension-two point X$_1$. Panels (b) and (c) show the transition from supercritical to subcritical bifurcation of pattern P$_{k_c}$ via a degenerate HH at $\theta=41/30$. For $\theta=1.5$ [panel (d)] P$_{2k_c}$ bifurcates supercritically from HSS at I$^-_{2k_c}$. In contrast, for $\theta=1.6$ [panel (e)] P$_{2k_c}$ emerges subcritically. Solid (dashed) lines indicate stable (unstable) branches.} \label{bif_dia_several} \end{figure} At $\theta=41/30$, the bifurcation to P$_{k_c}$ is a degenerate HH bifurcation denoted in Fig.~\ref{bif_dia_several}(c) by D$_1$. For $\theta>41/30$ the bifurcation is subcritical as shown in Fig.~\ref{bif_dia_several}(c) for $\theta=1.4$. Here, P$_{k_c}$ is initially unstable but acquires stability at a saddle-node labeled SN$_1$. This branch then connects with P$_{2k_c}$ at FW. Thus a parameter regime is present in which $A^b_0$ and P$_{k_c}$ coexist stably. As a result localized structures (LS) are also present and these are organized in a so-called homoclinic snaking structure \cite{Parra_Rivas_P1,Coullet,Gomila_Schroggi,Woods1999,Burke_Knobloch}. These LS are found in the region of bistability between $A^b_0$ and P$_{k_c}$ colored in dark gray in Fig.~\ref{parameter}, while the wider region of stability of P$_{k_c}$ is colored in light gray. For $\theta=1.5$ [Fig.~\ref{bif_dia_several}(d)], the situation remains similar, but P$_{k_c}$ now bifurcates subcritically from FW, i.e. an unstable pattern emerges from FW and gains stability at SN$_{2}$. This change in direction of branching is also associated with a codimension-two point, this time labeled D$_2$ (Fig.~\ref{parameter}). As a result, the upper portion of the P$_{k_c}$ branch is stable between SN$_1$ and SN$_2$ while the lower parts between MI and SN$_1$ and between FW and SN$_2$ are both unstable. 
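The change of criticality at $\theta=41/30$ follows directly from the sign of the cubic coefficient $C_3$ obtained in the weakly nonlinear analysis of Section~\ref{sec:2}. As a minimal numerical check (the function names and NumPy usage below are our own illustration, not part of the analysis itself):

```python
import numpy as np

def gl_coefficients(theta):
    """Coefficients of the amplitude equation C1*B_XX + mu*C2*B + C3*B**3 = 0
    for the pattern P_kc (expressions from the weakly nonlinear analysis)."""
    d = theta - 2.0
    s = theta**2 - 2.0 * theta + 2.0
    C1 = -2.0 * s / d
    C2 = 2.0 * s**1.5 / d**4
    C3 = 4.0 * s**2 * (30.0 * theta - 41.0) / (9.0 * d**6)
    return C1, C2, C3

def pattern_amplitude(theta, mu):
    """Constant solution B = sqrt(-mu*C2/C3); real for mu > 0 only when
    C3 < 0, i.e. on the supercritical side theta < 41/30."""
    _, C2, C3 = gl_coefficients(theta)
    return np.sqrt(-mu * C2 / C3)
```

Since $C_2>0$ for $\theta<2$, the sign of $C_3$, and hence the branching direction, is controlled entirely by the factor $30\theta-41$, reproducing the degeneracy D$_1$ at $\theta=41/30$.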
For $\theta=1.6$ [Fig.~\ref{bif_dia_several}(e)], the HSS branch is still monotonic but P$_{2k_c}$ now also emerges subcritically, having crossed another degeneracy at D$_3$ (Fig.~\ref{parameter}). This leads to the creation of a saddle-node bifurcation SN$_3$ on the P$_{2k_c}$ branch similar to SN$_1$ on the P$_{k_c}$ branch. At the same time an Eckhaus bifurcation moves in from larger values of $\rho$, stabilizing the large $\rho$ part of the P$_{2k_c}$ branch. With further increase in $\theta$ the EC point collides with FW, and the whole P$_{2k_c}$ branch beyond FW becomes stable. For yet larger $\theta$ the FW point moves towards SN$_3$ so that P$_{k_c}$ now terminates on P$_{2k_c}$ at SN$_3$ and the P$_{2k_c}$ branch is stable from SN$_3$ towards larger $\rho$. This multiple bifurcation occurs for $\theta\approx 1.72$ but is not analyzed in this work. Figure~\ref{bif_dia_several}(f) shows the resulting bifurcation diagram when $\theta=1.8$. Since this value of $\theta$ exceeds $\sqrt{3}$ the HSS branch is no longer monotone, with I$^-_{k_c}$ lying below the resulting fold SN$_b$ and I$^-_{2k_c}$ above it. In Figs.~\ref{parameter} and \ref{bif_dia_several}, we focus on the bifurcations associated with P$_{k_c}$ and P$_{2k_c}$, although very similar transitions occur between P$_{2k_c}$ and P$_{4k_c}$, P$_{4k_c}$ and P$_{8k_c}$, and so on. This scenario resembles foliated snaking of localized structures that appears for $\theta > 2$ \cite{Parra_Rivas_P1}. Since $k_c \rightarrow 0$ as $\theta \rightarrow 2$ from below, in a finite system a pattern with domain-size wavelength becomes indistinguishable from a single peak localized structure present for $\theta>2$, i.e., in the limit $\theta \rightarrow 2$ P$_{k_c}$ becomes a single peak LS, P$_{2k_c}$ becomes a two peak LS, etc., thereby reproducing precisely the foliated snaking bifurcation scenario.
A similar pattern organization exists for patterns with wave number $k\ne k_c$, implying that the complete scenario is fundamentally complex. A detailed study of secondary bifurcations of patterns with wave numbers $k\ne k_c$ is therefore left for future work. \section{Linear stability analysis of the pattern solutions}\label{sec:5} The preceding section has highlighted the importance of a secondary wavelength-changing instability called the Eckhaus instability. This is a long-wavelength instability, with domain-size wavelength, and its nonlinear evolution generally leads to the generation of a phase slip whereby a new roll is injected (or annihilated) at the location of the phase slip, followed by relaxation of the new pattern towards a periodic structure with a new and different wavelength in the domain \cite{KramerZimmermann,BarkleyTuckerman}. The traditional approach to describing the Eckhaus instability is based on the use of an amplitude equation, the Ginzburg-Landau equation, that describes the pattern-forming instability close to the primary pattern-forming bifurcation, assumed to be supercritical \cite{BarkleyTuckerman,Eckhaus_saliya}. As a result the predictions concerning the onset and evolution of the Eckhaus instability are valid only when the instability sets in close to the primary instability. We have seen that in the present case this is not so -- in some cases the primary bifurcation is subcritical and the analysis of the Eckhaus instability is then substantially modified \cite{KaoKnobloch}. For this reason we apply here a technique described in \cite{Harkness1,Gomila_hexa1} that permits us to compute the onset of the Eckhaus instability for finite amplitude fully nonlinear spatially periodic patterns. The technique is necessarily numerical but allows us to find and characterize, as a function of $\theta$, $\rho$, and $k$, the secondary bifurcations introduced in Section~\ref{sec:4}.
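For orientation, the classical Ginzburg-Landau result referred to above can be stated compactly: for the supercritical real GL equation $\partial_T B=\mu B+B_{XX}-|B|^2B$, a wave train $B=\sqrt{\mu-Q^2}\,e^{iQX}$ is Eckhaus stable if and only if $Q^2<\mu/3$, the boundary being the sign change of the phase-diffusion coefficient. A minimal sketch of this textbook criterion (our illustration, not the fully nonlinear computation used in this work):

```python
def phase_diffusion_coeff(mu, Q):
    """Phase-diffusion coefficient D = (mu - 3 Q^2)/(mu - Q^2) of the wave
    train B = sqrt(mu - Q^2) exp(iQX) of the real Ginzburg-Landau equation.
    Long-wavelength phase perturbations decay like exp(-D p^2 t), so
    D < 0 (i.e. Q^2 > mu/3) signals Eckhaus instability."""
    if Q**2 >= mu:
        raise ValueError("wave train does not exist for Q^2 >= mu")
    return (mu - 3.0 * Q**2) / (mu - Q**2)
```

This weakly nonlinear picture fails when the primary bifurcation is subcritical, which is why the fully nonlinear Bloch analysis described below is required.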
Similar numerical studies have been performed in the context of fluid mechanics in Ref.~\cite{Mercader} and for supercritical patterns within the LL equation in Ref.~\cite{Perinet_Eckhaus}. The stationary patterns, hereafter $A_p=(U_p, V_p)$, can be written as a Fourier modal expansion \begin{equation}\label{sta_ansatz2} A_p(x)=\displaystyle\sum_{m=0}^{N-1} a_m e^{imkx}, \end{equation} with $k$ the wave number of the pattern, $a_m$ the complex amplitude of the Fourier mode with wave number $mk$, and $N$ the number of Fourier modes retained in the analysis. To study the linear stability of such a pattern state, one must first linearize Eq.~(\ref{LLE}) around the state (\ref{sta_ansatz2}). Writing $A(x,t)=A_p(x)+\epsilon \delta A(x,t)$, $\epsilon\ll1$, leads to the following leading-order equation for the perturbation $\delta A$: \begin{equation}\label{linear_complex} \partial_t\delta A=-(1+i\theta)\delta A+i\partial_x^2\delta A+2i|A_p|^2\delta A+iA_p^2\delta A^*. \end{equation} Owing to the periodicity of $A_p$, we can apply the Bloch ansatz and write the eigenmodes of this equation as Bloch waves \begin{equation}\label{sta_ansatz1} \delta A(x,t)=e^{iqx}\delta a(x,t,q)+e^{-iqx}\delta a^*(x,t,-q), \end{equation} where $\delta a$ has the same spatial period as the pattern $A_p$ and can be written in the form \begin{equation} \delta a(x,t,q)=\displaystyle\sum_{m=0}^{N-1} \delta a_m(t,q)e^{ikmx}. \end{equation} Inserting Eqs.~(\ref{sta_ansatz2}) and (\ref{sta_ansatz1}) in Eq.~(\ref{linear_complex}) leads to a set of linear equations for the complex amplitudes $\delta a_n^{\pm}\equiv\delta a_n(t,\pm q)$, namely \begin{multline} \frac{d}{dt} \delta a^{\pm}_n=-(1+i\theta)\delta a^{\pm}_n-i(kn\pm q)^2\delta a^{\pm}_n+\\ 2i\sum_{l,m=0}^{N-1}a_la_m^*\delta a^{\pm}_{n-l+m}+i\sum_{l,m=0}^{N-1}a_la_m\delta a^{* \pm}_{-n+l+m}.
\end{multline} This equation has the form \begin{equation} \partial_t{\Sigma}_n(t,q)=L(a_n,q)\Sigma_n(t,q), \end{equation} where $$\Sigma_n(t,q)\equiv(\delta a^+_0,\cdots,\delta a^+_{N-1},\delta a^{*-}_0,\cdots,\delta a^{*-}_{N-1}).$$ Thus, the linear stability analysis of $A_p(x)$ reduces to finding the $2N$ eigenvalues $\lambda_n(q)$ of the $2N\times 2N$ matrix $L(a_n,q)$ and the corresponding eigenvectors, for each value of $q$. For more details, see Refs.~\cite{Gomila_hexa1,Harkness1,Harkness2}. The eigenvalues for a given $q$ determine the stability of the pattern against perturbations containing wave numbers $mk\pm q$ for any integer $m$. For this purpose it is sufficient to consider only $q$ values inside the first Brillouin zone. Any perturbation with wave number $q'$ outside the Brillouin zone is equivalent to one whose $q$ differs from $q'$ by an integer multiple of $k$. In solid state physics this representation is described as the {\it reduced zone scheme} \cite{Ashcroft_Mermin}. Using this technique we characterize how the eigenspectrum of $L(A_p)$ changes as a function of $q$ for different values of $(\theta,\rho)$, and predict the different secondary bifurcations that a pattern with wave number $k$ undergoes. \begin{figure}[!t] \centering \includegraphics[scale=1]{pattern_parameter_reducedB.pdf} \caption{(Color online) (a) Phase diagram in $(\theta,\rho)$ parameter space showing an enlargement of the diagrams shown in Fig.~\ref{parameter} focusing on the main stability regions of P$_{2k_c}$ labeled III$_{\rm A,...,C}$. The dashed line at $\theta=1.5$ refers to the slice of this diagram shown in Fig.~\ref{diagrama_EC}. On top of these lines, the symbol $\bullet$ corresponds to points where the stability analysis of the patterns shown in Section~\ref{sec:4} was performed. } \label{parameter2} \end{figure} Figure~\ref{parameter2} shows an enlarged version of the phase diagram in Fig.~\ref{parameter}. We see that the pattern P$_{k_c}$ is stable everywhere between SN$_1$ and SN$_2$.
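As an illustration of the procedure just described, the sketch below assembles a real-space (collocation) version of the Bloch operator for the linearized equation (\ref{linear_complex}), rather than the Fourier-amplitude form used in the text; the function name and the NumPy-based discretization are our own and are not the implementation used for the results reported here. Applying it to the HSS at the MI threshold ($I_s=|A_0|^2=1$) reproduces a marginal critical mode at $q=k_c$:

```python
import numpy as np

def bloch_operator(A_p, q, theta, Lx):
    """Bloch stability operator for the linearized LL equation about a
    stationary state A_p sampled on Np points of a periodic domain of
    length Lx.  Perturbations are delta A = exp(iqx) u(x) + exp(-iqx) conj(v)(x)
    with u, v periodic, so the operator acts on the stacked vector (u, v)
    and has size 2*Np x 2*Np."""
    Np = A_p.size
    kap = 2.0 * np.pi * np.fft.fftfreq(Np, d=Lx / Np)   # grid wave numbers
    F = np.fft.fft(np.eye(Np), axis=0)                  # DFT matrix
    Fi = np.fft.ifft(np.eye(Np), axis=0)                # inverse DFT
    D2q = Fi @ np.diag(-(kap + q) ** 2) @ F             # (d/dx + iq)^2
    I2 = np.abs(A_p) ** 2
    Lu = np.diag(-(1 + 1j * theta) + 2j * I2) + 1j * D2q    # u equation
    Lv = np.diag(-(1 - 1j * theta) - 2j * I2) - 1j * D2q    # conjugate dynamics
    Cu = 1j * np.diag(A_p ** 2)                             # couples v into u
    Cv = -1j * np.diag(np.conj(A_p) ** 2)                   # couples u into v
    return np.block([[Lu, Cu], [Cv, Lv]])

# Sanity check on the HSS at the MI threshold, theta = 1.5, k_c = sqrt(2 - theta):
# the critical Bloch mode q = k_c must be marginal (max Re lambda = 0).
theta, kc = 1.5, np.sqrt(0.5)
M = bloch_operator(np.ones(32, dtype=complex), q=kc, theta=theta, Lx=2 * np.pi / kc)
growth = np.linalg.eigvals(M).real.max()
```

For a genuine pattern $A_p(x)$ the diagonal factors become spatially dependent and the spectrum must be scanned over $q$ in the first Brillouin zone, exactly as in the Fourier-mode formulation above.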
However, P$_{2k_c}$ undergoes three types of secondary instability indicated in Figs.~\ref{parameter} and \ref{parameter2} by the lines EC (Eckhaus), FW (finite-wavelength), and FWH (finite-wavelength-Hopf). These bifurcations divide region III [see Fig.~\ref{parameter2}] into the following subregions: \begin{itemize} \item Region III$_{\rm A}$: The pattern P$_{2k_c}$ is Eckhaus unstable. This region spans the parameter space between I$_{2k_c}^-$ and SN$_3$ from below, and FWH$_1$ and EC from above. \item Region III$_{\rm B}$: P$_{2k_c}$ is stable between EC and SN$_3$ from below, and FWH$_1$ and FWH$_2$ from above. \item Region III$_{\rm C}$: P$_{2k_c}$ oscillates in time and in space. This region spans the parameter space inside the region defined by FWH$_1$ and FWH$_2$ from below, and between FWH$_2$ and SN$_4$. \end{itemize} In Fig.~\ref{diagrama_EC}, we show the bifurcation diagram for $\theta = 1.5$, a value we will use to explore the different instabilities in more detail. For $\theta=1.6$, discussed in Section~\ref{sec:4}, the results are similar except that P$_{2k_c}$ bifurcates initially subcritically. The temporal evolution indicated by arrows in the figure results from phase slips, as discussed next, and is obtained on a periodic domain of length $L=2\pi n/k_c$, with $n=16$. \begin{figure} \centering \includegraphics[scale=1]{dia_pattern_EC_PRE.pdf} \caption{(Color online) Bifurcation diagram for $\theta=1.5$. The pattern branch P$_{k_c}$ (red) bifurcates subcritically from HSS at I$^-_{k_c}$, while the branch P$_{2k_c}$ (blue) bifurcates supercritically at I$^-_{2k_c}$. Labels (a)-(c) correspond to the unstable patterns with 32 rolls initially that evolve in time to patterns with different numbers of rolls depending on the value of $\rho$ and lying on new branches of periodic states (gray) labeled by P$_n$, where $n$ is the new roll number.
The points where linear stability analysis has been carried out are indicated using the symbol $\bullet$.} \label{diagrama_EC} \end{figure} \subsection{Eckhaus instability} For values of $\theta$ and $\rho$ in region III$_{\rm A}$ (see Fig.~\ref{parameter2}), patterns are unstable against long-wavelength perturbations ($q\sim0$), and for this reason the Eckhaus instability is also known as a long-wavelength (LW) instability \cite{Cross,Walgraef_book}. Furthermore, this instability is triggered by a phase instability \cite{Walgraef_book}. For small values of $q$, the least stable branch of eigenvalues $\lambda_1(q)$ has a parabolic shape centered at $q=0$, namely Re$[\lambda_1(q)]\propto|q|^2$, and the instability takes place when the convexity of this eigenvalue branch changes sign. \begin{figure}[!h] \centering \includegraphics[scale=1]{lambdaq_EC.pdf} \caption{The eigenspectrum in the vicinity of the EC instability of the P$_{2k_c}$ branch when $\theta=1.5$, showing Re$[\lambda_1(q)]$ for different values of $\rho$: (a) $\rho=1.57$, (b) $\rho=\rho_{\rm EC}=1.58$, and (c) $\rho=1.59$.} \label{EC_insta} \end{figure} The result of the stability analysis of P$_{2k_c}$ for $\theta=1.5$ and increasing values of $\rho$ as one crosses the EC instability threshold is summarized in Fig.~\ref{EC_insta}. In panel (c) $\rho=1.59$ and Re$[\lambda_1(q)]$ is negative for all nonzero $q$. Therefore, P$_{2k_c}$ is stable no matter the wavelength of the perturbation. This situation corresponds to region III$_{\rm B}$ in Fig.~\ref{parameter2}. In panel (b) $\rho=1.58$ and the eigenspectrum flattens around Re$[\lambda_1(q)]=0$, indicating the onset of the EC instability. Finally in panel (a) $\rho=1.57$ and the eigenspectrum has changed its convexity, indicating that the pattern is now unstable to perturbations with $q\in[0,q^*]$. This property characterizes region III$_{\rm A}$ which extends from EC down to I$^-_{2k_c}$ as $\rho$ decreases. 
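In practice the convexity change just described can be detected directly from the sampled branch ${\rm Re}[\lambda_1(q)]$ by fitting a parabola near $q=0$. The sketch below is our own illustration, applied here to model branches mimicking the shapes in Fig.~\ref{EC_insta} rather than to the actual LL eigenspectrum:

```python
import numpy as np

def curvature_at_zero(q, re_lam):
    """Estimate d^2 Re[lambda_1]/dq^2 at q = 0 from a quadratic fit to the
    first few samples of the (even) eigenvalue branch.  A positive value
    means Re[lambda_1(q)] > 0 for small q > 0, i.e. Eckhaus instability."""
    coeffs = np.polyfit(q[:5], re_lam[:5], 2)   # re_lam ~ c q^2 near q = 0
    return 2.0 * coeffs[0]

q = np.linspace(0.0, 0.3, 31)
stable = -0.8 * q**2 - q**4          # model branch: convexity negative (region III_B)
unstable = 0.5 * q**2 - 9.0 * q**4   # model branch: unstable band 0 < q < q* (region III_A)
```

Note that $\lambda_1(0)=0$ always (the translation mode), so it is the curvature at $q=0$, not the value, that distinguishes the two regimes.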
\begin{figure}[!t] \centering \includegraphics[scale=1]{EC_instabilities.pdf} \caption{Re$[\lambda_1(q)]$ at $\theta=1.5$ in the region of Eckhaus instability and the associated temporal evolution of an unstable initial pattern to patterns of different wavelengths. These new states are shown in gray in Fig.~\ref{diagrama_EC}: an unstable pattern with initially 32 rolls evolves to P$_{25}$ in panel (c) for $\rho=1.4$, to P$_{24}$ in panel (b) for $\rho=1.3$, and to P$_{22}$ in panel (a) for $\rho=1.2$. The left panels show the unstable modes $0<q<q^*$ while the right panels describe the resulting evolution in space-time plots.} \label{Evol_EC} \end{figure} In Fig.~\ref{Evol_EC} the right panels show the temporal evolution of an unstable initial condition along the branch P$_{2k_c}$ together with the real part of the leading eigenvalue $\lambda_1(q)$ [left panels] for different values of $\rho$ in region III$_{\rm A}$. The labels (a)-(c) correspond to different points along the branch P$_{2k_c}$ identified in Fig.~\ref{diagrama_EC}. For $\rho=1.4$ [Fig.~\ref{Evol_EC}(c)], P$_{2k_c}$ is unstable to perturbations with $q$ between $0$ and $q^*$, and the most unstable mode is that corresponding to maximum growth rate. Time simulations show that after an initial transient during which the pattern appears stable, the wavelength of the pattern suddenly increases to the wavelength of the most unstable mode. The pattern, which initially had 32 rolls, becomes a pattern with 25 rolls that we label P$_{25}$. This new pattern can be tracked in $\rho$ and results in the P$_{25}$ solution branch plotted in Fig.~\ref{diagrama_EC}. Reducing the value of $\rho$ further, the P$_{2k_c}$ pattern becomes unstable to any $q\in[0,k'/2]$, with $k'=2k_c$, and the most unstable wave number increases [Fig.~\ref{Evol_EC}(a)-(b)]. The maximum growth rate Re$[\lambda_1(q)]$ also increases so that the time needed to destabilize the pattern decreases with $\rho$.
The final patterns that are reached further beyond the EC instability are P$_{24}$ with 24 peaks in case (b), and the pattern P$_{22}$ in case (a). Once tracked in $\rho$, these stationary patterns generate the solution branches shown in Fig.~\ref{diagrama_EC}. \subsection{Finite-wavelength instability} We now characterize the finite-wavelength (FW) instability that allows the pattern P$_{k_c}$ to terminate on P$_{2k_c}$. As already mentioned these locations correspond to a spatial 2:1 resonance located along the line FW in Fig.~\ref{parameter2}. However, the theory described in Refs.~\cite{Armbruster,Proctor,Porter} applies only near the codimension-two case in which the two primary bifurcations from HSS to states with wave numbers $k_c$ and $2k_c$ occur in close succession. This is not the case here, and we therefore employ the numerical technique described above to compute the location of the FW bifurcation when this occurs in the fully nonlinear regime. \begin{figure}[!t] \centering \includegraphics[scale=1]{lambdaq_FW.pdf} \caption{The eigenspectrum of P$_{2k_c}$ in the vicinity of the FW instability when $\theta=1.5$, showing the first two branches Re$[\lambda_1(q)]$ and Re$[\lambda_2(q)]$ for different values of $\rho$: (a) $\rho=1.175$, (b) $\rho=\rho_{\rm FW}\approx 1.177$, and (c) $\rho=1.179$.} \label{instaFW} \end{figure} If $k'=2k_c$ is the wave number of P$_{2k_c}$, the FW bifurcation is characterized by a branch of eigenvalues $\lambda_2(q)$ having a parabolic shape centered at $q=k'/2$, i.e., Re$[\lambda_2(q)]\propto|q-k'/2|^2$, which crosses Re[$\lambda_2(q)]=0$ at $q=k'/2$. This transition is shown in Fig.~\ref{instaFW} for $\theta=1.5$ and for three values of $\rho$ in the vicinity of the FW bifurcation [see the inset in Fig.~\ref{diagrama_EC}]. The real part of the two leading eigenvalues $\lambda_1(q)$ and $\lambda_2(q)$ is shown in the left panels, while the right panels show the full eigenspectrum at $q=k'/2=k_c$.
In all three cases Re[$\lambda_1(q)$] is positive over the whole range $q\in[0,k'/2=k_c]$, and therefore P$_{2k_c}$ is unstable against Bloch modes with $q\in[0,k_c]$, i.e., in this regime P$_{2k_c}$ is EC unstable. The FW transition is triggered by the second eigenvalue $\lambda_2$ centered at $q=k'/2$. In (a) $\rho<\rho_{\rm FW}$, and a portion of the branch Re[$\lambda_2(q)$] is positive, with its maximum occurring at $q=k'/2$. Therefore, in this case P$_{2k_c}$ is unstable to the most unstable mode, i.e., $q=k'/2=k_c$, and therefore to P$_{k_c}$, in addition to the unstable EC mode. In (b) $\rho=\rho_{\rm FW}$, and the maximum growth rate Re[$\lambda_2(q)$] at $q=k'/2$ vanishes, as can be appreciated by looking at the corresponding eigenspectrum in the right column. This point therefore corresponds to the FW bifurcation. Finally, panel (c) shows the situation at $\rho>\rho_{\rm FW}$, where Re[$\lambda_2(q)$] is negative for all $q$, and the P$_{2k_c}$ pattern is FW stable. \subsection{Finite-wavelength-Hopf instability} For values of $\theta$ and $\rho$ in region III$_{\rm C}$ patterns undergo a finite-wavelength-Hopf instability, hereafter FWH. In contrast to the homogeneous Hopf bifurcation which occurs with $q=0$, this Hopf bifurcation sets in with a finite wave number $q\neq0$, here $q=k_c$. In the former case, patterns which are Hopf unstable will oscillate with a uniform amplitude and temporal period $T=2\pi/\omega$, with $\omega={\rm Im}(\lambda_2(0))={\rm Im}(\lambda_3(0))$. Here $\lambda_{2,3}(0)$ are the Hopf modes. In the FWH case, however, patterns oscillate both in time and in space, and this is why this instability is also referred to as a {\it wave instability} (WI) \cite{Cross, Walgraef_book,Hildebrand,Epstain1,Epstain2}.
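Since the FWH onset is signalled by a complex-conjugate pair crossing the imaginary axis, the oscillation period $T=2\pi/\omega$ follows directly from the imaginary part of the leading eigenvalue. A small postprocessing helper for the eigenspectrum computed at $q=k_c$ (our own convenience function, NumPy assumed):

```python
import numpy as np

def hopf_period(eigvals, tol=1e-8):
    """Temporal period T = 2*pi/omega of the marginal Hopf pair, where
    omega = |Im lambda| of the eigenvalue with the largest real part.
    Raises if the leading eigenvalue is non-oscillatory (purely real)."""
    lead = eigvals[np.argmax(eigvals.real)]
    if abs(lead.imag) < tol:
        raise ValueError("leading eigenvalue is not a Hopf (oscillatory) mode")
    return 2.0 * np.pi / abs(lead.imag)
```

The same helper distinguishes the FWH from the stationary EC and FW instabilities, whose leading eigenvalues cross zero along the real axis.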
\begin{figure}[!t] \centering \includegraphics[scale=1]{lambdaq_FWH.pdf} \caption{Hopf bifurcation of P$_{2k_c}$ at $\theta=1.5$ showing (left panels) Re$[\lambda(q)]$ for different values of $\rho$: (a) $\rho=1.82$, (b) $\rho=\rho_{\rm FWH}=1.87$, and (c) $\rho=1.92$. The right panels show the corresponding eigenspectrum at $q=k_c$, the onset wave number.} \label{Hopf_insta} \end{figure} \begin{figure}[!t] \centering \includegraphics[scale=1]{OT.pdf} \caption{Time evolution of the oscillating patterns for $\theta=1.5$ and (a) $\rho=1.9$, (b) $\rho=2.1$ and (c) $\rho=2.3$.} \label{Hopf_evol} \end{figure} In Fig.~\ref{Hopf_insta} the real part of the three leading eigenvalues (left) and the full eigenspectrum at $q=k'/2=k_c$ (right) are plotted when crossing the FWH bifurcation at $\theta=1.5$ [see Figs.~\ref{parameter2} and \ref{diagrama_EC}]. In panel (a) $\rho=1.82$, and the real parts of $\lambda_2(q)$ and $\lambda_3(q)$ are both negative, with a parabolic shape centered at $q=k'/2=k_c$. In fact these eigenvalues are complex conjugates of one another, as can be seen in the full eigenspectrum for $q=k_c$ shown in the right panel. This is the situation in region III$_{\rm B}$ where P$_{2k_c}$ is FWH stable. In panel (b) $\rho=\rho_{\rm FWH}=1.87$ and the real part of the complex conjugate eigenvalues $\lambda_{2,3}(q)$ vanishes at $q=k_c$, indicating the onset of the FWH bifurcation. Finally, in (c) $\rho=1.92$, and the real part of the eigenvalues is now positive and P$_{2k_c}$ starts to oscillate, not only in time but also in space. This is the situation of region III$_{\rm C}$ shown in Fig.~\ref{parameter2}. In Fig.~\ref{Hopf_evol}, we show the resulting oscillatory states for different values of $\rho$ in region III$_{\rm C}$ when $\theta=1.5$. For $\rho=1.9$ [see panel (a)], the amplitude of P$_{2k_c}$ oscillates non-uniformly not only in time but also in space resulting in zig-zag motion whose amplitude grows with increasing $\rho$ as seen in panel (b).
Finally, in panel (c), for $\rho=2.3$, the pattern exhibits much more complex dynamics including phase slips at which peaks merge or split, resulting in fluctuations in the total number $n(t)$ of rolls in the domain at any one time. A complete description and understanding of the dynamics of these oscillatory states in time and space involves interaction with the marginally stable $q=0$ mode [Fig.~\ref{Hopf_insta} and \cite{CoxMatthews}] and is beyond the scope of this paper. \section{Conclusions} \label{sec:6} In this paper we have studied the bifurcation structure and stability properties of spatially periodic patterns arising in the LL model in the anomalous group velocity dispersion (GVD) regime. Linear stability theory predicts that the HSS solution becomes modulationally unstable at $I_0=I_c=1$ to a pattern with a critical wave number $k_c=\sqrt{2-\theta}$, namely P$_{k_c}$ \cite{lugiato_spatial_1987,Tlidi_96}. A weakly nonlinear analysis has allowed us to obtain a perturbative description of this pattern in the neighborhood of this bifurcation. From this calculation one finds that P$_{k_c}$ emerges supercritically for $\theta<41/30$ and subcritically when $\theta>41/30$, where $\theta=41/30$ corresponds to a degenerate HH point. This analytical approximation for the pattern P$_{k_c}$ around the MI point (or, equivalently, the HH point) has been used as an initial condition in a numerical continuation algorithm that allowed us to track the pattern solutions to parameter values away from the bifurcation point. Using this method, we have studied the bifurcation structure of spatially periodic patterns as a function of $\rho$ for different values of the detuning $\theta$. In doing so, we have found that for low $\theta$ patterns arising from the MI bifurcation reconnect with the HSS for larger values of the pump intensity $I_0$, at I$^{+}_{k_c}$.
In addition, harmonic patterns with wave numbers $nk_c$, $n=2,4,\dots$ also bifurcate from the HSS: P$_{2k_c}$ at I$^{\pm}_{2k_c}$, P$_{4k_c}$ at I$^{\pm}_{4k_c}$, etc. With increasing $\theta$ these two types of patterns connect pairwise in a 2:1 spatial resonance, for example P$_{k_c}$ with P$_{2k_c}$ and P$_{2k_c}$ with P$_{4k_c}$. We have referred to these bifurcation points as finite-wavelength (FW) instabilities, and computed their location via numerical Floquet analysis. This FW bifurcation originates in the codimension-two point X, which appears to organize these connections. Finally, as $\theta \rightarrow 2$ and $k_c \rightarrow 0$ the bifurcation structure of the patterns transforms into the foliated snaking of localized structures \cite{Parra_Rivas_P1}, as a pattern with infinite wavelength corresponds in effect to a single-peak localized structure in a finite-size system. We have provided an almost complete discussion of the various possible secondary bifurcations in the parameter space $(\theta,\rho)$ of the LL equation, mapping out the different dynamical regions for the patterns P$_{k_c}$ and P$_{2k_c}$. In particular, patterns corresponding to P$_{2k_c}$ were found to undergo Eckhaus and finite-wavelength-Hopf instabilities, in addition to the FW instability, and these were found to lead to rich and complex dynamics. Several significant but higher-codimension bifurcations were also identified, but a detailed study of these remains for future work. While we have focused our study on patterns with the critical wave number $k_c$ determined by the onset of the MI, and its harmonics, we have confirmed that similar behavior also occurs for patterns with wave number $k\ne k_c$ that also emerge from the HSS solution when $I_0>I_c$. Together with the instabilities described in this work, other bifurcations such as an FW with $q=k/3$ are also known to exist \cite{Perinet_Eckhaus}.
A detailed study of secondary instabilities of patterns with arbitrary wave number $k$ is, however, beyond the scope of this paper and is likewise left to future work.\\ \acknowledgments We acknowledge support from the Research Foundation--Flanders (FWO-Vlaanderen) (PPR), internal Funds from KU Leuven (PPR), the Belgian Science Policy Office (BelSPO) under Grant IAP 7-35, the Research Council of the Vrije Universiteit Brussel, and the Agencia Estatal de Investigaci\'on (AEI, Spain) and Fondo Europeo de Desarrollo Regional under Project ESoTECoS, Grants No. FIS2015-63628-C2-1-R (AEI/FEDER,UE) (DG) as well as the National Science Foundation under grant DMS-1613132 (EK).
\section{Introduction} The synchronization control problem for multi-agent systems has attracted considerable attention due to its wide range of applications \cite{OlfatiSR2007,RenW2007}, such as formation flying of unmanned air vehicles, spacecraft attitude cooperative control, distributed sensor configuration and information flow control. A great number of existing works on multi-agent systems mainly focus on the synchronization problem on networks with various topologies \cite{LiZK2010,RenW2005,WenGH2017TSMCS}, communication constraints \cite{WenGH2016,YuWW2013}, complex dynamics \cite{DuHB2017}, final state restrictions \cite{WenGH2017TSMCS2}, robustness \cite{LiZK2017} and so on. In practice, it is desirable to improve control performance measures such as the convergence rate and the control energy cost while achieving synchronization, which is typically the goal of distributed optimization. The distributed optimization problem for multi-agent systems has been widely investigated recently. Some earlier works are presented in \cite{ShiG2013,LinP2014}, where the dynamics of agents are described by integrators. Combining synchronization control methods with optimization techniques, the optimal synchronization problem was solved for double-integrator dynamics \cite{QiuZR2016} and then extended to Euler-Lagrangian systems \cite{ZhangYQ2017}, where the final synchronization state is required to minimize a global cost functional. For general linear dynamics \cite{ZhaoY2017}, cooperative optimization is achieved through local interactions by implementing edge- or node-based adaptive algorithms. To optimize the transient response of the synchronization process, the objective functional is reformulated as an integral of the synchronization error over time in \cite{WangJY2013,JiaoQ2016}. $H_\infty$ and $H_2$ control protocols are proposed in \cite{WangJY2013} for multi-agent systems to achieve synchronization with a desired transient performance.
$\mathcal{L}_2$-gain output-feedback synchronization problems for both homogeneous and heterogeneous multi-agent systems are addressed in \cite{JiaoQ2016}, with the aim of achieving synchronization while limiting the $\mathcal{L}_2$-gain of the synchronization error. When the transient response of synchronization is combined with the control energy cost, the distributed optimization problem for linear multi-agent systems becomes the distributed linear quadratic synchronization problem, where the objective functional integrates the quadratic synchronization error and the quadratic input signals. One case of distributed linear quadratic synchronization is the linear quadratic regulator (LQR), where all the agents are required to be stabilized while a quadratic cost functional is minimized \cite{CaoY2010,KempkerPL2014,NguyenDH2014}. The LQR optimal synchronization problem is studied in \cite{CaoY2010}, where the communication topology corresponds to a complete graph. In \cite{KempkerPL2014}, the overall LQR control problem is separated into independent local subproblems for coordinated linear systems, thereby deriving a lower-order distributed numerical algorithm. For an undirected communication topology, a distributed stabilizing control approach is taken in \cite{NguyenDH2014} to minimize the LQR performance index, where the weighting matrices involved have to be properly chosen. Based on the algebraic Riccati equation, optimal control protocols with diffusive couplings are presented in \cite{MontenbruckJM2015} for linear synchronization problems with quadratic cost, and the results are extended to a static output feedback scenario in \cite{ZengS2017}. For the leader-follower synchronization problem \cite{ModaresH2017}, the Hamilton-Jacobi-Bellman equation is utilized to find an optimal control protocol based on distributed estimation of the leader state for each follower agent.
It should be noted, however, that despite the considerable advances on distributed optimization, the problem of designing distributed optimal synchronization algorithms with general linear quadratic cost functionals remains a challenge. Motivated by the above observations, a distributed optimization algorithm is proposed in this paper to achieve optimal synchronization minimizing a linear quadratic cost for multi-agent systems with an undirected communication topology. By introducing some auxiliary synchronization state variables, the optimal synchronization problem is formulated as a distributed optimization problem subject to the required agent dynamics and synchronization constraints, with a linear quadratic cost functional that integrates the quadratic synchronization error and the quadratic input signals. A new distributed control protocol design framework is proposed by combining the distributed synchronization method with the alternating direction method of multipliers (ADMM). With this construction, the optimal synchronization control problem is separated into several independent subproblems: a synchronization optimization, an input minimization and a dual optimization. Then, a distributed numerical algorithm corresponding to each subproblem is designed based on the Lyapunov method and dynamic programming. Compared with the literature on distributed optimization control, the contributions of this paper are three-fold, as summarized below: \begin{enumerate}[] \item A new distributed control protocol design is proposed by combining the distributed synchronization method with the ADMM for the linear quadratic synchronization control problem. For the first time, a variant of the generalized ADMM algorithm is applied to separate the optimal synchronization control problem into several independent subproblems that can be solved in a distributed way.
A further convergence analysis shows that the control sequence generated by the proposed algorithm converges to the optimal solution of the linear quadratic synchronization control problem. This new framework is very desirable for distributed control protocol design since the communication topology and the agent dynamics are successfully separated, making the design and analysis much easier. \item The synchronization control problem for multi-agent systems with linear quadratic cost is solved by a single-agent-level algorithm. As indicated in \cite{MontenbruckJM2015}, the quadratic term of the Laplacian matrix appears in the objective functional and in the Riccati equation, which brings more difficulties in order reduction. In this paper, the optimal synchronization control problem is divided into synchronization and optimal control by the ADMM technique. In the synchronization step, the optimal synchronization state for each iteration is solved from differential equations using local information. Then, the optimal control input can be designed individually for each agent in the optimal control step with the synchronization state fixed. Therefore, the design algorithm for optimal control has the same order as each agent in both steps. Moreover, the order reduction does not introduce additional constraints on the communication topology or on the weighting matrices in the cost functional. \item The distributed numerical algorithm is valid for both homogeneous and heterogeneous linear systems, whether the eigenvalues lie inside, on, or outside the unit circle. By an application of the ADMM technique, the topology issue is removed from the optimal control input design step, so that the design algorithm can be easily applied to general heterogeneous linear systems. On the other hand, the dynamic programming scheme used in solving the optimal control input ensures a stable final synchronization state for both stable and unstable dynamics.
\end{enumerate} The rest of this paper is organized as follows. In Section \ref{s:Preliminaries and Problem Formulation}, some preliminaries and the formulation of the optimal synchronization problem with linear quadratic cost are presented. A variant of the generalized ADMM algorithm and its convergence analysis for synchronization control in a centralized manner are presented in Section \ref{s:Distributed Linear Quadratic Synchronization Control Problem Design}. Section \ref{s:subproblemalgorithm} develops distributed algorithms for the synchronization, the control design and the overall optimal synchronization problem, respectively. The performances of the proposed algorithms are illustrated by numerical examples in Section \ref{s:Simulations}, with conclusions given in Section \ref{s:conclusion}. The notations used in this paper are as follows. The sets of $n$-dimensional real vectors and $m\times n$ real matrices are denoted by $\mathbb{R}^n$ and $\mathbb{R}^{m\times n}$, $\otimes$ denotes the Kronecker product of matrices, and $\|\cdot\|$ denotes the Euclidean norm of the corresponding vector or matrix. For $x_i\in\mathbb{R}^{n_i},~A_i\in\mathbb{R}^{m_i\times n_i},~i=1,\cdots,m$, define $col\{x_1,\cdots,x_m\}\triangleq [x_1^T,\cdots,x_m^T]^T$ and let $diag\{A_1,\cdots,A_m\}$ denote the block diagonal matrix with diagonal blocks $A_1,\cdots,A_m$. \section{Preliminaries and Problem Formulation}\label{s:Preliminaries and Problem Formulation} Consider a network of $N$ heterogeneous agents with discrete-time linear dynamics in the following form \begin{equation}\label{e:systemoriginal} x_i(k+1)=A_ix_i(k)+B_iu_i(k),~i\in\{1,2,\cdots,N\}, \end{equation} where $x_i\in\mathbb{R}^n$ is the state of the $i$-th agent, $u_i\in\mathbb{R}^m$ is its control input and $A_i\in \mathbb{R}^{n\times n},~B_i\in \mathbb{R}^{n\times m}$ are constant matrices.
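As a concrete illustration, the open-loop dynamics (\ref{e:systemoriginal}) can be simulated with a short rollout; this is only a minimal sketch (not part of the design algorithm), using the neutrally unstable pair that appears later in the simulation section:

```python
import numpy as np

def rollout(A, B, x0, u_seq):
    """Iterate the open-loop dynamics x(k+1) = A x(k) + B u(k)."""
    x = np.asarray(x0, dtype=float)
    traj = [x.copy()]
    for u in u_seq:
        x = A @ x + B @ np.atleast_1d(u)
        traj.append(x)
    return np.array(traj)

# Neutrally unstable agent used later in the simulations
A = np.array([[1.0, 1.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
# With zero input the second state stays at -4 and the first drifts linearly
traj = rollout(A, B, x0=[10.0, -4.0], u_seq=np.zeros(5))
```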
The agents are assumed to exchange information through a communication network described by an undirected and connected graph $\mathcal{G}=(\mathcal{V},\mathcal{E})$, with $\mathcal{V}=\{v_1,~v_2,\cdots,~v_N\}$ being the set of nodes and $\mathcal{E}\subset\mathcal{V}\times\mathcal{V}$ being the set of edges. In the graph $\mathcal{G}$, $(v_i,~v_j)\in\mathcal{E}$ means that the $i$-th agent can exchange information with the $j$-th agent. The weighted adjacency matrix of the graph $\mathcal{G}$ is defined as $\mathcal{A}=(a_{ij})\in\mathbb{R}^{N\times N}$, where $a_{ii}=0$, and $a_{ij}=a_{ji}>0$ if $(v_i,~v_j)\in\mathcal{E}$. The Laplacian matrix of $\mathcal{G}=(\mathcal{V},\mathcal{E})$ is denoted by $\mathcal{L}=(l_{ij})\in\mathbb{R}^{N\times N}$, where $l_{ii}=\sum_{j=1}^N a_{ij}$ and $l_{ij}=-a_{ij}$ for $i\neq j$. Finally, $\mathcal{V}_i=\{j\in\mathcal{V}:\{i,j\}\in\mathcal{E}\}$ denotes the neighborhood set of node $i$. The first problem considered in this paper is to find controllers $u_i$ that guarantee the synchronization of all agents, i.e., \begin{equation}\label{e:controlgoal} \lim_{k\rightarrow \mathcal{N}}\|x_i(k)-x_j(k)\|=0,~\forall i,j\in\{1,2,...,N\}, \end{equation} where $\mathcal{N}$ denotes the total (finite) number of steps needed to achieve synchronization. Denote the final synchronization state by $z_i,~i\in\{1,2,...,N\}$. Then, the synchronization condition (\ref{e:controlgoal}) can be rewritten as \begin{equation}\label{e:controlgoal2} \begin{aligned} &\lim_{k\rightarrow \mathcal{N}} x_i(k)=z_i,~i\in\{1,2,...,N\},\\ &(\mathcal{L}\otimes I_n)Z=0, \end{aligned} \end{equation} where $Z\triangleq col\{z_1,z_2,...,z_N\}$. Define the synchronization error vector of the network as \begin{equation}\label{e:synerror} e_i(k)=x_i(k)-z_i,~~i\in\{1,2,...,N\}.
\end{equation} Let the control input sequence be $u_i \triangleq col\{u_i(0),u_i(1),\cdots,u_i(\mathcal{N}-1)\}$, $U\triangleq col\{u_1,u_2,\cdots,u_N\}$, and the cost functional \begin{equation}\label{e:cost} \begin{aligned} J(U,Z)=&\sum_{i=1}^{N}\left\{\sum_{k=0}^{\mathcal{N}-1}\left[e_i^T(k)Q_ie_i(k)+u_i^T(k)R_iu_i(k)\right]+e_i^T(\mathcal{N})Q_{i\mathcal{N}}e_i(\mathcal{N})\right\}, \end{aligned} \end{equation} for some $Q_{i\mathcal{N}},Q_i\in\mathbb{R}^{n\times n},R_i\in \mathbb{R}^{m\times m}$ with $Q_{i\mathcal{N}}\geq0,Q_i\geq0,R_i>0$. Physically, this quadratic cost functional is composed of the energies of the error signal and of the input signal. It can be used as a performance index to quantify the swiftness, vibration and energy consumption of the network synchronization. Consequently, the second problem is to design a control sequence $U^\star$ that minimizes (\ref{e:cost}) subject to (\ref{e:systemoriginal}), which implicitly achieves synchronization as $\mathcal{N}$ becomes large enough. \begin{property}\label{p:LQR} Combining the two problems mentioned above, the linear quadratic synchronization control problem can be expressed as \begin{equation}\label{e:optimizationproblem} \begin{aligned} \min \limits_{U,Z}~&J(U,Z)\\ s.t.~&x_i(k+1)=A_ix_i(k)+B_iu_i(k),~i\in\{1,2,...,N\}\\ &(\mathcal{L}\otimes I_n)Z=0. \end{aligned} \end{equation} \end{property} \begin{remark}\label{LQRsignificance} In the cost functional $J(U,Z)$, the terms $e_i^T(k)Q_ie_i(k)$ and $e_i^T(\mathcal{N})Q_{i\mathcal{N}}e_i(\mathcal{N})$ are introduced to improve the synchronization rate and the final synchronization precision, respectively. The weighting matrices $Q_i$ and $Q_{i\mathcal{N}}$ are set to be positive semi-definite so that the familiar output synchronization can be regarded as a special case of Problem \ref{p:LQR} here.
For example, if the output of agent $i$ is described by $y_i(k)=C_ix_i(k)$, the synchronization error becomes $e_{io}(k)=C_ix_i(k)-C_iz_i=C_ie_i(k)$, where $C_i$ may not be of full row rank. Thus, the output synchronization error term in the cost functional can be selected as $e_i^T(k)C_i^TC_ie_i(k)$. In this case, to achieve output synchronization, the matrices $Q_{i}$ and $Q_{i\mathcal{N}}$ can be selected as $Q_{i}=Q_{i\mathcal{N}}=C_i^TC_i\ge 0$. Moreover, $u_i^T(k)R_iu_i(k)$ acts as a penalty on the control input power. In fact, without this term the amplitude of the control input would grow unboundedly, since maintaining a smaller synchronization error requires a larger control input. Thus, the weighting matrix $R_i$ should be positive definite to restrict all the components of the control input vector within a reasonable range. In a real design, the selection of $Q_i, Q_{i\mathcal{N}}$ and $R_i$ implies a tradeoff among the synchronization rate, the final synchronization error and the control energy. \end{remark} \section{A Centralized Algorithm for Synchronization Control}\label{s:Distributed Linear Quadratic Synchronization Control Problem Design} In this section, consider the optimal linear quadratic synchronization control problem (\ref{e:optimizationproblem}). Using the method of multipliers, the augmented Lagrangian is first formulated as follows: \begin{equation}\label{e:Lagrangian} \begin{aligned} L_\rho(U,Z,\Lambda)=&\sum_{i=1}^{N}\left\{e_i^T(\mathcal{N})Q_{i\mathcal{N}}e_i(\mathcal{N})+\sum_{k=0}^{\mathcal{N}-1}\left[e_i(k)^TQ_ie_i(k)+u_i(k)^TR_iu_i(k)\right]\right\}\\ &+\Lambda^T(\mathcal{L}\otimes I_n)Z+\frac{\rho}{2}Z^T(\mathcal{L}\otimes I_n)Z, \end{aligned} \end{equation} where $\Lambda\triangleq col\{\lambda_1,\lambda_2,...,\lambda_N\}$ is the vector of Lagrange multipliers and $\rho>0$ is the augmented Lagrangian parameter. Then, a variant of the ADMM algorithm proposed in \cite{BoydS2010,GaoX2017} can be applied, which consists of the iterations (\ref{se:admmC}).
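To make the three-step structure of the iterations concrete, the following minimal sketch runs a method-of-multipliers loop on a toy stand-in for (\ref{e:optimizationproblem}): the trajectory cost is replaced by a simple static quadratic $\|Z-a\|^2$, the graph is an invented three-node path, and the dual step uses the constraint residual $\mathcal{L}Z$ (a standard method-of-multipliers variant). The agents then agree on the mean of $a$:

```python
import numpy as np

# Invented 3-node path graph: edges (1,2) and (2,3)
Lap = np.array([[ 1.0, -1.0,  0.0],
                [-1.0,  2.0, -1.0],
                [ 0.0, -1.0,  1.0]])
a = np.array([1.0, 2.0, 6.0])   # per-agent targets (toy stand-in for the trajectory cost)
rho = 1.0
lam = np.zeros(3)               # multipliers for the constraint Lap @ z = 0
z = np.zeros(3)

for _ in range(150):
    # Z-step: minimize ||z - a||^2 + lam^T Lap z + (rho/2) z^T Lap z,
    # i.e. solve the stationarity condition (2 I + rho Lap) z = 2 a - Lap lam
    z = np.linalg.solve(2.0 * np.eye(3) + rho * Lap, 2.0 * a - Lap @ lam)
    # dual step: ascend along the consensus residual
    lam += rho * (Lap @ z)

# z converges to consensus at the mean of a, here 3.0
```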
\begin{algorithm}[htb] \caption{Centralized Linear Quadratic Synchronization Control Algorithm} \label{a:ADMM} Initialize $U^0, Z^0$ and $\Lambda^0$. For $q=0,1,...$, until convergent: \begin{subequations}\label{se:admmC} \begin{equation}\label{e:admmZ} \begin{aligned} Z^{q+1}=&\arg \min \limits_{Z}\left\{L_\rho(U^{q},Z,\Lambda^q)+\textstyle\frac{1}{2}\displaystyle(Z-Z^q)^TG(Z-Z^q)\right\}, \end{aligned} \end{equation} \begin{equation}\label{e:admmU} \begin{aligned} U^{q+1}=&\arg \min\limits_{U}\left\{L_\rho(U,Z^{q+1},\Lambda^q)+\textstyle\frac{1}{2}\displaystyle(U-U^q)^TH(U-U^q)\right\}, \end{aligned} \end{equation} \begin{equation}\label{e:admmL} \lambda^{q+1}_i=\lambda^q_i+\rho z_i^{q+1},~~i\in\{1,2,...,N\}, \end{equation} \end{subequations} where $G\triangleq diag\{G_1,G_2,\cdots,G_N\}$ and $H\triangleq diag\{I_\mathcal{N}\otimes H_1,I_\mathcal{N}\otimes H_2,\cdots,I_\mathcal{N}\otimes H_N\}$. \end{algorithm} In Algorithm \ref{a:ADMM}, the matrices $G_i$ and $H_i$ are chosen to be positive definite. This algorithm divides the linear quadratic synchronization control problem (\ref{e:optimizationproblem}) into a $Z$-minimization step (\ref{e:admmZ}), a $U$-minimization step (\ref{e:admmU}) and a dual variable update step (\ref{e:admmL}), which separates the node dynamics from the communication topology. Therefore, step (\ref{e:admmU}) can be regarded as a linear quadratic tracking problem with respect to the individual subsystems, while steps (\ref{e:admmZ}) and (\ref{e:admmL}) are used to achieve synchronization over the communication topology. In fact, Algorithm \ref{a:ADMM} is a variant of the generalized ADMM proposed in \cite{GaoX2017}. The convergence analysis of Algorithm \ref{a:ADMM} is presented in the following theorem, whose proof can be found in the Appendix. \begin{theorem}\label{t:convergence} Suppose that $Q_{i\mathcal{N}}\geq0,Q_i\geq0,R_i>0$ and the final time step $\mathcal{N}$ is finite.
Then, the sequence $\{U^q,Z^q\}$ generated by Algorithm \ref{a:ADMM} converges to an optimal solution if the following conditions are satisfied: \begin{equation}\label{e:convergencecon} G_i>0, H_i>\left({L_\delta}+\frac{L_\delta^2}{2\sigma_{min}\{R_i\}}\right)I_m, \end{equation} where $L_\delta$ is the Lipschitz constant for the gradient of the cost functional. \end{theorem} \begin{remark}\label{r:referadmm} Theorem \ref{t:convergence} extends the existing results on the ADMM algorithm to deal with the distributed linear quadratic synchronization control problem. Compared with the existing studies of distributed optimization control \cite{QiuZR2016}, the objective functional here is not necessarily separable across variables, i.e., the coupling functional $J_1(U,Z)$ appears in the cost functional. The objective becomes nonseparable because not only the final synchronization state but also the time accumulation of the synchronization error and control energy are considered here. This nonseparable objective functional makes it hard to directly apply the classical ADMM technique \cite{BoydS2010}; therefore, its variant is proposed as the new Algorithm \ref{a:ADMM}. It is also worth noticing that the method leading to Theorem \ref{t:convergence} is, in essence, consistent with the generalized ADMM method proposed in \cite{GaoX2017}, where a convex optimization problem with a nonseparable objective functional is studied. \end{remark} \section{Distributed Synchronization Control}\label{s:subproblemalgorithm} Based on the convergence result presented in Theorem \ref{t:convergence}, the linear quadratic synchronization control problem (\ref{e:optimizationproblem}) is successfully divided into a $Z$-minimization step (\ref{e:admmZ}) and a $U$-minimization step (\ref{e:admmU}) in Algorithm \ref{a:ADMM}, which, however, is still centralized. In this section, distributed algorithms for steps (\ref{e:admmZ}) and (\ref{e:admmU}) are derived respectively.
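The $Z$-step is handled below (Theorem \ref{t:step1}) by integrating a gradient-flow differential equation until it settles at its equilibrium, which is the KKT point of (\ref{e:admmZ}). A minimal sketch of this idea with invented stand-in matrices: forward-Euler integration of $\dot z = -(K+\rho\mathcal{L})z + c$, whose equilibrium solves $(K+\rho\mathcal{L})z = c$.

```python
import numpy as np

# Invented 3-node path-graph Laplacian
Lap = np.array([[ 1.0, -1.0,  0.0],
                [-1.0,  2.0, -1.0],
                [ 0.0, -1.0,  1.0]])
K = 2.0 * np.eye(3)                # stand-in for 2N Q + 2Q_N + G (positive definite)
rho = 1.0
c = np.array([2.0, 4.0, 12.0])     # stand-in for the constant forcing terms

z = np.zeros(3)
dt = 0.05                          # Euler step; stable since eig(K + rho*Lap) <= 5
for _ in range(2000):
    z = z + dt * (-(K + rho * Lap) @ z + c)

# the flow converges to the KKT point of the quadratic Z-subproblem
z_star = np.linalg.solve(K + rho * Lap, c)
```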
\begin{theorem}\label{t:step1} If the communication topology is undirected and connected, then the optimal solution of (\ref{e:admmZ}) can be obtained at the equilibrium point of \begin{equation}\label{e:step1} \begin{aligned} \dot z_i=&-\left(2\mathcal{N}Q_i+2Q_{i\mathcal{N}}+G_i\right)z_i-\sum_{j\in\mathcal{V}_i}\left[\rho(z_i-z_j)+(\lambda_i^q-\lambda_j^q)\right]+2Q_{i\mathcal{N}}x_i^q(\mathcal{N})+2\sum_{k=0}^{\mathcal{N}-1}Q_ix_i^q(k)+G_iz_i^q. \end{aligned} \end{equation} \end{theorem} \begin{proof} First of all, rewrite (\ref{e:step1}) in a compact form: \begin{equation}\label{e:step1compact} \begin{aligned} \dot Z=&-\left[2\mathcal{N}Q+2Q_\mathcal{N}+G+\rho(\mathcal{L}\otimes I_n)\right]Z-(\mathcal{L}\otimes I_n)\Lambda^q+2\sum_{k=0}^{\mathcal{N}-1}QX^q(k)+2Q_\mathcal{N}X^q(\mathcal{N})+GZ^q, \end{aligned} \end{equation} where $X^q(k)=col\{x_1^q(k),...,x_N^q(k)\}$, $Q=diag\{Q_1,...,Q_N\}$ and $Q_\mathcal{N}=diag\{Q_{1\mathcal{N}},...,Q_{N\mathcal{N}}\}$. Consider the Lyapunov function $V_Z=\frac{1}{2}Z^TZ$, which, along the homogeneous part of (\ref{e:step1compact}), has the time derivative \begin{equation}\label{e:dV_Z} \dot V_Z=-Z^T\left[2\mathcal{N}Q+2Q_\mathcal{N}+G+\rho(\mathcal{L}\otimes I_n)\right]Z, \end{equation} which is negative definite because $Q\ge0,Q_\mathcal{N}\ge0,(\mathcal{L}\otimes I_n)\ge0,G>0$. Consequently, the solution of the differential equation (\ref{e:step1compact}) converges to its equilibrium point, which satisfies the KKT condition \cite{Boyd2004} of (\ref{e:admmZ}): \begin{equation}\label{e:equilibrium} \begin{aligned} 0=&\left[2\mathcal{N}Q+2Q_\mathcal{N}+G+\rho(\mathcal{L}\otimes I_n)\right]Z+(\mathcal{L}\otimes I_n)\Lambda^q-2\sum_{k=0}^{\mathcal{N}-1}QX^q(k)-2Q_\mathcal{N}X^q(\mathcal{N})-GZ^q. \end{aligned} \end{equation} In conclusion, the solution of algorithm (\ref{e:step1}) converges to the optimal solution of (\ref{e:admmZ}) since $2\mathcal{N}Q+2Q_\mathcal{N}+G+\rho(\mathcal{L}\otimes I_n)$ is positive definite.
\end{proof} The following theorem presents the optimal controller for each agent individually to solve the linear quadratic synchronization control problem. \begin{theorem}\label{t:step2} Given $z_i^{q+1}, u_i^q,~i\in\{1,...,N\}$, the cost functional $L_\rho(U,Z^{q+1},\Lambda^q)$ in (\ref{e:admmU}) is minimized by, for each step $k=\mathcal{N}-1,\mathcal{N}-2,...,0$, the control input \begin{equation}\label{e:controller} \begin{aligned} u_i^\star(k)=&-U_i(k)\left[V_i(k)x_i(k)-W_i(k)\right],~i\in\{1,2,...,N\}, \end{aligned} \end{equation} where \begin{equation}\label{e:step1iteration} \begin{aligned} U_i&(k)=(R_i+H_i+B_i^TS_{i1}(k+1)B_i)^{-1},\\ V_i&(k)=B_i^TS_{i1}(k+1)A_i,\\ W_i&(k)=-\frac{1}{2}B_i^TS_{i2}^T(k+1)+H_iu_i^q(k),\\ S_{i1}&(k)=Q_i+V_i^T(k)U_i^T(k)(R_i+H_i)U_i(k)V_i(k)+\left[A_i-B_iU_i(k)V_i(k)\right]^TS_{i1}(k+1)\left[A_i-B_iU_i(k)V_i(k)\right],\\ S_{i2}&(k)=2W_i^T(k)U_i^T(k)\left[B_i^TS_{i1}(k+1)A_i-(R_i+H_i)U_i(k)V_i(k)-B_i^TS_{i1}(k+1)B_iU_i(k)V_i(k)\right]\\ &+S_{i2}(k+1)\left[A_i-B_iU_i(k)V_i(k)\right]-2(z_i^{q+1})^TQ_i+2u_i^q(k)^TH_iU_i(k)V_i(k),\\ S_{i3}&(k)=(z_i^{q+1})^TQ_iz_i^{q+1}+W_i^T(k)U_i^T(k)\left[R_i+H_i+B_i^TS_{i1}(k+1)B_i\right]U_i(k)W_i(k)\\ &+S_{i2}(k+1)B_iU_i(k)W_i(k)+S_{i3}(k+1)+u_i^q(k)^TH_i\left[u_i^q(k)-2U_i(k)W_i(k)\right],\\ S_{i1}&(\mathcal{N})=Q_{i\mathcal{N}},~S_{i2}(\mathcal{N})=-2(z_i^{q+1})^TQ_{i\mathcal{N}},\\ S_{i3}&(\mathcal{N})=(z_i^{q+1})^TQ_{i\mathcal{N}}z_i^{q+1}. \end{aligned} \end{equation} The optimal objective value is given by \begin{equation}\label{e:optimalvalue} L_\rho^{\star q}=\sum_{i=1}^{N}L_i^\star(0), \end{equation} where $L_i^\star(0)=x_i^T(0)S_{i1}(0)x_i(0)+S_{i2}(0)x_i(0)+S_{i3}(0)$ and $x_i(0)$ is the initial state of the $i$-th agent, $i\in\{1,2,...,N\}$. \end{theorem} \begin{proof} Mathematical induction and dynamic programming are used in this proof. First, (\ref{e:step1iteration}) is verified for $k=\mathcal{N}-1$.
According to the optimization principle \cite{BellmanR1972}, the optimal control input $u_i^\star(\mathcal{N}-1)$ must satisfy \begin{equation}\label{e:optlast} \begin{aligned} u_i^\star(\mathcal{N}-1)=\arg \min \limits_{u_i(\mathcal{N}-1)} J_i(\mathcal{N}-1), \end{aligned} \end{equation} where \begin{equation}\label{e:Jn1} \begin{aligned} J_i(\mathcal{N}&-1)=(x_i(\mathcal{N})-z_i^{q+1})^TQ_{i\mathcal{N}}(x_i(\mathcal{N})-z_i^{q+1})+(x_i(\mathcal{N}-1)-z_i^{q+1})^TQ_{i}(x_i(\mathcal{N}-1)-z_i^{q+1})\\ &+u_i(\mathcal{N}-1)^TR_iu_i(\mathcal{N}-1)+[u_i(\mathcal{N}-1)-u_i^q(\mathcal{N}-1)]^TH_i[u_i(\mathcal{N}-1)-u_i^q(\mathcal{N}-1)]. \end{aligned} \end{equation} Substituting (\ref{e:systemoriginal}) into (\ref{e:Jn1}) and taking the gradient with respect to $u_i(\mathcal{N}-1)$, one obtains \begin{equation}\label{e:Jn1gradient} \begin{aligned} \nabla J_i(\mathcal{N}&-1)=2B_i^TQ_{i\mathcal{N}}\left[A_ix_i(\mathcal{N}-1)+B_iu_i(\mathcal{N}-1)-z_i^{q+1}\right]+2R_iu_i(\mathcal{N}-1)+2H_i[u_i(\mathcal{N}-1)-u_i^q(\mathcal{N}-1)]. \end{aligned} \end{equation} Then, the KKT condition of (\ref{e:Jn1gradient}) can be derived, as \begin{equation}\label{e:Jn1KKT} \begin{aligned} u_i^\star(\mathcal{N}-1)=&-(B_i^TQ_{i\mathcal{N}}B_i+R_i+H_i)^{-1}\left[B_i^TQ_{i\mathcal{N}}A_ix_i(\mathcal{N}-1)-B_i^TQ_{i\mathcal{N}}z_i^{q+1}-H_iu_i^q(\mathcal{N}-1)\right]\\ =&-U_i(\mathcal{N}-1)\left[V_i(\mathcal{N}-1)x_i(\mathcal{N}-1)-W_i(\mathcal{N}-1)\right]. \end{aligned} \end{equation} Obviously, the unique solution $u_i^\star(\mathcal{N}-1)$ presented by (\ref{e:Jn1KKT}) leads to the minimum cost $J^\star_i(\mathcal{N}-1)$ since $B_i^TQ_{i\mathcal{N}}B_i+R_i+H_i>0$. Then, substituting (\ref{e:Jn1KKT}) into (\ref{e:Jn1}), one can get the minimum cost as \begin{equation}\label{e:Jn1min} \begin{aligned} J_i^\star(\mathcal{N}-1)=&x_i^T(\mathcal{N}-1)S_{i1}(\mathcal{N}-1)x_i(\mathcal{N}-1)+S_{i2}(\mathcal{N}-1)x_i(\mathcal{N}-1)+S_{i3}(\mathcal{N}-1). 
\end{aligned} \end{equation} Therefore, (\ref{e:controller})-(\ref{e:step1iteration}) are satisfied for $k=\mathcal{N}-1$. Now, assume that (\ref{e:controller})-(\ref{e:step1iteration}) are correct for $k=\mathcal{M}$, i.e., \begin{equation}\label{e:optM} \begin{aligned} u_i^\star(\mathcal{M})=&-U_i(\mathcal{M})\left[V_i(\mathcal{M})x_i(\mathcal{M})-W_i(\mathcal{M})\right],\\ L_i^\star(\mathcal{M})=&x_i^T(\mathcal{M})S_{i1}(\mathcal{M})x_i(\mathcal{M})+S_{i2}(\mathcal{M})x_i(\mathcal{M})+S_{i3}(\mathcal{M}), \end{aligned} \end{equation} and that $S_{i1}(\mathcal{M})$ is positive semi-definite. From the optimization principle, again, it follows that the optimal control input $u_i^\star(\mathcal{M}-1)$ must minimize $J_i(\mathcal{M}-1)$, where \begin{equation}\label{e:JnM} \begin{aligned} J_i(\mathcal{M}-1)=&L_i^\star(\mathcal{M})+(x_i(\mathcal{M}-1)-z_i^{q+1})^TQ_{i}(x_i(\mathcal{M}-1)-z_i^{q+1})+u_i(\mathcal{M}-1)^TR_iu_i(\mathcal{M}-1)\\ &+[u_i(\mathcal{M}-1)-u_i^q(\mathcal{M}-1)]^TH_i[u_i(\mathcal{M}-1)-u_i^q(\mathcal{M}-1)]. \end{aligned} \end{equation} Substituting (\ref{e:systemoriginal}) into (\ref{e:JnM}) and taking the gradient with respect to $u_i(\mathcal{M}-1)$, one obtains \begin{equation}\label{e:JnMgradient} \begin{aligned} \nabla J_i(\mathcal{M}-1)=&2B_i^TS_{i1}(\mathcal{M})\left[A_ix_i(\mathcal{M}-1)+B_iu_i(\mathcal{M}-1)\right]+B_i^TS_{i2}^T(\mathcal{M})+2(R_i+H_i)u_i(\mathcal{M}-1)-2H_iu_i^q(\mathcal{M}-1). \end{aligned} \end{equation} Then, the KKT condition of (\ref{e:JnMgradient}) can be obtained, as \begin{equation}\label{e:JnMKKT} \begin{aligned} u_i^\star(\mathcal{M}-1)=&-(B_i^TS_{i1}(\mathcal{M})B_i+R_i+H_i)^{-1}[B_i^TS_{i1}(\mathcal{M})A_ix_i(\mathcal{M}-1)+\frac{1}{2}B_i^TS_{i2}^T(\mathcal{M})-H_iu_i^q(\mathcal{M}-1)]\\ =&-U_i(\mathcal{M}-1)\left[V_i(\mathcal{M}-1)x_i(\mathcal{M}-1)-W_i(\mathcal{M}-1)\right].
\end{aligned} \end{equation} Obviously, the unique solution $u_i^\star(\mathcal{M}-1)$ presented by (\ref{e:JnMKKT}) leads to the minimum cost $J^\star_i(\mathcal{M}-1)$ since $B_i^TS_{i1}(\mathcal{M})B_i+R_i+H_i>0$. Then substituting (\ref{e:JnMKKT}) into (\ref{e:JnM}), one can get the minimum cost as \begin{equation}\label{e:JnMmin} \begin{aligned} J_i^\star(\mathcal{M}-1)&=x_i^T(\mathcal{M}-1)S_{i1}(\mathcal{M}-1)x_i(\mathcal{M}-1)+S_{i2}(\mathcal{M}-1)x_i(\mathcal{M}-1)+S_{i3}(\mathcal{M}-1), \end{aligned} \end{equation} which indicates that (\ref{e:controller})-(\ref{e:step1iteration}) are satisfied for $k=\mathcal{M}-1$. In conclusion, the control input sequence $u_i^\star(k),k=0,1,...,\mathcal{N}-1$, minimizes the cost functional $L_\rho(U,Z^{q+1},\Lambda^q)$ in (\ref{e:admmU}) subject to (\ref{e:systemoriginal}), and the optimal objective value can be calculated by (\ref{e:optimalvalue}). \end{proof} With the results presented above, a distributed algorithm is established for the linear quadratic synchronization control problem. \begin{algorithm}[htb] \caption{Distributed Linear Quadratic Synchronization Control Design Algorithm} \label{a:disLQR} \begin{algorithmic}[1] \Require Initialize $q=0,~\rho>0,~x_i^0(k)\in \mathbb{R}^n,~u_i^0(k)\in \mathbb{R}^m,~z^0_i\in \mathbb{R}^n,~\lambda_i^0, G_i>0,~H_i>({L_\delta}+\frac{L_\delta^2}{2\sigma_{min}\{R_i\}})I_m$, for all $i\in\{1,2,\cdots,N\},~k=0,1,2,\cdots,\mathcal{N}$. Set the stop condition $N_q>0$. 
For subsystem $i\in\{1,2,\cdots,N\}$ do in parallel: \Repeat \State Solve (\ref{e:step1}) with communication to obtain the equilibrium point $z_{ie}$; \State Update the synchronization state $z_i^{q+1}=z_{ie}$; \For{$k=\mathcal{N}-1$ to $0$} \State Compute the control input $u_i(k)$ from (\ref{e:controller}); \EndFor \State Update the state $x_i^{q+1}(k),~k=1,2,\cdots,\mathcal{N}$ from (\ref{e:systemoriginal}); \State Update the Lagrangian multiplier $\lambda_i^{q+1}$ according to (\ref{e:admmL}); \State Set $q=q+1$; \Until{$q>N_q$} \end{algorithmic} \end{algorithm} \section{Examples with Simulations}\label{s:Simulations} \subsection{A Homogeneous System}\label{s:HomogeneousSim} A scenario of three homogeneous agents is considered first. The edge set of the communication topology is $\{(1,2),(1,3)\}$ and the corresponding Laplacian matrix is $$\mathcal{L}= \begin{bmatrix} 2 &-1 &-1\\ -1 &1 &0\\ -1 &0 &1 \end{bmatrix}. $$ Let the agents in (\ref{e:systemoriginal}) be neutrally unstable systems with \begin{equation*}\label{e:neutrallyunstable} A_i= \begin{bmatrix} 1 &1\\ 0 &1 \end{bmatrix},~ B_i= \begin{bmatrix} 0\\ 1 \end{bmatrix},~i\in\{1,2,3\}. \end{equation*} The weighting matrices in the cost functional (\ref{e:cost}) are set as $Q_i=I_2,~Q_{i\mathcal{N}}=I_2,~R_i=1,~i\in\{1,2,3\}$, and $\mathcal{N}=40$. Choose the parameters in Algorithm \ref{a:disLQR} as $G_i=I_2,~H_i=100,~i\in\{1,2,3\}$ and $\rho=1$. The initial condition is taken as $x_1(0)=[0,0]^T,~x_2(0)=[10,-4]^T,~x_3(0)=[-20,10]^T$, $u_i^0(k)=0,~z_i^0=[0,0]^T,~i\in\{1,2,3\}$. For comparison, the static state-feedback (SSF) method proposed in \cite{LZK2011} is also simulated to verify the effectiveness of Algorithm \ref{a:disLQR} derived in this paper. Define the trajectories of the synchronization error and the control input norm as $e(k)=(\mathcal{L}\otimes I_n)\times col\{x_1(k),x_2(k),x_3(k)\}$ and $\|u(k)\|=\|col\{u_1(k),u_2(k),u_3(k)\}\|$, respectively.
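The Laplacian matrix displayed above can be assembled mechanically from the edge set; a small helper (0-based node indices, unit edge weights):

```python
import numpy as np

def laplacian(n, edges, weight=1.0):
    """Graph Laplacian L = D - A for an undirected graph given as an edge list."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] -= weight
        L[j, i] -= weight
        L[i, i] += weight
        L[j, j] += weight
    return L

# Edge set {(1,2),(1,3)} of the example, written 0-based
L = laplacian(3, [(0, 1), (0, 2)])
# L equals the matrix displayed above:
# [[ 2, -1, -1],
#  [-1,  1,  0],
#  [-1,  0,  1]]
```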
The response trajectories generated by Algorithm \ref{a:disLQR} and the SSF method are depicted in Fig. \ref{f:neutrallyunstable}, from which it can be seen that the controller designed by Algorithm \ref{a:disLQR} achieves synchronization faster and requires less control energy. \begin{figure}[!htbp] \centering \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[scale=0.5]{neuunstable_xe.eps} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[scale=0.5]{neuunstable_nu.eps} \end{minipage} \caption{The curves of $e(k)$ and $\|u(k)\|$ with neutrally unstable agents} \label{f:neutrallyunstable} \end{figure} In addition, more scenarios such as stable, unstable and neutrally stable dynamics are studied to give a more comprehensive view of the advantages of Algorithm \ref{a:disLQR}. A quantitative comparison is displayed in Table \ref{t:comparison}. Here, the relative cost functional is denoted as $$J=e^T(\mathcal{N})Q_{\mathcal{N}}e(\mathcal{N})+\sum_{k=0}^{\mathcal{N}-1}\left[e(k)^TQe(k)+u(k)^TRu(k)\right],$$ where $R=diag\{R_1,R_2,R_3\},~u(k)=col\{u_1(k),u_2(k),u_3(k)\}$. In all scenarios, Algorithm \ref{a:disLQR} achieves a smaller relative cost, and the more unstable the dynamics are, the greater the advantage of the new technique. From the unstable scenarios, it is interesting to see that Algorithm \ref{a:disLQR} always yields a stable solution even if the unstable eigenvalues are far from the unit circle.
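The scenario labels used in Table \ref{t:comparison} follow from the spectra of the corresponding $A_i$ matrices; a quick check (our own NumPy sketch, not part of the paper) confirms where the eigenvalues sit relative to the unit circle:

```python
import numpy as np

# A_i matrices from the five comparison scenarios
scenarios = {
    "stable":             [[0.2, 1], [0, 0.2]],
    "neutrally stable":   [[0.2, 1], [0, 1]],
    "neutrally unstable": [[1,   1], [0, 1]],   # Jordan block at 1
    "unstable 1":         [[1.2, 1], [0, 1]],
    "unstable 2":         [[2,   1], [0, 1]],
}
for name, A in scenarios.items():
    eig = np.linalg.eigvals(np.array(A, dtype=float))
    radius = max(abs(eig))    # spectral radius decides the label
    print(f"{name:18s} spectral radius = {radius:.1f}")
```

Spectral radius below one gives the stable case, on the unit circle the neutrally stable/unstable cases (the latter with a non-semisimple eigenvalue), and above one the unstable cases where the SSF baseline diverges.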
\begin{table*}[!htbp] \centering \caption{Quantitative Comparison}\label{t:comparison} \begin{tabular}{l|cc|c|cl} \hline \hline Scenario & $A_i$ & $B_i$ & Method & Relative Cost Functional & Synchronization State\\ \hline \multirow{2}{*}{Stable} & \multirow{2}{*}{$\begin{bmatrix} 0.2 & 1\\ 0 & 0.2 \end{bmatrix}$} & \multirow{2}{*}{$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$} & ADMM & 814.93 & $[0.004,~0.02]^T$ \\ & & & SSF & 1416.82 & $[0,~0]^T$ \\ \hline \multirow{2}{*}{Neutrally Stable} & \multirow{2}{*}{$\begin{bmatrix} 0.2 & 1\\ 0 & 1 \end{bmatrix}$} & \multirow{2}{*}{$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$} & ADMM & 907.71 & $[0.33,~0.28]^T$ \\ & & & SSF & 2219.66 & $[-4.12,~-3.29]^T$ \\ \hline \multirow{2}{*}{Neutrally Unstable} & \multirow{2}{*}{$\begin{bmatrix} 1 & 1\\ 0 & 1 \end{bmatrix}$} & \multirow{2}{*}{$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$} & ADMM & 1039.36 & $[-1.69,~0.02]^T$ \\ & & & SSF & 6924.04 & $[-\infty,~-3.29]^T$ \\ \hline \multirow{2}{*}{Unstable1} & \multirow{2}{*}{$\begin{bmatrix} 1.2 & 1\\ 0 & 1 \end{bmatrix}$} & \multirow{2}{*}{$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$} & ADMM & 1.38e3 & $[-2.26,~0.47]^T$ \\ & & & SSF & 2.25e4 & $[-\infty,~-\infty]^T$ \\ \hline \multirow{2}{*}{Unstable2} & \multirow{2}{*}{$\begin{bmatrix} 2 & 1\\ 0 & 1 \end{bmatrix}$} & \multirow{2}{*}{$\begin{bmatrix} 0 \\ 1 \end{bmatrix}$} & ADMM & 8.74e3 & $[-4.28,~4.26]^T$ \\ & & & SSF & NAN & NAN \\ \hline \end{tabular} \end{table*} \subsection{A Heterogeneous System} Now, it is to demonstrate the effectiveness of Algorithm \ref{a:disLQR} in the heterogeneous scenario. 
Consider a network of agents described by (\ref{e:systemoriginal}) with \begin{equation*} \begin{aligned} &A_1= \begin{bmatrix} 1.2 &1 &2\\ 0 &2.4 &2\\ 2 &0 &1.5 \end{bmatrix},~ A_2= \begin{bmatrix} 0 &1.3 &-0.7\\ 0.5 &0.85 &0.85\\ 0.5 &-0.65 &1.35 \end{bmatrix}, A_3= \begin{bmatrix} 0.3 &1 &0\\ 0 &1.2 &1\\ 0 &0 &0.4 \end{bmatrix}, \\&B_1= \begin{bmatrix} 0 & 1 & 1\\ 2 & 0 & -1 \end{bmatrix}^T, B_2= \begin{bmatrix} 0 & 0 & 1\\ 0 & 2 & 0 \end{bmatrix}^T, B_3= \begin{bmatrix} 0 & 1 & 0\\ -1 & 0 & -2 \end{bmatrix}^T. \end{aligned} \end{equation*} \begin{figure}[!htb] \centering \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[scale=0.5]{x.eps} \end{minipage} \begin{minipage}[c]{0.45\textwidth} \centering \includegraphics[scale=0.5]{u.eps} \end{minipage} \caption{The curves of output and input signals for each agent} \label{f:heterogeneous} \end{figure} Assume that the communication topology is the same as that in Subsection \ref{s:HomogeneousSim}. The weighted matrices in cost functional (\ref{e:cost}) are set as $Q_1=diag\{0,8,13\},~Q_{1\mathcal{N}}=diag\{0,8,1\},~Q_2=diag\{0,3,5\},~Q_{2\mathcal{N}}=diag\{0,5,1\},~Q_3=diag\{0,4,15\},~Q_{3\mathcal{N}}=diag\{0,12,5\},~R_i=I_2,~i\in\{1,2,3\}$, and $\mathcal{N}=50$. In this scenario, the weighted matrices $Q_i$ and $Q_{i\mathcal{N}}$ are selected as positive semi-definite matrices, corresponding to the output $y_i=[0,1,1]x_i,~i\in\{1,2,3\}$, to demonstrate the output synchronization ability of the proposed algorithm. Choose the parameters in Algorithm \ref{a:disLQR} as $G_i=I_3,~H_i=1e3\times I_2,~i\in\{1,2,3\}$, and $\rho=1$. The initial condition is taken as $x_1(0)=[-5,20,0]^T,~x_2(0)=[1,-4,20]^T,~x_3(0)=[-2,-20,3]^T$, $u_i^0(k)=[0,0]^T,~z_i^0=[0,0,0]^T,~i\in\{1,2,3\}$. The trajectories of the last two components of the states and the control inputs are shown in Fig.
\ref{f:heterogeneous}, which indicates that the outputs of the agents synchronize rapidly and the control inputs converge (to different values) to maintain the synchronization. \section{Conclusions}\label{s:conclusion} The distributed optimal synchronization problem with linear quadratic cost is solved in this paper for multi-agent systems with an undirected communication topology. The optimal synchronization problem is formulated as a distributed optimization problem with a linear quadratic cost functional that integrates the energies of the synchronization error signal and of the input signal. By the application of a modified ADMM technique, the optimal synchronization control problem is separated into the synchronization step and the optimal control step. These two subproblems are then solved by distributed numerical algorithms based on the Lyapunov method and dynamic programming. The performance of the proposed design is demonstrated by numerical examples for both homogeneous and heterogeneous linear multi-agent systems with either stable or unstable dynamics. \appendices \section{Proof of Theorem \ref{t:convergence}} Before proceeding to the convergence analysis, a useful lemma is first introduced. \begin{lemma} \cite{DroriY2015}\label{l:convexinequality} For any convex function $f$ on $\mathbb{R}^m$, which is continuously differentiable with gradient $\nabla f$ satisfying the Lipschitz continuity condition \begin{equation}\label{Lipschitzf} \|\nabla f(x)-\nabla f(y)\|\le L_f\|x-y\|,~~\forall x,y\in \mathbb{R}^m, \end{equation} one has \begin{equation}\label{e:convexeq} \begin{aligned} f(x)\le& f(y)+\nabla f(z)^T(x-y)+\frac{L_f}{2}\|x-z\|^2,\forall x,y,z\in \mathbb{R}^m. \end{aligned} \end{equation} \end{lemma} Next, the proof of Theorem \ref{t:convergence} is presented.
\begin{proof} By substituting (\ref{e:systemoriginal}) into (\ref{e:cost}), one has \begin{equation}\label{e:costpar} J(U,Z)=J_1(U,Z)+J_2(U), \end{equation} where \begin{equation}\label{e:costJ1J2} \begin{aligned} J_1&(U,Z)=\sum_{i=1}^{N}\left\{\left(A_i^\mathcal{N}x_i(0)+\sum_{j=0}^{\mathcal{N}-1}A_i^{\mathcal{N}-1-j}B_iu_i(j)\right.-z_i\right)^TQ_{i\mathcal{N}}\left(A_i^\mathcal{N}x_i(0)+\sum_{j=0}^{\mathcal{N}-1}A_i^{\mathcal{N}-1-j}B_iu_i(j)-z_i\right)\\ &+\sum_{k=0}^{\mathcal{N}-1}\left[\left(A_i^kx_i(0)+\sum_{j=0}^{k-1}A_i^{k-1-j}B_iu_i(j)-z_i\right)^T\left.Q_{i}\left(A_i^kx_i(0)+\sum_{j=0}^{k-1}A_i^{k-1-j}B_iu_i(j)-z_i\right)\right]\right\},\\ J_2&(U)=\sum_{i=1}^{N}\sum_{k=0}^{\mathcal{N}-1}u_i^T(k)R_iu_i(k). \end{aligned} \end{equation} It is easy to see that $J_1(U,Z)$ is convex with respect to $u_i,z_i$ and $J_2(U)$ is strongly convex with respect to $u_i$ since $Q_{i\mathcal{N}}\geq0,Q_i\geq0,R_i>0$. Then, the gradient of $J_1(U,Z)$ can be obtained as \begin{equation}\label{gradientui} \nabla_{u_i}J_1=2 \begin{bmatrix} \begin{aligned} &\left(A_i^{\mathcal{N}-1}B_i\right)^TQ_{i\mathcal{N}}\left(A_i^\mathcal{N}x_i(0)+\sum_{j=0}^{\mathcal{N}-1}A_i^{\mathcal{N}-1-j}B_iu_i(j)-z_i\right)\\ &+\sum_{k=1}^{\mathcal{N}-1}\left(A_i^{k-1}B_i\right)^TQ_i\left(A_i^kx_i(0)+\sum_{j=0}^{k-1}A_i^{k-1-j}B_iu_i(j)-z_i\right) \end{aligned}\\[10ex] \begin{aligned} \left(A_i^{\mathcal{N}-2}B_i\right)^TQ_{i\mathcal{N}}\left(A_i^\mathcal{N}x_i(0)+\sum_{j=0}^{\mathcal{N}-1}A_i^{\mathcal{N}-1-j}B_iu_i(j)-z_i\right)\\ +\sum_{k=2}^{\mathcal{N}-1}\left(A_i^{k-2}B_i\right)^TQ_i\left(A_i^kx_i(0)+\sum_{j=0}^{k-1}A_i^{k-1-j}B_iu_i(j)-z_i\right) \end{aligned}\\[5ex] \vdots\\ B_i^TQ_{i\mathcal{N}}\left(A_i^\mathcal{N}x_i(0)+\sum_{j=0}^{\mathcal{N}-1}A_i^{\mathcal{N}-1-j}B_iu_i(j)-z_i\right) \end{bmatrix}, \end{equation} \begin{equation}\label{gradientzi} \begin{aligned} 
\nabla_{z_i}J_1=&-2Q_{i\mathcal{N}}\left(A_i^\mathcal{N}x_i(0)+\sum_{j=0}^{\mathcal{N}-1}A_i^{\mathcal{N}-1-j}B_iu_i(j)-z_i\right)-Q_i\left(x_i(0)-z_i\right)\\ &-2\sum_{k=1}^{\mathcal{N}-1}Q_i\left(A_i^kx_i(0)+\sum_{j=0}^{k-1}A_i^{k-1-j}B_iu_i(j)-z_i\right), \end{aligned} \end{equation} which can be rewritten in a compact form as \begin{equation}\label{gradient} \setcounter{equation}{33} \begin{aligned} \nabla J_1= \begin{bmatrix} \nabla_U J_1\\ \nabla_Z J_1 \end{bmatrix}=&L_0(A_i,B_i,Q_i,Q_{i\mathcal{N}}) \begin{bmatrix} x_1(0)\\ \vdots\\ x_N(0) \end{bmatrix} +L_\Delta(A_i,B_i,Q_i,Q_{i\mathcal{N}}) \begin{bmatrix} U\\ Z \end{bmatrix}. \end{aligned} \end{equation} Therefore, the cost functional $J_1(U,Z)$ satisfies \begin{equation}\label{e:Lipschitz} \begin{aligned} \|\nabla J_1(U_1,Z_1)-\nabla J_1(U_2,Z_2)\|&=\left\| L_\Delta(A_i,B_i,Q_i,Q_{i\mathcal{N}},R_i) \begin{bmatrix} U_1-U_2\\ Z_1-Z_2 \end{bmatrix}\right\|\\ &\le\|L_\Delta(A_i,B_i,Q_i,Q_{i\mathcal{N}},R_i)\| \left\| \begin{bmatrix} U_1\\ Z_1 \end{bmatrix}- \begin{bmatrix} U_2\\ Z_2 \end{bmatrix}\right\|\\ &\le L_\delta \left\| \begin{bmatrix} U_1\\ Z_1 \end{bmatrix}- \begin{bmatrix} U_2\\ Z_2 \end{bmatrix}\right\|, ~~~\forall \begin{bmatrix} U_1\\ Z_1 \end{bmatrix},~ \begin{bmatrix} U_2\\ Z_2 \end{bmatrix}. \end{aligned} \end{equation} where $L_\delta$ is a Lipschitz constant for $\nabla J_1(U,Z)$. In the following, the convergence of Algorithm \ref{a:ADMM} is proved. 
According to (\ref{e:Lagrangian}), the augmented Lagrangian can be written as \begin{equation}\label{e:Lagrangianp} \begin{aligned} L_\rho(U,Z,\Lambda)=&J(U,Z)+\Lambda^T(\mathcal{L}\otimes I_n)Z+\frac{\rho}{2}Z^T(\mathcal{L}\otimes I_n)Z, \end{aligned} \end{equation} By the optimality condition \cite{FacchineiF2003}, the optimal solution of subproblems (\ref{e:admmZ}) and (\ref{e:admmU}) satisfies \begin{equation}\label{e:saddlepointZ} \begin{aligned} &(Z-Z^{q+1})^T\left[\nabla_ZJ_1(U^{q},Z^{q+1})+G(Z^{q+1}-Z^q)+\rho(\mathcal{L}\otimes I_n)Z^{q+1}+(\mathcal{L}\otimes I_n)\Lambda^{q}\right]\ge 0,~~~\forall Z\in \mathbb{R}^n, \end{aligned} \end{equation} and \begin{equation}\label{e:saddlepointU} \begin{aligned} &(U-U^{q+1})^T\left[\nabla_UJ_1(U^{q+1},Z^{q+1})+2\bar RU^{q+1}+H(U^{q+1}-U^q)\right]\ge 0,~~~\forall U\in \mathbb{R}^m, \end{aligned} \end{equation} where $\bar R=diag\{I_\mathcal{N}\otimes R_1,I_\mathcal{N}\otimes R_2,\cdots,I_\mathcal{N}\otimes R_N\}$. By the Lipschitz continuity and Lemma \ref{l:convexinequality}, one can get \begin{equation}\label{e:gradienteq1} \begin{aligned} &(Z-Z^{q+1})^T\nabla_Z J_1(U^q,Z^{q+1})+(U-U^{q+1})^T\nabla_U J_1(U^{q+1},Z^{q+1})\\ =&(Z-Z^{q+1})^T\nabla_Z J_1(U^q,Z^{q+1})+(U-U^{q+1})\nabla_U J_1(U^q,Z^{q+1})\\ &+(U-U^{q+1})^T\left[\nabla_UJ_1(U^{q+1},Z^{q+1})-\nabla_U J_1(U^q,Z^{q+1})\right]\\ \le&(Z-Z^{q+1})^T\nabla_ZJ_1(U^q,Z^{q+1})+(U-U^{q+1})\nabla_UJ_1(U^q,Z^{q+1})+L_\delta\|U-U^{q+1}\|\|U^{q+1}-U^q\|\\ =&(Z-Z^{q+1})^T\nabla_ZJ_1(U^q,Z^{q+1})+(U-U^q)^T\nabla_UJ_1(U^q,Z^{q+1})+(U^q-U^{q+1})^T\nabla_UJ_1(U^q,Z^{q+1})\\ &+L_\delta\|U-U^{q+1}\|\|U^{q+1}-U^q\|\\ \le&J_1(U,Z)-J_1(U^{q},Z^{q+1})+(U^q-U^{q+1})^T\nabla_UJ_1(U^q,Z^{q+1})+L_\delta\|U-U^{q+1}\|\|U^{q+1}-U^q\|, \end{aligned} \end{equation} and \begin{equation}\label{e:gradienteq2} \begin{aligned} &J_1(U,Z)-J_1(U^{q},Z^{q+1})+(U^q-U^{q+1})^T\nabla_UJ_1(U^q,Z^{q+1})+L_\delta\|U-U^{q+1}\|\|U^{q+1}-U^q\|\\ 
\le&J_1(U,Z)-J_1(U^{q},Z^{q+1})+J_1(U^{q},Z^{q+1})-J_1(U^{q+1},Z^{q+1})+\frac{L_\delta}{2}\|U^q-U^{q+1}\|^2+L_\delta\|U-U^{q+1}\|\|U^{q+1}-U^q\|\\ \le&J_1(U,Z)-J_1(U^{q+1},Z^{q+1})+\left(\frac{L_\delta}{2}+\frac{L_\delta^2}{4\sigma_{min}\{R_i\}}\right)\|U^q-U^{q+1}\|^2+\sigma_{min}\{R_i\}\|U-U^{q+1}\|^2, \end{aligned} \end{equation} where $\sigma_{min}\{R_i\}$ denotes the minimum value of the eigenvalues of $R_1,R_2,\cdots,R_N$, and the last two inequalities follow from (\ref{Lipschitzf}) and (\ref{e:convexeq}). Combining (\ref{e:saddlepointZ}) and (\ref{e:saddlepointU}) yields \begin{equation}\label{e:saddlepointUZ} \begin{aligned} 0\le& (Z-Z^{q+1})^T\left[\nabla_ZJ_1(U^{q},Z^{q+1})+G(Z^{q+1}-Z^q)+\rho(\mathcal{L}\otimes I_n)Z^{q+1}+(\mathcal{L}\otimes I_n)\Lambda^{q}\right]\\ &+(U-U^{q+1})^T\left[\nabla_UJ_1(U^{q+1},Z^{q+1})+2\bar RU^{q+1}+H(U^{q+1}-U^q)\right]\\ \le&J_1(U,Z)-J_1(U^{q+1},Z^{q+1})+(Z-Z^{q+1})^TG(Z^{q+1}-Z^q)+(U-U^{q+1})^TH(U^{q+1}-U^q)\\ &+\sigma_{min}\{R_i\}\|U-U^{q+1}\|^2+\left(\frac{L_\delta}{2}+\frac{L_\delta^2}{4\sigma_{min}\{R_i\}}\right)\|U^q-U^{q+1}\|^2+U^T\bar RU-U^{q+1}\bar RU^{q+1}\\ &-\sigma_{min}\{R_i\}\|U-U^{q+1}\|^2+(Z-Z^{q+1})^T\left[\rho(\mathcal{L}\otimes I_n)Z^{q+1}+(\mathcal{L}\otimes I_n)\Lambda^{q}\right]\\ =&J(U,Z)-J(U^{q+1},Z^{q+1})+(Z-Z^{q+1})^TG(Z^{q+1}-Z^q)+(U-U^{q+1})^TH(U^{q+1}-U^q)\\ &+\left(\frac{L_\delta}{2}+\frac{L_\delta^2}{4\sigma_{min}\{R_i\}}\right)\|U^q-U^{q+1}\|^2+(Z-Z^{q+1})^T\left[\rho(\mathcal{L}\otimes I_n)Z^{q+1}+(\mathcal{L}\otimes I_n)\Lambda^{q}\right]. 
\end{aligned} \end{equation} It is easy to verify that \begin{equation}\label{e:UGU} \begin{aligned} (Z-Z^{q+1})^TG(Z^{q+1}-Z^q)=&-\frac{1}{2}(Z-Z^{q+1})^TG(Z-Z^{q+1})+\frac{1}{2}(Z-Z^{q})^TG(Z-Z^{q})\\ &-\frac{1}{2}(Z^q-Z^{q+1})^TG(Z^q-Z^{q+1}), \end{aligned} \end{equation} and \begin{equation}\label{e:ZHZ} \begin{aligned} &(U-U^{q+1})^TH(U^{q+1}-U^q)+\left(\frac{L_\delta}{2}+\frac{L_\delta^2}{4\sigma_{min}\{R_i\}}\right)\|U^q-U^{q+1}\|^2\\ \le&-\frac{1}{2}(U-U^{q+1})^TH(U-U^{q+1})+\frac{1}{2}(U-U^{q})^TH(U-U^{q})\\ &-\frac{1}{2}(U^q-U^{q+1})^T\left(H-{L_\delta}I+\frac{L_\delta^2I}{2\sigma_{min}\{R_i\}}\right)(U^q-U^{q+1}). \end{aligned} \end{equation} Then, from (\ref{e:admmL}), it follows that \begin{equation}\label{e:LrL} \begin{aligned} &(Z-Z^{q+1})^T\left[\rho(\mathcal{L}\otimes I_n)Z^{q+1}+(\mathcal{L}\otimes I_n)\Lambda^{q}\right]\\ =&(\Lambda-\Lambda^{q+1})^T\left[\frac{1}{\rho}(\mathcal{L}\otimes I_n)(\Lambda^{q+1}-\Lambda^q)-(\mathcal{L}\otimes I_n)Z^{q+1}\right]+(Z-Z^{q+1})^T(\mathcal{L}\otimes I_n)\Lambda^{q+1}\\ =&-\frac{1}{2}(\Lambda-\Lambda^{q+1})^T\frac{1}{\rho}(\mathcal{L}\otimes I_n)(\Lambda-\Lambda^{q+1})+\frac{1}{2}(\Lambda-\Lambda^{q})^T\frac{1}{\rho}(\mathcal{L}\otimes I_n)(\Lambda-\Lambda^{q})\\ &-\frac{1}{2}(\Lambda^q-\Lambda^{q+1})^T\frac{1}{\rho}(\mathcal{L}\otimes I_n)(\Lambda^q-\Lambda^{q+1})-(\Lambda-\Lambda^{q+1})^T(\mathcal{L}\otimes I_n)Z+(Z-Z^{q+1})^T(\mathcal{L}\otimes I_n)\Lambda . 
\end{aligned} \end{equation} Substituting (\ref{e:UGU}), (\ref{e:ZHZ}) and (\ref{e:LrL}) into (\ref{e:saddlepointUZ}), gives \begin{equation}\label{e:saddlepointUZ2} \begin{aligned} 0\le&J(U,Z)-(\Lambda-\Lambda^{q+1})^T(\mathcal{L}\otimes I_n)Z-J(U^{q+1},Z^{q+1})+(Z-Z^{q+1})^T(\mathcal{L}\otimes I_n)\Lambda\\ &+\frac{1}{2} \begin{bmatrix} U-U^q\\ Z-Z^q\\ \Lambda-\Lambda^q \end{bmatrix}^T M_1 \begin{bmatrix} U-U^q\\ Z-Z^q\\ \Lambda-\Lambda^q \end{bmatrix}-\frac{1}{2} \begin{bmatrix} U-U^{q+1}\\ Z-Z^{q+1}\\ \Lambda-\Lambda^{q+1} \end{bmatrix}^T M_1 \begin{bmatrix} U-U^{q+1}\\ Z-Z^{q+1}\\ \Lambda-\Lambda^{q+1} \end{bmatrix}-\frac{1}{2} \begin{bmatrix} U^q-U^{q+1}\\ Z^q-Z^{q+1}\\ \Lambda^q-\Lambda^{q+1} \end{bmatrix}^T M_2 \begin{bmatrix} U^q-U^{q+1}\\ Z^q-Z^{q+1}\\ \Lambda^q-\Lambda^{q+1} \end{bmatrix}, \end{aligned} \end{equation} where \begin{equation}\label{e:THETAM1M2} \begin{aligned} M_1&=\begin{bmatrix} H &0 &0\\ 0 &G &0\\ 0 &0 &\frac{1}{\rho}(\mathcal{L}\otimes I_n) \end{bmatrix},M_2=\begin{bmatrix} H-{L_\delta I}+\frac{L_\delta^2 I}{2\sigma_{min}\{R_i\}} &0 &0\\ 0 &G &0\\ 0 &0 &\frac{1}{\rho}(\mathcal{L}\otimes I_n) \end{bmatrix}. \end{aligned} \end{equation} Letting $U=U^\star,Z=Z^\star,\Lambda=\Lambda^\star$, in which the superscript $\star$ represents the optimal solution, and denoting \begin{equation}\label{e:THETA} \Theta= \begin{bmatrix} U^T &Z^T &\Lambda^T \end{bmatrix}^T, \end{equation} one obtains \begin{equation}\label{e:optimalineq} \begin{aligned} &\frac{1}{2} (\Theta^\star-\Theta^q)^TM_1(\Theta^\star-\Theta^q)-\frac{1}{2}(\Theta^\star-\Theta^{q+1})^TM_1(\Theta^\star-\Theta^{q+1})-\frac{1}{2}(\Theta^q-\Theta^{q+1})^TM_2(\Theta^q-\Theta^{q+1})\\ \ge&J(U^{q+1},Z^{q+1})-J(U^\star,Z^\star)+(\Lambda^\star-\Lambda^{q+1})^T(\mathcal{L}\otimes I_n)Z^\star-(Z^\star-Z^{q+1})^T(\mathcal{L}\otimes I_n)\Lambda^\star\\ \ge&0. 
\end{aligned} \end{equation} If $G_i>0, H_i>({L_\delta}+\frac{L_\delta^2}{2\sigma_{min}\{R_i\}})I_m $, it can be concluded that $M_1\ge0,M_2\ge0$. From (\ref{e:optimalineq}), one can obtain \begin{equation}\label{e:convergencesq} \begin{aligned} &\frac{1}{2} (\Theta^\star-\Theta^q)^TM_1(\Theta^\star-\Theta^q)-\frac{1}{2}(\Theta^\star-\Theta^{q+1})^TM_1(\Theta^\star-\Theta^{q+1})\ge\frac{1}{2}(\Theta^q-\Theta^{q+1})^TM_2(\Theta^q-\Theta^{q+1})\ge0, \end{aligned} \end{equation} which means that $\left\{(\Theta^\star-\Theta^q)^TM_1(\Theta^\star-\Theta^q),~q=1,2,\cdots\right\}$ is a decreasing sequence. Then, from $(\Theta^\star-\Theta^q)^TM_1(\Theta^\star-\Theta^q)\ge0$, it follows that the sequence $\left\{(\Theta^\star-\Theta^q)^TM_1(\Theta^\star-\Theta^q),~q=1,2,\cdots\right\}$ is convergent and $\{\Theta^q,~q=1,2,\cdots\}$ is bounded. Therefore, it follows from (\ref{e:convergencesq}) that \begin{equation}\label{e:convergenceq} \begin{aligned} \lim_{q \to +\infty}(\Theta^q-\Theta^{q+1})^TM_2(\Theta^q-\Theta^{q+1})=0, \end{aligned} \end{equation} which implies that $\lim_{q \to +\infty}(U^q-U^{q+1})=0,~\lim_{q \to +\infty}(Z^q-Z^{q+1})=0$ and $\lim_{q \to +\infty}(\mathcal{L}\otimes I_n)(\Lambda^q-\Lambda^{q+1})=0$. Hence, the sequences $(U^q,Z^q)$ and $(U^{q+1},Z^{q+1})$ converge to the same cluster points $(U^\infty,Z^\infty)$. From the first inequality of (\ref{e:saddlepointUZ}) and (\ref{e:LrL}), one gets \begin{equation}\label{limitsaddlepoint} \begin{aligned} &\begin{bmatrix} U-U^{\infty}\\ Z-Z^{\infty}\\ \Lambda-\Lambda^{\infty} \end{bmatrix}^T\left\{ \begin{bmatrix} \nabla_UJ(U^{\infty},Z^{\infty})\\ \nabla_ZJ(U^{\infty},Z^{\infty})\\ 0 \end{bmatrix}+ \begin{bmatrix} 0\\ (\mathcal{L}\otimes I_n)\Lambda^{\infty}\\ -(\mathcal{L}\otimes I_n)Z^\infty \end{bmatrix}\right\}\ge0. \end{aligned} \end{equation} By the ensemble variational inequality \cite{FacchineiF2003}, it consequently follows that $(U^\infty,Z^\infty,\Lambda^\infty)$ is an optimal solution. 
Therefore, $(U^q,Z^q,\Lambda^q)$ converges to the optimal solution of the distributed linear quadratic synchronization control problem (\ref{e:optimizationproblem}). \end{proof} \section*{Acknowledgment} This work is supported by the National Natural Science Foundation (NNSF) of China under Grants 61673026 and 61528301, and the Hong Kong Research Grants Council under the GRF Grant CityU 11234916. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{ieeetr}
\section{Introduction} Quantum fluid models are used to describe, for instance, superfluids \cite{Loffredo1993On}, quantum semiconductors \cite{PhysRevB.48.7944}, thermistor theory and weakly interacting Bose gases \cite{Grant1973Pressure}, and quantum trajectories of Bohmian mechanics \cite{Wyatt2006Quantum}. In this paper we mainly study the quantum MHD equations in $\Omega={\mathbb T}^{3}$ (${\mathbb T}^{3}$ is the 3-dimensional torus in ${\mathbb R}^{3}$), which read as follows: \begin{equation}\label{e1.1} \left\{ \begin{aligned} &\partial_t \rho+\diver(\rho u)=0, \quad x\in \Omega, t>0,\\ &\partial_t(\rho u)+\diver(\rho u\otimes u)+\nabla(P(\rho)+P_c(\rho))-2\diver(\rho D(u))\\ &\quad -2\kappa^2\rho\nabla\left(\frac{\Delta\sqrt\rho}{\sqrt\rho}\right)-(\nabla\times B)\times B=0,\\ &\partial_t B-\nabla\times(u\times B)+\nabla\times(\nu_b(\rho)\nabla\times B)=0,\\ &\diver B=0, \end{aligned} \right. \end{equation} with the initial data \begin{equation} \rho(0,x)=\rho_0(x), \quad (\rho u)(0,x)=m_0, \quad B(0,x)=B_0(x), \quad \diver B_0=0, \end{equation} where the functions $\rho,u$ and $B$ represent the mass density, the velocity field and the magnetic field, respectively. The function $P(\rho)=\rho^{\gamma}$ with $\gamma>1$ is the pressure, and $P_c(\rho)$ is a singular continuous function called the cold pressure. $D(u)=\frac{\nabla u+(\nabla u)^{\top}}{2}$ is the stress tensor, $\nu_b(\rho)$ is the magnetic diffusion viscosity coefficient, and $\kappa>0$ is the scaled Planck constant. The expression $2\kappa^2 \rho\nabla\left({\Delta\sqrt \rho}/{\sqrt \rho}\right)$ can be interpreted as a quantum Bohm potential, and satisfies the following identity: \begin{equation} 2\kappa^2 \rho\nabla\left(\frac{\Delta\sqrt \rho}{\sqrt \rho}\right)=\kappa^2\diver(\rho\nabla^2\log \rho)=\kappa^2\nabla\Delta \rho-4\kappa^2\diver(\nabla\sqrt \rho\otimes\nabla\sqrt \rho).\end{equation} Recently, quantum fluid models have received a great deal of attention from mathematicians.
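For the reader's convenience, the first equality in the Bohm potential identity above can be checked directly in one space dimension (a routine computation, not taken from the references). Writing $'=\partial_x$,
\begin{equation*}
\begin{aligned}
\rho\,(\log\rho)'' &= \rho\Big(\frac{\rho'}{\rho}\Big)' = \rho''-\frac{(\rho')^{2}}{\rho},
&&\text{so}\quad \big(\rho(\log\rho)''\big)' = \rho'''-\frac{2\rho'\rho''}{\rho}+\frac{(\rho')^{3}}{\rho^{2}},\\
\frac{(\sqrt\rho)''}{\sqrt\rho} &= \frac{\rho''}{2\rho}-\frac{(\rho')^{2}}{4\rho^{2}},
&&\text{so}\quad 2\rho\Big(\frac{(\sqrt\rho)''}{\sqrt\rho}\Big)' = \rho'''-\frac{2\rho'\rho''}{\rho}+\frac{(\rho')^{3}}{\rho^{2}},
\end{aligned}
\end{equation*}
and the two right-hand sides coincide, which is the one-dimensional form of $2\rho\nabla\big(\Delta\sqrt\rho/\sqrt\rho\big)=\diver(\rho\nabla^{2}\log \rho)$.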
J\"{u}ngel \cite{JDissipative} derived dissipative quantum fluid models from the Wigner-Boltzmann equation by a moment method, and the quantum ideal magnetohydrodynamic model was derived by Haas \cite{Haas2011Quantum}. The existence of global weak solutions has been studied by many authors. For the compressible Navier-Stokes equations with constant viscosity coefficients, the pioneering work is that of P. L. Lions \cite{Lions1998Mathematical}, who proved the global existence of weak solutions for the barotropic compressible Navier-Stokes system with $\gamma>3n/(n+2)$. Later, Feireisl \cite{Feireisl2001} extended this result to the case $\gamma>n/2$. When the viscosity coefficients $\mu,\lambda$ are density-dependent, the system becomes much more difficult since the velocity cannot be defined in the vacuum region. Under the assumption $\lambda(\rho)=2(\rho\mu^{'}(\rho)-\mu(\rho))$ on the viscosity coefficients, Bresch-Desjardins \cite{BRESCH2006362,Bresch2003Existence,Bresch2004Some,Didier2003On} made great progress: they discovered a new mathematical entropy inequality which not only applies in the vacuum case but also yields the existence of global weak solutions. Mellet-Vasseur \cite{doi:10.1080/03605300600857079} studied the stability of the barotropic compressible Navier-Stokes equations. Later, Vasseur-Yu \cite{Vasseur2016} and Li-Xin \cite{Li2015Global} independently proved the existence of global weak solutions for the $3D$ degenerate compressible Navier-Stokes equations, constructing appropriate approximations by different approaches. Now we recall some results on the compressible quantum Navier-Stokes equations. J\"{u}ngel \cite{doi:10.1137/090776068} proved the global existence of weak solutions by choosing $\rho\varphi$ as a test function. However, this particular choice of test function does not cover the region $\{\rho(x,t)=0\}$ in the weak formulation.
Gisclon-Violet \cite{GISCLON2015106} also showed global existence, but with the classical definition of weak solutions, by adding a cold pressure term. They also pointed out that the cold pressure term can be replaced by a drag force term. Vasseur-Yu \cite{doi:10.1137/15M1013730} also considered the compressible quantum Navier-Stokes equations with a damping term, which is helpful in obtaining the result of \cite{Vasseur2016}. So far there are few results on the global existence of solutions to the quantum MHD equations. Yang-Ju \cite{doi:10.1063/1.4891492} proved the existence of global weak solutions for a special parabolic system in the density $\rho$ and the momentum $\rho u$, obtained by a transformation of the velocity. Very recently, Guo-Xie \cite{Boling2017GLOBAL} established the global existence of weak solutions for the $2D$ general quantum MHD equations, in which both the viscosity coefficients and the dispersion term are general functions. In the present paper, we prove the global existence of weak solutions for the $3D$ quantum MHD model under special assumptions on the viscosity coefficients and the dispersion term. It should be noted that we also require the magnetic diffusion coefficient to satisfy a specific condition, which is important for obtaining the BD entropy. In addition, we perform the vanishing Planck constant limit. In the present paper we make some assumptions with a physical background, similar to \cite{Boling2017GLOBAL}. \textbf{Assumption 1.} $\mu,\lambda$ are respectively the shear and bulk viscosity coefficients and satisfy the BD entropy relationship, i.e., $\lambda(\rho)=2(\rho\mu^{'}(\rho)-\mu(\rho))$. In this paper, we take $\mu(\rho)=\rho, \lambda(\rho)=0$. \textbf{Assumption 2.} The cold pressure $P_c(\rho)$ is a suitable increasing function satisfying \begin{equation} \lim_{\rho\to 0} P_c(\rho)=\infty .
\end{equation} More precisely, we assume \begin{equation} P_c^{'}(\rho)=\left\{ \begin{aligned} &c_1\rho^{-\gamma^{-}-1},\quad \rho\leq 1,\\ &c_2\rho^{\gamma-1}, \quad \rho>1, \end{aligned} \right. \end{equation} where $\gamma^{-},\gamma\ge 1$ and $c_1, c_2>0$. \textbf{Assumption 3.} The magnetic diffusion viscosity coefficient $\nu_b(\rho)$ is a continuous function of the density which takes large values for both small and large densities. Furthermore, we assume that there exist $M>0$, positive constants $d_0,d_1,d_2,d_3$ large enough, and $2\leq a<a^{'}<3$ such that \begin{equation} \left\{ \begin{aligned} & \frac{d_0}{s^a}\leq \nu_b(s)\leq \frac{d_1}{s^{a^{'}}}, \quad s<M,\\ &d_2\leq \nu_b(s)\leq d_2 s^b, \quad s\ge M. \end{aligned} \right. \end{equation} \textbf{Assumption 4.} The functions $H(\rho)$ and $H_c(\rho)$ satisfy the following relationships: \begin{equation}\label{e1.7} \rho H^{'}(\rho)-H(\rho)=P(\rho), \quad \rho H_c^{'}(\rho)-H_c(\rho)=P_c(\rho). \end{equation} Our paper is organized as follows. In Section 2 we collect some elementary facts and important inequalities which will be used in the proof of our result. In Section 3 we state our main results. In Sections 4 and 5 we prove the global existence of weak solutions for the approximate system by the Faedo-Galerkin method. Section 6 is devoted to deriving the B-D entropy, which plays an important role in passing to the limit in the parameters. In Section 7 we justify the vanishing Planck limit. \section{Preliminaries} In this section we recall some known facts and inequalities which will be used frequently throughout the paper. We begin with the well-known Gagliardo-Nirenberg inequality, which can be found in \cite{Zeidler1990Nonlinear}. \begin{lem} Let $\Omega\subset \mathbb R^n$ be a bounded open set with $\partial\Omega\in C^{0,1}$, $m\in \mathbb N, 1\leq p,q,r\leq \infty$.
Then there exists a constant $C>0$ such that for all $u\in W^{m,p}\cap L^q$ $$\left\|D^{\alpha} u\right\|_{L^r}\leq C\left\|u\right\|_{W^{m,p}}^{\theta}\left\|u\right\|_{L^q}^{1-\theta},$$ where $0\leq \left|\alpha\right|\leq m-1$ and $\frac{1}{r}-\frac{\left|\alpha\right|}{n}=\theta\left(\frac{1}{p}-\frac{m}{n}\right)+(1-\theta)\frac1q$. If $m-\left|\alpha\right|-\frac{n}{p}\notin \mathbb N_0$, then $\theta\in[\left|\alpha\right|/m,1]$ is allowed. \end{lem} The following two lemmas will be used to obtain the strong convergence of the solutions throughout this paper. \begin{lem}[Aubin-Lions lemma \cite{Simon198765}] Let $X_0,X$ and $X_1$ be three Banach spaces with $X_0\subset X\subset X_1$. Suppose that $X_0$ is compactly embedded in $X$ and $X$ is continuously embedded in $X_1$. \begin{enumerate} \item Let $G$ be bounded in $L^p(0,T;X_0)$, where $1\leq p<\infty$, and let $\frac{dG}{dt}$ be bounded in $L^1(0,T;X_1)$. Then $G$ is relatively compact in $L^p(0,T;X)$. \item Let $F$ be bounded in $L^\infty(0,T;X_0)$ and $\frac{dF}{dt}$ be bounded in $L^p(0,T;X_1)$ with $p>1$. Then $F$ is relatively compact in $C([0,T];X)$. \end{enumerate} \end{lem} \begin{lem} Let $\mathbb K$ be a compact subset of $\mathbb R^n (n\ge1)$, and let a sequence $v^{\epsilon}$ satisfy \begin{enumerate} \item $v^{\epsilon}$ is uniformly bounded in $L^{1+\alpha}(\mathbb K)$ with $\alpha>0$, \item $v^{\epsilon}$ converges almost everywhere to $v$. \end{enumerate} Then $v^{\epsilon}$ converges strongly to $v$ in $L^1(\mathbb K)$ with $v\in L^{1+\alpha}(\mathbb K)$. \end{lem} \begin{lem}[\cite{doi:10.1137/090776068,doi:10.1137/15M1013730}] For any smooth positive function $\rho(x)$, we have \begin{equation*} C_1\int \left|\nabla^2\sqrt\rho\right|^2 dx+C_2\int \left|\nabla\rho^{1/4}\right|^4 dx\leq \int \rho\left|\nabla^2\log\rho\right|^2 dx, \end{equation*} where $C_1,C_2$ are positive constants. \end{lem} \section{Main Results} In this section we present two results.
The first one gives the existence of weak solutions to \eqref{e1.1} without any assumption on $\gamma$ in the 3-dimensional case. The second one is devoted to the vanishing Planck limit and shows that the global weak solution of \eqref{e1.1} tends to a weak solution of \eqref{e1.1} with $\kappa=0$. Next, we give the definition of a weak solution to \eqref{e1.1}. \begin{mydef}\label{e2.1} Functions $(\rho,u,B)$ are called a weak solution to \eqref{e1.1} if the following conditions are satisfied: \begin{enumerate} \item The continuity equation holds in the sense of distributions, i.e., \begin{equation} \int\rho_0\varphi(0)+\iint(\rho\varphi_t+\sqrt\rho\sqrt\rho u\nabla\varphi)dxdt=0, \end{equation} for any smooth, compactly supported test function $\varphi$ such that $\varphi(T,.)=0$. \item The momentum equation satisfies \begin{equation} \begin{aligned} &\int m_0\varphi(0)+\iint\left(\sqrt\rho(\sqrt\rho u)\varphi_t+\sqrt\rho u\otimes\sqrt\rho u\nabla\varphi+P(\rho)\diver\varphi+P_c(\rho)\diver\varphi \right)dxdt\\ &-\iint [2(\sqrt\rho u\otimes\nabla\sqrt\rho)\nabla\varphi-2(\nabla\sqrt\rho\otimes\sqrt\rho u)\nabla\varphi-\sqrt\rho\sqrt\rho u\Delta\varphi -\sqrt\rho\sqrt\rho u\nabla\diver\varphi ]dxdt\\ &-4\kappa^2\iint(\nabla\sqrt\rho\otimes\nabla\sqrt\rho)\nabla\varphi-2\kappa^2\iint\sqrt\rho\nabla\sqrt\rho\nabla\diver\varphi-\iint(\nabla\times B)\times B\cdot\varphi dxdt=0, \end{aligned} \end{equation} for any smooth, compactly supported test function $\varphi$ such that $\varphi(T,.)=0$. \item The magnetic field $B$ satisfies \begin{equation} \int B_0\varphi(0)=\iint \left(B\varphi_t+(u\times B)\cdot(\nabla\times\varphi)-\nu_b(\rho)\nabla\times B:\nabla\varphi\right) dxdt, \end{equation} for any smooth, compactly supported test function $\varphi$ such that $\varphi(T,.)=0$.
\end{enumerate} \end{mydef} Our main results on the weak solutions read as follows. \begin{thm} Assume $T>0, \gamma^{-}\ge 4, \gamma>1$, and let $(\rho_0,u_0,B_0)$ satisfy $\rho_0\ge 0$ and \begin{equation} \int\left(\frac{\left|m_0\right|^2}{2\rho_0}+H(\rho_0)+H_c(\rho_0)+2\kappa^2\left|\nabla\sqrt\rho_0\right|^2+\left|B_0\right|^2\right)dx\leq C. \end{equation} Then, there exists a global weak solution to the problem \eqref{e1.1}-\eqref{e1.7} in the sense of Definition 3.1. In particular, the weak solution $(\rho,u,B)$ satisfies the energy estimate \eqref{e3.14} and the entropy inequalities \eqref{e5.2}, \eqref{e5.20}, and \begin{align*} &\rho\ge 0, \quad \sqrt\rho\in L^\infty(0,T;H^1)\cap L^2(0,T;H^2),\\ &\rho\in L^\infty(0,T;L^\gamma), \quad \rho^{-1}\in L^\infty(0,T;L^{\gamma^{-}}),\quad\rho^\gamma\in L^{5/3}(0,T;L^{5/3}),\\ &\sqrt\rho u\in L^\infty(0,T;L^2), \quad \nabla\left(\frac{1}{\sqrt \rho}\right)\in L^2(0,T;L^2),\\ &B\in L^\infty(0,T;L^2)\cap L^2(0,T;H^1). \end{align*} \end{thm} \begin{rk} It should be noted that Assumption 3 on the magnetic diffusion coefficient is required; it plays an important role in obtaining the B-D entropy inequality. \end{rk} \begin{rk} Compared with the work of Yang-Ju \cite{doi:10.1063/1.4891492}, we make an improvement in the present paper. In \cite{doi:10.1063/1.4891492}, both the continuity equation and the momentum equation become parabolic with respect to $\rho$ and $\rho u$ by means of a transformation. \end{rk} \begin{thm} Assume $T>0, \gamma^{-}\ge 4, \gamma>1$, and that the initial data $(\rho_0,u_0,B_0)$ satisfy the assumptions in Theorem 2.1. Let $(\rho_\kappa,u_\kappa,B_\kappa)$ be solutions of system \eqref{e1.1}. Then, as $\kappa\to 0$, the limit function $(\rho,u,B)$ is a weak solution to the problem \eqref{e1.1}-\eqref{e1.7} with $\kappa=0$. \end{thm} \section{Faedo-Galerkin Approximation} In this section we prove the existence of approximate solutions to the system \eqref{e1.1} by the Faedo-Galerkin method.
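To fix ideas, the following is a minimal numerical sketch (our own, in NumPy, for a 1D periodic toy problem only, not the paper's construction) of a pseudospectral discretization, in the spirit of a Galerkin approximation in a Fourier basis, of the regularized continuity equation $\partial_t\rho+\diver(\rho u)=\epsilon\Delta\rho$ with a given smooth velocity. Both terms are in divergence form, so total mass is conserved and the density stays positive, consistent with the maximum principle used below:

```python
import numpy as np

# 1D periodic torus [0, 2*pi); Fourier differentiation
N = 64
x = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
ik = 1j * np.fft.fftfreq(N, d=1.0 / N)      # i * integer wavenumbers

def ddx(f):
    return np.real(np.fft.ifft(ik * np.fft.fft(f)))

def d2dx2(f):
    return np.real(np.fft.ifft(ik * ik * np.fft.fft(f)))

# illustrative data: positive initial density, fixed smooth velocity
rho = 1.0 + 0.5 * np.sin(x)
u = 0.3 * np.sin(x)
eps, dt = 0.1, 1.0e-3

mass0 = rho.mean()                           # total mass / (2*pi)
for _ in range(500):                         # forward Euler in time
    rho = rho + dt * (-ddx(rho * u) + eps * d2dx2(rho))

print(abs(rho.mean() - mass0))               # mass conserved to roundoff
print(rho.min())                             # density remains positive
```

Mass conservation holds because the zeroth Fourier mode of any exact derivative vanishes on the periodic torus, which mirrors why the approximate continuity equation preserves $\int\rho\,dx$.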
Motivated by the work of Feireisl, Novotn\'y and Petzeltov\'a, we proceed similarly as in \cite{Feireisl2009Singular,Feireisl2004Dynamics}.
\subsection{Local solvability of the approximate system}
This section is dedicated to proving the local existence of solutions to the approximate system. We adopt the following strategy:
\begin{itemize}
\item For given $u\in C([0,T];X_n)$, the approximate continuity equation can be solved directly by means of the classical theory of parabolic equations, yielding $\rho=S(u)$.
\item Given $u$, the magnetic field equation is also a linear parabolic equation, and we can find a solution $B=G(u)$ by the standard Galerkin method.
\item Having solved for $\rho$ and $B$, we can treat the approximate momentum equation as a nonlinear integral equation. The solution of this equation is based on a fixed point argument in the Banach space $C([0,T];X_n)$.
\end{itemize}
We introduce the finite dimensional space $X_n=\mathrm{span}\{e_1,e_2,\dots, e_n\}$, $n\in \mathbb N$, where $\{e_i\}$ is an orthonormal basis of $L^2$ which is also an orthogonal basis of $H^2$. We note that any $u\in C([0,T];X_n)$ is given by
\begin{equation}
u(t,x)=\sum_{i=1}^n \lambda_i(t)e_i(x),\quad (t,x)\in [0,T]\times\Omega,
\end{equation}
for some functions $\lambda_i(t)$, and the norm of $u\in C([0,T];X_n)$ can be defined as
\begin{equation*}
\left\|u(x,t)\right\|_{ C([0,T];X_n)}=\sup_{t\in[0,T]}\sum_{i=1}^n \left|\lambda_i(t)\right|.
\end{equation*}
Moreover, $u$ is bounded in $ C([0,T];C^k)$ for any $k\ge0$; indeed, we have
\begin{equation}
\left\|u\right\|_{C([0,T];C^k)}\leq C(k)\left\|u\right\|_{C([0,T];L^2)}.
\end{equation}
\begin{enumerate}
\item Approximate continuity equation
\begin{equation}\label{e3.3}
\partial_t\rho+\diver(\rho u)=\epsilon\Delta\rho,
\end{equation}
with the initial data
\begin{equation}
\rho(x,0)=\rho_0(x)\ge \mu>0,\quad \rho_0(x)\in C^\infty,
\end{equation}
where $\mu>0$ is a constant. For given $u\in C([0,T];X_n)$, there exists a classical solution to \eqref{e3.3}.
By the maximum principle, the density satisfies the following inequality for all $(x,t)\in[0,T]\times\Omega$:
\begin{equation}
\inf_{x\in\Omega}\rho_0(x)\exp\left(-\int_0^T\left\|\diver u\right\|_{L^\infty} ds\right)\leq \rho(x,t)\leq \sup_{x\in \Omega}\rho_0(x)\exp\left(\int_0^T\left\|\diver u\right\|_{L^\infty} ds\right).
\end{equation}
In particular, there exists a constant $\bar\rho$ such that
\begin{equation}
0<\bar\rho\leq \rho(x,t)\leq \frac{1}{\bar\rho},\quad (x,t)\in [0,T]\times\Omega.
\end{equation}
Thus, we can introduce an operator $S$ from $ C([0,T];X_n)$ to $C([0,T];C^k)$ by $S(u)=\rho$. The operator is Lipschitz continuous in the following sense:
\begin{equation}
\left\|S(u_1)-S(u_2)\right\|_{C([0,T];C^k)}\leq C(n,k)\left\|u_1-u_2\right\|_{C([0,T];L^2)}.
\end{equation}
\item Approximate magnetic field equation. For given $u\in C([0,T];X_n)$, the density is determined in terms of $u$, and the magnetic field equation becomes a linear parabolic-type equation. By the standard Galerkin method we can find a unique solution $B\in L^\infty(0,T;L^2)\cap L^2(0,T;H^1)$ of
\begin{equation}\label{e3.8}
\left\{
\begin{aligned}
&\partial_t B-\nabla\times(u\times B)+\nabla\times(\nu_b(\rho)\nabla\times B)=0,\\
&\diver B=0,\\
&B(0,x)=B_0(x).
\end{aligned}
\right.
\end{equation}
For uniqueness, assume $B=B_1-B_2$, where $B_1,B_2$ are two solutions of the equation with the same data; then $B$ also satisfies \eqref{e3.8} with zero initial data. Multiplying the equation $\eqref{e3.8}_1$ by $B$ and integrating over $\Omega$, we get
\begin{equation}\label{e3.9}
\begin{aligned}
&\frac12\frac{d}{dt}\int \left|B\right|^2 dx+\int\nu_b(\rho)\left|\nabla\times B\right|^2 dx\\
&=\int (u\times B)\cdot (\nabla\times B) dx\\
&\leq \frac12\int\left|\nabla\times B\right|^2 dx+C(\left|u\right|_\infty)\int \left|B\right|^2 dx.
\end{aligned}
\end{equation}
By Assumption 3, $\nu_b(\rho)$ has a positive lower bound; applying the Gronwall inequality to \eqref{e3.9}, we get $B=0$, which proves uniqueness. Furthermore, there exists a continuous solution operator $G$ from $C([0,T];X_n)$ to $L^\infty(0,T;L^2)\cap L^2(0,T;H^1)$ defined by $G(u)=B$.
\item Approximate momentum equation. We now turn to solving the approximate momentum equation on the space $X_n$. For $\rho=S(u)$ and $B=G(u)$, we are looking for a function $u\in C([0,T];X_n)$ such that
\begin{equation}\label{e3.10}
\begin{aligned}
&\int_{\Omega} \rho u(T)\varphi dx-\int_{\Omega}m_0\varphi dx-\int_0^T\int_{\Omega}\left(\rho u\otimes u:\nabla\varphi+P(\rho)\diver\varphi+P_c(\rho)\diver\varphi\right) dxdt\\
&+2\kappa^2\int_0^T\int_{\Omega}\frac{\Delta\sqrt\rho}{\sqrt\rho}\diver(\rho\varphi)dxdt+\epsilon\int_0^T\int_{\Omega}\nabla\rho\cdot\nabla u\cdot\varphi dxdt\\
&+\delta\int_0^T\int_{\Omega}\Delta^s(\diver(\rho\varphi)):\Delta^{s+1}\rho dxdt+2\int_0^T\int_{\Omega}\rho D(u)\cdot\nabla\varphi dxdt\\
&+\eta\int_0^T\int_{\Omega}\Delta u\cdot\Delta\varphi dxdt -\int_0^T\int_{\Omega}\left((\nabla\times B)\times B\right)\cdot\varphi dxdt=0,
\end{aligned}
\end{equation}
for all $\varphi\in X_n$. We will apply the Banach fixed point theorem to prove the local existence of solutions to equation \eqref{e3.10}. Following the same argument as in \cite{Feireisl2001}, we can solve \eqref{e3.10}. Next, we introduce a family of operators $M[\rho]:X_n\to X_n^{\ast}$, defined for $\rho$ in the set $\{\rho\in L^1,\ \rho\ge \underline\rho>0\}$ by
\begin{equation}
\langle M[\rho]v,u\rangle=\int \rho v\cdot u\, dx,\quad v,u\in X_n,
\end{equation}
where $X_n^{\ast}$ stands for the dual space of $X_n$. The operator $M[\rho]$ has the following properties:
\begin{itemize}
\item $\left\|M[\rho]\right\|_{L(X_n,X_n^{\ast})}\leq C(n)\left\|\rho\right\|_{L^1}$.
\item $M[\rho]$ is invertible provided $\rho\ge \underline\rho>0$, with
$$\left\|M^{-1}[\rho]\right\|_{L(X_n^{\ast},X_n)}\leq (\underline\rho)^{-1},$$
where $M^{-1}[\rho]:X_n^{\ast}\to X_n$.
\item $M^{-1}[\rho]$ is Lipschitz continuous.
\end{itemize}
\end{enumerate}
The first two properties of the operator are easy to obtain. Since
$$M^{-1}[\rho_1]-M^{-1}[\rho_2]=M^{-1}[\rho_2](M[\rho_2]-M[\rho_1])M^{-1}[\rho_1],$$
where $\rho_1,\rho_2\in\{\rho\in L^1,\ \rho\ge\underline\rho>0\}$, we get
$$\left\|M^{-1}[\rho_1]-M^{-1}[\rho_2]\right\|_{L(X_n^{\ast},X_n)}\leq C(n,\underline\rho)\left\|\rho_1-\rho_2\right\|_{L^2}.$$
Thus, $M^{-1}$ is Lipschitz continuous. Now, using the definition of the operator $M[\rho]$, the equation \eqref{e3.10} can be rewritten as
\begin{equation}
u_n(t)=M^{-1}[\rho]\left(m_0^{\ast}+\int_0^t N[\rho(s),u(s),B(s)] ds\right),
\end{equation}
where $\rho=S(u)$, $B=G(u)$ and
\begin{equation}
\begin{aligned}
N[\rho(s),u(s),B(s)] &=\int_{\Omega}\left(\rho u\otimes u:\nabla\varphi+P(\rho)\diver\varphi+P_c(\rho)\diver\varphi\right)dx\\
&-2\kappa^2\int_{\Omega}\frac{\Delta\sqrt\rho}{\sqrt\rho}\diver(\rho\varphi)dx-\epsilon\int_{\Omega}\nabla\rho\cdot\nabla u\cdot\varphi dx\\
&-\delta\int_{\Omega}\Delta^s(\diver(\rho\varphi)):\Delta^{s+1}\rho dx-2\int_{\Omega}\rho D(u)\cdot\nabla\varphi dx\\
&-\eta\int_{\Omega}\Delta u\cdot\Delta\varphi dx+\int_{\Omega}\left((\nabla\times B)\times B\right)\cdot\varphi dx,\quad \varphi\in X_n.
\end{aligned}
\end{equation}
Since the operators $S,G,M^{-1}$ are Lipschitz continuous, the above nonlinear equation can be solved by a fixed point argument in the Banach space $C([0,T^{'}];X_n)$ on a short time interval $[0,T^{'}]$, $T^{'}\leq T$. Therefore, we have proved the local existence of solutions $(\rho_n,u_n,B_n)$ to the approximate system \eqref{e3.3}, \eqref{e3.8} and \eqref{e3.10}.
\subsection{Uniform estimates and global existence of solutions}
Let $(\rho_n, u_n,B_n)$ be the approximate solution, which exists on $[0,T^{'}]$, $T^{'}\leq T$.
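Before proceeding, let us briefly indicate why an energy bound controls $\left\|u_n\right\|_{X_n}$; this is a standard observation in the finite-dimensional Galerkin setting, recorded here only as a sketch. Since the density admits a positive lower bound $\rho_{\min}>0$ on the existence interval (see \eqref{e3.23}), and all norms on the finite-dimensional space $X_n$ are equivalent, we have
\begin{equation*}
\left\|u_n(t)\right\|_{X_n}^2\leq C(n)\left\|u_n(t)\right\|_{L^2}^2\leq \frac{C(n)}{\rho_{\min}}\int_{\Omega}\rho_n\left|u_n\right|^2 dx\leq \frac{2C(n)}{\rho_{\min}}E(\rho_n,u_n,B_n)(t).
\end{equation*}
Hence any time-uniform bound on the energy yields a time-uniform bound on $\left\|u_n\right\|_{X_n}$, which is exactly what is needed to continue the local solution.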
Our goal in this section is to extend the approximate solutions $(\rho_n, u_n,B_n)$ to the whole interval $[0,T]$. It is sufficient to establish a uniform bound on the norm $\left\|u_n\right\|_{X_n}$, which allows us to iterate the above procedure finitely many times to reach the whole interval $[0,T]$.
\begin{lem}
Let $T^{'}\leq T$ and let $(\rho_n,u_n,B_n)$ be the solution of \eqref{e3.3}, \eqref{e3.8} and \eqref{e3.10}. Then the following identity holds:
\begin{equation}\label{e3.14}
\begin{aligned}
&\frac{d}{dt}E(\rho_n,u_n,B_n)+2\int_{\Omega} \rho_n \left|D(u_n)\right|^2 dx +\epsilon\int_{\Omega}\left(H^{''}(\rho_n)+H^{''}_c(\rho_n)\right)\left|\nabla\rho_n\right|^2 dx\\
&+\int_{\Omega}\nu_b(\rho_n)\left|\nabla\times B_n\right|^2 dx +\eta\int_{\Omega}\left|\Delta u_n\right|^2 dx+\delta\epsilon\int_{\Omega}\left|\Delta^{s+1}\rho_n\right|^2 dx\\
&+\epsilon\kappa^2\int_{\Omega} \rho_n\left|\nabla^2\log\rho_n\right|^2 dx=0,
\end{aligned}
\end{equation}
where
\begin{equation}
E(\rho_n,u_n,B_n)=\int_{\Omega}\left(\frac12\rho_n\left|u_n\right|^2+H(\rho_n)+H_c(\rho_n)+\kappa^2\left|\nabla\sqrt\rho_n\right|^2+\frac12\left|B_n\right|^2+\frac{\delta}{2}\left|\nabla^{2s+1}\rho_n\right|^2\right)dx.
\end{equation}
\end{lem}
\begin{pf}
Differentiating \eqref{e3.10} with respect to time and using the test function $\varphi=u_n$, we have
\begin{equation}\label{e3.16}
\begin{aligned}
&\frac{d}{dt}\int \frac12\rho\left|u\right|^2dx -\int(\partial_t\rho+ \diver(\rho u))\frac{\left|u\right|^2}{2}dx+\epsilon\int \Delta\rho \left|u\right|^2 dx+\epsilon\int \nabla\rho\cdot\nabla u\cdot u dx\\
&+\int\nabla(P(\rho)+P_c(\rho))\cdot u dx+\delta\int_{\Omega}\Delta^s(\diver(\rho u)):\Delta^{s+1}\rho dx+2\int_{\Omega}\rho \left|D(u)\right|^2 dx\\
&+\eta\int\left|\Delta u\right|^2 dx+2\kappa^2\int \frac{\Delta\sqrt\rho}{\sqrt\rho}\diver(\rho u)dx-\int \left(\nabla\times B\right)\times B\cdot u dx=0.
\end{aligned}
\end{equation}
Here we used the facts that
\begin{equation}\label{e3.17}
\begin{aligned}
&\int\nabla(P(\rho)+P_c(\rho))\cdot u dx\\
&=\int\frac{1}{\rho}(P^{'}(\rho)+P_c^{'}(\rho))\nabla\rho\cdot \rho u dx\\
&=\int\nabla(H^{'}(\rho)+H_c^{'}(\rho))\cdot \rho u dx\\
&=-\int (H^{'}(\rho)+H_c^{'}(\rho))\diver(\rho u) dx\\
&=\int (H^{'}(\rho)+H_c^{'}(\rho))(\partial_t\rho-\epsilon\Delta\rho)dx\\
&=\frac{d}{dt}\int(H(\rho)+H_c(\rho)) dx+\epsilon\int (H^{''}(\rho)+H_c^{''}(\rho))\left|\nabla\rho\right|^2 dx
\end{aligned}
\end{equation}
and
\begin{equation}\label{e3.18}
\begin{aligned}
&2\kappa^2\int\frac{\Delta\sqrt\rho}{\sqrt\rho}\diver(\rho u)dx\\
&=2\epsilon\kappa^2\int\frac{\Delta\sqrt\rho}{\sqrt\rho}\Delta\rho\, dx-4\kappa^2\int\Delta\sqrt\rho\,\partial_t\sqrt\rho\, dx\\
&=\epsilon\kappa^2\int\rho\left|\nabla^2\log\rho\right|^2 dx +\frac{\kappa^2}{2}\frac{d}{dt}\int\left|\nabla\sqrt\rho\right|^2 dx,
\end{aligned}
\end{equation}
and, by a similar calculation,
\begin{equation}\label{e3.19}
\delta\int_{\Omega}\Delta^s(\diver(\rho u)):\Delta^{s+1}\rho dx=\epsilon\delta\int\left|\Delta^{s+1}\rho\right|^2 dx+\frac{\delta}{2}\frac{d}{dt}\int \left|\nabla^{2s+1}\rho\right|^2 dx.
\end{equation}
Then, multiplying the magnetic field equation by $B_n$ and integrating, we get
\begin{equation}\label{e3.20}
\frac12\frac{d}{dt}\int \left|B_n\right|^2 dx-\int\nabla\times(u_n\times B_n)\cdot B_n dx+\int \nu_b(\rho_n)\left|\nabla\times B_n\right|^2 dx=0,
\end{equation}
where we used the identity
\begin{equation*}
\int (\nabla\times B)\times B\cdot u dx=-\int\nabla\times(u\times B)\cdot B dx.
\end{equation*}
Summing \eqref{e3.16} and \eqref{e3.20}, we obtain the desired estimate \eqref{e3.14}.
\end{pf}
From Lemma 3.1 we deduce the following bounds:
\begin{equation}\label{e3.21}
\begin{aligned}
&\rho_n\in L^\infty(0,T;L^\gamma),\rho_n^{-1}\in L^\infty(0,T;L^{\gamma^{-}}),\nabla\sqrt\rho_n\in L^\infty(0,T;L^2),\\
&\sqrt\rho_n u_n\in L^\infty(0,T;L^2),\sqrt\rho_n Du_n\in L^2(0,T;L^2),\sqrt\eta\Delta u_n\in L^2(0,T;L^2),\\
&\nabla\rho_n^{\frac{\gamma}{2}}\in L^2(0,T;L^2),\sqrt\delta \rho_n\in L^\infty(0,T;H^{2s+1}),\\
&B_n\in L^\infty(0,T;L^2) \cap L^2(0,T;H^1),\sqrt{\delta\epsilon}\rho_n\in L^2(0,T;H^{2s+2}),\\
&\sqrt\epsilon\sqrt\rho_n\nabla^2\log\rho_n\in L^2(0,T;L^2),\sqrt{\nu_b(\rho_n)}\nabla B_n\in L^2(0,T;L^2).
\end{aligned}
\end{equation}
By Sobolev embedding we have $\left\|\rho^{-1}\right\|_{L^\infty}\leq C\left\|\rho^{-1}\right\|_{W^{3,1}}$ and
\begin{equation}
\left\|\nabla^3\rho^{-1}\right\|_{L^1}\leq C\left(1+\left\|\nabla^3\rho\right\|_{L^\infty(0,T;L^2)}\right)^3\left(1+\left\|\rho^{-1}\right\|_{L^\infty(0,T;L^4)}\right)^4.
\end{equation}
Therefore, provided that $\gamma^{-}\ge 4$ and $2s+1\ge 3$, we obtain
\begin{equation}\label{e3.23}
\left\|\rho^{-1}\right\|_{L^\infty((0,T)\times \Omega)} \leq C(\delta).
\end{equation}
Furthermore, by Lemma 2.4 we get
\begin{equation}\label{e3.24}
\sqrt\epsilon \kappa\left\|\sqrt\rho_n\right\|_{L^2(0,T;H^2)}+\epsilon^{1/4}\kappa^{1/2}\left\|\nabla\rho_n^{1/4}\right\|_{L^4(0,T;L^4)}\leq C.
\end{equation}
Together with \eqref{e3.21}, this yields the uniform bound for $u_n$, i.e.,
$$\sup_{[0,T_{\max}]}\left\|u_n\right\|_{X_n}\leq C,$$
where $C$ is independent of $T_{\max}$. Thus, we obtain global existence of the approximate solutions.
\section{Passage to the limit $n\to\infty$}
We have now constructed a family of approximate solutions $(\rho_n,u_n,B_n)$. The purpose of this section is to let $n\to\infty$. This is achieved by using the uniform estimates on the approximate solutions and the Aubin-Lions lemma.
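For the reader's convenience, we recall the form of the Aubin-Lions lemma (in the version due to Simon) that is used repeatedly below; the statement is standard. Let $X\hookrightarrow B\hookrightarrow Y$ be Banach spaces such that the embedding $X\hookrightarrow B$ is compact. If $1\leq p<\infty$, the sequence $\{f_n\}$ is bounded in $L^p(0,T;X)$ and $\{\partial_t f_n\}$ is bounded in $L^1(0,T;Y)$, then $\{f_n\}$ is relatively compact in $L^p(0,T;B)$; that is, up to a subsequence,
\begin{equation*}
f_n\to f\quad\text{strongly in } L^p(0,T;B).
\end{equation*}
For instance, a bound on $\sqrt\rho_n$ in $L^2(0,T;H^2)$ together with a bound on $\partial_t\sqrt\rho_n$ in $L^2(0,T;H^{-1})$ yields strong convergence of $\sqrt\rho_n$ in $L^2(0,T;H^1)$.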
\begin{lem}
The following estimates hold for some constant $C$ independent of $n$:
\begin{equation}\label{e4.1}
\begin{aligned}
&\left\|\partial_t\rho_n\right\|_{L^2(0,T;L^2)}+\left\|\rho_n\right\|_{L^2(0,T;H^{2s+2})}\leq C,\\
&\left\|\partial_t\sqrt\rho_n\right\|_{L^2(0,T;H^{-1})}+\left\|\sqrt\rho_n\right\|_{L^2(0,T;H^{2})}\leq C,\\
&\left\|\partial_t(\rho_n u_n)\right\|_{L^2(0,T;H^{-(2s+1)})}+\left\|(\rho_n u_n)\right\|_{L^2(0,T;W^{1,3/2})}\leq C,\\
&\left\|P(\rho_n)\right\|_{L^{5/3}}+\left\|P_c(\rho_n)\right\|_{L^{5/3}}\leq C,\\
&\left\|\partial_t B_n\right\|_{L^2(0,T;H^{-1})}+\left\|B_n\right\|_{L^2(0,T;H^1)}\leq C,\\
&\left\|\partial_t(1/\sqrt\rho_n)\right\|_{L^\infty(0,T;W^{-1,1})}\leq C.
\end{aligned}
\end{equation}
\end{lem}
\begin{pf}
By the continuity equation we have
\begin{equation*}
\partial_t\rho_n=\epsilon\Delta\rho_n-\diver(\rho_n u_n)\in L^2(0,T;L^2).
\end{equation*}
Then, since
\begin{equation*}
\partial_t\sqrt\rho_n+\frac{1}{2\sqrt\rho_n}\diver(\rho_n u_n)=\epsilon\left(\Delta\sqrt\rho_n+\frac{\left|\nabla\sqrt\rho_n\right|^2}{\sqrt\rho_n}\right),
\end{equation*}
the estimate \eqref{e3.24} yields
$$\partial_t\sqrt\rho_n\in L^2(0,T;H^{-1}).$$
By the momentum equation
\begin{align*}
\partial_t(\rho_n u_n)&=-\diver(\rho_n u_n\otimes u_n)-\nabla(P(\rho_n)+P_c(\rho_n))+2\diver(\rho_n D(u_n))-\eta\Delta^2 u_n\\
&\quad +2\kappa^2\rho_n\nabla\left(\frac{\Delta\sqrt\rho_n}{\sqrt\rho_n}\right)+(\nabla\times B_n)\times B_n+\delta\rho_n\nabla\Delta^{2s+1}\rho_n-\epsilon\nabla\rho_n\cdot\nabla u_n,
\end{align*}
we deduce that $\partial_t(\rho_n u_n)\in L^2(0,T;H^{-(2s+1)})$. Moreover, we also have
$$\nabla(\rho_n u_n)=2\nabla\sqrt\rho_n\otimes\sqrt\rho_n u_n+\sqrt\rho_n\,\sqrt\rho_n\nabla u_n\in L^2(0,T;L^{3/2}).$$
Next, we have $P(\rho_n)\in L^\infty(0,T;L^1)$ and $P(\rho_n)\in L^1(0,T;L^3)$; by interpolation we get $P(\rho_n)\in L^{5/3}(0,T;L^{5/3})$.
The term $P_c(\rho_n)$ is treated similarly. Finally, by the magnetic field equation and \eqref{e3.24} we have
$$\partial_t B=\nabla\times(u\times B)-\nabla\times(\nu_b(\rho)\nabla\times B)\in L^2(0,T;H^{-1}).$$
Since
$$\partial_t\left(\frac{1}{\sqrt\rho_n}\right)+\diver\left(\frac{u_n}{\sqrt\rho_n}\right)-\frac{3}{2}\frac{\diver u_n}{\sqrt\rho_n}=-\epsilon\left(\frac{\Delta\sqrt\rho_n}{\rho_n}+\frac{\left|\nabla\sqrt\rho_n\right|^2}{\rho_n^{3/2}}\right),$$
using the previous estimates \eqref{e3.21}, we have $\partial_t\rho_n^{-1/2}\in L^2(0,T;W^{-1,6/5})$.
\end{pf}
By the Aubin-Lions lemma and Lemma 4.1, we have
\begin{equation}\label{e4.2}
\begin{aligned}
&\sqrt\rho_n\to \sqrt\rho \,\text{ strongly in} \,L^2(0,T;H^1),\\
&\rho_n\to\rho\,\text{ strongly in } \,L^2(0,T;H^{2s+1}) \text{ and weakly in } L^2(0,T;H^{2s+2}),\\
&\rho_n u_n\to\rho u \,\text{ strongly in}\, L^2(0,T;L^2),\\
&u_n\to u\,\text{ weakly in}\, L^2(0,T;L^2),\quad B_n\to B\, \text{strongly in}\, L^2(0,T;L^2),\\
&\rho_n^{-1/2}\to \rho^{-1/2}\ \text{almost everywhere},\quad \nabla B_n\to \nabla B\, \text{weakly in}\, L^2(0,T;L^2).
\end{aligned}
\end{equation}
Moreover, we can infer that $ \rho_n u_n\otimes u_n\to \rho u\otimes u$ in the sense of distributions. Together with the estimates \eqref{e3.21} and \eqref{e4.2}, we can prove that
\begin{align*}
&P(\rho_n)\to P(\rho) \,\text{ strongly in }\, L^1((0,T)\times\Omega),\\
&P_c(\rho_n)\to P_c(\rho) \,\text{ strongly in }\, L^1((0,T)\times\Omega).
\end{align*}
The viscosity term passes to the limit:
$$\iint\diver(\rho_n D(u_n))\varphi\, dxdt\to \iint\diver(\rho D(u))\varphi\, dxdt,$$
owing to the fact that
\begin{equation*}
\begin{aligned}
&\iint\rho(\nabla u+(\nabla u)^{\top}):\nabla\varphi dxdt\\
&=\iint(\rho\partial_i u^j\partial_i\varphi^j +\rho\partial_j u^i\partial_i\varphi^j)dx dt\\
&=\iint\left(\partial_i(\rho u^j)\partial_i\varphi^j+\partial_j(\rho u^i)\partial_i\varphi^j -\partial_i\rho\, u^j\partial_i\varphi^j-\partial_j\rho\, u^i\partial_i\varphi^j\right) dxdt\\
&=-2\iint(\nabla\sqrt\rho\otimes\sqrt\rho u)\cdot\nabla\varphi dxdt-2\iint(\sqrt\rho u\otimes\nabla\sqrt\rho):\nabla\varphi dxdt\\
&\quad-\iint\sqrt\rho\sqrt\rho u\cdot \Delta\varphi dxdt-\iint\sqrt\rho\sqrt\rho u\cdot\nabla\diver\varphi dxdt.
\end{aligned}
\end{equation*}
The quantum term can also be passed to the limit by using the convergence of $\sqrt\rho_n$. In fact, for any test function $\varphi$,
\begin{equation}\label{e4.3}
\begin{aligned}
&\int \rho_n\nabla\left(\frac{\Delta\sqrt\rho_n}{\sqrt\rho_n}\right) \varphi dx\\
&=-\int \frac{\Delta\sqrt\rho_n}{\sqrt\rho_n}\diver(\rho_n\varphi) dx\\
&=-\int \Delta\sqrt\rho_n\sqrt\rho_n\diver\varphi dx-2\int\Delta\sqrt\rho_n\nabla\sqrt\rho_n\varphi dx\\
&=-\int\diver(\sqrt\rho_n\nabla\sqrt\rho_n)\diver\varphi dx+\int\left|\nabla\sqrt\rho_n\right|^2\diver\varphi dx\\
&\quad-2\int \diver(\nabla\sqrt\rho_n\otimes\nabla\sqrt\rho_n)\varphi dx+2\int(\nabla\sqrt\rho_n\cdot\nabla)\nabla\sqrt\rho_n\varphi dx\\
&=\int \sqrt\rho_n\nabla\sqrt\rho_n\nabla\diver\varphi dx+2\int \nabla\sqrt\rho_n\otimes\nabla\sqrt\rho_n\nabla\varphi dx.
\end{aligned}
\end{equation}
For the capillarity term we can pass to the limit,
$$\delta\iint\rho_n\nabla\Delta^{2s+1}\rho_n\varphi dxdt\to \delta \iint \rho \nabla \Delta^{2s+1} \rho\varphi dxdt,$$
owing to the strong convergence of $\rho_n$ and the fact that
\begin{equation*}
\delta\iint\rho_n\nabla\Delta^{2s+1}\rho_n\varphi dxdt=-\delta\iint\Delta^{s+1}\rho_n\,\Delta^{s}\diver(\rho_n\varphi) dxdt.
\end{equation*}
By Assumption 3, $\nu_b(\rho)$ has a uniform positive lower bound, which yields the weak convergence of $\nabla B_n$. This, together with the strong convergence of $B_n$ and the weak convergence of $u_n$, enables us to pass to the limit in the magnetic field equation. Therefore, the passage to the limit $n\to\infty$ is complete, and we have shown that $(\rho,u,B)$ solves the system
\begin{equation}\label{e4.4}
\left\{
\begin{aligned}
&\partial_t \rho+\diver(\rho u)=\epsilon\Delta\rho,\quad x\in \Omega,\ t>0,\\
&\partial_t(\rho u)+\diver(\rho u\otimes u)+\nabla(P(\rho)+P_c(\rho))-2\diver(\rho D(u))\\
&\quad+\eta\Delta^2 u+\epsilon\nabla\rho\cdot \nabla u-\delta\rho\nabla\Delta^{2s+1}\rho -2\kappa^2\rho\nabla\left(\frac{\Delta\sqrt\rho}{\sqrt\rho}\right)\\
&\quad-(\nabla\times B)\times B=0,\\
&\partial_t B-\nabla\times(u\times B)+\nabla\times(\nu_b(\rho)\nabla\times B)=0,\\
\end{aligned}
\right.
\end{equation}
in the sense of distributions; the solution also satisfies the energy estimate \eqref{e3.14} and the bound \eqref{e3.24}.
\section{B-D entropy estimate and passage to the limit $\epsilon,\eta,\delta\to 0$}
The purpose of this section is to derive the B-D entropy estimate for the approximate system \eqref{e4.4}. This type of estimate was first established by Bresch, Desjardins and Lin in \cite{Didier2003On}. By \eqref{e3.23} and \eqref{e4.1} we have
\begin{equation}
\rho(x,t)\ge C(\delta)>0\quad\text{and}\quad \rho\in L^2(0,T;H^{2s+2})\cap L^\infty(0,T;H^{2s+1}).
\end{equation}
Then we can use $\nabla\phi(\rho)=2\nabla\rho/\rho$ as a test function in the momentum equation to derive the entropy estimate.
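Before stating the lemma, let us record the algebraic structure behind this choice of test function; this is a standard observation in the B-D entropy theory and serves only as motivation. With $\phi(\rho)=2\log\rho$, so that $\nabla\phi(\rho)=2\nabla\rho/\rho$, the quantity controlled by the entropy is the kinetic energy of the effective velocity $v:=u+\nabla\phi(\rho)$; indeed, expanding the square,
\begin{equation*}
\frac12\int\rho\left|u+\nabla\phi(\rho)\right|^2 dx=\frac12\int\rho\left|u\right|^2 dx+2\int u\cdot\nabla\rho\, dx+8\int\left|\nabla\sqrt\rho\right|^2 dx.
\end{equation*}
Thus the B-D entropy provides, in addition to the kinetic energy, control of one extra derivative of the density, which is the key to the compactness argument of this section.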
\begin{lem}(B-D entropy estimate) The following equality holds \begin{equation}\label{e5.2} \begin{aligned} &\frac{d}{dt}\int\left(\frac12\rho\left|u+\nabla\phi(\rho)\right|^2+H(\rho)+H_c(\rho)+\kappa^2\left|\nabla\sqrt\rho\right|^2+\frac12\left|B\right|^2+\frac{\delta}{2}\left|\nabla^{2s+1}\rho\right|^2\right)dx\\ &\quad+\eta\int \left|\Delta u\right|^2 dx+2\int \rho\left|A(u)\right|^2 dx+2\int\frac{1}{\rho}(P^{'}(\rho)+P_c^{'}(\rho))\left|\nabla\rho\right|^2 dx\\ &\quad+2\kappa^2\int\rho\left|\nabla^2\log\rho\right|^2 dx+\epsilon\kappa^2\int\rho\left|\nabla^2\log\rho\right|^2 dx+\int\nu_b(\rho)\left|\nabla\times B\right|^2dx\\ &\quad+\epsilon\delta\int\left|\Delta^{s+1}\rho\right|^2 dx+2\delta\int\rho\left|\Delta^{s+1}\rho\right|^2dx+\epsilon\int\frac{1}{\rho}(P^{'}(\rho)+P_c^{'}(\rho))\left|\nabla\rho\right|^2 dx\\ &=\epsilon\int \nabla\phi(\rho)\cdot\nabla(\phi^{'}(\rho)\Delta\rho) dx-\epsilon\int \nabla\rho\cdot\nabla u\cdot\nabla\phi(\rho) dx+\epsilon\int \frac{\left|\nabla\phi(\rho)\right|^2}{2}\Delta\rho dx\\ &\quad-\eta\int \Delta u\cdot\nabla\Delta\phi(\rho) dx-\epsilon\int \diver(\rho u)\phi^{'}(\rho)\Delta\rho dx+\int(\nabla\times B)\times B\cdot \nabla\phi(\rho) dx. 
\end{aligned}
\end{equation}
\end{lem}
\begin{pf}
We first multiply the approximate continuity equation by $\frac{\left|\nabla\phi(\rho)\right|^2}{2}$ and obtain
\begin{equation}
\begin{aligned}
&\frac{d}{dt}\int\frac12\rho\left|\nabla\phi(\rho)\right|^2 dx\\
&=\int\rho\partial_t\left(\frac{\left|\nabla\phi(\rho)\right|^2}{2}\right) dx+\int\partial_t\rho\frac{\left|\nabla\phi(\rho)\right|^2}{2} dx\\
&=\int\rho\nabla\phi(\rho)\cdot\nabla(\phi^{'}(\rho)\partial_t\rho) dx+\int\partial_t\rho\frac{\left|\nabla\phi(\rho)\right|^2}{2}dx\\
&=\int\rho\nabla\phi(\rho)\cdot\nabla(\phi^{'}(\rho)\partial_t\rho) dx+\epsilon\int \Delta\rho\frac{\left|\nabla\phi(\rho)\right|^2}{2} dx-\int\frac{\left|\nabla\phi(\rho)\right|^2}{2} \diver(\rho u) dx.
\end{aligned}
\end{equation}
Here the first term on the right-hand side equals
\begin{equation}
\begin{aligned}
&\int\rho\nabla\phi(\rho)\cdot\nabla(\phi^{'}(\rho)\partial_t\rho) dx\\
&=\epsilon\int\rho\nabla\phi(\rho)\nabla(\phi^{'}(\rho)\Delta\rho) dx-\int\rho\nabla u:\nabla\phi(\rho)\otimes \nabla\phi(\rho)dx+\int\rho\left|\nabla\phi(\rho)\right|^2\diver u dx\\
&\quad+\int\rho^2\phi^{'}(\rho)\Delta\phi(\rho)\diver u dx+\int \left|\nabla\phi(\rho)\right|^2\diver(\rho u) dx+\int\rho u\cdot\nabla^2\phi(\rho)\cdot\nabla\phi(\rho) dx\\
&=\epsilon\int\rho\nabla\phi(\rho)\nabla(\phi^{'}(\rho)\Delta\rho) dx-\int\rho\nabla u:\nabla\phi(\rho)\otimes \nabla\phi(\rho)dx+\int\rho\left|\nabla\phi(\rho)\right|^2\diver u dx\\
&\quad+\int\rho^2\phi^{'}(\rho)\Delta\phi(\rho) \diver u dx+\frac12\int\left|\nabla\phi(\rho)\right|^2\diver(\rho u) dx.
\end{aligned}
\end{equation}
Thus
\begin{equation}\label{e5.5}
\begin{aligned}
&\frac{d}{dt}\int\frac12\rho\left|\nabla\phi(\rho)\right|^2 dx=\epsilon\int\rho\nabla\phi(\rho)\nabla(\phi^{'}(\rho)\Delta\rho) dx-\int\rho\nabla u:\nabla\phi(\rho)\otimes \nabla\phi(\rho)dx\\
&\quad+\int\rho\left|\nabla\phi(\rho)\right|^2\diver u dx+\int\rho^2\phi^{'}(\rho)\Delta\phi(\rho) \diver u dx+\epsilon\int \Delta\rho\frac{\left|\nabla\phi(\rho)\right|^2}{2} dx.
\end{aligned}
\end{equation}
Next, we compute the term
\begin{equation}\label{e5.6}
\begin{aligned}
&\frac{d}{dt}\int \rho u\cdot\nabla\phi(\rho) dx\\
&=\int \partial_t(\rho u)\cdot \nabla\phi(\rho) dx+\int \rho u\cdot\nabla(\phi^{'}(\rho)\partial_t\rho) dx\\
&=\int \partial_t(\rho u)\cdot \nabla\phi(\rho) dx+\epsilon\int\rho u\cdot\nabla(\phi^{'}(\rho)\Delta\rho) dx+\int\phi^{'}(\rho)(\diver(\rho u))^2 dx.
\end{aligned}
\end{equation}
Combining \eqref{e5.5} and \eqref{e5.6}, we have
\begin{equation}\label{e5.7}
\begin{aligned}
&\frac{d}{dt}\int\left(\frac12\rho\left|\nabla\phi(\rho)\right|^2+\rho u\cdot\nabla\phi(\rho)\right)dx\\
&=\epsilon\int\rho\nabla\phi(\rho)\nabla(\phi^{'}(\rho)\Delta\rho) dx-\int\rho\nabla u:\nabla\phi(\rho)\otimes \nabla\phi(\rho)dx+\int\rho\left|\nabla\phi(\rho)\right|^2\diver u dx\\
&\quad+\int\rho^2\phi^{'}(\rho)\Delta\phi(\rho) \diver u dx+\epsilon\int \Delta\rho\frac{\left|\nabla\phi(\rho)\right|^2}{2}dx+\int\phi^{'}(\rho)(\diver(\rho u))^2 dx\\
&\quad+\int\diver(\rho u\otimes u):\nabla\phi(\rho) dx-\epsilon\int\diver(\rho u)\phi^{'}(\rho)\Delta\rho dx-\int\nabla(P(\rho)+P_c(\rho))\cdot\nabla\phi(\rho) dx\\
&\quad+2\int\nabla u:\nabla\rho\otimes\nabla\phi(\rho) dx-2\int\nabla\rho\cdot\phi(\rho)\diver u dx-\epsilon\int\nabla\rho\cdot\nabla u\cdot\nabla\phi(\rho) dx\\
&\quad-2\int\rho\Delta\phi(\rho)\diver u dx-2\kappa^2\int\rho\left|\nabla^2\log\rho\right|^2 dx-2\delta\int\left|\Delta^{s+1}\rho\right|^2 dx\\
&\quad-\eta\int\Delta u\cdot\nabla\Delta\phi(\rho)
dx+\int((\nabla\times B)\times B)\cdot\nabla\phi(\rho) dx\\
&=\epsilon\int\rho\nabla\phi(\rho)\nabla(\phi^{'}(\rho)\Delta\rho) dx+\epsilon\int \Delta\rho\frac{\left|\nabla\phi(\rho)\right|^2}{2}dx-\int\nabla(P(\rho)+P_c(\rho))\cdot\nabla\phi(\rho) dx\\
&\quad+2\int\rho\left|D(u)\right|^2 dx-2\int \rho \left|A(u)\right|^2 dx-\epsilon\int\diver(\rho u)\phi^{'}(\rho)\Delta\rho dx-2\delta\int\left|\Delta^{s+1}\rho\right|^2 dx\\
&\quad-2\kappa^2\int\rho\left|\nabla^2\log\rho\right|^2 dx-\eta\int\Delta u\cdot\nabla\Delta\phi(\rho) dx-\epsilon\int\nabla\rho\cdot\nabla u\cdot\nabla\phi(\rho) dx\\
&\quad+\int((\nabla\times B)\times B)\cdot\nabla\phi(\rho) dx.
\end{aligned}
\end{equation}
Combining \eqref{e5.7} with \eqref{e3.14} yields the desired estimate. This completes the proof of the lemma.
\end{pf}
Next, we estimate each term on the right-hand side of \eqref{e5.2}. The first term has a negative sign; in fact, integrating by parts,
\begin{equation}
\epsilon\int \nabla\phi(\rho)\cdot\nabla(\phi^{'}(\rho)\Delta\rho) dx=-4\epsilon\int \frac{1}{\rho}\left|\Delta\rho\right|^2 dx.
\end{equation}
The second term can be estimated as follows:
\begin{align}
\left|\epsilon\int \nabla\rho\cdot\nabla u\cdot\nabla\phi(\rho) dx\right|&=2\epsilon\left|\int\frac{1}{\rho}\nabla\rho\cdot\nabla u\cdot\nabla\rho dx\right|\\
&=2\epsilon\left|\int \frac{\left|\nabla\rho\right|^2}{\rho^{3/2}}\rho^{1/2}\nabla u dx\right|\notag\\
&\leq \epsilon\int \frac{\left|\nabla\rho\right|^4}{\rho^3} dx+\epsilon\int\rho\left|\nabla u\right|^2 dx\notag.
\end{align}
Since
\begin{equation*}
\epsilon\int\rho\left|\nabla u\right|^2 dx\leq 2\epsilon\left(\int \rho\left|A(u)\right|^2 dx+\int\rho\left|D(u)\right|^2 dx\right),
\end{equation*}
we therefore have
\begin{equation}
\left|\epsilon\int \nabla\rho\cdot\nabla u\cdot\nabla\phi(\rho) dx\right|\leq \epsilon\int \frac{\left|\nabla\rho\right|^4}{\rho^3} dx+2\epsilon\int \rho\left|A(u)\right|^2 dx+2\epsilon\int\rho\left|D(u)\right|^2 dx.
\end{equation}
The third term is estimated as follows:
\begin{align}
\left|\epsilon\int \frac{\left|\nabla\phi(\rho)\right|^2}{2}\Delta\rho dx\right|&=2\epsilon\left|\int\frac{\left|\nabla\rho\right|^2}{\rho^2}\Delta\rho\, dx\right|\\
&=2\epsilon\left|\int\frac{\left|\nabla\rho\right|^2}{\rho^{3/2}}\frac{\Delta\rho}{\rho^{1/2}} dx\right|\notag\\
&\leq \epsilon\int \frac{\left|\nabla\rho\right|^4}{\rho^3}dx +\epsilon\int \frac{\left|\Delta\rho\right|^2}{\rho} dx\notag.
\end{align}
The fourth term is estimated as follows:
\begin{equation}
\left|\eta\int \Delta u\cdot\nabla\Delta\phi(\rho) dx\right|\leq \frac{\eta}{2}\left\|\Delta u\right\|_2^2+\frac{\eta}{2}\left\|\nabla\Delta\phi(\rho)\right\|_2^2,
\end{equation}
where
\begin{align*}
\nabla\Delta\phi(\rho)&=2\partial_{kk}\left(\frac{\partial_i \rho}{\rho}\right)\\
&=2\partial_{k}\left(-\frac{1}{\rho^2}\partial_{k}\rho\partial_{i}\rho+\frac{\partial_{ik}\rho}{\rho}\right)\\
&=2\left(2\rho^{-3}(\partial_{k}\rho)^2\partial_i\rho-\frac{1}{\rho^2}\partial_{kk}\rho\partial_i\rho-\frac{2}{\rho^2}\partial_k\rho\partial_{ik}\rho+\frac{\partial_{ikk}\rho}{\rho}\right)\\
&=\frac{2\nabla\Delta\rho}{\rho}-\frac{4(\nabla\rho\cdot\nabla)\nabla\rho}{\rho^2}+\frac{4\left|\nabla\rho\right|^2\Delta\rho}{\rho^3}-\frac{2\Delta\rho\nabla\rho}{\rho^2}.
\end{align*}
Then we get
\begin{equation}
\left\|\nabla\Delta\phi(\rho)\right\|_2\leq C(1+\left\|\rho\right\|_{H^{2s+1}})^3(1+\left\|\rho^{-1}\right\|_{L^\infty})^3\leq C(\delta).
\end{equation}
We can choose $\eta$ small enough with respect to $\delta$ to obtain a uniform bound. The fifth term on the right-hand side of \eqref{e5.2} equals
\begin{equation}
2\epsilon\int \diver(\rho u)\phi^{'}(\rho)\Delta\rho dx =2\epsilon\int \Delta\rho\diver u dx+2\epsilon\int\frac{\Delta\rho}{\rho}\nabla\rho\cdot udx.
\end{equation}
Since
\begin{align*}
2\epsilon\left|\int \Delta\rho\diver u dx\right|&\leq \epsilon\int\frac{\left|\Delta\rho\right|^2}{\rho} dx+\epsilon\int \rho\left|\nabla u\right|^2 dx\\
&\leq \epsilon\int\frac{\left|\Delta\rho\right|^2}{\rho} dx+2\epsilon\int \rho\left|A(u)\right|^2 dx+2\epsilon\int\rho\left|D(u)\right|^2 dx
\end{align*}
and
\begin{equation*}
2\epsilon\left|\int\frac{\Delta\rho}{\rho}\nabla\rho\cdot udx\right|\leq2\epsilon\left\|\rho^{-1}\right\|_{L^\infty}^{3/2}\left\|\rho\right\|_{H^{2s+1}},
\end{equation*}
choosing $\epsilon$ small enough with respect to $\delta$ yields a uniform bound. Finally, we estimate the last term on the right-hand side of \eqref{e5.2}:
\begin{equation}
\int(\nabla\times B)\times B\cdot \nabla\phi(\rho) dx\leq \int \frac{\left|\nabla\times B\right|^2}{\epsilon\rho^2} dx+\epsilon\int \left|\nabla\rho\times B\right|^2 dx.
\end{equation}
The first term can be absorbed by the magnetic diffusion term under Assumption 3, and the second term is estimated as follows:
\begin{equation*}
\epsilon\int\left|\nabla\rho\times B\right|^2 dx\leq \epsilon\left\|\nabla\rho\right\|_{L^\infty}^2\left\|B\right\|_2^2\leq \epsilon\left\|\nabla\rho\right\|_{H^2}^2\left\|B\right\|_2^2\leq C\epsilon\left\|\rho\right\|_{H^{2s+1}}^2.
\end{equation*}
By the same argument, choosing $\epsilon$ small enough with respect to $\delta$ enables us to obtain a uniform bound.
\subsection{Passage to the limits $\epsilon,\eta\to 0$}
In this step, let $(\rho_{\epsilon,\eta},u_{\epsilon,\eta},B_{\epsilon,\eta})$ denote the approximate solutions.
Then from the B-D entropy estimate we can deduce that
\begin{equation}\label{e5.16}
\begin{aligned}
&\rho_{\epsilon,\eta}\in L^\infty(0,T;L^\gamma),\rho_{\epsilon,\eta}^{-1}\in L^\infty(0,T;L^{\gamma^{-}}),\nabla\sqrt\rho_{\epsilon,\eta}\in L^\infty(0,T;L^2),\\
&\sqrt\rho_{\epsilon,\eta} u_{\epsilon,\eta}\in L^\infty(0,T;L^2),\sqrt\rho_{\epsilon,\eta} Du_{\epsilon,\eta}\in L^2(0,T;L^2),\sqrt\eta\Delta u_{\epsilon,\eta}\in L^2(0,T;L^2),\\
&\nabla\rho_{\epsilon,\eta}^{\frac{\gamma}{2}}\in L^2(0,T;L^2),\sqrt\delta \rho_{\epsilon,\eta}\in L^\infty(0,T;H^{2s+1}),\\
&B_{\epsilon,\eta}\in L^\infty(0,T;L^2) \cap L^2(0,T;H^1),\sqrt{\delta}\rho_{\epsilon,\eta}\in L^2(0,T;H^{2s+2}),\\
&\sqrt\kappa\sqrt\rho_{\epsilon,\eta}\nabla^2\log\rho_{\epsilon,\eta}\in L^2(0,T;L^2),\sqrt{\nu_b(\rho_{\epsilon,\eta})}\nabla B_{\epsilon,\eta}\in L^2(0,T;L^2),
\end{aligned}
\end{equation}
and that the following bound holds:
\begin{equation}\label{e5.17}
\sqrt\kappa\left\|\sqrt\rho_{\epsilon,\eta}\right\|_{L^2(0,T;H^2)}+\kappa^{1/2}\left\|\nabla\rho_{\epsilon,\eta}^{1/4}\right\|_{L^4(0,T;L^4)}\leq C,
\end{equation}
where $C$ is independent of the parameters $\epsilon,\eta,\delta$. From the above estimates we can also obtain $\nabla\rho_{\epsilon,\eta}^{-1/2}\in L^2(0,T;L^2)$. Indeed,
$$\nabla\rho_{\epsilon,\eta}^{-1/2}=-\frac12\frac{\nabla\rho_{\epsilon,\eta}}{\rho_{\epsilon,\eta}^{3/2}}=-\frac{\nabla\sqrt\rho_{\epsilon,\eta}}{\rho_{\epsilon,\eta}}.$$
In the region where $\rho>1$ the bound is obvious, so we focus on the case $\rho<1$:
\begin{align*}
\nabla\rho^{-1/2}&=\nabla(\rho^{\frac{\gamma^{-}-1}{2}}\rho^{-\frac{\gamma^{-}}{2}})\\
&=\rho^{\frac{\gamma^{-}-1}{2}}\nabla\rho^{-\frac{\gamma^{-}}{2}}+\nabla\rho^{\frac{\gamma^{-}-1}{2}}\rho^{-\frac{\gamma^{-}}{2}}\\
&=\rho^{\frac{\gamma^{-}-1}{2}}\nabla\rho^{-\frac{\gamma^{-}}{2}}+(1-\gamma^{-})\nabla\rho^{-1/2}.
\end{align*} Hence $\gamma^{-}\nabla\rho^{-1/2}=\rho^{\frac{\gamma^{-}-1}{2}}\nabla\rho^{-\frac{\gamma^{-}}{2}}$. Since $\int_0^{T}\int_{\Omega} H_c^{''}(\rho)\left|\nabla\rho\right|^2 dxdt\leq C$, this together with the relation $\rho H_c^{''}(\rho)=P_c^{'}(\rho)$ yields $\int_0^{T}\int_{\Omega}\rho^{\gamma^{-}-1}\left|\nabla\rho^{-\frac{\gamma^{-}}{2}}\right|^2 dxdt \leq C$. Thus, we have $\nabla\rho^{-1/2}\in L^2(0,T;L^2)$. Following the same procedure as in the proof of Lemma 4.1, an application of the Aubin-Lions lemma together with \eqref{e5.16} and \eqref{e5.17} gives rise to the following compactness result. \begin{lem} The following convergences hold: \begin{equation} \begin{aligned} &\sqrt\rho_{\epsilon,\eta}\to \sqrt\rho \,\text{ strongly in} \,L^2(0,T;H^1),\\ &\rho_{\epsilon,\eta}\to\rho\,\text{strongly in } \,L^2(0,T;H^{2s+1}), \text{ and weakly in } L^2(0,T;H^{2s+2}),\\ &\rho_{\epsilon,\eta} u_{\epsilon,\eta}\to\rho u \,\text{strongly in}\, L^2(0,T;L^2),\\ &u_{\epsilon,\eta}\to u\,\text{strongly in}\, L^2(0,T;L^2), B_{\epsilon,\eta}\to B\, \text{strongly in}\, L^2(0,T;L^2),\\ &\rho_{\epsilon,\eta}^{-1/2}\to \rho^{-1/2}\,\text{almost everywhere},\,\nabla B_{\epsilon,\eta}\to \nabla B\, \text{weakly in}\, L^2(0,T;L^2),\\ &P(\rho_{\epsilon,\eta})\to P(\rho) \,\text{strongly in }\, L^1((0,T)\times\Omega),\\ &P_c(\rho_{\epsilon,\eta})\to P_c(\rho) \,\text{strongly in }\, L^1((0,T)\times\Omega). \end{aligned} \end{equation} \end{lem} \begin{pf} Here we only show the convergence of the velocity; the other convergences follow as in the previous process. It is worth noting that the strong convergence of the velocity $u$ can be obtained in this step. In fact, since the lower bound of the density depends only on the parameter $\delta$ and $\sqrt\rho\nabla u\in L^2(0,T;L^2)$, we obtain a uniform bound on $\nabla u$ in $L^2(0,T;L^2)$ which is independent of $\epsilon,\eta$.
By the strong convergence $\rho_{\epsilon,\eta}u_{\epsilon,\eta}\to\rho u$ we have $\rho_{\epsilon,\eta}u_{\epsilon,\eta}\to\rho u$ almost everywhere in $(0,T)\times \Omega$, which, together with the convergence of $\rho_{\epsilon,\eta}^{-1}$, yields the strong convergence of the velocity $u_{\epsilon,\eta}$. \end{pf} Thus, we can pass to the limit in the nonlinear term $\rho_{\epsilon,\eta}u_{\epsilon,\eta}\otimes u_{\epsilon,\eta}$. Similarly, we can also pass to the limit in the capillarity term $\delta\rho_{\epsilon,\eta}\nabla\Delta^{2s+1}\rho_{\epsilon,\eta}$, the quantum term and the viscosity terms. Moreover, we have \begin{align*} &\left|\eta\iint \Delta^2 u_{\epsilon,\eta}\varphi dx dt\right|\leq \sqrt\eta\left\|\sqrt\eta\Delta u_{\epsilon,\eta}\right\|_2\left\|\Delta\varphi\right\|_{L^2(0,T;L^2)} \to 0,\\ &\left|\epsilon\iint\Delta \rho_{\epsilon,\eta}\varphi dxdt\right|\leq \sqrt\epsilon\left\|\nabla\rho_{\epsilon,\eta}\right\|_{L^2(0,T;L^2)}\left\|\nabla\varphi\right\|_{L^2(0,T;L^2)}\to 0, \\ &\epsilon\left|\iint\nabla\rho_{\epsilon,\eta}\cdot\nabla u_{\epsilon,\eta} \varphi dxdt\right|\leq C \sqrt\epsilon\left\|\sqrt\epsilon\nabla\rho_{\epsilon,\eta}\right\|_{L^2(0,T;L^2)}\left\|\nabla u_{\epsilon,\eta}\right\|_{L^2(0,T;L^2)}\to 0. \end{align*} Thus we have shown that the limit function $(\rho,u,B)$ is a weak solution of the following system \begin{equation} \left\{ \begin{aligned} &\partial_t \rho+\diver(\rho u)=0, \quad x\in \Omega,\ t>0,\\ &\partial_t(\rho u)+\diver(\rho u\otimes u)+\nabla(P(\rho)+P_c(\rho))-2\diver(\rho D(u))\\ &\quad-\delta\rho\nabla\Delta^{2s+1}\rho -2\kappa^2\rho\nabla\left(\frac{\Delta\sqrt\rho}{\sqrt\rho}\right)-(\nabla\times B)\times B=0,\\ &\partial_t B-\nabla\times(u\times B)+\nabla\times(\nu_b(\rho)\nabla\times B)=0,\\ \end{aligned} \right. \end{equation} which satisfies the BD entropy estimate \eqref{e5.2} and the energy estimate \eqref{e3.14}.
Moreover, we also have \begin{equation}\label{e5.20} \kappa\left\|\sqrt\rho\right\|_{L^2(0,T;H^2)}+\kappa^{1/2}\left\|\nabla\rho^{1/4}\right\|_{L^4(0,T;L^4)}\leq C, \end{equation} where $C$ is independent of $\delta$. \subsection{Passing to the limit as $\delta\to 0$} Our goal in this step is to perform the limit as $\delta\to 0$. Here we lose the uniform lower bound on the density, but we can still obtain additional regularity information. Furthermore, the following estimates hold: \begin{equation}\label{e5.21} \begin{aligned} &\rho_{\delta}\in L^\infty(0,T;L^\gamma),\rho_{\delta}^{-1}\in L^\infty(0,T;L^{\gamma^{-}}),\nabla\sqrt\rho_{\delta}\in L^\infty(0,T;L^2),\\ &\sqrt\rho_{\delta} u_{\delta}\in L^\infty(0,T;L^2),\sqrt\rho_{\delta} Du_{\delta}\in L^2(0,T;L^2),\\ &\nabla\rho_{\delta}^{\frac{\gamma}{2}}\in L^2(0,T;L^2),\sqrt\delta \rho_{\delta}\in L^\infty(0,T;H^{2s+1}),\\ &B_{\delta}\in L^\infty(0,T;L^2) \cap L^2(0,T;H^1),\sqrt{\delta}\rho_{\delta}\in L^2(0,T;H^{2s+2}),\\ &\sqrt\kappa\sqrt\rho_{\delta}\nabla^2\log\rho_{\delta}\in L^2(0,T;L^2),\sqrt{\nu_b(\rho_{\delta})}\nabla B_{\delta}\in L^2(0,T;L^2), \end{aligned} \end{equation} and the solutions satisfy the estimate \begin{equation}\label{e5.22} \kappa\left\|\sqrt\rho_{\delta}\right\|_{L^2(0,T;H^2)}+\kappa^{1/2}\left\|\nabla\rho_{\delta}^{1/4}\right\|_{L^4(0,T;L^4)}\leq C. \end{equation} \begin{lem} From the estimates \eqref{e5.21} and \eqref{e5.22}, there exists a constant $C$ independent of $\delta$ such that \begin{align*} &\left\|\nabla u\right\|_{L^p(0,T;L^q)}\leq C,\, p=\frac{2\gamma^{-}}{\gamma^{-}+1}, q=\frac{6\gamma^{-}}{3\gamma^{-}+1}, \\ &\left\|u\right\|_{L^p(0,T;L^{q^{\ast}})}\leq C,\, q^{\ast}=\frac{3q}{3-q},\\ &\left\|\sqrt\rho u\right\|_{L^{p^{'}}(0,T;L^{q^{'}})}\leq C, \,p^{'}>2,q^{'}>2.
\end{align*} \end{lem} \begin{pf} Writing $\nabla u=\frac{1}{\sqrt\rho}\sqrt\rho\nabla u$, since $\sqrt\rho \nabla u\in L^2(0,T;L^2)$ and, by the previous estimates, $\frac{1}{\sqrt\rho}\in L^{2\gamma^{-}}(0,T;L^{6\gamma^{-}})$, applying the H\"{o}lder inequality yields the desired estimate. By the Sobolev embedding $W^{1,q}\hookrightarrow L^{q^{\ast}}$, we obtain $\left\|u\right\|_{L^p(0,T;L^{q^{\ast}})}\leq C$. Next we turn to the estimate of $\sqrt\rho u$. First, for $0<r<1/2$ we have $\sqrt\rho u=(\sqrt\rho u)^{2r}u^{1-2r}\rho^{1/2-r}$; the estimate \eqref{e5.21} yields $\sqrt\rho\in L^\infty(0,T;L^3)$, and hence $\rho^{1/2-r}\in L^\infty(0,T;L^{3/(1/2-r)})$. As a consequence we have $(\sqrt\rho u)^{2r}\in L^\infty(0,T;L^{1/r})$ and $u^{1-2r}\in L^{\frac{p}{1-2r}}(0,T;L^{\frac{q^{\ast}}{1-2r}})$. By the H\"{o}lder inequality we deduce $\left\|\sqrt\rho u\right\|_{L^{p^{'}}(0,T;L^{q^{'}})}\leq C$ for $p^{'}>2,q^{'}>2$, where $\frac{1}{p^{'}}=\frac{1-2r}{p},\frac{1}{q^{'}}=\frac{r}{1}+\frac{1/2-r}{3}+\frac{1-2r}{q^{\ast}}$; taking $\frac{1}{10}<r<\frac12$ enables us to obtain the estimate. \end{pf} Repeating the same procedure as above, we can deduce the following compactness information: \begin{equation} \begin{aligned} &\sqrt\rho_{\delta}\to \sqrt\rho \,\text{ strongly in} \,L^2(0,T;H^1),\\ &\sqrt\rho_{\delta} u_{\delta}\to \sqrt\rho u\, \text{strongly in }\, L^2(0,T;L^2),\\ &\rho_{\delta} u_{\delta}\to\rho u \,\text{strongly in}\, L^2(0,T;L^2),\\ &u_{\delta}\to u\,\text{weakly in}\, L^2(0,T;L^2), B_{\delta}\to B\, \text{strongly in}\, L^2(0,T;L^2),\\ &\rho_{\delta}^{-1/2}\to \rho^{-1/2}\,\text{almost everywhere},\,\nabla B_{\delta}\to \nabla B\, \text{weakly in}\, L^2(0,T;L^2),\\ &P(\rho_{\delta})\to P(\rho) \,\text{strongly in }\, L^1((0,T)\times\Omega),\\ &P_c(\rho_{\delta})\to P_c(\rho) \,\text{strongly in }\, L^1((0,T)\times\Omega).
\end{aligned} \end{equation} In this step we can obtain the strong convergence of $\sqrt\rho_{\delta} u_{\delta}$, which differs from the previous step. From the momentum equation we can obtain bounds on $\partial_t(\rho_{\delta} u_{\delta})$; applying the Aubin-Lions lemma yields the almost everywhere convergence of $\rho_{\delta} u_{\delta}$, which, together with the almost everywhere convergence of $\sqrt\rho_{\delta}$, gives $\sqrt\rho_\delta u_\delta\to \sqrt\rho u$ almost everywhere. Thus we obtain the strong convergence $\sqrt\rho_\delta u_\delta\to \sqrt\rho u$ in $L^2(0,T;L^2)$. The main obstacle in this step is to deal with the term $\delta\rho_\delta\nabla\Delta^{2s+1}\rho_\delta$. The other terms can be treated as in the previous process. \begin{lem} For any test function $\varphi$ we have \begin{equation*} \delta\iint \rho_\delta\nabla\Delta^{2s+1}\rho_\delta\varphi dxdt\to 0. \end{equation*} \end{lem} \begin{pf} By \eqref{e5.21} we have the following uniform estimates $$\rho_\delta\in L^\infty(0,T;L^3), \sqrt\delta\rho_\delta\in L^\infty(0,T;H^{2s+1}),\sqrt\delta\rho_\delta\in L^2(0,T;H^{2s+2}).$$ Together with the Gagliardo-Nirenberg inequality this yields \begin{equation}\label{e5.24} \left\|\nabla^{2s+1}\rho_\delta\right\|_{L^3}\leq C\left\|\rho_\delta\right\|_{W^{2s+2,2}}^{\alpha}\left\|\rho_\delta\right\|_{L^3}^{1-\alpha}, \end{equation} where $\alpha\in (0,1)$ satisfies the relation $$\frac13-\frac{2s+1}{3}=\alpha\left(\frac12-\frac{2s+2}{3}\right)+(1-\alpha)\frac13,\, \alpha=\frac{4s+2}{4s+3}.$$ Moreover, we can also get $\delta^{\frac{\alpha}{2}}\left|\nabla^{2s+1}\rho_\delta\right|\in L^{\frac{2}{\alpha}}(0,T;L^3)$ by using \eqref{e5.24}.
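Indeed, the stated value of $\alpha$ follows from the scaling relation by a direct computation: moving $(1-\alpha)\frac13$ to the left-hand side and simplifying gives \begin{align*} -\frac{2s+1}{3}+\frac{\alpha}{3} &= \alpha\left(\frac12-\frac{2s+2}{3}\right)=-\alpha\,\frac{4s+1}{6},\\ 2\alpha+\alpha(4s+1) &= 2(2s+1),\\ \alpha &= \frac{4s+2}{4s+3}. \end{align*}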
Integrating by parts, $$\delta\iint \rho_\delta\nabla\Delta^{2s+1}\rho_\delta\varphi dxdt= -\delta\iint\Delta^{s+1}\rho_\delta\Delta^s \diver(\rho_\delta\varphi)dxdt.$$ We then focus on the most difficult term: \begin{align*} &\left|\delta\iint \Delta^{s+1}\rho_\delta\Delta^{s}\nabla\rho_\delta\varphi dxdt\right|\\ &\leq C(\varphi) \delta^{\frac{1-\alpha}{2}}\left|\delta^{\frac{\alpha}{2}}\nabla^{2s+1}\rho_\delta\right|_{L^{\frac{2}{\alpha}}(0,T;L^3)}\left|\sqrt \delta \nabla^{2s+2} \rho_\delta\right|_{ L^2(0,T;L^2)}\\ & \to 0 \quad \text{as}\, \delta \to 0. \end{align*} The other terms also converge to 0 by the same approach; thus the proof of Lemma 6.4 is completed. \end{pf} \section{Low Planck limit} Our goal in this section is to prove Theorem 3.2. For the sequence of solutions $(\rho_\kappa ,u_\kappa,B_\kappa)$, we need to show that we can pass to the limit in each term that occurs in the equations. Following the same procedure as in Section 4, we can obtain the same compactness information on the solutions. It is worth noting that the strong convergence of $\sqrt\rho_\kappa$ in this step holds only in the space $L^2(0,T;L^2)$ rather than $L^2(0,T;H^1)$, since the uniform bound of $\sqrt\rho_\kappa$ in $L^2(0,T;H^1)$ only enables us to obtain convergence in $L^2(0,T;L^2)$.
\begin{lem} Under the conditions of Theorem 3.2 and using the uniform estimates, we have \begin{equation}\label{e6.1} \begin{aligned} &\sqrt\rho_{\kappa}\to \sqrt\rho \,\text{ strongly in} \,L^2(0,T;L^2),\\ &\sqrt\rho_{\kappa} u_{\kappa}\to \sqrt\rho u\, \text{strongly in }\, L^2(0,T;L^2),\\ &\rho_{\kappa} u_{\kappa}\to\rho u \,\text{almost everywhere},\\ &u_{\kappa}\to u\,\text{weakly in}\, L^2(0,T;L^2),\\ &\rho_{\kappa}^{-1/2}\to \rho^{-1/2}\,\text{almost everywhere},\\ &P(\rho_{\kappa})\to P(\rho) \,\text{strongly in }\, L^1((0,T)\times\Omega),\\ &P_c(\rho_{\kappa})\to P_c(\rho) \,\text{strongly in }\, L^1((0,T)\times\Omega),\\ & B_{\kappa}\to B\, \text{strongly in}\, L^2(0,T;L^2),\\ &\sqrt{\nu_b(\rho_\kappa)}\nabla B_{\kappa}\to \sqrt{\nu_b(\rho)} \nabla B\, \text{weakly in}\, L^2(0,T;L^2).\\ \end{aligned} \end{equation} \end{lem} \begin{pf} We can use the same procedure as in Section 3 to obtain the convergence of the above terms. The only difference here is the strong convergence of $\sqrt\rho_\kappa$ in the space $L^2(0,T;L^2)$. Since $\nabla\sqrt\rho_\kappa\in L^2(0,T;L^2)$ and $\partial_t\sqrt\rho_\kappa\in L^2(0,T;H^{-1})$, the Aubin-Lions lemma yields $\sqrt\rho_\kappa\to \sqrt\rho$ in $L^2(0,T;L^2)$. \end{pf} Next we focus on the quantum term $\rho_\kappa\nabla\left(\frac{\Delta\sqrt\rho_\kappa}{\sqrt\rho_\kappa}\right)$. \begin{lem} For any test function $\varphi$ we have \begin{equation} 2\kappa^2\iint\rho_\kappa\nabla\left(\frac{\Delta\sqrt\rho_\kappa}{\sqrt\rho_\kappa}\right) \varphi dxdt\to 0.
\end{equation} \end{lem} \begin{pf} By the inequality \eqref{e4.3} we get \begin{equation} \begin{aligned} &2\kappa^2\left|\iint \rho_\kappa\nabla\left(\frac{\Delta\sqrt\rho_\kappa}{\sqrt\rho_\kappa}\right) \varphi dx dt\right|\\ &\leq 2\kappa^2\left|\iint \sqrt\rho_\kappa\nabla\sqrt\rho_\kappa\nabla\diver\varphi dxdt\right|+4\kappa^2\left|\iint \nabla\sqrt\rho_\kappa\otimes\nabla\sqrt\rho_\kappa\nabla\varphi dxdt\right|\\ &\leq 2\kappa^2\left\|\sqrt\rho_\kappa\right\|_{L^\infty(0,T;L^2)}\left\|\nabla\sqrt\rho_\kappa\right\|_{L^\infty(0,T;L^2)}\left\|\nabla\diver\varphi\right\|_{L^\infty((0,T)\times \Omega)}\\ &\quad+4\kappa^2\left\|\nabla\sqrt\rho_\kappa\right\|_{L^\infty(0,T;L^2)}^2\left\|\nabla\varphi\right\|_{L^\infty((0,T)\times \Omega)}\\ &\leq C\kappa^2 \to 0 \quad \text{as}\ \kappa\to 0. \end{aligned} \end{equation} \end{pf} Finally, the proof of Theorem 3.2 is completed. \bibliographystyle{plain}
\section{Introduction} \vspace{-0.05in} With the high cost and competitive landscape of the healthcare industry~\cite{Bodenheimer2005}, health services researchers have applied operations research methods in an effort to decrease costs or increase revenue~\cite{brandeau2004operations}. Additionally, patient wait times have been linked to patient satisfaction and perception of the quality of care~\cite{Bleustein2014}, and are an outcome that operational improvements can address. One area of interest for solving these problems in healthcare is scheduling optimization for outpatient appointments and procedures~\cite{Gupta2008}. Studies have used mathematical programming models to optimize for desired outcomes such as utilization, throughput, and patient wait times~\cite{Cayirli2003}. Other studies have used stochastic models such as discrete event simulations to describe complex clinical processes~\cite{Jun2009}. These studies tune resource constraints such as staffing, equipment, or rooms to improve simulated outcomes~\cite{Mielczarek2012}. In healthcare, these methods are typically applied to busy and high value areas of the system such as chemotherapy~\cite{Le2015}, surgery~\cite{Cardoen2010}, radiation therapy~\cite{Saure2012}, or the emergency department~\cite{Hoot2008}. \textbf{Open problem.} There are several problems with simulation and mathematical models developed in previous studies. First, models describing healthcare processes are specific to a clinic or institution, making the model difficult or impossible to generalize to other use cases~\cite{Roberts2011}. Additionally, these models are difficult to validate with workflow data. Finally, model variables such as procedure times are often multi-faceted or non-modifiable for clinical reasons, thus complicating interventions designed to improve workflow.
While many studies have sought to optimize scheduling or resources in order to improve certain outcomes, little work has been done to automate the identification of problems with clinic operations given real-world data. \textbf{Key contributions.} Unlike previous studies that optimize for a given utility function or outcome, our study seeks simply to diagnose problems with clinic workflow that cause appointments to start later than scheduled. Our model makes no assumptions about resources or existing distributions of services times. Therefore, our model is generalizable to any care setting or institution where data is available for scheduled appointment time, scheduled appointment duration, actual patient arrival time, and actual appointment duration. \iffalse Our model will be able to identify whether late patient arrivals or insufficient time allocated for appointments is primarily responsible for a clinic getting off schedule. The intended audience of the results of our model will clinic administrators and providers that can consider changes to the clinic process that address the problems identified. \fi This paper provides the following contributions to the study of computer aided clinic workflow diagnosis: \begin{itemize} \item It discusses how a constraint satisfaction model can depict the existing state of patient arrival times, appointment start times, and appointment durations. \item It discusses how comparing the existing state to scheduled appointment times can show mismatches in the planned and actual schedules. \item It discusses how a constraint optimization problem can diagnose whether late patients, poor appointment duration allocation, or variability in treatment duration most likely led to the mismatch between planned and actual schedules. 
\end{itemize} \section{Motivating Example} \label{motivation} \vspace{-0.05in} We apply our constraint satisfaction problem to appointments at an outpatient clinic of Vanderbilt University Medical Center between March 27 and April 21, 2017. The basis for our actual schedule is the set of timestamps for when the patient arrives at the clinic, when the patient moves to the exam room for the start of their appointment, and when the patient leaves the clinic. Timestamp data for patient flow are collected by two systems in that area. One system is a workflow management tool integrated with the electronic medical record, where staff track the progress of patients through their appointments~\cite{Weinberg2006}. The second system is an automated patient tracking tool, where patients receive a Bluetooth low energy beacon that tracks their room location within the clinic. For each checkpoint in the patient process, we take the earlier timestamp of the two systems to improve accuracy. We also pre-processed actual cycle times by assuming that the provider clinic is a single-server process. This means that providers only saw one patient at a time in order of their appointment start times. Since most providers see patients in multiple rooms, there are many cases where one patient's room-in time overlaps with the next patient's. In this case, we assume that the earlier patient departed and the later patient arrived in the room halfway through the period where their room-in times overlapped. The planned schedule is taken from the appointment record. Each appointment has a scheduled start time and scheduled duration. The mismatch between the scheduled appointments and the timestamp data is the basis for our constraint satisfaction model. \section{Diagnosing long patient wait times} \label{questions} ... \subsubsection{Question 1: Are late patients responsible?} ... \subsubsection{Question 2: Are poor appointment duration time estimates responsible?} ...
\subsubsection{Question 3: Is treatment time unpredictability responsible?} ... \iffalse \subsubsection{Question 4: Is slow rooming of patients responsible?} ... \subsubsection{Question 5: Is late provider arrival responsible?} ... \fi \section{Constraint Satisfaction Model of Patient Cycle Times} \label{csp} \vspace{-0.05in} Constraint satisfaction problems (CSPs) are defined by a set of variables, such as the positive integer variables X and Y, and a set of constraints over the variables, such as $X < Y$. A valid solution to a CSP is an assignment of values to the variables that adheres to all of the constraints. For example, $X = 1$, $Y = 2$ is a valid solution to this CSP. An assignment of values to the variables is called a labeling. Constraint solvers are automated tools that are used to solve CSPs. A constraint solver takes a set of variables, constraints, and any initial labelings of variables as input. The solver then automatically produces valid labelings for the remaining unlabeled variables that satisfy the constraints. For example, if a constraint solver were provided the CSP above and an initial labeling of Y = 3, it would solve for the valid labelings of X: 1 and 2. In order to use a constraint solver, a CSP must first be defined that captures the relationships between the variables of interest. In this paper, the cycle times of patients and their appointment times are of interest. This section walks through the construction of an initial CSP that captures the relationship between planned appointment times and durations, and actual observed appointment times and durations. In Section~\ref{auto}, this CSP is extended in a way that allows a constraint solver to automatically derive answers to whether late patients or long cycle times are responsible for clinics running behind schedule. Before beginning discussion of the model, a few key assumptions must be expressed.
These key assumptions are outlined in Table~\ref{assumed}. The most important assumption is that we analyze the schedule for a single provider at a time. Analysis of multiple providers is possible, but each will have a separate CSP model built for its analysis. \vspace{-0.1in} \begin{table}[h] \caption{Key Model Assumptions} \vspace{-0.15in} \label{assumed} \begin{center} \begin{tabular}{|l|p{7cm}|} \hline A1. & A model is built for each individual provider's schedule and patients. \\ \hline A2. & A provider completes appointments sequentially. \\ \hline A3. & The appointment times T and At are sorted in ascending order based on actual start time. \\ \hline \end{tabular} \end{center} \vspace{-0.2in} \end{table} \begin{table}[h!] \caption{CSP Workflow Variables} \vspace{-0.15in} \label{cspvars} \begin{center} \begin{tabular}{|l|p{3.5cm}|} \hline $T = \{T_0 \dots T_n\} \in [0, 1440] $ & Scheduled start time of appointment as minutes offset from midnight \\ \hline $D = \{D_0 \dots D_n\} \in [0, 1440] $ & Scheduled duration of appointment in minutes \\ \hline $At = \{At_0 \dots At_n\} \in [0, 1440] $ & Actual start time of appointment as minutes offset from midnight \\ \hline $As = \{As_0 \dots As_n\} \in [0, 1440] $ & Difference in minutes of scheduled vs. actual appointment start time \\ \hline $Ad = \{Ad_0 \dots Ad_n\} \in [0, 1440] $ & Actual duration of appointment in minutes \\ \hline $Ae = \{Ae_0 \dots Ae_n\} \in [0, 1440] $ & Difference in minutes of scheduled vs.
actual appointment duration \\ \hline $Ap = \{Ap_0 \dots Ap_n\} \in [0, 1440] $ & Actual patient arrival time as minutes offset from midnight \\ \hline $F = \{F_0 \dots F_n\} \in [0, 1440] $ & Actual end time of appointment as minutes offset from midnight \\ \hline $C = \{C_0 \dots C_n\} \in [0, 1440] $ & Cycle time of each patient in minutes \\ \hline $W = \{W_0 \dots W_n\} \in [0, 1440] $ & Difference between scheduled and actual cycle time in minutes \\ \hline \end{tabular} \end{center} \vspace{-0.3in} \end{table} We begin our model by defining a basic CSP. In the next subsection, we introduce additional variables into this CSP to support automated wait time diagnosis. The basic form of the CSP is shown in equations~\ref{eq1}-\ref{eq6}. The CSP input to the constraint solver is composed of a planned or expected schedule, $E = <T,D>$, and a set of actual observed values, $A = <At, Ad>$. A cycle time can be calculated for each appointment using either the planned values $cycle(E)$ or the actual observed values $cycle(A)$. \begin{equation} \label{eq1} As_i = \begin{cases} i = 0 & \max (0, ~Ap_i - T_i) \\ i > 0 & \max (0, ~At_{i-1} + Ad_{i-1} - T_i,~Ap_i - T_i) \end{cases} \end{equation} Equation~\ref{eq1} defines the basic constraint covering the calculation of the difference in minutes between the expected start time of an appointment and the actual start time. The first appointment of the day, $As_0$, will either start on time or will be delayed by the difference in minutes between the scheduled start time and the arrival time of the patient, $\max (0, ~Ap_0 - T_0)$. If the patient is late, $As_0$ will be the positive number of minutes that the patient was late to their appointment. For all other appointments, the start time deviation will either be a result of late patient arrival or the late completion of the preceding appointment in At.
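As an illustrative sketch (the function name and sample data below are ours, not part of the paper's tooling), the recurrence in Equation~\ref{eq1} can be evaluated directly on schedule data expressed as minutes after midnight:

```python
# Sketch of Equation (1): the start-time deviation As_i for one provider's
# day. T: scheduled starts, Ap: patient arrivals, At/Ad: actual starts and
# durations, all in minutes after midnight, sorted by actual start time.

def start_time_deviations(T, Ap, At, Ad):
    As = []
    for i in range(len(T)):
        if i == 0:
            As.append(max(0, Ap[0] - T[0]))
        else:
            # A delayed start comes either from a late patient or from the
            # preceding appointment (in actual order) finishing after T_i.
            As.append(max(0, At[i - 1] + Ad[i - 1] - T[i], Ap[i] - T[i]))
    return As

# Three 30-minute slots at 9:00, 9:30 and 10:00 (540, 570, 600 minutes).
T, Ap = [540, 570, 600], [545, 565, 610]
At, Ad = [545, 575, 615], [30, 40, 25]
print(start_time_deviations(T, Ap, At, Ad))  # → [5, 5, 15]
```

Here appointment 1 is delayed not by its (punctual) patient but by appointment 0 ending at 575, illustrating how the two causes are separated.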
Note that At is sorted based on actual appointment start time and not scheduled start time, which allows the analysis to consider deviations from the planned schedule. As shown in Equation~\ref{eq3}, the model constrains the actual duration of the appointment, $Ad_i$, to be equal to the expected duration of the appointment plus the difference between the expected and actual duration, $Ae_i$. This constraint is important later when the modified CSP is formulated to diagnose workflow issues. \begin{equation} \label{eq3} \begin{split} Ad_i & = D_i + Ae_i \\ \end{split} \end{equation} Next, the model constrains the actual end time of a patient's appointment, $F_i$, to be the actual start time plus the expected duration of the appointment, $D_i$, and the difference between the expected and actual duration, $Ae_i$. This constraint is shown in Equation~\ref{eq4}. \begin{equation} \label{eq4} \begin{split} F_i & = At_i + D_i + Ae_i \end{split} \end{equation} A key input into the CSP model is the goal for patient cycle times. Ideally, patients should have a cycle time that matches their scheduled appointment duration. However, in reality, a patient may arrive late or a prior appointment may run late, causing the cycle time and scheduled appointment time not to match. The model defines cycle time as the difference between the arrival time of the patient, $Ap_i$, and the actual finish time of the appointment, $F_i$. This constraint is shown in Equation~\ref{eq5}. \begin{equation} \label{eq5} \begin{split} C_i & = F_i - Ap_i \end{split} \end{equation} The final component of the model defines a goal variable, which captures that, ideally, the scheduled cycle time of the appointment should match the actual cycle time of the appointment, $C_i$. Although it might seem that it is preferable for the actual cycle time to be less than the scheduled cycle time, this indicates potential overestimation and waste in the schedule that could allow for more appointments.
Thus, the ideal schedule has as little deviation as possible between planned and actual cycle times. This goal constraint is shown in Equation~\ref{eq6}. \begin{equation} \label{eq6} \begin{split} W_i & = C_i - D_i \end{split} \end{equation} With this simple CSP formulation of the model, all that a clinic can do is check that the actual collected data meets the expected constraints. If the data does not meet the constraints, it indicates a potential error in the data collection process or a difference between actual operation and the assumptions of this model. The next section extends the CSP model to allow automated analysis of whether late patients or long cycle times are responsible for clinics running behind schedule. \section{Constraint-based Diagnosis of Patient Cycle Times} \label{auto} \vspace{-0.1in} The overall goal of the diagnosis process is to explain why the planned cycle times for patients are longer or shorter than the actual observed cycle times. The automated diagnosis process relies on using a constraint solver to derive changes that could have been made to either the planned schedule or the actual observed schedule that would make the expected and actual cycle times more closely align. For example, the automated diagnosis process may state that had a specific patient arrived on time, the entire schedule for the day would have matched expectations. Alternatively, the automated diagnosis process might state that the actual duration of a single appointment was much longer than planned, indicating that treatment was more complicated than expected, and threw off the schedule. These are the types of outputs that the modified CSP will produce. In order to support these types of diagnoses, the model needs to encode the concept of a ``change'' that could be made to the actual or planned schedule to make them more closely align. The diagnosis tries to find the fewest changes to the actual schedule that would lead to actual cycle times matching planned cycle times.
In other words, it asks what could have gone differently that would have made planned and actual cycle times the same. Later, this section discusses how the constraint solver reasons over these changes to diagnose clinic workflows, since there are often a large number of possible changes that could be made to rectify the mismatch between planned and actual schedules. \subsection{CSP Model of Cycle Time Diagnosis} \vspace{-0.05in} More formally, given a planned schedule $E$ and an actual schedule $A$ such that $cycle(E) \neq cycle(A)$, the diagnosis defines a new CSP that solves for the set of changes R to E and A such that $cycle(changes(E,R)) = cycle(changes(A,R))$. That is, the output of the CSP is a set of modifications to E and A that will make their calculated cycle times for each appointment equal. To support the concept of a potential ``change'', the CSP model needs two additional sets of variables, $R=<\delta Ae, \delta Ap>$. An overview of these variables is shown in Table~\ref{divars}. First, the variable $\delta Ae_i$ is set to 1 by the solver if changing the duration of the $i_{th}$ appointment to match the planned duration would make the actual and planned cycle times more closely align. Second, the variable $\delta Ap_i$ is set to 1 by the solver if changing the patient's arrival time to match the start time of the appointment would make the actual and planned schedules match more closely. \vspace{-0.1in} \begin{table}[h] \caption{CSP Diagnosis Variables} \vspace{-0.15in} \label{divars} \begin{center} \begin{tabular}{|l|p{3.5cm}|} \hline $\delta Ae = \{\delta Ae_0 \dots \delta Ae_n\} \in [0, 1] $ & The difference in actual vs. scheduled treatment time of the $i_{th}$ appointment should be set to 0.\\ \hline $\delta Ap = \{\delta Ap_0 \dots \delta Ap_n\} \in [0, 1] $ & The $i_{th}$ patient's arrival time should be changed to the start time of the appointment.
\\ \hline \end{tabular} \end{center} \end{table} \vspace{-0.1in} In order to use these variables, they must be incorporated into the CSP constraints. The $\delta Ap_i$ change variable is incorporated into the CSP in Equation~\ref{eq7}. The variable $RAp_i$ models the (possibly repaired) patient arrival time. If $\delta Ap_i$ is set to 1, it indicates that the patient arrival time should be set to the appointment time in order to more closely match scheduled and actual cycle times. By setting $\delta Ap_i$ to 1, it causes $RAp_i$ to equal the original planned start time of the appointment. \begin{equation} \label{eq7} RAp_i = \begin{cases} \delta Ap_i = 0 & Ap_i \\ \delta Ap_i = 1 & T_i \end{cases} \end{equation} The $\delta Ae_i$ variable is incorporated into the constraints in Equation~\ref{eq8}. If $\delta Ae_i$ is set to 1, $RAd_i$ takes the value of the original planned duration. Otherwise, $RAd_i$ takes the actual duration of the appointment as its value. \begin{equation} \label{eq8} RAd_i = \begin{cases} \delta Ae_i = 0 & D_i + Ae_i \\ \delta Ae_i = 1 & D_i \end{cases} \end{equation} Finally, in Equation~\ref{eq10}, the model ties the new change variables to the calculation of the difference in planned vs. actual start time of the appointment. The constraint is a modified version of Equation~\ref{eq1} that uses the $RAp_i$, $RAt_i$, and $RAd_i$ variables. For example, if $Ap_i \neq T_i$ but $\delta Ap_i = 1$, $RAp_i$ will equal $T_i$, just as it would have if the patient had arrived on time.
\begin{equation} \label{eq10} RAt_i = \begin{cases} i = 0 & RAp_i \\ i > 0 & \max (0, ~RAt_{i-1} + RAd_{i-1}, RAp_i) \end{cases} \end{equation} \begin{equation} \label{eq11} \begin{split} (RAt_i + RAd_i) = (T_i + D_i + \epsilon) \end{split} \end{equation} \subsection{Diagnosis as Optimization} \vspace{-0.06in} Clearly, there are arbitrarily many changes that could be made to the planned and actual schedules that would cause their cycle times for appointments to be the same. Therefore, a mechanism is needed to express to the constraint solver how to rank possible changes and diagnose the difference between an expected and actual schedule. The mechanism that the model uses to rank possible sets of changes is to try to minimize the total number of changes made to either the planned schedule, E, or the actually observed schedule, A. That is, the constraint solver is asked to solve for a solution that minimizes the value of Equation~\ref{eq12}. The solver is trying to find the minimal set of patients that could have arrived on time and appointments that could have met their expected duration to make the overall cycle times of all appointments match in both planning and actuality. \begin{equation} \label{eq12} \begin{split} \sum_0^n \delta Ae_i + \delta Ap_i \end{split} \end{equation} The output from the constraint solver will be a labeling of the variables in the CSP that minimizes the number of changes that have to be made to the planned or actual schedule to make them consistent. A key question is how this variable labeling can be used to answer questions about patient cycle times. The variables $\delta Ae_i$ and $\delta Ap_i$ are the path to answering these questions. \subsubsection{Diagnosing: Are late patients responsible?} The $\delta Ap_i$ variables determine if the minimal set of changes to make the actual and planned cycle times align includes changing the arrival time of patients. 
If late patients are part of the minimal set of changes that explains the difference between planned and actual execution time, it indicates that late patients are a factor, and the model can precisely pinpoint which patients contributed to throwing off the planned cycle times. For example, if the 2nd patient's $\delta Ap_2$ variable is set to 1 and no other change variables are set, it indicates that the solver can explain the discrepancy between the planned and actual cycle times by that patient's tardiness alone. Had that single patient arrived on time, actual cycle times for all appointments would have met their planned cycle times. The solver can output a single patient late arrival, multiple late arrivals, or a combination of late arrivals and poorly predicted appointment durations as the root cause. If a large number of patient arrival times (or all of them) are flagged as needing to be changed, meaning most patients are late, this is a potential indicator that the front desk check-in process is slow. It could also indicate problems with the accessibility of the clinic location, such as difficulty in finding parking or navigating to the clinic. \subsubsection{Diagnosing: Are poor appointment block time estimates responsible?} The $\delta Ae_i$ variables indicate that appointment treatment times explain the discrepancy between planned and actual cycle times. For example, if $\delta Ae_3 = 1$, it indicates that the 3rd appointment of the day went over its expected duration and contributed to the discrepancy between planned and actual times. The solver can output a single appointment, or a combination of appointments and late arrivals, that created the issue. If a large percentage of appointment durations exceeded expectations, meaning a large number of $\delta Ae_i$ variables are 1, it indicates a more pervasive issue with appointment block time planning.
That is, if appointments consistently run over their allotted time, then it is likely that the provider is being scheduled insufficient time to see and treat each patient. Alternatively, if a single provider consistently has appointments that run past their expected duration, it may be that the particular provider is slower or spends more time talking to their patients. \subsubsection{Diagnosing: Is treatment time unpredictability responsible?} If only a small number of $\delta Ae_i$ variables consistently explain the discrepancy, it means that each day a few unpredictable appointments run late and cause delays. For example, providers in urgent care clinics may face highly unpredictable health situations compared to other clinics with less emergent and varying conditions. Small numbers of $\delta Ae_i$ variables set to 1 indicate that it is unlikely that the clinic could have done anything differently to stay on time. \iffalse \subsubsection{Diagnosing: Is slow rooming of patients responsible?} ... \subsubsection{Diagnosing: Is late provider arrival responsible?} ... \fi \vspace{-0.05in} \section{Results} \label{results} \vspace{-0.05in} From March 27, 2017 to April 21, 2017, 14 providers saw at least five patients on at least one appointment day at the Vanderbilt University Medical Center clinic in our study. These providers completed a total of 622 appointments over this period. Of all appointments, 116 started late due to the patient arriving after the scheduled time, while 256 ended after the allocated time due to delayed cycle times. Figure 1 shows an example of one provider's schedule on one day where a combination of late patients and long cycle times caused the clinic to run off schedule. In this example, the solver determined that making the 9th patient arrive on time and completing the 6th, 7th, 10th, 11th, 12th, 14th, 15th, and 16th appointments on schedule would cause the rest of the appointments to run on schedule.
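This diagnosis can be sketched as a brute-force minimization of Equation~\ref{eq12} subject to the repaired-schedule constraints of Equations~\ref{eq7}--\ref{eq11} with $\epsilon = 0$. The sketch below is our own illustration (function and variable names are assumptions, not the paper's implementation); a real deployment would use a constraint solver rather than exhaustive enumeration:

```python
from itertools import product

def diagnose(T, D, Ap, Ae):
    """Return (dAp, dAe) 0/1 flag vectors with the fewest total flags such
    that the repaired schedule ends every appointment at its planned end
    time (Equation 11 with epsilon = 0). All times are minute offsets."""
    n = len(T)
    best = None
    for flags in product((0, 1), repeat=2 * n):
        dAp, dAe = flags[:n], flags[n:]
        prev_end = None
        feasible = True
        for i in range(n):
            RAp = T[i] if dAp[i] else Ap[i]          # Equation 7
            RAd = D[i] if dAe[i] else D[i] + Ae[i]   # Equation 8
            RAt = RAp if i == 0 else max(0, prev_end, RAp)  # Equation 10
            prev_end = RAt + RAd
            if RAt + RAd != T[i] + D[i]:             # Equation 11, epsilon = 0
                feasible = False
                break
        if feasible and (best is None or sum(flags) < sum(best[0] + best[1])):
            best = (dAp, dAe)
    return best
```

For instance, for a two-appointment day in which only the first patient is ten minutes late, the search flags only $\delta Ap_0$, mirroring the single-tardiness diagnosis described above.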
In Table IV, we aggregate the diagnostic variables $\delta Ap$ and $\delta Ae$ for each provider over all of their clinic days. Providers are sorted by the total number of patients seen. Provider A saw the most patients over the study period, and the solver determined that 12 patient check-in modifications and 40 appointment duration modifications were the minimum necessary to make that provider's clinics run on time. All providers had more appointment duration revisions than patient check-in revisions in the optimized schedule, except for Provider D, who had $\Sigma\delta Ap = 16 $ and $\Sigma\delta Ae = 14$, and Provider K, who had $\Sigma\delta Ap = \Sigma\delta Ae = 5$. \begin{figure*} \centering \includegraphics[width=\textwidth]{img/exampleSched.png} \vspace{-0.35in} \caption{Visualization of original and optimal patient check-in and cycle times} \label{fig:1} \vspace{-0.25in} \end{figure*} \begin{table}[htbp] \centering \caption{Clinic Workflow Diagnosis by Provider} \vspace{-0.1in} \begin{tabular}{|l|r|r|r|r|} \hline Provider & $\Sigma \delta Ap $ & $\Sigma \delta Ae$ & Clinic Days & Patients Seen \\ \hline A & 12 & 40 & 7 & 106 \\ B & 5 & 11 & 7 & 66 \\ C & 15 & 22 & 4 & 66 \\ D & 16 & 14 & 10 & 63 \\ E & 12 & 27 & 4 & 59 \\ F & 5 & 6 & 8 & 51 \\ G & 7 & 20 & 2 & 45 \\ H & 11 & 13 & 4 & 40 \\ I & 12 & 12 & 6 & 38 \\ J & 12 & 15 & 3 & 38 \\ K & 5 & 5 & 3 & 24 \\ L & 2 & 7 & 3 & 15 \\ M & 0 & 0 & 1 & 6 \\ N & 2 & 2 & 1 & 5 \\ \hline \end{tabular}% \label{tab:provider}% \end{table}% Table V shows aggregate totals for $\delta Ap$ and $\delta Ae$ by date across all providers who had clinic that day. Again, $\Sigma\delta Ae > \Sigma\delta Ap$ on most days, except on March 29th, April 4th, April 19th, and April 21st. There does not appear, from our sample, to be any correlation between the ratio $\Sigma\delta Ap : \Sigma\delta Ae$ and the number of patients seen, the number of providers, or the day of the week.
\begin{table}[htbp] \centering \caption{Clinic Workflow Diagnosis by Date} \vspace{-0.1in} \begin{tabular}{|r|r|r|r|r|} \hline Date & $\Sigma \delta Ap $ & $\Sigma \delta Ae$ & Patients Seen & \# Providers \\ \hline 27-Mar & 8 & 15 & 53 & 4 \\ 28-Mar & 4 & 15 & 41 & 4 \\ 29-Mar & 9 & 9 & 35 & 4 \\ 30-Mar & 8 & 14 & 41 & 4 \\ 31-Mar & 1 & 4 & 12 & 2 \\ 3-Apr & 7 & 13 & 48 & 4 \\ 4-Apr & 9 & 8 & 23 & 3 \\ 5-Apr & 6 & 11 & 30 & 3 \\ 6-Apr & 14 & 15 & 48 & 5 \\ 7-Apr & 3 & 6 & 18 & 3 \\ 10-Apr & 7 & 15 & 43 & 3 \\ 11-Apr & 9 & 14 & 46 & 4 \\ 12-Apr & 9 & 10 & 38 & 4 \\ 13-Apr & 6 & 14 & 38 & 4 \\ 14-Apr & 0 & 1 & 5 & 1 \\ 17-Apr & 3 & 5 & 20 & 2 \\ 18-Apr & 4 & 7 & 22 & 3 \\ 19-Apr & 6 & 6 & 26 & 3 \\ 20-Apr & 1 & 10 & 29 & 2 \\ 21-Apr & 2 & 2 & 6 & 1 \\ \hline \end{tabular}% \label{tab:date}% \vspace{-0.25in} \end{table}% Finally, we aggregated $\delta Ae$ and $\delta Ap$ by the position of the revised appointment in the schedule in Table VI. For each provider clinic day with $n$ appointments, any $\delta Ae$ and $\delta Ap$ in the first $\lfloor n/2 \rfloor$ appointments would be assigned to the ``first half'', while the remainder would be assigned to the ``second half''. This means that provider clinic days with an odd number of appointments have one more appointment attributed to the second half. Even with this discrepancy in the number of appointments favoring the second half, there were more modifications made to check-ins and cycle times in the first half of the schedule.
\vspace{-0.1in} \begin{table}[htbp] \centering \caption{Clinic Workflow Diagnosis by Position in Schedule} \vspace{-0.1in} \begin{tabular}{|r|r|r|} \hline & $\Sigma \delta Ap $ & $\Sigma \delta Ae$ \\ \hline First Half of Schedule & 63 & 116 \\ \hline Second Half of Schedule & 53 & 78 \\ \hline \end{tabular}% \label{tab:position}% \end{table}% \section{Related Work} \label{Related Work} \vspace{-0.08in} While this work is the first to use a CSP to diagnose problems in clinic workflow, other studies have used CSPs to create schedules in healthcare settings. Healthcare organizations use CSPs to solve nurse scheduling problems, where a program creates a staffing schedule that satisfies hard constraints, such as 24-hour coverage for inpatient units, while optimizing for soft constraints, such as nurse preferences~\cite{Cheang2003}. In non-healthcare domains, application developers have used constraint satisfaction optimization to identify the smallest number of software and hardware feature changes necessary to satisfy a set of dependency constraints~\cite{White2010}. As in our study, the CSP was used to identify conflicts in the existing feature sets, while the aggregated optimal number of modifications allowed developers to diagnose the design elements that needed the most work. \section{Discussion} \label{discussion} \vspace{-0.05in} \textbf{Interpretation of results.} Our results demonstrate how a constraint optimization problem can be used to diagnose problems with clinic workflow. In diagnosing whether late patients are responsible for the clinic going off schedule, we observed that for certain providers (such as Provider D in Table IV) and certain clinic days (such as April 4th in Table V), changing the arrival times for late patients would have caused the rest of the day to run according to schedule more so than adjusting planned appointment durations.
Providers where $\Sigma\delta Ap > \Sigma\delta Ae$ may benefit from better coordination with the patient before their appointment in the form of appointment reminders, driving directions, or valet parking. Similarly, if the clinic notices trends in days that lead to a high $\Sigma\delta Ap$, administrators could send reminders to patients ahead of days where tardy patients are likely to have a large impact on the schedule. From our study sample, we are able to diagnose that poor appointment block time estimates are largely responsible for planned schedule breakdown. For most providers and clinic days, a large number of changes to appointment duration are needed to make the clinic run on schedule. This finding implies that there is overscheduling of patients, where the planned appointment time allocation is insufficient to address patient needs. The identification of these challenges could lead the clinic to make changes to clinic operations such as increasing planned appointment times, extending clinic hours, or increasing the number of providers. Finally, we observe in Table VI that the solver made more schedule optimization changes in the first half of provider clinic days. This result is not surprising, since a late patient or a longer than expected appointment early in the day can adversely affect the rest of the schedule. This finding may lead providers to schedule fewer patients and longer appointment blocks in the first half of the day to increase the likelihood of later appointments running on time. \textbf{Current Limitations.} Despite the effectiveness of this model in identifying problems with clinic workflow, there are several limitations that affect the validity and generalizability of this work. First, our model does not account for interaction between potential changes and other appointments. By keeping $RAd_i = D_i$ where $\delta Ae_i = 1$, we assume that providers do not adjust the time they spend with patients based on their workload.
In fact, providers may speed up or slow down their encounters with patients based on whether or not they are behind schedule. Another limitation of our model is that treating clinic operation as a single-server process may be an oversimplification. Once patients enter exam rooms, they are often seen by multiple healthcare professionals. \iffalse such as nurses and technicians before or after their encounter with the provider. Additionally, providers may leave and revisit a patient multiple times during a visit, allowing them to treat multiple patients at once. The method for schedule pre-processing tends to be optimistic for cycle completion times. Since the original schedules are a "best case scenario", changes recommended by the solver should still be valid. \fi Finally, we assume in the constraint optimization problem that the solution with the fewest changes to $\delta Ap$ and $\delta Ae$ is the best for getting the clinic back on schedule, even though some interventions may be easier to implement than others. \iffalse It is unlikely that interventions to help patients arrive on-time or shortening cycle times are equally difficult to implement. Future work will investigate weighting $\delta Ap$ and $\delta Ae$ to account for costs. These weights could subsequently be tuned for different clinics and institutions based on their ability to modify patient and clinic behavior. \fi \iffalse We intend to use the output from our constraint optimization problem as outcomes to predict provider clinic days that are likely to go off schedule. Combining workflow data in this study with clinical variables such as billing codes, diagnoses, and medications, and operational variables such as staffing and patient distance traveled could help clinics better prepare for busy days. \fi \section{Conclusions} \vspace{-0.08in} \label{conclusion} The results from this constraint optimization problem offer valuable insights that could help improve workflow in outpatient settings.
The minimum number of changes to patient check-in times and appointment durations reveals whether patients or the healthcare system are responsible for the clinic running behind schedule. Using this method to diagnose previous clinic schedules can inform interventions that decrease patient wait times and improve provider utilization. \section{Introduction} \vspace{-0.05in} With the high cost and competitive landscape of the healthcare industry~\cite{Bodenheimer2005}, health services researchers have applied operations research methods in an effort to decrease costs or increase revenue~\cite{brandeau2004operations}. Additionally, patient wait times have been linked to patient satisfaction and perception of the quality of care~\cite{Bleustein2014}, and are an outcome that operational improvements can address. One area of interest for solving these problems in healthcare is scheduling optimization for outpatient appointments and procedures~\cite{Gupta2008}. Studies have used mathematical programming models to optimize for desired outcomes such as utilization, throughput, and patient wait times~\cite{Cayirli2003}. Other studies have used stochastic models such as discrete event simulations to describe complex clinical processes~\cite{Jun2009}. These studies tune resource constraints such as staffing, equipment, or rooms to improve simulated outcomes~\cite{Mielczarek2012}. In healthcare, these methods are typically applied to busy and high-value areas of the system such as chemotherapy~\cite{Le2015}, surgery~\cite{Cardoen2010}, radiation therapy~\cite{Saure2012}, or the emergency department~\cite{Hoot2008}. \textbf{Open problem.} There are several problems with the simulation and mathematical models developed in previous studies. First, models describing healthcare processes are specific to a clinic or institution, making the model difficult or impossible to generalize to other use cases~\cite{Roberts2011}.
Additionally, these models are difficult to validate with workflow data. Finally, model variables such as procedure times are often multi-faceted or non-modifiable for clinical reasons, thus complicating interventions designed to improve workflow. While many studies have sought to optimize scheduling or resources in order to improve certain outcomes, little work has been done to automate the identification of problems with clinic operations given real-world data. \textbf{Key contributions.} Unlike previous studies that optimize for a given utility function or outcome, our study seeks simply to diagnose problems with clinic workflow that cause appointments to start later than scheduled. Our model makes no assumptions about resources or existing distributions of service times. Therefore, our model is generalizable to any care setting or institution where data is available for scheduled appointment time, scheduled appointment duration, actual patient arrival time, and actual appointment duration. \iffalse Our model will be able to identify whether late patient arrivals or insufficient time allocated for appointments is primarily responsible for a clinic getting off schedule. The intended audience of the results of our model will be clinic administrators and providers that can consider changes to the clinic process that address the problems identified. \fi This paper provides the following contributions to the study of computer-aided clinic workflow diagnosis: \begin{itemize} \item It discusses how a constraint satisfaction model can depict the existing state of patient arrival times, appointment start times, and appointment durations. \item It discusses how comparing the existing state to scheduled appointment times can show mismatches in the planned and actual schedules.
\item It discusses how a constraint optimization problem can diagnose whether late patients, poor appointment duration allocation, or variability in treatment duration most likely led to the mismatch between planned and actual schedules. \end{itemize} \section{Motivating Example} \label{motivation} \vspace{-0.05in} We apply our constraint satisfaction problem to appointments at an outpatient clinic of Vanderbilt University Medical Center between March 27 and April 21, 2017. The basis for our actual schedule is the set of timestamps recording when the patient arrives at the clinic, when the patient moves to the exam room for the start of their appointment, and when the patient leaves the clinic. Timestamp data for patient flow are collected by two systems in that area. One system is a workflow management tool integrated with the electronic medical record, where staff track the progress of patients through their appointments~\cite{Weinberg2006}. The second system is an automated patient tracking tool, where patients receive a Bluetooth low energy beacon that tracks their room location within the clinic. For each checkpoint in the patient process, we take the earlier timestamp of the two systems to improve accuracy. We also pre-processed actual cycle times by assuming that the provider clinic is a single-server process. This means that providers only saw one patient at a time, in order of their appointment start times. Since most providers see patients in multiple rooms, there are many cases where a patient's room-in time overlaps with the next patient's. In these cases, we assume that the earlier patient departed and the later patient arrived in the room halfway through the interval in which their room-in times overlapped. The planned schedule is taken from the appointment record. Each appointment has a scheduled start time and scheduled duration. The mismatch between the scheduled appointments and the timestamp data is the basis for our constraint satisfaction model.
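The single-server pre-processing step can be sketched as follows. This is a minimal illustration under the midpoint assumption described above; the function and variable names are our own, not part of the clinic's tooling:

```python
def resolve_overlaps(intervals):
    """Split overlapping room occupancy intervals at the midpoint of the
    overlap, enforcing a single-server view of the provider's day.

    intervals: list of (room_in, room_out) minute offsets, sorted by room_in.
    """
    repaired = [list(intervals[0])]
    for start, end in intervals[1:]:
        prev = repaired[-1]
        if start < prev[1]:
            # Overlap: assume the earlier patient departed and the later
            # patient arrived halfway through the overlapping span.
            midpoint = (start + prev[1]) / 2
            prev[1] = midpoint
            start = midpoint
        repaired.append([start, end])
    return [tuple(iv) for iv in repaired]
```

For example, room intervals of 9:00--9:30 and 9:20--10:00 (in minutes, 540--570 and 560--600) are repaired to 540--565 and 565--600, splitting the ten-minute overlap evenly.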
\section{Diagnosing long patient wait times} \label{questions} ... \subsubsection{Question 1: Are late patients responsible?} ... \subsubsection{Question 2: Are poor appointment duration time estimates responsible?} ... \subsubsection{Question 3: Is treatment time unpredictability responsible?} ... \iffalse \subsubsection{Question 4: Is slow rooming of patients responsible?} ... \subsubsection{Question 5: Is late provider arrival responsible?} ... \fi \section{Constraint Satisfaction Model of Patient Cycle Times} \label{csp} \vspace{-0.05in} Constraint satisfaction problems (CSPs) are defined by a set of variables, such as the positive integer variables $X$ and $Y$, and a set of constraints over those variables, such as $X < Y$. A valid solution to a CSP is an assignment of values to the variables that adheres to all of the constraints; such an assignment is called a labeling. For example, $X = 1$, $Y = 2$ is a valid solution to this CSP. Constraint solvers are automated tools that are used to solve CSPs. A constraint solver takes a set of variables, constraints, and any initial labelings of variables as input. The solver then automatically produces valid labelings for the remaining unlabeled variables that satisfy the constraints. For example, if a constraint solver was provided the CSP above and an initial labeling of $Y = 3$, it would solve for the valid labelings of $X$: 1 and 2. In order to use a constraint solver, a CSP must first be defined that captures the relationships between the variables of interest. In this paper, the cycle times of patients and their appointment times are of interest. This section walks through the construction of an initial CSP that captures the relationship between planned appointment times and durations, and actual observed appointment times and durations.
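The toy CSP introduced at the start of this section ($X < Y$ over bounded positive integers) can be solved by brute-force enumeration. The sketch below is our own illustration of how a solver consumes variables, constraints, and initial labelings; production solvers use far more efficient search:

```python
from itertools import product

def solve(variables, domain, constraints, initial=None):
    """Enumerate every labeling of the unlabeled variables over `domain`
    that satisfies all constraints, honoring any initial labelings."""
    initial = initial or {}
    free = [v for v in variables if v not in initial]
    solutions = []
    for values in product(domain, repeat=len(free)):
        labeling = dict(initial, **dict(zip(free, values)))
        if all(constraint(labeling) for constraint in constraints):
            solutions.append(labeling)
    return solutions

# The toy CSP: X and Y are positive integers (bounded for enumeration), X < Y.
constraints = [lambda lab: lab["X"] < lab["Y"]]

# With the initial labeling Y = 3, the solver finds the labelings X = 1 and X = 2.
solutions = solve(["X", "Y"], range(1, 4), constraints, initial={"Y": 3})
```

The full diagnosis CSP in this paper is handed to a real constraint solver, but the interface is the same: variables, domains, constraints, and initial labelings in; satisfying labelings out.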
In Section~\ref{auto}, this CSP is extended in a way that allows a constraint solver to automatically derive answers to whether late patients or long cycle times are responsible for clinics running behind schedule. Before beginning discussion of the model, a few key assumptions must be expressed. These key assumptions are outlined in Table~\ref{assumed}. The most important assumption is that we analyze the schedule for a single provider at a time. Analysis of multiple providers is possible, but a separate CSP model is built for each provider. \vspace{-0.1in} \begin{table}[h] \caption{Key Model Assumptions} \vspace{-0.15in} \label{assumed} \begin{center} \begin{tabular}{|l|p{7cm}|} \hline A1. & A model is built for each individual provider's schedule and patients. \\ \hline A2. & A provider completes appointments sequentially. \\ \hline A3. & The appointment times $T$ and $At$ are sorted in ascending order based on actual start time. \\ \hline \end{tabular} \end{center} \vspace{-0.2in} \end{table} \begin{table}[h!] \caption{CSP Workflow Variables} \vspace{-0.15in} \label{cspvars} \begin{center} \begin{tabular}{|l|p{3.5cm}|} \hline $T = \{T_0 \dots T_n\} \in [0, 1440] $ & Scheduled start time of appointment as minutes offset from midnight \\ \hline $D = \{D_0 \dots D_n\} \in [0, 1440] $ & Scheduled duration of appointment in minutes \\ \hline $At = \{At_0 \dots At_n\} \in [0, 1440] $ & Actual start time of appointment as minutes offset from midnight \\ \hline $As = \{As_0 \dots As_n\} \in [0, 1440] $ & Difference in minutes of scheduled vs. actual appointment start time \\ \hline $Ad = \{Ad_0 \dots Ad_n\} \in [0, 1440] $ & Actual duration of appointment in minutes \\ \hline $Ae = \{Ae_0 \dots Ae_n\} \in [0, 1440] $ & Difference in minutes of scheduled vs.
actual appointment duration \\ \hline $Ap = \{Ap_0 \dots Ap_n\} \in [0, 1440] $ & Actual patient arrival time as minutes offset from midnight \\ \hline $F = \{F_0 \dots F_n\} \in [0, 1440] $ & Actual end time of appointment as minutes offset from midnight \\ \hline $C = \{C_0 \dots C_n\} \in [0, 1440] $ & Cycle time of each patient in minutes \\ \hline $W = \{W_0 \dots W_n\} \in [0, 1440] $ & Difference between scheduled and actual cycle time in minutes \\ \hline \end{tabular} \end{center} \vspace{-0.3in} \end{table} We begin our model by defining a basic CSP. In the next subsection, we introduce additional variables into this CSP to support automated wait time diagnosis. The basic form of the CSP is shown in Equations~\ref{eq1}-\ref{eq3}. The CSP input to the constraint solver is composed of a planned or expected schedule, $E = \langle T,D \rangle$, and a set of actual observed values, $A = \langle At, Ad \rangle$. A cycle time can be calculated for each appointment using either the planned values, $cycle(E)$, or the actual observed values, $cycle(A)$. \begin{equation} \label{eq1} As_i = \begin{cases} \max (0, ~Ap_i - T_i) & \text{if } i = 0 \\ \max (0, ~At_{i-1} + Ad_{i-1} - T_i, ~Ap_i - T_i) & \text{if } i > 0 \end{cases} \end{equation} Equation~\ref{eq1} defines the basic constraint covering the calculation of the difference in minutes between the expected start time of an appointment and the actual start time. The first appointment of the day, $As_0$, will either start on time or will be delayed by the difference in minutes between the scheduled start time and the arrival time of the patient, $\max (0, ~Ap_i - T_i)$. If the patient is late, $As_0$ will be the positive number of minutes by which the patient was late to their appointment. For all other appointments, the start time deviation will either be a result of late patient arrival or the late completion of the preceding appointment in $At$.
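A procedural reading of the recurrence in Equation~\ref{eq1} can be sketched as follows (our own illustration; names are assumptions, and all times are minute offsets from midnight, sorted by actual start time per assumption A3):

```python
def start_deviations(T, Ap, At, Ad):
    """Compute As_i, the minutes by which appointment i starts late, per
    Equation 1: driven by late patient arrival for the first appointment,
    and by arrival or the late finish of the prior appointment otherwise.

    T: scheduled starts, Ap: patient arrivals,
    At: actual starts, Ad: actual durations.
    """
    As = []
    for i in range(len(T)):
        if i == 0:
            As.append(max(0, Ap[0] - T[0]))
        else:
            As.append(max(0, At[i - 1] + Ad[i - 1] - T[i], Ap[i] - T[i]))
    return As
```

For example, with scheduled starts at 9:00 and 9:30 (540, 570), a first patient arriving ten minutes late, and a first appointment actually running 9:10--9:45, the second appointment's deviation is 15 minutes even though its patient arrived early.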
Note that $At$ is sorted based on actual appointment start time and not scheduled start time, which allows the analysis to consider deviations from the planned schedule. As shown in Equation~\ref{eq3}, the model constrains the actual duration of the appointment, $Ad_i$, to be equal to the expected duration of the appointment plus the difference between the expected and actual duration, $Ae_i$. This constraint is important later when the modified CSP is formulated to diagnose workflow issues. \begin{equation} \label{eq3} \begin{split} Ad_i & = D_i + Ae_i \\ \end{split} \end{equation} Next, the model constrains the actual end time of a patient's appointment, $F_i$, to be the actual start time plus the expected duration of the appointment, $D_i$, and the difference between the expected and actual duration, $Ae_i$. This constraint is shown in Equation~\ref{eq4}. \begin{equation} \label{eq4} \begin{split} F_i & = At_i + D_i + Ae_i \end{split} \end{equation} A key input into the CSP model is the goal for patient cycle time. Ideally, patients should have a cycle time that matches their scheduled appointment duration. However, in reality, a patient may arrive late or a prior appointment may run late, causing the cycle time and scheduled appointment time not to match. The model defines cycle time as the difference between the arrival time of the patient, $Ap_i$, and the actual finish time of the appointment, $F_i$. This constraint is shown in Equation~\ref{eq5}. \begin{equation} \label{eq5} \begin{split} C_i & = F_i - Ap_i \end{split} \end{equation} The final component of the model defines a goal variable: ideally, the scheduled cycle time of the appointment should match the actual cycle time of the appointment, $C_i$. Although it might seem preferable for the actual cycle time to be less than the scheduled cycle time, this indicates potential overestimation and waste in the schedule that could allow for more appointments.
Thus, the ideal schedule has as little deviation as possible between the planned and actual cycle times. This goal constraint is shown in Equation~\ref{eq6}. \begin{equation} \label{eq6} \begin{split} W_i & = C_i - D_i \end{split} \end{equation} With this simple CSP formulation of the model, all that a clinic can do is check that the actual collected data meets the expected constraints. If the data does not meet the constraints, it indicates a potential error in the data collection process or a difference between actual operation and the assumptions of this model. The next section extends the CSP model to allow automated analysis of whether late patients or long cycle times are responsible for clinics running behind schedule. \section{Constraint-based Diagnosis of Patient Cycle Times} \label{auto} \vspace{-0.1in} The overall goal of the diagnosis process is to explain why the planned cycle times for patients are longer or shorter than the actual observed cycle times. The automated diagnosis process relies on using a constraint solver to derive changes that could have been made to either the planned schedule or the actual observed schedule that would make the expected and actual cycle times more closely align. For example, the automated diagnosis process may state that had a specific patient arrived on time, the entire schedule for the day would have matched expectations. Alternatively, the automated diagnosis process might state that the actual duration of a single appointment was much longer than planned, indicating that treatment was more complicated than expected and threw off the schedule. These are the types of outputs that the modified CSP will produce. In order to support these types of diagnoses, the model needs to encode the concept of a ``change'' that could be made to the actual or planned schedule to make them more closely align. The diagnosis tries to find the fewest changes to the actual schedule that would lead to actual cycle times matching planned cycle times.
In other words, what could have gone differently that would have made planned and actual cycle times the same? Later, this section discusses how the constraint solver reasons over these changes to diagnose clinic workflows, since there are often a large number of possible changes that could be made to rectify the mismatch between planned and actual schedules. \subsection{CSP Model of Cycle Time Diagnosis} \vspace{-0.05in} More formally, given a planned schedule $E$ and an actual schedule $A$ such that $cycle(E) \neq cycle(A)$, the diagnosis defines a new CSP that solves for the set of changes $R$ to $E$ and $A$ such that $cycle(changes(E,R)) = cycle(changes(A,R))$. That is, the output of the CSP is a set of modifications to $E$ and $A$ that will make their calculated cycle times for each appointment equal. To support the concept of a potential ``change'', the CSP model needs two additional variables, introduced to model $R = \langle \delta Ae, \delta Ap \rangle$. An overview of these variables is shown in Table~\ref{divars}. First, the variable $\delta Ae_i$ is set to 1 by the solver if changing the duration of the $i^{th}$ appointment to match the planned duration would make the actual and planned cycle times more closely align. Second, the variable $\delta Ap_i$ is set to 1 by the solver if changing the patient's arrival time to match the start time of the appointment would make the actual and planned schedules match more closely. \vspace{-0.1in} \begin{table}[h] \caption{CSP Diagnosis Variables} \vspace{-0.15in} \label{divars} \begin{center} \begin{tabular}{|l|p{3.5cm}|} \hline $\delta Ae = \{\delta Ae_0 \dots \delta Ae_n\} \in [0, 1] $ & The difference in actual vs. scheduled treatment time of the $i^{th}$ appointment should be set to 0.\\ \hline $\delta Ap = \{\delta Ap_0 \dots \delta Ap_n\} \in [0, 1] $ & The $i^{th}$ patient's arrival time should be changed to the start time of the appointment.
\\ \hline \end{tabular} \end{center} \end{table} \vspace{-0.1in} In order to use these variables, they must be incorporated into the CSP constraints. The $\delta Ap_i$ change variable is incorporated into the CSP in Equation~\ref{eq7}. The variable $RAp_i$ models the difference in planned appointment start time and patient arrival time. If the $\delta Ap_i$ is set to 1, it indicates that the patient arrival time should be set to the appointment time in order to more closely match scheduled and actual cycle times. By setting $\delta Ap_i$ to 1, it causes $RAp_i$ to equal the original planned start time of the appointment. \begin{equation} \label{eq7} RAp_i = \begin{cases} \delta Ap_i = 0 & Ap_i \\ \delta Ap_i = 1 & T_i \end{cases} \end{equation} The $\delta Ae$ variable is incorporated into the constraints in Equation~\ref{eq8}. If $\delta Ae$ is set to 1, $RAd_i$ takes the value of the original planned duration. Otherwise, $RAd_i$ takes the actual duration of the appointment as its value. \begin{equation} \label{eq8} RAd_i = \begin{cases} \delta Ae_i = 0 & D_i + Ae_i \\ \delta Ae_i = 1 & D_i \end{cases} \end{equation} Finally, in Equation~\ref{eq10}, the model ties the new change variables to the calculation of the difference in planned vs. actual start time of the appointment. The constraint is a modified version of Equation~\ref{eq1} that uses the $RAp_i$, $RAt_i$, and $RAd_i$ variables. For example, if $Ap_i \neq T_i$, but $\delta Ap_i = 1$, $RAp_i$ will equal 0, just as it would have if the patient had arrived on time. 
\begin{equation} \label{eq10} RAt_i = \begin{cases} i = 0 & RAp_i \\ i > 0 & \max (0, ~RAt_{i-1} + RAd_{i-1}, RAp_i) \end{cases} \end{equation} \begin{equation} \label{eq11} \begin{split} (RAt_i + RAd_i) = (T_i + D_i + \epsilon) \end{split} \end{equation} \subsection{Diagnosis as Optimization} \vspace{-0.06in} Clearly, there are arbitrarily many changes that could be made to the planned and actual schedules that would cause their cycle times for appointments to be the same. Therefore, a mechanism is needed to express to the constraint solver how to rank possible changes and diagnose the difference between an expected and actual schedule. The mechanism that the model uses to rank possible sets of changes is to try to minimize the total number of changes made to either the planned schedule, E, or the actually observed schedule, A. That is, the constraint solver is asked to solve for a solution that minimizes the value of Equation~\ref{eq12}. The solver is trying to find the minimal set of patients that could have arrived on time and appointments that could have met their expected duration to make the overall cycle times of all appointments match in both planning and actuality. \begin{equation} \label{eq12} \begin{split} \sum_0^n \delta Ae_i + \delta Ap_i \end{split} \end{equation} The output from the constraint solver will be a labeling of the variables in the CSP that minimizes the number of changes that have to be made to the planned or actual schedule to make them consistent. A key question is how this variable labeling can be used to answer questions about patient cycle times. The variables $\delta Ae_i$ and $\delta Ap_i$ are the path to answering these questions. \subsubsection{Diagnosing: Are late patients responsible?} The $\delta Ap_i$ variables determine if the minimal set of changes to make the actual and planned cycle times align includes changing the arrival time of patients. 
If late patients are part of the minimal set of changes that can explain the difference between planned and actual execution time, it indicates that late patients are a factor, and the model can precisely pinpoint which patients contributed to throwing off the planned cycle times. For example, if the 2nd patient's $\delta Ap_2$ variable is set to 1 and no other changes are suggested, it indicates that the solver can explain the discrepancy between the planned and actual cycle times simply by that patient's tardiness. Had that single patient arrived on time, actual cycle times for all appointments would have met their planned cycle times. The solver can output a single patient late arrival, multiple late arrivals, or a combination of late arrivals and poorly predicted appointment durations as the root cause. If a large number or all patient arrival times are suggested as needing to be changed, meaning most people are late, this is a potential indicator that the front desk check-in process is slow. It could also indicate problems with the accessibility of the clinic location, such as difficulty finding parking or navigating to the clinic. \subsubsection{Diagnosing: Are poor appointment block time estimates responsible?} The $\delta Ae_i$ variables indicate whether appointment treatment times explain the discrepancy between planned and actual cycle times. For example, if $\delta Ae_3 = 1$, it indicates that the 3rd appointment of the day went over its expected duration and contributed to the discrepancy between planned and actual times. The solver can output a single appointment or a combination of appointments and late arrivals that created the issue. If a large percentage of appointment durations exceeded expectations, meaning a large number of $\delta Ae_i$ variables are 1, it indicates a more pervasive issue with appointment block time planning.
That is, if appointments consistently run over time, then it is likely that the provider is being scheduled insufficient time to see and treat each patient. Alternatively, if a single provider consistently has appointments that run past their expected duration, it may be that that particular provider is slower or takes more time talking to their patients. \subsubsection{Diagnosing: Is treatment time unpredictability responsible?} If a single or small number of $\delta Ae_i$ variables consistently explain the discrepancy, it means that each day a small number of unpredictable appointments run late and cause delays. For example, providers in urgent care clinics may face highly unpredictable health situations compared to other clinics with less emergent and varying conditions. Small numbers of $\delta Ae_i$ variables set to 1 indicate that it is unlikely that the clinic could have done anything differently to stay on time. \iffalse \subsubsection{Diagnosing: Is slow rooming of patients responsible?} ... \subsubsection{Diagnosing: Is late provider arrival responsible?} ... \fi \vspace{-0.05in} \section{Results} \label{results} \vspace{-0.05in} From March 27, 2017 to April 21, 2017, 14 providers saw at least five patients on at least one appointment day at the Vanderbilt University Medical Center clinic in our study. These providers completed a total of 622 appointments over this period. Of all appointments, 116 started late due to the patient arriving after the scheduled time, while 256 ended after the allocated time due to delayed cycle times. Figure 1 shows an example of one provider's schedule on one day where a combination of late patients and long cycle times caused the clinic to run off schedule. In this example, the solver determined that making the 9th patient arrive on time and completing the 6th, 7th, 10th, 11th, 12th, 14th, 15th, and 16th appointments on schedule would cause the rest of the appointments to run on schedule.
In Table IV, we aggregate the diagnostic variables $\delta Ap$ and $\delta Ae$ for each provider over all their clinic days. Providers are sorted by the total number of patients seen. Provider A saw the most patients over the study period, and the solver determined that 12 patient check-in modifications and 40 appointment duration modifications were the minimum necessary to make that provider's clinics run on time. All providers had more appointment duration revisions than patient check-in revisions in the optimized schedule except for Provider D, who had $\Sigma\delta Ap = 16$ and $\Sigma\delta Ae = 14$, and Provider K, who had $\Sigma\delta Ap = \Sigma\delta Ae = 5$. \begin{figure*} \centering \includegraphics[width=\textwidth]{img/exampleSched.png} \vspace{-0.35in} \caption{Visualization of original and optimal patient check-in and cycle times} \label{fig:1} \vspace{-0.25in} \end{figure*} \begin{table}[htbp] \centering \caption{Clinic Workflow Diagnosis by Provider} \vspace{-0.1in} \begin{tabular}{|l|r|r|r|r|} \hline Provider & $\Sigma \delta Ap $ & $\Sigma \delta Ae$ & Clinic Days & Patients Seen \\ \hline A & 12 & 40 & 7 & 106 \\ B & 5 & 11 & 7 & 66 \\ C & 15 & 22 & 4 & 66 \\ D & 16 & 14 & 10 & 63 \\ E & 12 & 27 & 4 & 59 \\ F & 5 & 6 & 8 & 51 \\ G & 7 & 20 & 2 & 45 \\ H & 11 & 13 & 4 & 40 \\ I & 12 & 12 & 6 & 38 \\ J & 12 & 15 & 3 & 38 \\ K & 5 & 5 & 3 & 24 \\ L & 2 & 7 & 3 & 15 \\ M & 0 & 0 & 1 & 6 \\ N & 2 & 2 & 1 & 5 \\ \hline \end{tabular}% \label{tab:provider}% \end{table}% Table V shows aggregate totals for $\delta Ap$ and $\delta Ae$ by date across all providers who had clinic that day. Again, $\Sigma\delta Ae > \Sigma\delta Ap$ on most days except March 29th, April 4th, April 19th, and April 21st. There does not appear, from our sample, to be any correlation between the ratio $\Sigma\delta Ap : \Sigma\delta Ae$ and the number of patients seen, the number of providers, or the day of the week.
\begin{table}[htbp] \centering \caption{Clinic Workflow Diagnosis by Date} \vspace{-0.1in} \begin{tabular}{|r|r|r|r|r|} \hline Date & $\Sigma \delta Ap $ & $\Sigma \delta Ae$ & Patients Seen & \# Providers \\ \hline 27-Mar & 8 & 15 & 53 & 4 \\ 28-Mar & 4 & 15 & 41 & 4 \\ 29-Mar & 9 & 9 & 35 & 4 \\ 30-Mar & 8 & 14 & 41 & 4 \\ 31-Mar & 1 & 4 & 12 & 2 \\ 3-Apr & 7 & 13 & 48 & 4 \\ 4-Apr & 9 & 8 & 23 & 3 \\ 5-Apr & 6 & 11 & 30 & 3 \\ 6-Apr & 14 & 15 & 48 & 5 \\ 7-Apr & 3 & 6 & 18 & 3 \\ 10-Apr & 7 & 15 & 43 & 3 \\ 11-Apr & 9 & 14 & 46 & 4 \\ 12-Apr & 9 & 10 & 38 & 4 \\ 13-Apr & 6 & 14 & 38 & 4 \\ 14-Apr & 0 & 1 & 5 & 1 \\ 17-Apr & 3 & 5 & 20 & 2 \\ 18-Apr & 4 & 7 & 22 & 3 \\ 19-Apr & 6 & 6 & 26 & 3 \\ 20-Apr & 1 & 10 & 29 & 2 \\ 21-Apr & 2 & 2 & 6 & 1 \\ \hline \end{tabular}% \label{tab:date}% \vspace{-0.25in} \end{table}% Finally, we aggregated $\delta Ae$ and $\delta Ap$ by the position of the revised appointment in the schedule in Table VI. For each provider clinic day with n appointments, any $\delta Ae$ and $\delta Ap$ in the first n/2 appointments (rounding down) were assigned to the ``first half'' while the remainder were assigned to the ``second half''. This means that provider clinic days with an odd number of appointments have one more appointment attributed to the second half. Even with this discrepancy favoring the second half, there were more modifications made to check-ins and cycle times in the first half of the schedule.
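The half-split aggregation just described amounts to a one-line partition of the per-appointment change flags; a minimal sketch (the function name is illustrative, not from the paper):

```python
def split_halves(deltas):
    """Aggregate per-appointment change flags into schedule halves.

    deltas : list of 0/1 flags ordered by position in the schedule.
    The first n//2 flags (rounding down) count toward the first half;
    with an odd number of appointments the extra slot falls into the
    second half, exactly as in the aggregation for Table VI.
    """
    n = len(deltas)
    first = sum(deltas[: n // 2])
    second = sum(deltas[n // 2 :])
    return first, second
```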
\vspace{-0.1in} \begin{table}[htbp] \centering \caption{Clinic Workflow Diagnosis by Position in Schedule} \vspace{-0.1in} \begin{tabular}{|r|r|r|} \hline & $\Sigma \delta Ap $ & $\Sigma \delta Ae$ \\ \hline First Half of Schedule & 63 & 116 \\ \hline Second Half of Schedule & 53 & 78 \\ \hline \end{tabular}% \label{tab:position}% \end{table}% \section{Discussion} \label{discussion} \vspace{-0.05in} \textbf{Interpretation of results.} Our results demonstrate how a constraint optimization problem can be used to diagnose problems with clinic workflow. In diagnosing whether late patients are responsible for the clinic going off schedule, we observed that for certain providers (such as Provider D in Table IV) and certain clinic days (such as April 4th in Table V), changing the arrival times of late patients would have caused the rest of the day to run according to schedule more so than adjusting planned appointment durations. Providers for whom $\Sigma\delta Ap < \Sigma\delta Ae$ may benefit from better coordination with the patient before their appointment in the form of appointment reminders, driving directions, or valet parking. Similarly, if the clinic notices trends in days that lead to high $\Sigma\delta Ap$, administrators could send reminders to patients ahead of days where tardy patients are likely to make a large impact on the schedule. From our study sample, we are able to diagnose that poor appointment block time estimates are largely responsible for planned schedule breakdown. For most providers and clinic days, a large number of changes to appointment duration are needed to make the clinic run on schedule. This finding implies that there is overscheduling of patients, where the planned appointment time allocation is insufficient to address patient needs. The identification of these challenges could lead the clinic to make changes to clinic operations such as increasing planned appointment times, extending clinic hours, or increasing the number of providers.
Finally, we observe in Table VI that the solver made more schedule optimization changes in the first half of provider clinic days. This result is not surprising, since a late patient or a longer than expected appointment early in the day can adversely affect the rest of the schedule. This finding may lead providers to schedule fewer patients and longer appointment blocks in the first half of the day to increase the likelihood of later appointments running on time. \textbf{Current Limitations.} Despite the effectiveness of this model in identifying problems with clinic workflow, there are several limitations that affect the validity and generalizability of this work. Firstly, our model does not account for interaction between potential changes and other appointments. By keeping $RAd_i = D_i + Ae_i$ where $\delta Ae_i = 0$, we assume that providers do not adjust the time they spend with patients based on their workload. In fact, providers may speed up or slow down their encounters with patients based on whether or not they are behind schedule. Another limitation of our model is that treating clinic operation as a single server process may be an oversimplification. Once patients enter exam rooms, they are often seen by multiple healthcare professionals. \iffalse such as nurses and technicians before or after their encounter with the provider. Additionally, providers may leave and revisit a patient multiple times during a visit, allowing them to treat multiple patients at once. The method for schedule pre-processing tends to be optimistic for cycle completion times. Since the original schedules are a "best case scenario", changes recommended by the solver should still be valid. \fi Finally, we assume in the constraint optimization problem that the least number of changes $\delta Ap$ and $\delta Ae$ is best for getting the clinic back on schedule, even though some interventions may be easier to implement than others.
\iffalse It is unlikely that interventions to help patients arrive on time and interventions to shorten cycle times are equally difficult to implement. Future work will investigate weighting $\delta Ap$ and $\delta Ae$ to account for costs. These weights could subsequently be tuned for different clinics and institutions based on their ability to modify patient and clinic behavior. \fi \iffalse We intend to use the output from our constraint optimization problem as outcomes to predict provider clinic days that are likely to go off schedule. Combining workflow data in this study with clinical variables such as billing codes, diagnoses, and medications, and operational variables such as staffing and patient distance traveled could help clinics better prepare for busy days. \fi \section{Related Work} \label{Related Work} \vspace{-0.08in} While this work is the first to use a CSP to diagnose problems in clinic workflow, other studies have used CSPs to create schedules in healthcare settings. Healthcare organizations use CSPs to solve nurse scheduling problems, where a program creates a staffing schedule that satisfies hard constraints, such as 24-hour coverage for inpatient units, while optimizing for soft constraints, such as nurse preference \cite{Cheang2003}. In non-healthcare domains, application developers have used constraint satisfaction optimization to identify the least number of software and hardware feature changes necessary to satisfy a set of dependency constraints \cite{White2010}. As in this study, the CSP was used to identify conflicts in the existing feature sets, while the aggregated optimal number of modifications allowed developers to diagnose the design elements that needed the most work. \section{Conclusions} \vspace{-0.08in} \label{conclusion} The results from this constraint optimization problem offer valuable insights that could help improve workflow in outpatient settings.
The minimum number of changes to patient check-in times and appointment durations reveals whether patients or the healthcare system are responsible for the clinic running behind schedule. Using this method to diagnose previous clinic schedules can inform interventions that decrease patient wait times and improve provider utilization. \section*{Acknowledgment} \vspace{-0.05in} The authors would like to thank the National Library of Medicine for supporting Alex Cheng's training grant. \vspace{-0.05in}
\section{Introduction} Nowadays, Artificial Intelligence (AI) plays an important role in science (e.g., chemistry, physics, and medicine). While experimental methods are more reliable, they are also more time consuming and expensive than computational methods. Although computational methods may never become accurate enough to replace experimental methods, they can help select and prioritize a small number of likely candidates from pools of available data. One of the computational methods used in medicine is the Artificial Neural Network (ANN), a computer program inspired by the nervous systems of animals. ANNs consist of simple processing units, or nodes (neurons), each of which aggregates its inputs and processes them according to an internal activation function to produce an output. Every ANN consists of layers: an input layer, some intermediate layers, and an output layer. The units (nodes) of each layer have connections with the nodes of the next layer, although more complex patterns of connections are possible. ANNs can be divided into two main groups: supervised learning ANNs and unsupervised learning ANNs. Supervised learning ANNs need to be trained using examples of the problem under investigation. In the training process, the weights of each connection between units and the parameters of the activation function in each unit are adjusted in a direction that reduces the output error. By training an ANN with the training set, which is a set of different but related input patterns, the ANN can relate input to output without using explicit algorithms for deciding the appropriate output.\\ ANNs have been used in cancer detection and diagnosis for more than 30 years (\citealp{Sim85},\citealp{Mac91},\citealp{Cic92}), but as noted in a previous study, the fundamental goals of cancer prediction and prognosis are different from those of cancer detection and diagnosis.
In cancer prediction one tries to (i) predict cancer susceptibility, (ii) predict cancer recurrence, and (iii) predict cancer survivability. Prediction is more useful than detection because with prediction one can prepare before the occurrence of cancer. The same study also noted that almost all predictions use just four types of input data: (i) genomic data, (ii) proteomic data, (iii) clinical data, and (iv) combinations of these data (\citealp{Cru06}).\\ A non-synonymous SNP (nsSNP) is a single nucleotide substitution occurring inside the coding region of a gene, causing an amino acid substitution in the corresponding protein product. Such a change can lead to a structural or functional change in the protein product, which may give a minor or major phenotypic change or may have no effect at all. For example, a nsSNP in the hemoglobin beta gene (substitution of glutamic acid by valine) is one cause of sickle cell anemia (\citealp{Wes01}); diabetes has also been correlated with a number of nsSNPs (\citealp{Sha05}). A mutation can affect protein folding and stability, protein function, protein-protein interaction, protein expression, and sub-cellular localization (\citealp{Rev11}). Mutations can be divided into two categories: (i) apparently random (sporadic) mutations followed by somatic selection (somatic mutations), and (ii) pre-existing mutations in the germline (germline mutations). Functionally, mutations include (i) gain-of-function mutations that change a normal gene into an oncogene, (ii) loss-of-function mutations that inactivate tumor suppressor genes, and (iii) drug resistance mutations that overcome the inhibitory effect of a drug on the targeted protein. Nonsynonymous single nucleotide polymorphisms (nsSNPs) are prevalent in genomes and are closely associated with inherited diseases.
To facilitate identifying disease-associated nsSNPs among a large number of neutral nsSNPs, it is important to develop computational tools to predict an nsSNP's phenotypic effect (disease-associated versus neutral).\\ Breast cancer is the most common cancer among women, causing the highest number of cancer deaths among women every year (\citealp{xei2016},\citealp{xei2017},\citealp{xei2018}). The major susceptibility genes, BRCA1 and BRCA2, which act as tumor suppressors, have been previously identified (\citealp{Ant00},\citealp{Kur10}). Pathogenic mutations in these genes increase the inherited predisposition to breast cancer. Evidence suggests that genetic variants may alter the breast cancer risk for those with BRCA1 or BRCA2 mutations. One study reports a relationship between a woman's BRCA2 SNP profile and the age at which she develops breast cancer (\citealp{Joh13}). Another study on individuals carrying inactivating germline mutations in BRCA1 shows that they have an increased risk of developing cancer (\citealp{Shat97}), so it is essential to identify those at risk. Estimates of the risk of developing breast cancer for a woman who carries a BRCA1 or BRCA2 mutation, derived from kindreds with multiple cases of breast or ovarian cancer or both, range from 76 to 87\% (\citealp{Cru06}).\\ Variants of Unknown Clinical Significance (VUS) in the BRCA1 or BRCA2 genes cause major problems because physicians do not know whether a VUS is related to developing breast cancer or is neutral with respect to breast cancer risk. Carriers of VUSs therefore cannot benefit from the risk assessment, prevention, and therapeutic measures that are available to carriers of mutations of known significance. With the recent increase in available nsSNP data, determining the clinical significance of VUSs in BRCA1 and BRCA2 has become an important clinical issue, and doing it automatically using an ANN is valuable. Here we use the PNN, a supervised learning ANN that benefits from speed and easy result interpretation.
The network is trained with nsSNP data of BRCA1 and BRCA2 to predict the clinical significance of VUSs. We retrieved data from NCBI \footnote{\vspace{5pt}ncbi.nlm.nih.gov/snp}, obtaining 449 nsSNP records for Homo sapiens BRCA1 and 460 nsSNP records for Homo sapiens BRCA2, and then preprocessed them. We trained the PNN, used different methods of validation (e.g., jackknife and cross-validation) to obtain different accuracies, and used the best data model to train and test the DNN. We show that, given enough data, both the PNN and the DNN can outperform other ANN algorithms in accuracy and speed; the larger the training sample, the higher the accuracy. \enlargethispage{12pt} \begin{methods} \section{Methods} \subsection{Probabilistic neural network (PNN)} \begin{figure}[t] \centerline{\includegraphics[width=160pt,height=95pt]{fig01.png}} \vspace*{-5pt} \caption{The architecture of a typical PNN.}\label{fig:01} \end{figure} Strategies that classify patterns so as to minimize the expected risk are called ``Bayes strategies''. The PNN introduced by Specht is a result of the theory of statistical pattern classification. In the fifties and sixties, parametric methods were used to solve statistical pattern classification problems, but over the last twenty years these methods have been replaced by the non-parametric approach (\citealp{Rut04}). In the non-parametric method, it is assumed that the functional form of the probability densities is unknown. Pattern classification procedures derived from non-parametric estimates converge to Bayes' rules as the length of the learning sequence increases. PNNs are applied in many interesting fields; they implement non-parametric estimation techniques in a parallel fashion and thus benefit from fast training and convergence to the Bayes optimal decision surface (\citealp{Rut04}). The architecture of a typical PNN is shown in Fig.~\ref{fig:01}.\\ The input layer, without any computation, distributes the input to the neurons of the pattern layer.
Neuron $x_{ij}$, after receiving a pattern $x$, computes its output \begin{equation} \phi_{ij}(x)=\frac{1}{(2\pi)^{d/2}\sigma^d}\exp\left[-\frac{(x-x_{ij})^T(x-x_{ij})}{2\sigma^2}\right],\label{eq:01}\vspace*{-10pt} \end{equation}\\ where $d$ denotes the dimension of the pattern vector $x$, $\sigma$ is the smoothing parameter, and $x_{ij}$ is the neuron vector. By summing and averaging the outputs of all neurons belonging to the same class, the summation layer neurons compute the likelihood of pattern $x$ being classified into class $C_i$: \begin{equation} P_{i}(x)=\frac{1}{(2\pi)^{d/2}\sigma^d}\frac{1}{N_i}\sum_{j=1}^{N_i}\exp\left[-\frac{(x-x_{ij})^T(x-x_{ij})}{2\sigma^2}\right],\label{eq:02}\vspace*{-10pt} \end{equation}\\ where $N_i$ denotes the total number of samples in class $C_i$. The decision layer classifies the pattern $x$ in accordance with Bayes' decision rule, based on the outputs of all the summation layer neurons, provided that the a priori probabilities and the losses associated with making an incorrect decision are the same for each class (\citealp{Rut04}): \begin{equation} \hat{C}(x)=\arg\max_i\{P_i(x)\}, \quad i = 1 , 2 , \ldots , m,\label{eq:03}\vspace*{-10pt} \end{equation}\\ where $\hat{C}(x)$ denotes the estimated class of the pattern $x$ and $m$ is the total number of classes in the training samples. \subsection{Deep Neural Network: Stacked AutoEncoder} \begin{figure}[t] \centerline{\includegraphics[width=160pt,height=150pt]{fig03.png}}\vspace*{-50pt} \caption{Deep learning structure used in this work.}\label{fig:03} \end{figure} Today, technology is influenced by many aspects of machine learning, from video content analysis to large-scale image processing (\citealp{deep1}), and from self-driving cars (\citealp{deep2}) to recommender systems (\citealp{deep3}). From the beginning, the intention of pattern recognition research was to replace human-engineered features with multilayer neural networks, but no suitable algorithm was available for this goal until the 1980s.
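Equations (\ref{eq:01})--(\ref{eq:03}) translate directly into a few lines of code. The sketch below is an illustrative NumPy implementation of the PNN decision rule, not the implementation used in this work:

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=1.0):
    """Minimal PNN classifier following Equations (1)-(3).

    x       : (d,) pattern to classify
    train_X : (N, d) array of training patterns (pattern-layer neurons)
    train_y : length-N array of integer class labels
    sigma   : smoothing parameter of the Gaussian kernels
    Returns the Bayes-rule class estimate for pattern x.
    """
    d = train_X.shape[1]
    norm = (2 * np.pi) ** (d / 2) * sigma ** d
    # Pattern layer: one Gaussian kernel per training neuron, Eq. (1)
    diffs = train_X - x
    phi = np.exp(-np.sum(diffs ** 2, axis=1) / (2 * sigma ** 2)) / norm
    # Summation layer: average kernel output within each class, Eq. (2)
    classes = np.unique(train_y)
    scores = [phi[train_y == c].mean() for c in classes]
    # Decision layer: largest density estimate wins, Eq. (3)
    return classes[int(np.argmax(scores))]
```

There is no iterative training: the entire training set is stored as pattern-layer neurons, which is why PNNs train quickly but grow with the data.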
Backpropagation is used to compute the gradient of an objective function for optimizing the weights (\citealp{deep4}).\\ One approach is to learn an undirected graphical model called a Gaussian-binary restricted Boltzmann machine (RBM), which has one visible layer of linear variables with Gaussian noise and one hidden layer, with full connectivity between the layers. The connection weights (and biases) can be learned efficiently using the contrastive divergence approximation to the log likelihood gradient (\citealp{deep5}).\\ Auto-encoders are simple building blocks for learning; each block transforms input data into outputs to create a more efficient form of the data, and powerful machine learning algorithms can be built by combining these simple parts. Auto-encoders were first introduced by Hinton for unsupervised learning with the back-propagation algorithm (\citealp{deep6}). Many years later, auto-encoders took a central place in deep architectures, where stacked auto-encoders are combined with a supervised top layer. These deep architectures show state-of-the-art results for many challenging problems (\citealp{deep8},\citealp{deep9}). The encoder building block is shown in Figure~\ref{fig:03}. Auto-encoders are mainly of two types: linear and non-linear. A linear auto-encoder is equivalent to Principal Component Analysis (PCA), while with a nonlinear transfer function one can discover nonlinear patterns in the data; Boolean and Boltzmann learning machines are well-known non-linear models. The de-noising auto-encoder is another kind of auto-encoder, which reconstructs corrupted patterns (\citealp{deep9}).\\ An auto-encoder can be defined by a set of parameters (n, p, N, F, G, A, B, X, Y, D): n and p are positive integers giving the number of units in the input/output and hidden layers, N is the number of training samples, and A is the class of transfer functions relating G in the hidden units to F in the output layer. B is the class of transfer functions relating input units in the set F to the hidden units.
\begin{math}X\in R^1\end{math} is an input vector for the auto-encoder; the auto-encoder converts the vector $x$ into another similar vector as follows: \begin{equation} z=h(wx+b),\label{eq:04}\vspace*{-10pt} \end{equation} where $h$ is the transfer function of each layer of the auto-encoder, $w$ is a weight matrix, and $b$ is a bias vector; the decoder reverses this process and converts $z$ back into the vector $x$ (\citealp{deep10}). $Y$ is the target vector and $D$ measures similarity over the input layer units.\\ If we define $A_1$ as a subset of A and $B_1$ as a subset of B, then for any input we would like to find the parameters transforming the input units in $A_1$ to output units in $B_1$ by minimizing \begin{equation} \begin{split} \min_{A_1,B_1} E(A_1,B_1) = \min_{A_1,B_1} \sum_{i=1}^{N} D\bigl(f_{A_1,B_1}(x_{target}),y_{target}\bigr), \end{split} \label{eq:05}\vspace*{-10pt} \end{equation} where $f_{A_1,B_1}$ is the function relating A to B, D is the similarity measure, and N is the number of training data (\ref{eq:05}).\\ We can add regularization to the cost function and create sparsity for the auto-encoder. To compute the cost function for the encoder, the $L_2$ and sparsity regularization terms are usually combined with the mean squared error term.\\ The hidden layers of auto-encoders are of two types, compressed and sparse. In the compressed type, the number of units in the hidden layer is less than in the input layer, so the network tries to combine features and produce new compressed features; in a sparse auto-encoder, the features are expanded (\citealp{deep11}). The network structure used in this study is given in Figure~\ref{fig:03}. Here we defined a simple stacked auto-encoder as a deep network to analyze our dataset.
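The encode-decode step of Equation (\ref{eq:04}) and the reconstruction error of Equation (\ref{eq:05}) can be sketched as follows. This is an illustrative toy, assuming a logistic sigmoid for $h$ and mean squared error for $D$; the parameter names (`W_dec`, `b_dec`) are invented for the sketch:

```python
import numpy as np

def autoencoder_forward(x, W, b, W_dec, b_dec):
    """One encode-decode pass: z = h(Wx + b), then the decoder reverses it.

    W, b are the encoder parameters; W_dec, b_dec are the decoder's.
    h is taken to be the logistic sigmoid (one common choice).
    """
    sigmoid = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sigmoid(W @ x + b)               # hidden code, Eq. (4)
    x_hat = sigmoid(W_dec @ z + b_dec)   # reconstruction of x
    return z, x_hat

def reconstruction_error(x, x_hat):
    """Similarity measure D, here mean squared error (Eq. (5))."""
    return float(np.mean((x - x_hat) ** 2))
```

With the hidden dimension smaller than the input dimension this is a compressed auto-encoder; training would adjust `W`, `b`, `W_dec`, `b_dec` to minimize the summed reconstruction error over the training set.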
We used three simple auto-encoders with a hidden layer of 300 nodes each; these layers were combined with one softmax layer to create the deep structure.\\ \subsection{Data} Among all nsSNP databases we used ncbi.nlm.nih.gov/snp because it is complete and provides details with each record. For now, we only used nsSNPs and did not include indel variations. We used our own program to pre-process the data, translating DNA code to protein code (transcript ID for BRCA1: NM 007294, transcript ID for BRCA2: NM 000059.3). The final step was to build a good coding scheme for amino acids (AAs) so that the PNN could achieve high accuracy at predicting the clinical significance. We used hydropathy and propensity indices for scoring AAs and divided them into 6 categories based on whether an AA is hydrophobic or hydrophilic and whether it favors alpha-helices, turns, or beta-sheets in its secondary structure. \subsubsection{BRCA1} We retrieved 5591 records from the database and used our program to apply the variants. After applying the nsSNP data, our program showed that 1871 of them are exonic variants, of which 449 were variants with known clinical significance; we used these 449 records to train and test the PNN and DNN. We tried 5 different methods of preparing our data, improving it each time. The five types of preparation are as follows:\\ 1 - After application of SNPs, DNA sequences are converted to number strings using the scheme a = 1, c = 2, g = 3, t = 4.\\ 2 - DNA sequences are converted to number strings in which all nodes are zero except the node that differs from the main sequence.\\ 3 - DNA sequences are translated to AA sequences according to their transcript ID, and the AA sequences are then converted to number strings according to the scheme used to classify AAs.\\ 4 - AA sequences are converted to number strings where all digits are zero except the changed ones.
\\ 5 - This model is the most important because it gave promising accuracy and speed due to its short length and the amount of information fitted into its nodes. In this model we used just 3 nodes to represent each SNP: the first node is dedicated to the location of the AA substitution caused by the SNP, while the second and third nodes are for the old AA and the new AA. Each record in this model thus contains 5 different pieces of information: the AA substitution location, the old and new AAs' hydrophilicity or hydrophobicity, and their favored secondary structures. \subsubsection{BRCA2} The same procedure was applied to the BRCA2 data. As raw data, we had 7227 BRCA2 nsSNPs; after processing them with our program we had 2972 coding nsSNP records, of which 460 were variants with known clinical significance. \section{Results} Training and testing with the 5th dataset resulted in the highest accuracy that we could get out of the PNN and DNN. The reason this dataset is the most suitable one is simple: it has the shortest possible length for representing an AA substitution while providing enough information for the network to classify AA substitutions with sufficient accuracy.\\ Table \ref{Tab:01} shows the PNN accuracy at predicting AA substitutions, and Table \ref{Tab:02} shows the same results obtained using the DNN.
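The three-node representation of the 5th data model can be sketched as follows. The 6-way category assignment below is purely illustrative (the actual grouping derived from the hydropathy and propensity indices is not reproduced here), as is the function name:

```python
# Hypothetical 6-category AA grouping by hydropathy and favored
# secondary structure; the paper's actual index-derived assignment
# may differ. Categories: 1-3 hydrophobic, 4-6 hydrophilic.
AA_CATEGORY = {
    'A': 1, 'L': 1, 'M': 1,                   # hydrophobic, helix-favoring
    'V': 2, 'I': 2, 'F': 2,                   # hydrophobic, sheet-favoring
    'G': 3, 'P': 3, 'W': 3, 'C': 3,           # hydrophobic, turn-favoring
    'E': 4, 'Q': 4, 'K': 4, 'R': 4, 'H': 4,   # hydrophilic, helix-favoring
    'T': 5, 'Y': 5,                           # hydrophilic, sheet-favoring
    'S': 6, 'N': 6, 'D': 6,                   # hydrophilic, turn-favoring
}

def encode_substitution(position, old_aa, new_aa):
    """Three-node record: location, old-AA category, new-AA category."""
    return [position, AA_CATEGORY[old_aa], AA_CATEGORY[new_aa]]
```

Each record is only three numbers long, yet the two category nodes jointly carry the hydropathy and secondary-structure information for both AAs, which is what makes this model compact without losing the features used for classification.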
\begin{table}[H] \processtable{Accuracies achieved training the PNN using different datasets\label{Tab:00}} {\begin{tabular}{@{}lll@{}} \toprule dataset number$^{*}$ & BRCA1(\%) & BRCA2(\%)\\ \midrule 1 & 48.2 & 40.4\\ 2 & 50.4 & 45.7\\ 3 & 55.6 & 48.8\\ 4 & 58.7 & 53.2\\\botrule \end{tabular}}{*prediction accuracies using the 5th dataset show a significant increase and are given in table \ref{Tab:01} and table \ref{Tab:02}.} \end{table} \vspace{-20pt} \begin{table}[H] \processtable{PNN accuracy at predicting BRCA1 and BRCA2 SNPs' clinical significance\label{Tab:01}} {\begin{tabular}{@{}lll@{}} \toprule Validation method & BRCA1(\%) & BRCA2(\%)\\ \midrule 5-fold cross validation & 78.40 & 79.13\\ 10-fold cross validation & 80.40 & 80.00\\ 20-fold cross validation & 85.45 & 80.22\\ Jackknife & 87.97 & 82.17\\\botrule \end{tabular}}{} \end{table} \vspace{-19pt} \begin{table}[H] \processtable{DNN accuracy at predicting BRCA1 and BRCA2 SNPs' clinical significance\label{Tab:02}} {\begin{tabular}{@{}lll@{}} \toprule Validation method & BRCA1(\%) & BRCA2(\%)\\ \midrule 5-fold cross validation & 86.74 & 79.95\\ 10-fold cross validation & 92.04 & 81.14\\ 20-fold cross validation & 93.30 & 82.86\\ Jackknife & 95.41 & 92.80\\\botrule \end{tabular}}{} \end{table} \vspace*{-45pt} \begin{table}[H] \processtable{PNN and DNN evaluation benchmarks for BRCA1\label{Tab:03}} {\begin{tabular}{@{}lll@{}} \toprule Benchmark \hspace{30pt} & PNN\hspace{20pt} & DNN\\ \midrule Accuracy & 87.97\% & 95.41\%\\ Sensitivity & 93.96\% & 79.73\%\\ Specificity & 62.35\% & 93.87\%\\ F1-score & 0.9268 & 0.8059\\ MCC & 0.5914 & 0.7454\\\botrule \end{tabular}}{MCC : Matthews correlation coefficient} \end{table} \vspace*{-20pt} \begin{table}[H] \processtable{PNN and DNN evaluation benchmarks for BRCA2\label{Tab:04}} {\begin{tabular}{@{}lll@{}} \toprule Benchmark \hspace{30pt} & PNN\hspace{20pt} & DNN\\ \midrule Accuracy & 82.17\% & 92.80\%\\ Sensitivity & 87.72\% & 64.01\%\\
Specificity & 50.72\% & 90.12\%\\ F1-score & 0.8932 & 0.6723\\ MCC & 0.3570 & 0.5807\\\botrule \end{tabular}}{} \end{table} \vspace*{-35pt} \end{methods} \section{Discussion} In this article, we have shown that choosing the right data (the right tool for the right job) changes the results. As shown in Table \ref{Tab:00}, had we chosen the DNA sequence instead of the protein sequence, our accuracy would not have exceeded 50.4\% and 45.7\% for BRCA1 and BRCA2, respectively.\\ As reported in (\citealp{disc01}), several studies have considered how benign and pathogenic nsSNPs may be distinguished using only sequence and structural aspects of the proteins in which they occur; e.g. Wang and Moult (\citealp{disc02}) used protein hydrophobic core disruption to assess a protein's structural stability indirectly. Here we have used hydrophobicity and hydrophilicity, together with the preferred secondary structures of the old AA and the new AA, to score AA substitutions. Our results suggest that this information increases prediction accuracy, making machine-learning methods a useful tool for SNP prediction problems and making our method practical for real-world applications. A clear limitation of our study was the inability to use indel SNPs, due to assumptions made in the definition of the model.\\ The difference in prediction accuracy between BRCA1 and BRCA2 arises because the BRCA1 AA sequence is 1863 AAs long while the BRCA2 AA sequence consists of 3418 AAs. Since we have almost the same number of AA substitutions with known clinical significance for both BRCA1 and BRCA2, the fraction of the sequence covered by the training data for BRCA1 is almost twice that for BRCA2.\\ In order to evaluate the behavior of the NNs, we have used five well-known measures: accuracy, sensitivity, specificity, F1-score and the Matthews correlation coefficient (MCC). In general, sensitivity indicates how well the NN can predict the actual positives (e.g.
a pathogenic sample as a pathogenic sample) and specificity indicates how well the NN can identify the actual negatives (e.g. a benign sample as a benign sample). The F1-score is the harmonic mean of precision and sensitivity; the best F1-score is 1 while the worst is 0. The MCC indicates the quality of learning: a value of +1 signals perfect prediction, a value of 0 signals prediction no better than random, and a value of -1 signals total disagreement between prediction and observation. Values of each measure for each gene and NN used are shown in Tables \ref{Tab:03} and \ref{Tab:04}.\\ As a result, the DNN predicts AA substitutions more accurately, owing to its modern structure and its efficiency.\\ According to (\citealp{Cru06}), the lack of attention to data validation is one of the major problems in this field. As reported in that article, 5-fold or 10-fold cross-validation is sufficient to validate most learning algorithms; however, we used a more aggressive method to validate the learning quality of the NNs, the jackknife cross-validation method, in which all but one sample are used iteratively to train the NNs. Another common problem in this field is class imbalance, in which the dataset is dominated by a major class, so predictions become biased toward that class. To check whether our method is affected by this problem, we repeated the learning and testing procedure using the same number of samples from each class. The result showed only a minor (2-5\%) change in the accuracy reported in Table \ref{Tab:02}.\\ In addition to the supervised learning algorithms reported here, we have also used Self-Organizing Maps (SOM) to check whether such algorithms can distinguish between the different classes.
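The five evaluation measures above can be computed directly from the confusion-matrix counts; a minimal sketch (the counts in the usage line are illustrative, not taken from our tables):

```python
import math

def benchmarks(tp, tn, fp, fn):
    """Return (accuracy, sensitivity, specificity, F1-score, MCC)
    from confusion-matrix counts, with 'pathogenic' as the positive class."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)       # true-positive rate
    specificity = tn / (tn + fp)       # true-negative rate
    precision = tp / (tp + fp)
    # F1 is the harmonic mean of precision and sensitivity
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    # MCC: +1 perfect, 0 no better than random, -1 total disagreement
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return accuracy, sensitivity, specificity, f1, mcc

# illustrative counts: 80 true positives, 15 true negatives, 5 FP, 10 FN
print(benchmarks(80, 15, 5, 10))
```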
The resulting maps suggested that these algorithms are unable to classify nsSNPs.\\ \section{Conclusion} In this article, we have used a PNN and a DNN to predict the clinical significance of nsSNPs with unknown clinical significance. Our data were obtained from the NCBI database; we then used the DNA sequence, the protein sequence and secondary-structure information to prepare our training and testing datasets. Among the different datasets, the protein dataset with a novel representation of nsSNP position and substitution showed the best results; we then used n-fold cross-validation to validate our results. The F1-score and MCC are also reported to show the quality of learning and prediction of the NNs used.\vspace*{-10pt} \section*{Acknowledgements} We thank Dr Emarn Heshmati and Dr S. Shahriar Arab for their guidance at the beginning of this research. \vspace*{-12pt} \bibliographystyle{natbib} \bibliographystyle{bioinformatics}
{ "timestamp": "2018-05-08T02:10:32", "yymm": "1805", "arxiv_id": "1805.02176", "language": "en", "url": "https://arxiv.org/abs/1805.02176" }
\section{Introduction} There is a very natural question that one can ask about the gravitational waves that have been detected by LIGO/Virgo: did the detectors leave a permanent effect on the waves, or did the waves leave the detectors intact as they passed? Of course, one can formulate the problem in just the opposite way: did the waves leave a permanent effect on the detectors? The second formulation is better because we might be able to measure the effect if that is the case. It turns out that, for certain gravitational waves, part of the strain can be considered a sort of permanent effect on the detector. This phenomenon is aptly called the {\it gravitational memory effect} and comes in two related forms, ordinary (or linear) \cite{zeldovich} and null (or nonlinear) \cite{Christodoulou}, which could be measured soon in observations. One might wonder why this somewhat subtle effect arises in the first place. Let us explain this a little. All the effects of gravity are encoded in the metric tensor field $g$, which needs no coordinates to be defined. General Relativity (GR) is intrinsically four dimensional and the full metric of the {\it spacetime} manifold $g$ does not really evolve in time: it is what it is. So, if we knew how to obtain all the local observables from the metric for all physically relevant situations, we would not need any further nomenclature such as the memory effect, gravitomagnetism, {\it etc.} But, since as local observers we do not have full access to the whole spacetime, it pays to see spacetime as space evolving in time, namely, to see spacetime as a history of space. Such a dynamical picture requires a choice of time and other coordinates and leads to interesting phenomena, and gravitational memory is one such phenomenon: the wave that enters the interaction with the detector masses differs in some well-defined sense from the wave that leaves the interaction.
The best way to see the difference is to measure the change in the relative separation of the masses, as this is related to the change in the wave profile. In geometric units, in GR, the total change of the wave profile is given by two parts \begin{equation} \Delta h^{\textnormal{TT}}_{ab} = \Delta {h_1}^{\textnormal{TT}}_{ab} +\Delta {h_2}^{\textnormal{TT}}_{ab}, \label{firsteq} \end{equation} where the first part comes from the massive unbound sources with masses $m_i$ and velocities $v_i$ and is given as \cite{Braginsky:1985fw} \begin{equation} \Delta {h_1}^{\textnormal{TT}}_{ab}= \frac{4}{r} \Delta\sum_i \frac{m_i}{\sqrt{1-v_i^2}}\left[\frac{(v_i)_a (v_i)_b}{1-v_i\cos\theta_i}\right]^{\textnormal{TT}}. \label{nonrev1} \end{equation} Here the unbound sources are located at the origin and $r$ is the radial coordinate of the detector, located far away from the sources. $\Delta$ before the summation denotes the difference between after and before the wave interacts with the detector, the TT label refers to the transverse-traceless part, while the indices $a$ and $b$ are abstract spacetime indices \cite{R.M.Wald}. $\theta_i$ is the angle between the velocity $v_i$ and $\hat{r}$. The second part in (\ref{firsteq}) is somewhat more subtle and was initially found by Christodoulou \cite{Christodoulou} by carefully studying the change at null infinity once a null stress-energy tensor reaches null infinity.
A more transparent physical interpretation was given by Thorne \cite{Thorne:1992sdb}: considering each graviton emitted by the source as an unbound system, one should simply modify (\ref{nonrev1}) to take the gravitons into account as \begin{equation} \Delta {h_2}^{\textnormal{TT}}_{ab}= \frac{4}{r}\Delta\int \frac{dE}{d\Omega'} \left[\frac{\zeta'_{a} \zeta'_b}{1-\cos\theta'}\right]^{\textnormal{TT}}d\Omega' \, , \label{nonrev} \end{equation} where $\Omega'$ is the solid angle, $\zeta'$ is the unit vector in the direction of the solid angle, $\theta'$ is the angle between $\zeta'$ and the unit vector $\hat{r}$, and $\frac{dE}{d\Omega'}$ is the radiated energy reaching null infinity per unit solid angle. In this work, we calculate the gravitational memory as a function of the graviton mass and suggest that a possible observation of memory can constrain or possibly rule out a graviton mass. We shall also discuss the memory effect in quadratic gravity. A priori one would expect that the effect of having a massive graviton (with mass $m_g$) amounts to a change of the physically relevant quantities, such as the $\frac{1}{r}$ potential turning into $\frac{e^{-m_gr}}{r}$, which indeed is true, but the overall factor is not correct: the weak-field limit of the Newtonian potential in massive gravity is $V(r)=-\frac{4G}{3}\frac{e^{-m_gr}}{r}$, which exhibits the well-known van Dam-Veltman-Zakharov (vDVZ) \cite{vdvz1,vdvz2} discontinuity that cannot be remedied by redefining Newton's constant, as that would lead to a wrong prediction for the deflection of light by the Sun. Relativistic counterparts of the vDVZ discontinuity have been found recently \cite{Gullu-Tekin, Tasseten-Tekin}, where massive gravity predicts a maximized total spin for two interacting bodies, while Einstein's gravity predicts a minimum total spin. Here we study the effects of the graviton mass and quadratic terms on gravitational memory and show that, in the case of massive gravity, the memory is significantly different from that of GR.
The layout of the paper is as follows: in Section II, we calculate the memory effect in the low-energy massive gravity theory (namely the Fierz-Pauli theory). The computation boils down to solving the geodesic deviation equation in the presence of the Riemann tensor determined by a passing gravitational wave in massive gravity. In Section III, we carry out a similar calculation for quadratic gravity, which has a massive spin-0 and a massive spin-2 particle along with the Einsteinian massless spin-2 particle. The computation is in generic $D$ dimensions. In the Appendix, we consider the massive scalar field case to set the notation and our conventions, especially how we define the sources that create the fields. \section{Memory effect in massive gravity} The action for massive gravity is \begin{equation} {I}=\int d^4x \sqrt{-g} \, \bigg( \frac{1}{ \kappa}R -\frac{m_g^2}{4 \kappa}(h^2_{ab}-h^2)+{\cal{L}}_{matter}\bigg), \label{pfaction} \end{equation} which yields the linearized field equations \begin{equation} {\cal G}^L_{ab}+\frac{m_g^2}{2}(h_{ab}-\bar{g}_{ab}h)=8\pi T_{ab}, \label{PFeom} \end{equation} where ${\cal G}^L_{ab}$ is the linearized Einstein tensor and $\bar{g}_{ab}$ refers to the background metric (see \cite{deser_tekin} for the relevant definitions of the linearized tensors).
Assuming a flat background ($\bar{g}_{ab} = \eta_{a b}$) and a conserved source ($\partial_a T^{ab}=0$), one arrives at \begin{equation} \begin{aligned} (\partial^2-m_g^2)h_{ab}=&-16\pi\bigg(T_{ab}-\frac{1}{3}(\eta_{ab}-\frac{1}{m_g^2}\partial_a\partial_b)T\bigg)\\&\equiv-16\pi\tilde{T}_{ab}, \end{aligned} \end{equation} whose inhomogeneous solution can be written as \begin{equation} h_{ab} = 16\pi\int G_{ab}{}^{cd}(x,x')\tilde{T}_{cd}(x')d^{4}x', \label{Solution} \end{equation} with the retarded Green's function given as \begin{equation} G_{ab}{}^{cd}(x,x')=\eta_{a}{}^{c}\eta_{b}{}^{d}G(x,x'), \label{Green1} \end{equation} where $\eta_{a}{}^{c}$ is the parallel propagator. We follow the analogous computation in GR \cite{Satishchandran:2017pek, Garfinkle}; see the Appendix below for the case of the massive scalar field, where we establish the notation. Now consider the source to be some free particles colliding at the spacetime point $t=0$, $\vec{x}=0$, with some (possibly other) particles coming out of that single spacetime point. Then the energy-momentum tensor of the source is \begin{equation} \begin{aligned} T_{ab}=&\sum_{(j)in}m^{\rm in}_{(j)}\frac{d\tau_{(j)}}{dt}u_{(j)a}u_{(j)b}\delta_{3}(\mathbf{x}-\mathbf{y}_{(j)}(t))\Theta(-t) \\&+ \sum_{(i)out}m^{\rm out}_{(i)}\frac{d\tau_{(i)}}{dt}u_{(i)a}u_{(i)b}\delta_{3}(\mathbf{x}-\mathbf{y}_{(i)}(t))\Theta(t), \label{SourceSol} \end{aligned} \end{equation} where $u_{(i)a}$ and $u_{(j)a}$ are normalized four-velocities, and the retarded propagator is \begin{equation} G^R(x,x')=(\partial^2-m_g^2)^{-1}=\frac{1}{4\pi r}e^{-m_gr}\delta(t-t'-r) \label{GreenSol}.
\end{equation} Using (\ref{GreenSol}), (\ref{SourceSol}) and (\ref{Green1}) in (\ref{Solution}), the retarded solution for the massive gravity theory can be obtained as \begin{equation} \begin{aligned} h_{ab}(x)=&\bigg[4 \bigg(\alpha_{ab}\Theta(U)+\beta_{ab}\Theta(-U)\bigg)+\frac{4}{3m_g^2}\bar{g}_{cd} \\&\partial_a \partial_b\bigg(\tilde{\alpha}^{cd}\Theta(U)+\tilde{\beta}^{cd}\Theta(-U)\bigg)\bigg]\frac{e^{-m_gr}}{r}, \label{Field1} \end{aligned} \end{equation} where $U \equiv t -r$ is the retarded time and we defined \begin{equation} \begin{aligned} &\alpha_{ab}(\hat{\mathbf{r}}) \equiv\sum_{(i)out}\frac{d\tau^{(i)}}{dt}\Big(\frac{m^{\rm out}_{(i)}}{1-\hat{\mathbf{r}}\cdot\mathbf{v}^{(i)}}\Big)\bigg(u_{a}^{(i)}u_{b}^{(i)}+\frac{1}{3}\eta_{ab}\bigg), \\ &\tilde{\alpha}_{ab}(\hat{\mathbf{r}}) \equiv \sum_{(i)out}\frac{d\tau^{(i)}}{dt}\Big(\frac{m^{\rm out}_{(i)}}{1-\hat{\mathbf{r}}\cdot\mathbf{v}^{(i)}}\Big)u_{a}^{(i)}u_{b}^{(i)}. \end{aligned} \end{equation} We did not write the explicit form of $\beta_{ab}$ since it is exactly like $\alpha_{ab}$, except that one replaces $\text{``out''}$ with $\text{``in''}$, which is also the case for $\tilde\beta_{ab}$. Already at this stage there seem to be two differences between massive gravity and massless GR: due to the second term in (\ref{Field1}), the $m_g\rightarrow 0$ limit seems divergent, but this is a red herring; that term does not contribute to the linearized Riemann tensor and so is of no real consequence. But in $\alpha_{ab}(\hat{\mathbf{r}})$ the factor 1/3 in front of $\eta_{a b}$ is 1/2 in massless GR. This will be crucial, as the rest smoothly reproduces the GR result in the massless limit.
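As a quick consistency check of this $\tfrac13$ versus $\tfrac12$ difference (a sketch, using the mostly-plus signature so that $\eta^{cd}u_cu_d=-1$ and $\eta^{cd}\eta_{cd}=4$): the trace of a single-particle GR structure is $\eta^{cd}\big(u_cu_d+\tfrac12\eta_{cd}\big)=-1+2=1$, so subtracting $\tfrac16\eta_{ab}$ times this trace converts the GR structure into the massive-gravity one,
\begin{equation*}
\Big(u_au_b+\tfrac12\eta_{ab}\Big)-\tfrac16\,\eta_{ab}\,\eta^{cd}\Big(u_cu_d+\tfrac12\eta_{cd}\Big)=u_au_b+\tfrac13\eta_{ab}.
\end{equation*}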
For the moment keeping all the terms in (\ref{Field1}), one can find up to the leading order in $\frac{1}{r}$ \begin{equation} \begin{aligned} h_{ab}(x)=&4\bigg(\alpha_{ab}\Theta(U)+\beta_{ab}\Theta(-U)\bigg)\frac{e^{-m_gr}}{r}\\&+\frac{4}{ 3\, m_g^2}\bar{g}_{cd}\bigg((\tilde{\alpha}^{cd}\Theta(U)+\tilde{\beta}^{cd}\Theta(-U))(m_g^2r_ar_b)\\&+m_g(\tilde{\alpha}^{cd}-\tilde{\beta}^{cd})(K_ar_b+K_br_a)\delta(U)\\&+(\tilde{\alpha}^{cd}-\tilde{\beta}^{cd})K_aK_b\delta'(U)\bigg)\frac{e^{-m_gr}}{r}, \label{field} \end{aligned} \end{equation} where $K^a \equiv -\partial^aU=t^a+r^a$ and $t^a$ and $r^a=\partial^ar$ are unit vectors. We can now compute the linearized Riemann tensor which is \begin{equation} R_{abcd}= \partial_{c}\partial_{[b}h_{a]d} - \partial_{d}\partial_{[b}h_{a]c}. \end{equation} Note that up to ${\cal{O}}(\frac{1}{r^2})$, one has \begin{equation} \begin{aligned} \partial_d\partial_a\bigg(\frac{e^{-m_gr}}{r}\Theta(U)\bigg)=&\bigg(m_g^2r_ar_d\Theta(U)\\&+m_g\delta(U)(K_ar_d+K_dr_a)\\&+\delta'(U)K_aK_d\bigg)\frac{e^{-m_gr}}{r}. \end{aligned} \end{equation} As noted above, the $1/m_g^2$ terms in (\ref{Field1}) do not contribute to Riemann tensor. Finally, to the leading order, the linearized Riemann tensor reads \begin{equation} \begin{aligned} R_{abcd}=&4\bigg(K_{[a}\Delta_{b][c}K_{d]}\frac{d^2\Theta(U)}{dU^2}+m_gK_{[a}\Delta_{b][c}r_{d]}\frac{d\Theta(U)}{dU}\\&+m_gK_{[d}\Delta_{b][c}r_{a]}\frac{d\Theta(U)}{dU}+2m_g^2r_{[a}\alpha_{b][c}r_{d]}\Theta(U)\\&+2m_g^2r_{[a}\beta_{b][c}r_{d]}\Theta(-U)\bigg)\frac{e^{-m_gr}}{r}, \label{Riemann} \end{aligned} \end{equation} where $\Delta_{ab} \equiv 2 (\alpha_{ab}(\hat{\mathbf{r}}) -\beta_{ab}(\hat{\mathbf{r}}))$. In the GR case, the linearized Riemann tensor is gauge invariant and one can work in any gauge one likes and the TT gauge is the most convenient one, hence the TT indices in all the expressions. 
But in massive gravity, the symmetry of the theory consists only of the rigid background symmetries, not the full linearized diffeomorphisms, so one cannot go to the TT gauge. The memory part of the linearized Riemann tensor is only the first term in (\ref{Riemann}), as can be seen from the computation of the geodesic deviation between two massive test particles at rest with a relative separation vector $\xi$: \begin{equation} \frac{d^{2}\xi^{i}}{dt^{2}}=-{R^i}_{0j0}\xi^{j}. \label{geodesicdevmemory} \end{equation} Plugging (\ref{Riemann}) into the last equation and integrating twice yields \begin{equation} \begin{aligned} \Delta\xi^{i}&=\int_{-\infty} ^{U}dU'\int_{-\infty} ^{U'}dU''\frac{d^{2}\xi^{i}}{dU''^{2}}\\ &=\frac{1}{r}e^{-m_gr}\tilde{\Delta}_j^i(m_g)\Theta(U)\xi^{j}, \label{memoryeffect} \end{aligned} \end{equation} where the memory tensor $\tilde{\Delta}_j^i(m_g)$ is given explicitly as \begin{equation} \tilde{\Delta}_j^i(m_g)\equiv \Delta_j^i(m_g)+\delta_j^i \Delta_{00}(m_g)+\hat{r}^i \Delta_{0j}(m_g)+\hat{r}_j \Delta_0\,^i(m_g). \end{equation} In GR, one only has the first part; moreover, the relation between the memory tensors in massive gravity and GR is \begin{equation} \Delta_{ a b}(m_g) = \Delta_{a b}(\text{GR}) - \frac{1}{6}\eta_{a b} \eta^{ c d}\Delta_{ cd } (\text{GR}). \label{mem} \end{equation} Let us give some numerical values: as the graviton mass is expected to be small ($ m_g < 10^{-29 }$ eV $\approx 5\times 10^{-20} \frac{1}{\text{km}}$) \cite{nieto}, for small $r$ we can take $m_g r \rightarrow 0$, and the Yukawa part reduces to the usual Einsteinian $1/r$ form; but the noted discrete difference survives, and an accurate measurement of memory can distinguish massive gravity from GR, since $\Delta_{ a b}(m_g \rightarrow 0) \ne \Delta_{a b}(\text{GR})$. On the other hand, if $r = 1$ Mpc, then one has $m_g r \approx 1.55$ and the memory is reduced by the Yukawa factor $e^{-m_g r}\approx 0.21$.
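These numbers follow from unit conversion alone; a minimal numerical check, using the standard values $\hbar c \approx 1.97\times 10^{-7}$ eV m and $1\,\mathrm{Mpc}\approx 3.09\times 10^{22}$ m:

```python
import math

HBARC_EV_M = 1.97327e-7   # hbar*c in eV*m
MPC_IN_M = 3.0857e22      # one megaparsec in metres

m_g_eV = 1e-29                      # graviton mass bound quoted in the text
m_g_inv_m = m_g_eV / HBARC_EV_M     # m_g in natural units of 1/metre
print(m_g_inv_m * 1e3)              # ~5e-20 per km, as quoted

x = m_g_inv_m * 1.0 * MPC_IN_M      # dimensionless m_g * r at r = 1 Mpc
print(x, math.exp(-x))              # ~1.56, and a Yukawa suppression of ~0.21
```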
For larger separations, as in the case of the first black hole merger observation, which was at a distance of $440^{+160}_{-180}$ Mpc \cite{ligo}, essentially all the memory is wiped out in massive gravity. For weaker bounds on the graviton mass, such as the one noted in \cite{ligo2} ($ m_g < 7.7 \times 10^{-23}$ eV), the memory is wiped out virtually above 0.1 pc! \section{Higher Derivative Gravity} In \cite{Garfinkle}, the authors showed that there is no gravitational memory effect in higher even-dimensional spacetimes ($D>4$). Here we add quadratic curvature terms (which are the only relevant ones in flat backgrounds in the weak-field limit) to Einstein's theory and compute the memory effect in generic $D$ dimensions: \begin{equation} \begin{aligned} I = \int d^{{D}}x\,\sqrt{-g}\{& \frac{1}{\kappa}R+\alpha R^{2}+\beta R_{ab}^{^{2}} +\gamma(R_{abcd}^{2}\\&-4R_{ab}^{2}+R^{2})+ {\cal {L}}_{\mbox{matter}}\} , \label{action11} \end{aligned} \end{equation} which yields the linearized field equations around the flat background metric \begin{equation} \begin{aligned} \frac{1}{\kappa} {\mathcal{G}}_{ab}^{L} + \left(2\alpha+\beta\right)\left(\bar{g}_{ab}\partial^2-\partial_{a}\partial_{b} \right)R^{L} + \beta\partial^2{\mathcal{G}}_{ab}^{L}=T_{ab}\left(h\right).
\label{linearized11} \end{aligned} \end{equation} In the harmonic gauge $\partial^{a}h_{ab}=\frac{1}{2} \partial_{b}h $, the linearized field equations reduce to \begin{equation} \begin{aligned} (\frac{1}{\kappa}+ \beta \partial^2) \partial^2 h_{ab}=& -2T_{ab}+ 2(2\alpha+\beta)(\bar{g}_{ab}\partial^2-\partial_a \partial_b)R^L\\& - (\frac{1}{\kappa}+\beta \partial^2) \bar{g}_{ab}R^L, \end{aligned} \end{equation} whose inhomogeneous solution reads \begin{equation} \begin{aligned} h_{ab} = \int & d^{D}x'\bigg(2 G^1(x,x')T_{ab}(x')+2 \bar{g}_{ab}G^3(x,x')T(x')\\&-4(2\alpha+\beta)G^2(x,x')(\bar{g}_{ab}\partial^2-\partial_a \partial_b)T(x')\bigg), \end{aligned} \end{equation} with the retarded scalar Green's functions given as \begin{equation} \begin{aligned} &G^1(x,x')=\frac{1}{\beta}\bigg(( \partial^2-m_\beta^2) \partial^2 \bigg)^{-1},\\&G^2(x,x')=\frac{\bigg((\partial^2-m_\beta^2) (\partial^2 - m_c^2)\partial^2\bigg)^{-1}}{\beta\left( 4 \alpha (D-1) + D\beta \right)},\\& G^3(x,x')=\frac{1}{\left( 4 \alpha (D-1) + D\beta \right)}\bigg((\partial^2 - m_c^2)\partial^2\bigg)^{-1}. \end{aligned} \end{equation} Here the masses of the massive spin-$2$ and the massive spin-$0$ modes are given as $m_\beta^2=-\frac{1}{\beta\kappa}$ and $m_c^2=\frac{D-2}{\kappa\left( 4 \alpha (D-1) + D\beta \right)}$, respectively.
Using these, after a somewhat cumbersome calculation, the linearized Riemann tensor can be found to leading order as \begin{equation} \begin{aligned} R_{abcd}=&\frac{\kappa}{(2\pi r)^{\frac{D-2}{2}}}K_{[a}\bar{\Delta}_{b][c}K_{d]}\frac{d^{\frac{D-2}{2}}}{dU^{\frac{D-2}{2}}}\delta(U)\\&-\frac{\kappa e^{-m_\beta r}}{(2\pi r)^{\frac{D-2}{2}}}(m_\beta)^{\frac{D-4}{2}}\bigg(K_{[a}\bar{\Delta}_{b][c}K_{d]}\delta'(U)\\&+m_\beta K_{[a}\bar{\Delta}_{b][c}r_{d]}\delta(U)+m_\beta K_{[d}\bar{\Delta}_{b][c}r_{a]}\delta(U)\\&+2m_\beta^2r_{[a}\bar{\alpha}_{b][c}r_{d]}\Theta(U)+2m_\beta^2r_{[a}\bar{\beta}_{b][c}r_{d]}\Theta(-U)\bigg), \label{Riemann1} \end{aligned} \end{equation} where we have defined \begin{equation} \begin{aligned} &\bar{\Delta}_{ab} \equiv 2\sum_{(i)out} \frac{d\tau_{(i)}}{dt}\Big(\frac{m^{\rm out}_{(i)}}{1-\hat{\mathbf{r}}\cdot\mathbf{v}_{(i)}}\Big)\bigg(q_{ac}u^{c}_{(i)}q_{bd}u^{d}_{(i)}\\ &-\frac{q_{cd}u^{c}_{(i)}u^{d}_{(i)}}{D-2}q_{ab}\bigg) -2\sum_{(j)in}\frac{d\tau_{(j)}}{dt}\Big(\frac{m^{\rm in}_{(j)}}{1-\hat{\mathbf{r}}\cdot\mathbf{v}_{(j)}}\Big)\times\\ &\bigg(q_{ac}u^{c}_{(j)}q_{bd}u^{d}_{(j)}-\frac{q_{cd}u^{c}_{(j)}u^{d}_{(j)}}{D-2}q_{ab}\bigg),\\& \bar{\alpha}_{ab}= \sum_{(i)out} \frac{d\tau_{(i)}}{dt}\Big(\frac{m^{\rm out}_{(i)}}{1-\hat{\mathbf{r}}\cdot\mathbf{v}_{(i)}}\Big)\bigg(q_{ac}u^{c}_{(i)}q_{bd}u^{d}_{(i)}\\ &-\frac{q_{cd}u^{c}_{(i)}u^{d}_{(i)}}{D-2}q_{ab}\bigg), \label{memorytensor1} \end{aligned} \end{equation} where $q_{ab}$ is the projector that projects a symmetric tensor onto the sphere $S^{D-2}$ at large $r$, and $\bar{\beta}_{ab}$ is exactly like $\bar{\alpha}_{ab}$, except that one replaces $\text{``out''}$ with $\text{``in''}$.
By using (\ref{geodesicdevmemory}), the finite relative change in the displacement between two free test particles can be computed as \begin{equation} \begin{aligned} \Delta\xi^{i}=\frac{2\pi}{(2\pi r)^{\frac{D-2}{2}}}\bigg(\frac{d^{\frac{D-4}{2}}}{dU^{\frac{D-4}{2}}}- (m_\beta)^{\frac{D-4}{2}}e^{-m_\beta r}\bigg)\bar{\Delta}_j^i\Theta(U)\xi^{j}, \label{memoryeffect11} \end{aligned} \end{equation} where $\bar{\Delta}_j^i$ are the spatial components of the memory tensor in Eq.~(\ref{memorytensor1}). Observe that, in higher even-dimensional spacetimes ($D> 4$), to leading order, there is no memory effect, as in the case of pure GR. On the other hand, in four dimensions, the memory effect is \begin{equation} \begin{aligned} \Delta\xi^{i}=\frac{1}{r}\bigg(1- e^{-m_\beta r}\bigg)\bar{\Delta}_j^i\Theta(U)\xi^{j}. \end{aligned} \end{equation} In the $m_\beta \to \infty$ limit, the memory is the same as that obtained in \cite{Garfinkle}. But for any finite value of $m_\beta$, the memory is reduced compared to GR. \section{Conclusions} Recently, the gravitational memory effect has received renewed interest \cite{Pasterski,Strominger1,Strominger2,Flanagan,Garfinkle,Hollands,Satishchandran:2017pek,Tolish1,Tolish2,Bieri2,Zhang,Kilicarslan1} for various reasons, some of which are: it is related to black hole soft hair and asymptotic symmetries, and it could potentially be observed in gravitational wave detectors. Here, we calculated the gravitational memory as a function of the graviton mass and showed that, for a graviton mass $m_g \le 10^{-29}$ eV, the memory is significantly reduced at distances beyond $1$ Mpc, as in the first observation of a binary black hole merger, which was at a distance of more than $200$ Mpc. Moreover, massive gravity leaves a discretely different memory on our detectors from the expected general relativity result. The result is summarized by equation (\ref{mem}).
In the LIGO/Virgo observations of gravitational waves, the memory effect is already in the data, but it is hard to distinguish it from the background noise. In the near future, one might expect to see this effect observed (possibly by eLISA); such an observation might rule out massive gravity. We have also calculated the memory effect in quadratic gravity and showed that, due to the massive spin-2 mode, the memory is reduced from that of Einstein's theory. Here we have used the linearized massive gravity theory, which is valid in the weak-field regime relevant for the gravitational wave bursts observed on Earth. Of course, one can consider non-linear extensions of massive gravity, such as the one given in \cite{deRham:2010kj}, but the above result is universal in the weak-field limit, as the non-linear extensions reduce to the Einstein-Fierz-Pauli theory that we employed. \section{Appendix} We follow the analogous computation in GR \cite{Garfinkle,Satishchandran:2017pek} and first establish the relevant Green's function for the scalar field case: consider a scalar source $S$ coupled to a massive wave field in $4$-dimensional Minkowski spacetime, \begin{equation} (\eta^{ab}\partial_{a}\partial_{b}-m^2)\phi = -4\pi S, \label{waveeq1} \end{equation} from which follows the retarded Green's function \begin{equation} G(x,x')=\frac{e^{-mr}}{4\pi r}\delta(t-t'-r), \label{Green'sscalar1} \end{equation} yielding the general (inhomogeneous) solution of Eq.(\ref{waveeq1}) as \begin{equation} \phi_S(x) = 4\pi\int{G(x,x')S(x')d^{4}x'}. \label{phiintegralexp1} \end{equation} Now consider the source to be some free particles colliding at the point $t=0$, $\vec{x}=0$, with some (possibly other) particles coming out of that single spacetime point.
Then the source is \begin{equation} \begin{aligned} S(x)=&\sum_{(j)in}q^{\rm in}_{(j)}\frac{d\tau_{(j)}}{dt}\delta_{3}(\mathbf{x}-\mathbf{y}_{(j)}(t))\Theta(-t) \\&+ \sum_{(i)out}q^{\rm out}_{(i)}\frac{d\tau_{(i)}}{dt}\delta_{3}(\mathbf{x}-\mathbf{y}_{(i)}(t))\Theta(t),\label{gensource1} \end{aligned} \end{equation} in which $q^{\rm out}_{(i)}$ ($q^{\rm in}_{(j)}$) are the out (in) scalar charges and $\tau_{(i)}$ is the proper time. We would like to solve \eqref{phiintegralexp1} for the source \eqref{gensource1}. For simplicity, let us first consider a single particle created at $O$; the source can be written as \begin{equation} S_0=q\delta_{3}(\mathbf{x})\Theta(t). \label{createdscalar1} \end{equation} Plugging this into (\ref{phiintegralexp1}) and using the retarded Green's function (\ref{Green'sscalar1}), one gets \begin{equation} \phi_0(x)= q\int_0^\infty{\frac{1}{ r}e^{-mr}\delta(t-t'-r)dt'}. \label{phi01} \end{equation} The solution reads \begin{equation} \phi_0 = q\Theta(U)\frac{e^{-mr}}{r}. \end{equation} To obtain the field of a particle created at $O$ with the coordinate velocity $\mathbf{v}=d\mathbf{y}/dt$, Eq.(\ref{phi01}) can be boosted to get \begin{equation} \phi_{0,v}(x)=q\frac{d\tau}{dt}\bigg(\frac{1}{1-\hat{\mathbf{r}}\cdot\mathbf{v}}\bigg)\Theta(U)\frac{e^{-mr}}{r}, \label{phioutv} \end{equation} where $\hat{\mathbf{r}}=\mathbf{x}/r$ is a unit vector. Let us now consider the case in which the particle is destroyed; the source is simply \begin{equation} \tilde{S}_0=q\delta_{3}(\mathbf{x})\Theta(-t).\label{destroyedscalar} \end{equation} The solution is \begin{equation} \tilde{\phi}_0 = q\Theta(-U)\frac{e^{-mr}}{r}.
\end{equation} The linear superposition of the retarded solutions for the case in which particles are both created and destroyed can be written as \begin{equation} \phi_{S}(x)=(\alpha(\hat{\mathbf{r}})\Theta(U)+\beta(\hat{\mathbf{r}})\Theta(-U))\frac{e^{-mr}}{r}, \label{phiSv} \end{equation} where \begin{equation} \begin{aligned} \alpha(\hat{\mathbf{r}})=\sum_{(i)out}q^{\rm out}_{(i)}\frac{d\tau_{(i)}}{dt}\bigg(\frac{1}{1-\hat{\mathbf{r}}\cdot\mathbf{v}_{(i)}}\bigg), \end{aligned} \label{alphabeta} \end{equation} and $\beta(\hat{\mathbf{r}})$ reads exactly the same, except that ``out'' becomes ``in''.
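The superposed retarded solution above is straightforward to evaluate numerically; a minimal sketch for a single static particle created at the origin, where $\phi_0 = q\,\Theta(U)\,e^{-mr}/r$ (the parameter values are illustrative):

```python
import math

def phi_created(q, m, t, r):
    """Retarded field of a scalar charge q created at the origin at t = 0:
    phi = q * Theta(U) * exp(-m*r)/r, with retarded time U = t - r."""
    U = t - r
    if U < 0:          # the creation signal has not yet reached radius r
        return 0.0
    return q * math.exp(-m * r) / r

# before the creation signal arrives the field vanishes ...
print(phi_created(1.0, 0.5, t=1.0, r=2.0))   # 0.0
# ... afterwards it settles to the static Yukawa profile
print(phi_created(1.0, 0.5, t=5.0, r=2.0))   # exp(-1)/2 ~ 0.1839
```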
{ "timestamp": "2019-02-07T02:14:23", "yymm": "1805", "arxiv_id": "1805.02240", "language": "en", "url": "https://arxiv.org/abs/1805.02240" }