\section{Introduction} \label{sec:intro} One of the fundamental issues in the study of genomes is their primary structure, that is, the distribution of nucleotides along DNA sequences. The identification of statistical patterns in the primary structure of DNA sequences has revealed several underlying regularities in genomes \cite{Cattani12,Li_92,Lobry96-2,sobottka&hart2011} and has enabled scientists to propose models for evolutionary pressures and mutational mechanisms that might act on organisms \cite{AlbrechtBuehler2006,hart&martinez&olmos2012,sobottka&hart2011}, as well as to construct bioinformatics tools. For example, in \cite{Felsenstein81}, a maximum likelihood approach was used to perform analyses of DNA sequences in order to estimate evolutionary trees, while in \cite{Yu_et_al2000}, a measure of the long-range correlation between the nucleotide bases of DNA sequences was used to classify bacteria. In addition, strand compositional asymmetry (SCA) was used to detect replication origins in bacteria \cite{FrankLobry00}, while \cite{Salzberg_et_al98} used interpolated Markov models to identify genes in bacteria, \cite{hart&martinez&videla2006} proposed a maximization model to describe the organization and distribution of genes in bacterial DNA, and \cite{martinez2016} presented a stationary stochastic process for modeling the placement of coding and non-coding regions within a genome that incorporates the phenomenon of start codons appearing within coding regions. The aim of this work is to provide a rigorous formalization of a stochastic concatenation model for capturing the primary structure of bacterial DNA sequences which was presented in \cite{sobottka&hart2011}. The model, henceforth referred to as the S-H model, allowed novel statistical symmetries in the mononucleotide and dinucleotide distributions of a collection of bacterial chromosomes to be observed. 
A key feature of the model is a persymmetric matrix of probabilities which plays a role in determining the nucleic acids seen along a DNA sequence. The persymmetric matrices constitute a special class of matrices which has been employed in models from various fields (see for example \cite{Nian1997, Nian&Chu1994,Nield1994}) and which has been widely studied (see for example \cite{Gutierrez2014,Huang&Cline1972,Reid1997,Xie&Sheng2003}). A genome is a duplex of DNA strands, each strand consisting of a sequence of nucleotides. The nucleotides are of four types: adenine ($A$), cytosine ($C$), guanine ($G$) and thymine ($T$). Of these types, adenine is complementary to thymine while cytosine is complementary to guanine. Each nucleotide on one DNA strand pairs with its complement on the opposite strand. This chemically induced pairing between the two strands causes the strands to assume a ladder-like arrangement which is then twisted to attain the famous helix. The chemical composition of DNA molecules endows a strand with an intrinsic reading direction: each strand can only be read in one direction by the genetic machinery of the cell. Furthermore, the way strands combine to form a duplex means that the two strands are read in opposite directions: they are said to be antiparallel. We shall identify each nucleotide type with a number in $N:=\{1,2,3,4\}$ ($A\equiv 1$, $C\equiv 2$, $G\equiv 3$ and $T\equiv 4$). Let $\alpha:N\rightarrow N$ be the involution which maps each nucleotide to its complement, that is, $\alpha(i)=5-i$. The S-H model is a concatenation model which has at its core a first-order Markov chain whose one-step transition matrix $P=\bigl(P_{ij}\bigr)_{i,j\in N}$ is derived from a positive parameter $m$ and a positive persymmetric matrix $\A=\bigl(L_{ij}\bigr)_{i,j\in N}$: \begin{equation} \label{p.form} P_{ij}=\frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}, \end{equation} where $M_1=M_4:=m/(2m+2)$ and $M_2=M_3:=1/(2m+2)$. 
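The construction in \eqref{p.form} is easy to verify numerically. The following Python sketch builds $P$ from a pair $(\A,m)$ and checks that each row of $P$ sums to one; the matrix \texttt{L} is a hypothetical positive persymmetric example, not one estimated from any genome.

```python
# Build P from a persymmetric matrix L and parameter m, following
# P_ij = L_ij * M_j / sum_k L_ik * M_k with M = (m, 1, 1, m)/(2m+2).
# The complement map alpha(i) = 5 - i becomes i -> 3 - i in 0-based indexing.

def is_persymmetric(L):
    # Persymmetry: L_ij == L_{alpha(j) alpha(i)}, i.e. L[i][j] == L[3-j][3-i].
    return all(abs(L[i][j] - L[3 - j][3 - i]) < 1e-12
               for i in range(4) for j in range(4))

def transition_matrix(L, m):
    M = [x / (2 * m + 2) for x in (m, 1.0, 1.0, m)]
    P = []
    for i in range(4):
        row = [L[i][j] * M[j] for j in range(4)]
        s = sum(row)                     # normalizing constant sum_k L_ik M_k
        P.append([x / s for x in row])
    return P

# A hypothetical positive persymmetric matrix (check: L[i][j] == L[3-j][3-i]).
L = [[0.40, 0.10, 0.20, 0.30],
     [0.25, 0.35, 0.15, 0.20],
     [0.30, 0.45, 0.35, 0.10],
     [0.15, 0.30, 0.25, 0.40]]

assert is_persymmetric(L)
P = transition_matrix(L, m=1.5)
assert all(abs(sum(row) - 1.0) < 1e-12 for row in P)
```

With $m=1.5$ the weights are $M=(0.3,0.2,0.2,0.3)$, so for instance $P_{11}=0.12/0.27=4/9$.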
The Markov chain governs how the DNA sequence grows in both directions from an initial nucleotide called the origin by appending nucleotides in three steps. {\bf Step 1.}\ A nucleotide of type $j$ is randomly selected with probability $M_j$. {\bf Step 2.}\ With probability $1/2$, the nucleotide tries to join the end (consonant with the DNA reading direction) or the beginning (contrary to the reading direction) of the sequence. {\bf Step 3.}\ In the first case, the nucleotide is appended to the sequence with probability $L_{ij}$, where $i$ is the type of the last nucleotide in the sequence; in the latter, the nucleotide is prepended to the sequence with probability $L_{\alpha(k)\alpha(j)}$, where $k$ is the type of the initial nucleotide. This scheme is illustrated in Figure \ref{fig:nucleotide_aggregation}. Provided nucleotides accumulate evenly at the ends of the DNA strand, after a long time one would obtain (with probability~$1$) a sequence with the initial nucleotide at its midpoint. One half would be generated by the stationary Markov chain $(P,\pi)$, where the transition matrix~$P$ is given by~\eqref{p.form} and the chain's stationary distribution~$\pi$ is the left eigenvector of~$P$ corresponding to the eigenvalue~$1$, normalised to sum to~$1$. The other half would have distribution given by the stationary Markov chain $(\tilde P,\tilde \pi)$, where $\tilde \pi_i=\pi_{\alpha(i)}$ and $\tilde P_{ij}=\frac{\pi_{\alpha(j)}}{\pi_{\alpha(i)}}P_{\alpha(j)\alpha(i)}$, for $i,j\in N$. The model is consistent with the observation reported by geneticists that bacterial DNA sequences are usually composed of two distinct segments called chirochores (see \cite{FrankLobry00}). Furthermore, if one estimates the transition matrices~$\tilde P$ and~$P$ for the segments prior to and following the origin nucleotide respectively, one usually finds that $\tilde P_{ij}\approx\frac{\pi_{\alpha(j)}}{\pi_{\alpha(i)}}P_{\alpha(j)\alpha(i)}$ (see \cite{sobottka&hart2011}). 
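The relationship between the two half-sequence chains can be checked numerically. The sketch below uses an arbitrary stochastic matrix \texttt{P} as a stand-in (hypothetical values, not estimated from data), computes the stationary distribution $\pi$ by power iteration, and verifies that the reverse complement matrix $\tilde P$ defined above is itself stochastic with stationary distribution $\tilde\pi$.

```python
# Given a primitive stochastic matrix P (hypothetical example values), compute
# its stationary distribution pi by power iteration and form the reverse
# complement chain  tildeP_ij = pi[alpha(j)] * P[alpha(j)][alpha(i)] / pi[alpha(i)],
# where alpha(i) = 5 - i, i.e. i -> 3 - i in 0-based indexing.

P = [[0.44, 0.07, 0.15, 0.34],
     [0.30, 0.28, 0.18, 0.24],
     [0.24, 0.36, 0.28, 0.12],
     [0.18, 0.24, 0.20, 0.38]]

def stationary(P, iters=10_000):
    pi = [0.25] * 4
    for _ in range(iters):                       # power iteration: pi <- pi P
        pi = [sum(pi[i] * P[i][j] for i in range(4)) for j in range(4)]
    return pi

pi = stationary(P)
a = lambda i: 3 - i                              # 0-based complement map
Pt = [[pi[a(j)] * P[a(j)][a(i)] / pi[a(i)] for j in range(4)]
      for i in range(4)]
pit = [pi[a(i)] for i in range(4)]

# tilde-P is stochastic and pit is its stationary distribution.
assert all(abs(sum(row) - 1.0) < 1e-9 for row in Pt)
assert all(abs(sum(pit[i] * Pt[i][j] for i in range(4)) - pit[j]) < 1e-9
           for j in range(4))
```

The two assertions restate, numerically, that $\tilde P$ is the transition matrix of a Markov chain and that $\tilde\pi_i=\pi_{\alpha(i)}$ is invariant for it.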
\begin{figure}[!ht] \centering \includegraphics[width=.7\linewidth]{nucleotide_aggregation_2.eps} \caption{A schematic presentation of the S-H model for constructing bacterial DNA sequences. Assuming the reading sense of the sequence is from left to right, a new nucleotide of type $C$ is selected with probability $1/(2m+2)$ and is appended to the end of the sequence with probability $L_{32}$, while a nucleotide of type $T$ is selected with probability $m/(2m+2)$ and will be attached to the beginning of the sequence with probability $L_{\alpha(3)\alpha(4)}$. The final DNA sequence obtained is the concatenation of two Markovian processes: one starting at position zero and extending to the right, whose estimated transition matrix is $P$; and the other terminating at zero, whose estimated transition matrix is $\tilde P$.}\label{fig:nucleotide_aggregation} \end{figure} The paper is organized as follows. Section~\ref{sec:interp} discusses the probabilistic interpretation of the form~\eqref{p.form} of the matrix~$P$ in greater depth than~\cite{sobottka&hart2011}. Two different probabilistic constructions are presented, the first of which provides the justification for the description of DNA sequence growth given above. Section~\ref{sec:aleph.generated} introduces the set of $\aleph$-generated matrices as matrices of the form \eqref{p.form}, where $\aleph$ is the set of positive persymmetric matrices, and establishes several algebraic characterizations of such matrices. The non-uniqueness of the persymmetric matrix~$\A$ and positive parameter~$m$ that define an $\aleph$-generated matrix is then considered in Section~\ref{sec:families}, where a couple of equivalence relations on~$\aleph$ are introduced. This leads to an examination of various properties of $\aleph$-generated matrices as used in the S-H model in Section~\ref{sec:properties}. 
Finally, we discuss some measures for determining how closely a DNA sequence conforms to the S-H model and make concluding remarks in Section~\ref{sec:conclusion}. \section{Probabilistic interpretation of~$P$} \label{sec:interp} In~\cite{sobottka&hart2011}, a formal description of the way nucleotides are appended to a DNA sequence using the persymmetric matrix $\A$ and the parameter $m$ was presented, but the explicit connection with stochastic matrices of the form~\eqref{p.form} was left for the reader to deduce. Here, we discuss more rigorously how the form \eqref{p.form} of the stochastic matrix~$P$ arises from the DNA-sequence growth mechanism described above. In addition, we shall present an alternative probabilistic interpretation of the growth mechanism. \subsection{Interpretation} To begin, consider the growth of a DNA sequence whose initial nucleotide is taken to be of type~$i$. Let $(\beta_t,\ t\geq0)$ be a Bernoulli scheme on~$N$ with common distribution $M=(M_1, M_2, M_3, M_4)=(m, 1, 1, m)\big/(2m+2)$, that is, an independent and identically distributed sequence of random variables on $N$ with $\beta_t\sim M$. Consider two coupled stochastic processes $(V_t,\ t\geq0)$, which evolves on the state space~$N$, and $(W_t,\ t\geq0)$, which is a Bernoulli $\{0,1\}$-process where $W_t$ is~$1$ with probability $L_{V_t\beta_t}$ (that is, $W_t \sim \textrm{B}\left(L_{V_t\beta_t}\right)$). By setting $V_0:=i$ as the type of the initial nucleotide from which the DNA sequence grows, the process $(V_t,\ t\geq0)$ evolves as a deterministic function of $(\beta_t,\ t\geq0)$ and $(W_t,\ t\geq0)$ as follows: \begin{equation*} V_{t+1} := \beta_tW_t + V_t(1-W_t) = \begin{cases} \beta_t, & \text{if } W_t=1, \\ V_t, & \text{if } W_t=0, \end{cases}\qquad \forall t\geq 0. 
\end{equation*} Note that, while $V_t$ denotes the type of the last nucleotide appended to the sequence, $\beta_t$ corresponds to the mechanism responsible for proposing the type, say~$j$, of the next nucleotide to concatenate to the sequence, and $W_t$ corresponds to the mechanism responsible for accepting or rejecting the new nucleotide in the sequence. If $\beta_t=j$, then~$j$ is accepted as the type of the next nucleotide provided that $W_t=1$, in which case $V_{t+1}$ is set to~$j$. Otherwise, the nucleotide of type~$j$ is discarded and no nucleotide is appended. In that case, $V_{t+1}$ takes the value of~$V_t$. In this way,~$t$ counts the number of nucleotides proposed rather than the length of the DNA sequence, while the number of acceptances, given by $\sum_{u=1}^tW_u$, is one less than the length of the DNA sequence, since it does not count the initial nucleotide. For all $i\in N$ and $t\geq0$, we define \begin{align*} \gamma_i &:= \Pr(W_t=1 \given V_t=i) = \sum_{j\in N} \Pr(W_t=1, \beta_t=j \given V_t=i) \\ &= \sum_{j\in N} \Pr(W_t=1\given \beta_t=j, V_t=i) \Pr(\beta_t=j \given V_t=i) \\ &= \sum_{j\in N} \Pr(W_t=1\given \beta_t=j, V_t=i) \Pr(\beta_t=j) = \sum_{j\in N} L_{ij}M_j. \end{align*} Next, define a sequence $(\tau_s,\ s\geq0)$ of stopping times by $\tau_0:=0$ and $$ \tau_{s+1} := \min\left\{t>\tau_s \suchthat W_{t-1}=1\right\}. $$ The $\tau_s$'s mark the nucleotide type proposals that were accepted. By construction, they constitute a series of renewal times. Note that $(V_t,\ t\geq0)$ is a discrete step function which transitions to a new nucleotide whenever $t\in\{\tau_s,\ s\geq0\}$. More precisely, for all $s\geq0$, $V_t=V_{\tau_s}$ for $t=\tau_s, \tau_s+1, \ldots, \tau_{s+1}-1$. Let $i\in N$ and $w\in\{0,1\}$. The random variable $\beta_t$ is independent of $W_u$ for $u<t$ and the distribution of $W_t$ is completely determined by the value of $\beta_t$ and $V_t$. 
Consequently, the event $\{W_t=w\}$ is conditionally independent of $\{W_u=0\}$ for all $u<t$ given $V_t=i$. For $i\in N$ and $t> u\geq0$, we have \begin{align*} \Pr(W_t=1, W_{t-1}=0, \ldots, W_u=0 \given V_u=i) &= \Pr(W_t=1 \given W_{t-1}=0, \ldots, W_u=0, V_u=i) \cdot \\ &\qquad \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \\ &= \Pr(W_t=1 \given V_t=i, W_{t-1}=0, \ldots, W_u=0, V_u=i) \cdot \\ & \qquad \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \\ &= \Pr(W_t=1 \given V_t=i) \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \\ &= \gamma_i \Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) \end{align*} and $$ \Pr(W_t=0, \ldots, W_u=0 \given V_u=i) = (1-\gamma_i)\Pr(W_{t-1}=0, \ldots, W_u=0 \given V_u=i) . $$ Hence, for $s\geq0$, $t\geq1$ and $i\in N$, we obtain \begin{align*} \Pr(\tau_{s+1}-\tau_s=t \given V_{\tau_s}=i) &= \Pr(W_{\tau_s+t-1}=1, W_{\tau_s+t-2}=0, \ldots, W_{\tau_s}=0 \given V_{\tau_s}=i) \\ &= \Pr(W_{\tau_s+t-2}=0, \ldots, W_{\tau_s}=0 \given V_{\tau_s}=i) \gamma_i \\ &= \Pr(W_{\tau_s+t-3}=0, \ldots, W_{\tau_s}=0 \given V_{\tau_s}=i) (1-\gamma_i)\gamma_i \\ &= \cdots \\ &= (1-\gamma_i)^{t-1}\gamma_i. \end{align*} Conditional on $V_{\tau_s}=i$, $\tau_{s+1}-\tau_s$ is thus a geometric random variable taking values on the positive integers: $$ \tau_{s+1}-\tau_s \given V_{\tau_s}=i \sim \textrm{geom}\bigl( \gamma_i\bigr), \quad s\geq0,\ i\in N. $$ Observe that the distribution of $\tau_{s+1}-\tau_s$ is completely determined by the value of $V_{\tau_s}$ and is independent of any events prior to $\tau_s$ if $V_{\tau_s}$ is given. Furthermore, $\tau_{s+1}-\tau_s \given V_{\tau_s}=i$ is identically distributed as $\tau_1 \given V_0=i$, for all $s>0$. Next, define the process $(U_s,\ s\geq0)$ by $U_s := V_{\tau_s}$. Suppose that $V_{\tau_s}=i$ for some fixed $s\geq0$. Then $V_{\tau_{s+1}}$ is determined by $\beta_{\tau_{s+1}-1}$ and $V_{\tau_s}=\beta_{\tau_s-1}$, which are independent of all $\beta_t$, $V_t$ and $W_t$ for all~$t$ prior to $\tau_s-1$. 
Consequently, $(U_s,\ s\geq0)$ has the Markov property: $$ \Pr(U_{s+1}=j \given U_s=i, U_{s-1}=i_1, \ldots, U_0=i_s) = \Pr(U_{s+1}=j \given U_s=i), $$ for all $i_1,i_2,\ldots, i_s\in N$ and $s\geq0$. Finally, since each $\tau_s$ essentially marks a point at which the process $\bigl((\beta_t, V_t, W_t),t\geq0\bigr)$ is restarted, we have $$ \Pr(U_{s+1}=j \given U_s=i) = \Pr(V_{\tau_{s+1}}=j \given V_{\tau_s}=i) = \Pr(V_{\tau_1}=j \given V_{\tau_0}=i) = \Pr(U_1=j \given U_0=i)=:P_{ij}, $$ for all $s\geq0$. Therefore, $(U_s,\ s\geq0)$ is a time-homogeneous Markov chain on the finite state space~$N$. The following theorem gives the form of the one-step transition matrix $P=\bigl( P_{ij} \bigr)_{i,j\in N}$ in terms of~$\A$ and~$M$. \begin{theo} \label{thm:interp1} The one-step transition matrix $P=\bigl(P_{ij}\bigr)_{i,j\in N}$ of the Markov chain $(U_s,\ s\geq0)$ is given by $$ P_{ij}:=\frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}. $$ \end{theo} \begin{proof} Let $\tau:=\tau_1$. Now, \begin{align} \nonumber P_{ij} &= \Pr( U_1=j \given U_0=i) \\ \nonumber &= \Pr(V_\tau=j \given V_0=i) \\ \nonumber &= \sum_{t=1}^\infty \Pr(V_t=j, \tau=t \given V_0=i) \\ \nonumber &= \sum_{t=1}^\infty \frac{\Pr(V_t=j, \tau=t \given V_0=i)}{\Pr(\tau=t \given V_0=i)}\Pr(\tau=t \given V_0=i) \\ \label{eqn:p.coin} &= \sum_{t=1}^\infty \frac{\Pr(V_t=j, \tau=t \given V_0=i)}{\sum_{k\in N} \Pr(V_t=k, \tau=t \given V_0=i)}\Pr(\tau=t \given V_0=i). 
\end{align} However, \begin{align*} \Pr(V_t=j, \tau=t & \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1, \tau=t \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1, W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1 \given V_0=i, W_u=0, u=1,\ldots,t-2) \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= \Pr(\beta_{t-1}=j, W_{t-1}=1 \given V_{t-1}=i) \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= \Pr(W_{t-1}=1 \given \beta_{t-1}=j, V_{t-1}=i) \Pr(\beta_{t-1}=j \given V_{t-1}=i) \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \\ &= L_{ij}M_j \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i) \end{align*} and substituting this into \eqref{eqn:p.coin} yields \begin{align*} P_{ij} &= \sum_{t=1}^\infty \frac{\Pr(V_t=j, \tau=t \given V_0=i)}{\sum_{k\in N} \Pr(V_t=k, \tau=t \given V_0=i)}\Pr(\tau=t \given V_0=i) \\ &= \sum_{t=1}^\infty \frac{L_{ij}M_j \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i)}{\sum_{k\in N} L_{ik}M_k \Pr(W_u=0, u=1,\ldots,t-2 \given V_0=i)} \Pr(\tau=t \given V_0=i) \\ &= \sum_{t=1}^\infty \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k} \Pr(\tau=t \given V_0=i) \\ &= \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}. \qedhere \end{align*} \end{proof} Clearly, the matrix~$P$ is invariant to rescaling~$\A$. The only effect of rescaling~$\A$ by some constant, say~$h$, is to multiply the mean $1/\gamma_i$ of the distribution of $\tau_{s+1}-\tau_s \given V_{\tau_s}=i$ by a factor of $1/h$. Of course, while such scaling preserves the persymmetry of~$\A$, it only makes sense if $0<h< \min\{1/\gamma_i \suchthat i\in N\}$. \subsection{Alternative interpretation} There is another way to represent how new nucleotides are added to a DNA sequence which provides an alternative derivation of the Markov chain on~$N$ with one-step transition matrix~$P$ of the form \eqref{p.form}. Let $(Y_s,\ s\geq0)$ be a Markov chain on the set of nucleotides~$N$ with transition matrix $K=(K_{ij})_{i,j\in N}$ given by $K_{ij}=L_{ij}\big/\sum_{k\in N}L_{ik}$. 
Thus, the one-step transition matrix of $(Y_s,\ s\geq0)$ is obtained by converting the positive persymmetric~$\A$ into a stochastic matrix by normalizing its rows to sum to unity. Next, let $(B_s,\ s\geq0)$ be a Bernoulli scheme on~$N$ with common distribution $M$. Since $(Y_s,\ s\geq0)$ is a positive recurrent Markov chain on the finite state space~$N$ and $(B_s,\ s\geq0)$ is an i.i.d. sequence also on~$N$ that is independent of $(Y_s,\ s\geq0)$, the joint process $\left(\bigl(Y_s,B_s\bigr),\ s\geq0\right)$ is a positive recurrent Markov chain on the state space $N\times N$ with one-step transition matrix $\left(R_{(i,k),(j,l)} \right)_{(i,k), (j,l)\in N^2}$ given by $R_{(i,k),(j,l)} = K_{ij}M_l$. We shall assume without loss of generality that $Y_0=B_0$. Define a sequence of stopping times $(T_s,\ s\geq0)$ by $T_0:=0$ and $$ T_{s+1}:=\min\{t>T_s+1 \suchthat Y_{t-1}=B_{t-1}=Y_{T_s} \text{ and } Y_t=B_t\}, $$ for $s\geq0$. By definition, $Y_{T_s}=B_{T_s}$ for all $s\geq0$ and $Y_{T_s-1}=B_{T_s-1}$ for all $s\geq1$. Observe that if $Y_{T_s}$ and $B_{T_s}$ are given, for example, $Y_{T_s}=B_{T_s}=i$, then \begin{align*} T_{s+1}-T_s &=\min\{t>T_s+1 \suchthat Y_{t-1}=B_{t-1}=Y_{T_s} \text{ and } Y_t=B_t\} - T_s \\ &=\min\{t>1 \suchthat Y_{t-1}=B_{t-1}=i \text{ and } Y_t=B_t\}. \end{align*} Thus, $T_{s+1}-T_s$ is independent of $T_s$ if $Y_{T_s}$ is given. Furthermore, $T_{s+1}-T_s \given Y_{T_s}=i$ has the same distribution as $T_1\given Y_0=i$. Thus, each $T_s$ is a renewal time at which the Markov chain $\left(\bigl(Y_s,B_s\bigr),\ s\geq0\right)$ is restarted. Next, define the stochastic process $(X_s,\ s\geq0)$ by $X_s:=Y_{T_s}$. Since $\left( \bigl(Y_s,B_s\bigr),\ s\geq0\right)$ is a Markov chain and $(T_s,\ s\geq0)$ is a sequence of stopping times at which it renews, one may employ the strong Markov property to deduce that $(X_s,\ s\geq0)$ is also a Markov chain. It only remains to compute its one-step transition matrix. 
\begin{theo} \label{thm:interp2} The Markov chain $(X_s,\ s\geq0)$ has one-step transition matrix $P=\left(P_{ij}\right)_{i,j\in N}$, where $$ P_{ij}:=\frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}. $$ \end{theo} \begin{proof} Fix $X_0=B_0=i$ and let $T:=T_1$. Then, \begin{align*} P_{ij} &= \Pr(X_1=j\given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_T=j, T=t \given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_t=j \given T=t, X_0=i)\Pr(T=t \given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_t=j, B_t=j \given Y_t=B_t, Y_{t-1}=i, B_{t-1}=i, Y_{t-2}\neq B_{t-2}, \ldots, Y_2\neq B_2, Y_1\neq B_1, Y_0=i, B_0=i) \cdot \\ & \quad \Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \Pr(Y_t=j, B_t=j \given Y_t=B_t, Y_{t-1}=i)\Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \frac{\Pr(Y_t=j, B_t=j \given Y_{t-1}=i)}{\Pr(Y_t=B_t \given Y_{t-1}=i)}\Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \frac{\Pr(Y_t=j, B_t=j \given Y_{t-1}=i)}{\sum_{k\in N}\Pr(Y_t=k, B_t=k \given Y_{t-1}=i)}\Pr(T=t\given X_0=i) \\ &= \sum_{t=2}^\infty \frac{K_{ij}M_j}{\sum_{k\in N} K_{ik}M_k}\Pr(T=t\given X_0=i) \\ &= \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k} \sum_{t=2}^\infty \Pr(T=t\given X_0=i) \\ &= \frac{L_{ij}M_j}{\sum_{k\in N} L_{ik}M_k}, \end{align*} since $$ \sum_{t=2}^\infty \Pr(T=t\given X_0=i)=1 $$ and \[ \Pr(Y_t=j, B_t=j \given Y_{t-1}=i) =\Pr(Y_t=j \given Y_{t-1}=i)\Pr(B_t=j) =K_{ij}M_j. \qedhere \] \end{proof} Thus, the mechanism by which nucleotides are appended to a DNA sequence according to a Markov chain with transition matrix~$P$ may also be described as follows. Suppose that the last nucleotide in the sequence is of type~$i$. Then, one simply waits until both the Markov chain $(Y_s)$ and the i.i.d. sequence $(B_s)$ simultaneously return to state~$i$ and both immediately jump to the same state, say~$j$. When such a consecutive pair of concordant events occurs, a nucleotide of type~$j$ is appended to the sequence. 
At this point, this scheme is repeated, but using~$j$ as the initial state, so that one waits for a coincident return of the two processes to state~$j$ followed by simultaneous transitions to a new state, say~$k$, and so on. The Markov chain $Y_s$ transitions from~$i$ to~$j$ with probability $K_{ij}$ while $B_s$ selects~$j$ with probability~$M_j$. In contrast to the original description given in~\cite{sobottka&hart2011} and in Section~\ref{sec:intro}, two nucleotides of types~$j$ and~$k$ are selected with probabilities~$M_j$ and~$K_{ik}$ respectively and a nucleotide of type~$j$ is then appended to the end of the sequence if and only if they are of the same type. In essence, the mechanism by which nucleotides are appended to the DNA sequence can be thought of as carrying out acceptance-rejection sampling, by repeatedly drawing independent sample nucleotides from the distributions $(K_{ij},\ j\in N)$ and~$M$ until they agree (assuming~$i$ is the type of the nucleotide at the end of the sequence). In this case, the number of draws needed in order to obtain a suitable nucleotide is a geometric random variable with mean $1\big/\sum_{j\in N}K_{ij}M_j$. The first interpretation also amounts to performing acceptance-rejection sampling, but with a two-step procedure in which a nucleotide type~$j$ is first proposed by sampling it from the distribution~$M$ and then is added to the DNA sequence according to an unfair coin toss with probability~$L_{ij}$. Finally, we note that if the matrix~$\A$ is rescaled so that $\sum_{i,j\in N}L_{ij}=1$, it admits a natural interpretation as the stationary dinucleotide probability distribution, that is, $$ L_{ij} = \Pr(Y_t=i, Y_{t+1}=j), \qquad i,j\in N,\ t\geq0. $$ As noted above, $\A$ remains persymmetric under this kind of rescaling. 
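The coincidence mechanism can be simulated directly. In the sketch below, the matrix \texttt{L} and the parameter \texttt{m} are hypothetical example values; starting from a current state $i$, nucleotides are drawn independently from $(K_{ij},\ j\in N)$ and $M$ until they agree, and the empirical distribution of the accepted type is compared with row $i$ of $P$.

```python
# Monte-Carlo check of the coincidence mechanism: from state i, draw one
# nucleotide from K (the row-normalized persymmetric matrix L) and one from M
# until they coincide; the accepted type should be distributed as row i of P.
import random

# Hypothetical positive persymmetric matrix and parameter (example values).
L = [[0.40, 0.10, 0.20, 0.30],
     [0.25, 0.35, 0.15, 0.20],
     [0.30, 0.45, 0.35, 0.10],
     [0.15, 0.30, 0.25, 0.40]]
m = 2.0
M = [x / (2 * m + 2) for x in (m, 1.0, 1.0, m)]
K = [[L[i][j] / sum(L[i]) for j in range(4)] for i in range(4)]

def next_nucleotide(i, rng):
    while True:                                  # acceptance-rejection loop
        j1 = rng.choices(range(4), weights=K[i])[0]
        j2 = rng.choices(range(4), weights=M)[0]
        if j1 == j2:
            return j1

rng = random.Random(0)
i, n = 0, 100_000
counts = [0, 0, 0, 0]
for _ in range(n):
    counts[next_nucleotide(i, rng)] += 1

row = [L[i][j] * M[j] for j in range(4)]         # K_ij M_j is proportional to this
P_i = [x / sum(row) for x in row]                # theoretical row i of P
assert all(abs(counts[j] / n - P_i[j]) < 0.02 for j in range(4))
```

Note that $K_{ij}M_j$ is proportional to $L_{ij}M_j$ (the row sum of $\A$ cancels), so the accepted type indeed follows $P_{ij}$ as in Theorem~\ref{thm:interp2}; the tolerance $0.02$ is generous relative to the Monte-Carlo standard error at this sample size.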
\section{$\aleph$-generated matrices} \label{sec:aleph.generated} Let $\sS_4$ be the set of all $4\times4$ stochastic matrices, and let $\aleph$ be the cone of positive persymmetric matrices (matrices $\A=(L_{ij})_{i,j\in N}$ with positive entries such that $L_{ij}=L_{\alpha(j)\alpha(i)}$ for all $i,j\in N$). Given $P\in \sS_4$, we will say that $(P,\pi)$ is a stationary Markov chain if the vector $\pi=(\pi_i)_{i\in N}$ is such that $\pi P=\pi$. Let $\digamma:\aleph\times (0,+\infty)\to \sS_4$ be the map which takes $(\A,m)$ to the matrix $\digamma(\A,m)$, which is given for all $i,j\in N$ by \begin{equation} \label{digamma} \left(\digamma(\A,m)\right)_{ij} := \frac{L_{ij}M_j}{\sum_{k=1}^4 L_{ik}M_k}, \qquad \text{where}\qquad M_\ell=\left\{\begin{array}{lll} m/(2m+2) &\text{, if}& \ell=1,4, \\ 1/(2m+2)&\text{, if}& \ell=2,3. \end{array}\right. \end{equation} Since $\A$ is a positive matrix and $m>0$, the matrix $\digamma(\A,m)$ is primitive, that is, irreducible and aperiodic. \begin{defn} \label{defn:aleph.generated} We say that $P\in\sS_4$ is $\aleph$-generated if there exists $(\A,m)\in \aleph\times (0,+\infty)$ such that $P=\digamma(\A,m)$. 
\end{defn} Let $\Phi:\sS_4\times (0,+\infty)\times (0,+\infty)\to\aleph$ be the map defined for all stochastic matrices $P$, $\tilde{m}>0$ and $\tilde{s}>0$ by \begin{equation} \label{AlephEstimated} \Phi(P,\tilde{m},\tilde{s}):= \tilde{s} \begin{pmatrix} a^{11}_{_P}\kappa_{_P}/\tilde{m} & a^{12}_{_P} & 1 & \kappa_{_P}/\tilde{m} \\\\ a^{21}_{_P} & a^{22}_{_P}\epsilon_{_P} \tilde{m} & \epsilon_{_P} \tilde{m} & 1 \\\\ a^{21}_{_P}a^{42}_{_P} & a^{22}_{_P}\epsilon_{_P} a^{32}_{_P} \tilde{m} & a^{22}_{_P}\epsilon_{_P} \tilde{m} & a^{12}_{_P} \\\\ a^{11}_{_P}a^{41}_{_P}\kappa_{_P}/\tilde{m} & a^{21}_{_P}a^{42}_{_P} & a^{21}_{_P} & a^{11}_{_P}\kappa_{_P}/\tilde{m} \end{pmatrix}, \qquad\text{where}\qquad \begin{array}{lcl} a^{ij}_{_P} & := & P_{ij}/P_{i\alpha(j)};\\\\ \kappa_{_P} & := & P_{14}\big/P_{13};\\\\ \epsilon_{_P} & := & P_{23}\big/P_{24}. \end{array} \end{equation} From \eqref{digamma} it follows that if $P$ is an $\aleph$-generated matrix for some $\A=\bigl(L_{ij}\bigr)_{i,j\in N}\in\aleph$ and $m\in(0,+\infty)$, then the nine ratios that appear in \eqref{AlephEstimated} become: \begin{equation}\label{nine_ratios} a^{ij}_{_P} = L_{ij}/L_{i\alpha(j)},\qquad \kappa_{_P} = L_{14}m/L_{13} \qquad \text{and}\qquad \epsilon_{_P} = L_{23}/L_{24}m. \end{equation} \begin{theo} \label{digamma_inv} For any $\aleph$-generated matrix $P$, $$ \digamma^{-1}(P) = \left\{\bigl(\Phi(P,\tilde{m},\tilde{s}),\tilde{m}\bigr):\ \tilde{m}>0,\ \tilde{s}>0\right\}. $$ \end{theo} \begin{proof} Let $P=\digamma(\A,m)$ for some fixed $\A\in\aleph$ and $m>0$. Given $\tilde{\A}:=\Phi(P,\tilde{m},\tilde{s})$ for any choice of $\tilde{m},\tilde{s}>0$, it is straightforward to check that $\digamma(\tilde{\A},\tilde{m})=\digamma(\A,m)=P$. Therefore, $\left\{\big(\Phi(P,\tilde{m},\tilde{s}),\tilde{m}\big):\ \tilde{m}>0,\ \tilde{s}>0\right\}\subseteq \digamma^{-1}(P)$. On the other hand, suppose $\A'=(L'_{ij})_{i,j\in N}\in\aleph$ and $m'>0$ are such that $(\A',m')\in\digamma^{-1}(P)$. 
Note that, since $P=\digamma(\A,m)=\digamma(\A',m')$, it follows from \eqref{nine_ratios} that \begin{equation*} a^{ij}_{_P} = L_{ij}/L_{i\alpha(j)} = L'_{ij}/L'_{i\alpha(j)}, \qquad \kappa_{_P} = L_{14}m\big/L_{13} = L'_{14}m'\big/L'_{13}, \qquad \epsilon_{_P} = L_{23}\big/L_{24}m = L'_{23}\big/L'_{24}m'. \end{equation*} Hence, $\A'=\Phi(P,m',L'_{13})$ and so $\digamma^{-1}(P)\subseteq\left\{\bigl(\Phi(P,\tilde{m},\tilde{s}),\tilde{m}\bigr):\ \tilde{m}>0,\ \tilde{s}>0\right\}$, which completes the proof.\qedhere \end{proof} Since $\Phi$ is linear in $\tilde{s}$, instead of working with $\Phi$ we can work with the map $\varphi:\sS_4\times(0,+\infty)\to\aleph$ defined by \begin{equation}\label{varphi} \varphi(P,\tilde{m}):=\Phi(P,\tilde{m},1). \end{equation} Then, $\digamma^{-1}(P)=\left\{\bigl(\tilde s\varphi(P,\tilde{m}),\tilde{m}\bigr):\ \tilde{m}>0,\ \tilde{s}>0\right\}$. The next corollary is a simple consequence of \eqref{varphi} and Theorem~\ref{digamma_inv}. \begin{cor} \label{P-aleph1} A stochastic matrix $P$ is $\aleph$-generated if and only if $P$ is obtained by probability-normalizing the rows of the matrix $\A:=\varphi(P,1)$.\qed \end{cor} Given a vector $\mathbf{a}=(a_1,a_2,a_3,a_4)\in \R^4$, let $D(\mathbf{a})$ be the $4\times 4$ diagonal matrix with $\mathbf{a}$ on its diagonal. \begin{cor}\label{P-aleph2} A stochastic matrix $P$ is $\aleph$-generated if and only if there exists a strictly positive vector $\mathbf{x}=(x_i)_{i\in N}\in\R^4$ such that $D(\mathbf{x})P\in\aleph$. \end{cor} \begin{proof} Suppose that $P$ is $\aleph$-generated and define $\mathbf{x}$ to be the vector with elements given by $ x_i:=\sum_{k=1}^4\Bigl(\varphi(P,1)\Bigr)_{ik} $.\sloppy\ Then, from Corollary~\ref{P-aleph1} we have that $D(\mathbf{x})P=\varphi(P,1)\in\aleph$. 
Conversely, if $D(\mathbf{x})P=\A\in\aleph$ for some $\mathbf x\in\R^4$, then $P=\digamma(\A,1)$ and $\mathbf x$ contains the row sums of $\A$.\qedhere \end{proof} Note that given an $\aleph$-generated matrix $P$, there exist infinitely many vectors $\mathbf{x}$ that satisfy the stated property, all of which are collinear. Because of this, we can decide whether or not a stochastic matrix is $\aleph$-generated by setting $$ \mathbf{x}_P = \left(\frac{P_{j\alpha(i)}}{P_{i\alpha(j)}} \right)_{i\in N} = \frac1{\sum_{k=1}^4\bigl(\varphi(P,1)\bigr)_{jk}} \left( \sum_{k=1}^4\bigl(\varphi(P,1)\bigr)_{ik} \right)_{i\in N}, $$ for a fixed $j\in\{1,2,3,4\}$, and checking whether or not $D(\mathbf{x}_P)P$ belongs to $\aleph$. Observe that $\mathbf{x}_P$ is expressed in terms of elements of~$P$. In particular, we can choose \begin{equation*} \mathbf{x}_P=\left(\frac{P_{44}}{P_{11}},\frac{P_{43}}{P_{21}},\frac{P_{42}}{P_{31}},1\right). \end{equation*} \section{$\aleph$-families and generators} \label{sec:families} From the preceding discussion, it is evident that a given $\aleph$-generated stochastic matrix can be generated using any one of a multitude of persymmetric matrices. We proceed to examine this non-uniqueness in greater detail. \begin{defn} The $\aleph$-family of an $\aleph$-generated matrix $P$ is the set $$ \aleph(P):=\left\{\varphi(P,\tilde{m}):\ \tilde{m}>0\right\}. $$ The family of generators of an $\aleph$-generated matrix $P$ is the set $$ \aleph_G(P):=\left\{\bigl(\varphi(P,\tildem),\tildem\bigr):\ \tildem>0\right\}. $$ \end{defn} The import of the next theorem is that $\aleph$ can be partitioned into equivalence classes. Firstly, any persymmetric matrix can be used to generate a whole host of $\aleph$-generated matrices simply by varying the value of the parameter~$m$. 
Thus, there are families of persymmetric matrices that give rise to disjoint collections of $\aleph$-generated matrices and these families are mutually exclusive, partitioning the space~$\aleph$ into equivalence classes. Secondly, for each $\aleph$-generated matrix~$P$, there is a set of persymmetric matrices, each of which generates~$P$ when combined with the appropriate value of~$m$. This leads to an equivalence relation on the set $\aleph\times (0,\infty)$. \begin{theo} Suppose $P$ and $Q$ are two $\aleph$-generated matrices. Then: \begin{enumerate} \item Either $\aleph(P)\cap\aleph(Q)=\emptyset$ or $\aleph(P)=\aleph(Q)$. \item Either \begin{enumerate} \item $\aleph_G(P)\cap\aleph_G(Q)=\emptyset$ and $P\neq Q$; or \item $\aleph_G(P)=\aleph_G(Q)$ and $P=Q$. \end{enumerate} \end{enumerate} \end{theo} \begin{proof} \ \par\nobreak \begin{enumerate} \item Suppose $\aleph(P)\cap\aleph(Q)\neq\emptyset$ and choose an $\A=(L_{ij})_{i,j\in N}\in\aleph(P)\cap\aleph(Q)$. Let $m^{(1)},m^{(2)}\in(0,+\infty)$ be such that $P=\digamma(\A,m^{(1)})$ and $Q=\digamma(\A,m^{(2)})$. We begin by proving that $\aleph(P)\subseteq\aleph(Q)$. Let $\mathfrak{B}=(B_{ij})_{i,j\in N}\in\aleph(P)$ and let $m^{(3)}>0$ be such that $P=\digamma(\mathfrak{B},m^{(3)})$. Since $P$ and $Q$ can be generated by the same $\A$, they share the same ratios $a^{ij}_{\cdot}$ listed in~\eqref{nine_ratios}, that is, \begin{equation} \label{a-h} a^{ij}_{_P} = a^{ij}_{_Q} \end{equation} The other two ratios for $P$ will satisfy the following equalities: \begin{gather*} \kappa_{_P} := P_{14}/P_{13} = B_{14}m^{(3)}/B_{13} = L_{14}m^{(1)}/L_{13}, \\ \epsilon_{_P} := P_{23}/P_{24} = B_{23}/B_{24}m^{(3)} = L_{23}/L_{24}m^{(1)}, \end{gather*} which means that \begin{equation} \label{LB} B_{14}m^{(3)}\big/B_{13}m^{(1)} = L_{14}\big/L_{13},\qquad\text{and}\qquad B_{23}m^{(1)}\big/B_{24}m^{(3)} = L_{23}\big/L_{24}. 
\end{equation} On the other hand, the last two ratios for $Q$ are: \begin{equation} \label{k-e} \begin{gathered} \kappa_{_Q} := Q_{14}/Q_{13} = L_{14}m^{(2)}/L_{13} = B_{14}m^{(3)}m^{(2)}/B_{13}m^{(1)} = \frac{m^{(2)}}{m^{(1)}}\kappa_{_P} ,\\ \epsilon_{_Q} := Q_{23}/Q_{24} = L_{23}/L_{24}m^{(2)} = B_{23}m^{(1)}/B_{24}m^{(3)}m^{(2)} = \frac{m^{(1)}}{m^{(2)}}\epsilon_{_P}, \end{gathered} \end{equation} where the last equality in each line follows from \eqref{LB}. Setting $\tildem:= m^{(2)}m^{(3)}/m^{(1)}$, and taking \eqref{a-h} and \eqref{k-e} together with the last line in the proof of Theorem \ref{digamma_inv} yields $\varphi(Q,\tildem) =\varphi(P, m^{(3)}) = \mathfrak{B}$. Therefore, $\mathfrak{B}\in\aleph(Q)$ and so $\aleph(P)\subseteq\aleph(Q)$. Next, let $\mathfrak B\in\aleph(Q)$. By symmetry, another application of the above argument allows us to conclude that $\mathfrak B\in\aleph(P)$ and hence $\aleph(Q)\subseteq\aleph(P)$. Therefore, $\aleph(P)=\aleph(Q)$. \item By definition, either $\digamma^{-1}(P)=\digamma^{-1}(Q)$, in which case $P=Q$, or $\digamma^{-1}(P)\cap\digamma^{-1}(Q)=\emptyset$ and $P\neq Q$. Now, $\aleph_G(P) \subset \digamma^{-1}(P)$ since $\digamma^{-1}(P) = \left\{ \bigl(\tildes\A,\tildem\bigr) \suchthat \tildes>0,\ \bigl(\A,\tildem\bigr)\in\aleph_G(P) \right\}$, and the result follows. \qedhere \end{enumerate} \end{proof} \begin{defn} Given an $\aleph$-generated matrix $P$ and $\tilde{m}\in (0,+\infty)$, we define the {\em $\tilde{m}$-canonical representative} of $\aleph(P)$ to be the matrix $\A_{P,\tilde{m}}:=\varphi(P,\tilde{m}/\epsilon_{_P})$. \end{defn} Note that $\bigl(\A_{P,\tilde{m}}, \tilde{m}/\epsilon_{_P}\bigr)$ is a generator of $P$. Furthermore, from \eqref{nine_ratios}, if $P$ and $Q$ are two $\aleph$-generated matrices with $\aleph(P)=\aleph(Q)$, then $a^{ij}_{_P}=a^{ij}_{_Q}$, for all $i,j$, and $\kappa_{_P}\epsilon_{_P}=\kappa_{_Q}\epsilon_{_Q}$. 
This gives \begin{cor}\label{cor canonical_representative} Two $\aleph$-generated matrices belong to the same $\aleph$-family if and only if they have identical canonical representatives, that is, if $P$ and $Q$ are $\aleph$-generated, then $$ \aleph(P)=\aleph(Q)\qquad\Longleftrightarrow\qquad \A_{P,1}=\A_{Q,1}\quad \Longleftrightarrow \quad \A_{P,\tilde{m}}=\A_{Q,\tilde{m}}, \text{ for all } \tilde{m}>0. \qed $$ \end{cor} \section{Properties of $\aleph$-generated matrices} \label{sec:properties} Given the stationary Markov chain $(P,\pi)$, consider the following related stationary Markov chains: $(P^\alpha,\pi^\alpha)$ is the complement Markov chain of $(P,\pi)$, where $P_{ij}^\alpha := P_{\alpha(i)\alpha(j)}$ and $\pi^\alpha_i:=\pi_{\alpha(i)}$; $(P^*,\pi^*)$ denotes the reverse Markov chain of $(P,\pi)$, where $P_{ij}^*:=\pi_jP_{ji}\big/\pi_i$ and $\pi^*_i:=\pi_{i}$; and $(\tilde P,\tilde \pi)$ is the reverse complement Markov chain of $(P,\pi)$, where $\tilde P_{ij} = \pi_{\alpha(j)}P_{\alpha(j)\alpha(i)}\big/\pi_{\alpha(i)}$ and $\tilde\pi_i=\pi_{\alpha(i)}$. Note that $\tilde P=(P^\alpha)^*= (P^*)^\alpha$ and $\tilde \pi=(\pi^\alpha)^*=(\pi^*)^\alpha$. The names complement, reverse and reverse complement come from the genetics and Markov chain literature, where they describe analogous relationships between nucleotide sequences and between Markov chains. \begin{theo}\label{all_or_none} The matrices~$P$, $P^\alpha$, $P^*$ and $\tilde P$ are either all $\aleph$-generated or none of them are. \end{theo} \begin{proof} Assume $P$ is $\aleph$-generated and take $\A:=\varphi(P,1)=\bigl(L_{ij}\bigr)_{i,j\in N}$. Define $\A^\alpha=(L^\alpha_{ij})_{i,j\in N} \in \aleph$, where $L_{ij}^\alpha:=L_{\alpha(i)\alpha(j)}$.
Then, \begin{align*} P_{ij}^\alpha &= P_{\alpha(i)\alpha(j)} = \frac{L_{\alpha(i)\alpha(j)}}{\sum_{k=1}^4L_{\alpha(i)k}} = \frac{L_{ij}^\alpha}{\sum_{k=1}^4L_{ik}^\alpha}, \quad i,j\in N \end{align*} and $P^\alpha$ is $\aleph$-generated with $P^\alpha=\digamma\bigl(\A^\alpha, 1\bigr)$. To check that $P$ is $\aleph$-generated implies that $P^*$ is also $\aleph$-generated, it suffices by Corollary \ref{P-aleph2} to set $\ds \mathbf{x}_{P^*}=\left(\frac{P_{44}^*}{P_{11}^*},\frac{P_{43}^*}{P_{21}^*},\frac{P_{42}^*}{P_{31}^*},1\right)$ and prove that $D\bigl(\mathbf{x}_{P^*}\bigr)P^*\in\aleph$. In fact, $\left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{ij}=\left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{\alpha(j)\alpha(i)}$ because \begin{align*} \left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{ij} &= (\mathbf{x}_{P^*})_i P_{ij}^* = \frac{P_{4\alpha(i)}^*}{P_{i1}^*}P_{ij}^* = \frac{\frac{\pi_{\alpha(i)}}{\pi_4}\ P_{\alpha(i)4}}{\frac{\pi_1}{\pi_i}\ P_{1i}}\frac{\pi_j}{\pi_i}\ P_{ji} =\frac{\pi_{\alpha(i)}\pi_j}{\pi_1\pi_4}\frac{P_{\alpha(i)4}}{P_{1i}} P_{ji} \end{align*} and similarly $$ \left(D\bigl(\mathbf{x}_{P^*}\bigr)P^*\right)_{\alpha(j)\alpha(i)} = \frac{\pi_{j}\pi_{\alpha(i)}}{\pi_1\pi_4}\frac{P_{j4}}{P_{1\alpha(j)}} P_{\alpha(i)\alpha(j)}, $$ while \begin{align*} \frac{P_{\alpha(i)4}}{P_{1i}}P_{ji} &=\frac{L_{\alpha(i)4}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{L_{ji}}{L_{1i}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} =\frac{L_{\alpha(i)4}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{L_{\alpha(i)\alpha(j)}}{L_{\alpha(i)4}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} \\ &=\frac{L_{\alpha(i)\alpha(j)}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} =\frac{L_{\alpha(i)\alpha(j)}}{\sum_{k=1}^4 L_{\alpha(i)k}} \frac{L_{j4}}{L_{1\alpha(j)}} \frac{\sum_{k=1}^4 L_{1k}}{\sum_{k=1}^4 L_{jk}} =\frac{P_{j4}}{P_{1\alpha(j)}} P_{\alpha(i)\alpha(j)}.
\end{align*} Next, to check that $\tilde P$ is $\aleph$-generated given that $P$ is $\aleph$-generated, we need only note that $\tilde P=(P^\alpha)^*$ and apply the above two results one after the other. The proof is completed by realizing that $P=(P^\alpha)^\alpha=(P^*)^*=\tilde{\tilde P}$ and hence being $\aleph$-generated is a solidarity property of the four matrices. \qedhere \end{proof} Most bacterial DNA sequences can be segmented into two halves called chirochores~\cite{FrankLobry00}, and the two stationary Markov chains that empirically approximate their first-order structure are reverse complements of each other \cite{sobottka&hart2011}. If the DNA sequence conforms to the S-H model, then the dinucleotide distribution in one of the chirochores is approximated by $(P,\pi)$ with $P$ being $\aleph$-generated. However, it was an open question as to whether or not the other chirochore would also be approximated by an $\aleph$-generated Markov chain. Theorem \ref{all_or_none} above answers this question in the affirmative. Furthermore, it is common to find that the stationary Markov chain $(W,\omega)$ that approximates the first-order structure of an entire DNA sequence satisfies {\em intra-strand parity} \cite{AlbrechtBuehler2006,hart&martinez2011}, that is, $\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}=\tilde\omega_i\tilde W_{ij}$ for all $i,j\in N$. Intra-strand parity has been observed in the DNA sequences of many organisms such as bacteria, archaea, plants and animals, but not in other sequences such as those from single-stranded viruses and organelles. The next theorem relates intra-strand parity of dinucleotides to the $\aleph$-generated matrices (cf. the direct characterization in~\cite[Proposition 1]{hart&martinez2011}) and shows that $\aleph$-generated matrices satisfy a weaker property than intra-strand parity. \begin{theo}\label{ISP<->aleph-generated} Let $(W,\omega)$ be a stationary Markov chain.
Then $(W,\omega)$ satisfies $\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}$ for all $i,j\in N$ if and only if it is $\aleph$-generated and the matrix $\A:=\varphi(W,1)=\bigl(L_{ij}\bigr)_{i,j\in N}$ that generates it satisfies $S_i=S_{\alpha(i)}$ for $i\in N$, where $S_i:=\sum_{k=1}^4L_{ik}$. Furthermore, if $W$ complies with intra-strand parity, then its stationary distribution~$\omega$ can be explicitly expressed as $\omega=\frac1{2(S_1+S_2)}(S_1,S_2,S_2, S_1)$. \end{theo} \begin{proof} \ \par\nobreak \noindent [$(\Longrightarrow)$] It can be seen that $W$ is $\aleph$-generated by observing that $\bigl(D(\omega)W\bigr)_{ij}=\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}=\bigl(D(\omega)W\bigr)_{\alpha(j)\alpha(i)}$. Next, let $\A=\varphi(W,1)$. One can easily check that $\omega_iW_{ij}=\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}$, for $i,j\in N$, implies $\omega_i=\omega_{\alpha(i)}$ for all $i\in N$. Therefore, $ \omega_i\frac{L_{ii}}{\sum_{k=1}^4L_{ik}} = \omega_{\alpha(i)}\frac{L_{\alpha(i)\alpha(i)}}{\sum_{k=1}^4L_{\alpha(i) k}} = \omega_i\frac{L_{ii}}{\sum_{k=1}^4L_{\alpha(i) k}}, $ for all $i\in N$, and hence $\sum_{k=1}^4L_{ik}=\sum_{k=1}^4L_{\alpha(i)k}$.\sloppy \noindent [$(\Longleftarrow)$] Suppose $W$ is obtained by normalizing the rows of a matrix $\A=(L_{ij})_{i,j\in N}\in\aleph$, that is, $ W=\left(\frac{L_{ij}}{S_i}\right)_{i,j\in N} $, where $S_i:=\sum_{k=1}^4L_{ik}$. Suppose that $\A$ satisfies $S_i=S_{\alpha(i)}$ for $i=1,2$. It is easy to check that $\omega:=\frac1{2(S_1+S_2)} (S_1,S_2,S_2,S_1)$ is the stationary distribution of~$W$. Hence, it follows that for all $i,j\in N$, \begin{equation*} \omega_iW_{ij}=\frac{S_i}{2(S_1+S_2)}\frac{L_{ij}}{S_i} =\frac{L_{ij}}{2(S_1+S_2)} =\frac{S_{\alpha(j)}}{2(S_1+S_2)}\frac{L_{\alpha(j)\alpha(i)}}{S_{\alpha(j)}} =\omega_{\alpha(j)}W_{\alpha(j)\alpha(i)}. 
\qedhere \end{equation*} \end{proof} \section{Applications and final remarks} \label{sec:conclusion} This article has given a mathematical analysis of the S-H model and elucidated its properties. We conclude with some remarks about the application of the results presented here. Corollary~\ref{cor canonical_representative} provides a way of deciding whether or not two or more $\aleph$-generated matrices can be generated from a single persymmetric matrix $\A$ in conjunction with different values of the parameter~$m$. Meanwhile, Theorem \ref{ISP<->aleph-generated} shows that Markov chains satisfying intra-strand parity in dinucleotides form a special class of $\aleph$-generated matrices. Since $\aleph$-generated matrices possess a weaker structure than that encapsulated by intra-strand parity, they may be useful for capturing the dinucleotide structure in genomic sequences that do not exhibit intra-strand parity. For the purposes of applications, Corollaries~\ref{P-aleph1} and~\ref{P-aleph2} are useful for constructing measures of how close the estimated stationary Markov chain of a bacterial DNA sequence is to being $\aleph$-generated. Given $P\in \sS_4$, we can define the following two examples of such measures: \noindent{\bf Measure 1:} Let $\proj(Q)$ be the orthogonal projection of a $4\times 4$ positive matrix~$Q$ onto~$\aleph$, and define $$\delta_1(P):=\min_{\mathbf{x}=(x_1,x_2,x_3,1)} \norm{D(\mathbf{x})P-\proj\Bigl(D(\mathbf{x})P\Bigr)}. $$ The quantity $\delta_1(P)$ is zero if and only if $P$ is $\aleph$-generated. Otherwise, $\delta_1(P)$ gives the minimal distance between the space~$\aleph$ and a matrix $D(\mathbf{x})P$ which generates $P$ according to the model (but which does not belong to~$\aleph$). Note that $\delta_1(P)$ can be computed analytically.
The minimum in the expression for $\delta_1(P)$ is attained at the point $\mathbf{x}=(x_1,x_2,x_3,1)$, where $$ \begin{pmatrix} x_1 \\ x_2 \\ x_3 \end{pmatrix} = \begin{pmatrix} p_{11}^2+p_{12}^2+p_{13}^2 & -p_{24}p_{13} & -p_{34}p_{12} \\ -p_{13}p_{24} & p_{21}^2+p_{22}^2+p_{24}^2 & -p_{33}p_{22} \\ -p_{12}p_{34} & -p_{22}p_{33} & p_{31}^2+p_{33}^2+p_{34}^2 \end{pmatrix}^{-1} \begin{pmatrix} p_{44}p_{11} \\ p_{43}p_{21} \\ p_{42}p_{31} \end{pmatrix}. $$ \noindent{\bf Measure 2:} Let $\epsilon$ be a $4\times 4$ matrix, $P(\epsilon):=P+\epsilon$, and $\mathbf{x}=(x_1,x_2,x_3,1)$ be a positive vector. Define $\delta_2(P)$ as the solution of the following optimization problem: $$ \text{min } \sum_{i,j\in N} \epsilon_{ij}^2 \qquad\text{subject to } \left\{\begin{array}{l} P(\epsilon) \in\sS_4;\\ D(\mathbf{x})P(\epsilon)-\proj\Bigl(D(\mathbf{x})P(\epsilon)\Bigr)=\mathbf{0}. \end{array}\right. $$ As was the case with $\delta_1(P)$, we have $\delta_2(P)=0$ if and only if $P$ is $\aleph$-generated; otherwise, $\delta_2(P)$ gives the shortest squared Frobenius distance between $P$ and some $\aleph$-generated stochastic matrix. There being no closed-form solution to the optimization problem, the computation of $\delta_2(P)$ would need to be implemented using numerical methods. Finally, the development of statistical hypothesis tests based on these measures, together with further statistical analyses and their application to real bacterial genomes, is planned for future publication. \section*{Acknowledgments} This work was supported by the Center for Mathematical Modeling CONICYT Project/Grant PIA AFB 170001, Fondecyt Regular Grant 1070344 and CNPq-Brazil grants 308575/2015-6 and 301445/2018-4. M. Sobottka was partially supported by CNPq-Brazil grant 54091/2017-6. Part of this work was carried out while M. Sobottka was visiting the Center for Mathematical Modeling at the University of Chile. \small \bibliographystyle{plain}
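As an illustration of how a Measure 1-style quantity can be checked numerically, the following sketch builds an $\aleph$-generated stochastic matrix by row-normalizing a positive persymmetric matrix (the $m=1$ case, where all the weights $M_j$ coincide) and evaluates the Frobenius distance to~$\aleph$ using the candidate vector $\mathbf{x}_P=(P_{44}/P_{11},P_{43}/P_{21},P_{42}/P_{31},1)$ of Corollary~\ref{P-aleph2} rather than performing the full minimization; the helper names `persym_proj` and `aleph_distance` are our own, and the projection is taken entrywise as the average of $Q$ with its reflection about the anti-diagonal.

```python
import numpy as np

def persym_proj(Q):
    """Orthogonal projection onto the persymmetric matrices,
    i.e. those satisfying Q_ij = Q_{5-j,5-i} (symmetry about
    the anti-diagonal, with alpha(i) = 5 - i)."""
    J = np.fliplr(np.eye(4))          # exchange matrix
    return 0.5 * (Q + J @ Q.T @ J)

def aleph_distance(P):
    """Frobenius distance ||D(x)P - proj(D(x)P)|| for the
    candidate x_P = (P44/P11, P43/P21, P42/P31, 1)."""
    x = np.array([P[3, 3] / P[0, 0], P[3, 2] / P[1, 0],
                  P[3, 1] / P[2, 0], 1.0])
    DP = np.diag(x) @ P
    return np.linalg.norm(DP - persym_proj(DP))

# Build an aleph-generated P: row-normalize a positive persymmetric L.
rng = np.random.default_rng(0)
L = persym_proj(rng.uniform(0.5, 2.0, (4, 4)))  # persymmetric, positive
P = L / L.sum(axis=1, keepdims=True)            # row-stochastic (m = 1 case)

print(aleph_distance(P))  # vanishes (up to round-off) for aleph-generated P
```

For a stochastic matrix that is not $\aleph$-generated the same quantity is strictly positive, which is what makes it usable as a closeness measure.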
\section{Introduction} Discovering and understanding nonequilibrium scaling behaviors near the quantum critical point (QCP) is one of the most interesting problems in condensed matter physics and statistical physics. Continuous quantum phase transitions (QPTs) occur when the control parameter in a Hamiltonian is tuned across a QCP at zero temperature \cite{sachdev2011quantum}. In a continuous phase transition, the order parameter vanishes smoothly as the critical point is approached. The existence of a QCP is usually accompanied by nonanalyticity in the ground state energy, and it usually connects two quantum phases with different symmetries. Strong quantum fluctuations near a QCP lead to the breaking of a symmetry and the subsequent build-up of a macroscopic order. The emergence of an order parameter and the nonanalyticity in the ground state energy are related by the Hellmann-Feynman theorem. Universality, which originates from the scale invariance near a critical point, is a remarkable feature of continuous phase transitions \cite{cardy1996scaling,Stanley1999}. As is known from equilibrium critical phenomena in classical systems, universal behaviors emerge in the vicinity of a critical point, where a large number of degrees of freedom are strongly correlated. A set of critical exponents associated with the critical point can be used to describe the scaling behaviors of the relevant quantities near the transition. Moreover, the classical notion of universality in thermal phase transitions has been extended successfully to describe quantum critical phenomena driven by quantum fluctuations at zero temperature \cite{sachdev2011quantum}. Cold atom experiments facilitate the study of quantum phases and their associated QPTs in a closed quantum many-body system \cite{Bloch2008,Polkovnikov2011, stamperkurn2013spinor,Langen2015}.
A wide variety of dynamical properties can be monitored because the relevant energy scales in cold atom systems are much smaller than in conventional condensed matter systems, so the relaxation and response times are longer and easier to follow experimentally. The equilibrium relaxation time $t_\text{eq}$ of a quantum system, which is typically measured by the inverse of the excitation gap ($\Delta$), diverges in the thermodynamic limit (TDL) because of the gap closing at the QCP. Consequently, any driving of the control parameter at a finite rate causes nonequilibrium effects. An effective approach for the description of such nonequilibrium effects is the celebrated Kibble-Zurek (KZ) mechanism \cite{kibble1976topology,zurek1985cosmological,zurek1996cosmological}, which was first proposed in cosmology by Kibble and then extended to condensed matter physics by Zurek. \par The KZ mechanism has been extensively studied in both classical and quantum systems, theoretically \cite{ Damski2005,Zurek2005,Damski2006,Damski2007,Lamacraft2007,Saito2007,Uwe2007,Cucchietti2007, DelCampo2010,Sabbatini2011,Saito2013a,Huang2014,Lee2015, Jaschke2016} as well as experimentally \cite{chuang1991cosmology,bauerle1996laboratory,ruutu1996vortex, Chen2011,Baumann2011,Lamporesi2013,Corman2014, Navon2015,Clark2016,Anquez2016,Aidelsburger2017a}. A signature scaling relation between the number of defects or excitations and the driving rate is predicted when the system is driven across a continuous phase transition. The key enabling element lies in the possibility of combining the equilibrium critical exponents and the driving rate to characterize the nonequilibrium effects of a finite driving rate. The main idea involves separating the whole dynamics of such a driven process into an adiabatic region plus an impulse region.
When the driven parameter is far from the critical point, the dynamics is approximately adiabatic because the equilibrium relaxation time is short; when the critical point is approached, the so-called critical slowing down means that the system dynamics can be regarded as frozen and described by the impulse approximation, and nonadiabatic effects appear. The instant separating the two regions is obtained by equating the time remaining before the QCP is reached, denoted $t_{\rm KZ}$, to the equilibrium relaxation time $t_{\rm eq}$, {\it i.e.,} $t_{\rm KZ}\simeq t_\text{eq}\simeq {1}/{\Delta}$. The different dynamic regions then originate from the competition between two time (length) scales \cite{Huang2014}: the time (length) scale set by the external driving and the intrinsic relaxation time $t_\text{eq}$ (correlation length $\xi$). Spinor atomic Bose-Einstein condensates (BECs) exhibit rich magnetic phases in the presence of an external magnetic field, which makes them a suitable platform for studying the dynamics of QPTs. In this work, we focus on a spin-1 BEC with ferromagnetic interactions, such as for $\Rb87$ atoms \cite{Stenger1998,Barrett2001,Chang2004,Sadler2006,Luo2017}. Invariably, current atomic BEC systems are trapped in a finite volume by magnetic or optical means and contain a finite number of atoms, although the total atom number can be varied to some degree from experiment to experiment. In the pioneering experimental work of Ref.\,\cite{Anquez2016}, aimed at checking the predictions of the KZ mechanism, the scaling behavior for the impulse stage duration was confirmed. However, the deviation of the measured scaling exponent from the mean-field critical exponent is evident, especially in the long ramp time limit. This is presumably due to the neglect of the finite-size effect, which enters by opening a gap at the QCP and smoothing out the relevant phase transition observables.
The finite-size effect cannot be ignored, especially when the finite gap opening at the QCP is comparable with the energy scale of the dynamics under investigation. Moreover, a finite gap enables near-adiabatic preparation of metrologically meaningful quantum states \cite{Luo2017}. In this work, we study the equilibrium and dynamical properties as the quadratic Zeeman shift is tuned through a continuous QCP, as in recent experiments \cite{Anquez2016,Luo2017}. We combine the KZ mechanism with finite-size scaling theory to obtain universal dynamical scaling functions for relevant phase transition observables and successfully verify their scaling collapse in finite systems by using the mean-field critical exponents. We cover the whole range of the driving rate and find that the dynamics in a finite system can be described by adiabatic perturbation theory \cite{Polkovnikov2008a,DeGrandi2010} in the very slow driving limit, and becomes far-from-equilibrium and non-universal in the fast driving limit. \par This paper is organized as follows. We first discuss the QPT for our model in Sect.\,\ref{subsec:modelHam} and extract the critical exponents from mean-field results in Sect.\,\ref{subsec:exponents}. In Sect.\,\ref{subsec:FSSeq}, we study the finite-size scaling for equilibrium observables. Section\,\ref{sec:dynamic} is devoted to a study of the dynamical properties for a linear driving protocol, where three distinct dynamical regions are analyzed. The consequent predictions can be tested in existing experimental setups. Finally, in Sect.\,\ref{sec:conclusion}, we conclude with discussions.\\ \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Fig1.pdf} \caption{{\bf Mean-field phase diagram.} The mean-field phase diagram for our model at $q > 0$ in the subspace of zero longitudinal magnetization $F_z = 0$. The broken-axisymmetry phase (BA phase) and the polar phase are separated at the quantum critical point (QCP) $q_c = 2$.
The mean-field values of the ground state observables: fractional population $\mathcal{N}$ (black solid line) and transverse magnetization $\mathcal{M}$ (red solid line). The mean-field critical exponents can be obtained from the scaling behaviors near the QCP for $\mathcal{N}$ and $\mathcal{M}$ (see the main text).}\label{fig:phasediag} \end{figure} \section{Model Hamiltonian and the Critical exponents}\label{sec:modelHam} \subsection{Spin-1 BEC Hamiltonian and its QPT}\label{subsec:modelHam} {\it Model.}---For a spin-1 BEC of $\Rb87$ or $\Na23$ atoms, the spin-dependent interaction strength is usually much weaker than the density-density interactions, so it is reasonable to make the single-mode approximation (SMA) by assuming that all spin states share the same spatial wavefunction $\phi(\br)$, which is unit normalized according to $\int|\phi(\br)|^2d\br =1$ \cite{Law1998}. The SMA decouples the spatial mode from the spin. The equations of motion at low energies then simplify to those concerning the internal spin degrees of freedom alone. The Hamiltonian under the SMA becomes \cite{Law1998,Pu1999} \begin{widetext} \begin{eqnarray} \hat H &= &\frac{c_2}{2N}\left[\left(2\hat N_0 -1\right)\left(\hat N_1+\hat N_{-1}\right)+2\left(\hat a_1^\dag\hat a_{-1}^\dag\hat a_0\hat a_0+\text{h.c.}\right)\right] -p \left(\hat N_1 - \hat N_{-1}\right) + \,q\,\left(\hat N_1 + \hat N_{-1}\right)\,,\label{eq:Hamil0} \end{eqnarray} \end{widetext} where $\hat a_{m_f} (m_f=0,\pm1)$ is the annihilation operator of the ground state manifold $|f=1, m_f\rangle$, with number operator $\hat N_{m_f} = \hat a_{m_f}^\dag\hat a_{m_f}$, and the total particle number operator $\hat N = \hat N_{1}+\hat N_0+\hat N_{-1}$ is conserved. Here $p$ and $q$ are the linear and quadratic Zeeman shifts, which can be tuned independently in experiments.
The spinor dynamic rate $c_2$, which sets the spin-dependent interaction energy scale, is defined as $c_2 = N\int |\phi(\br)|^4 d\br\times\frac{4\pi (a_2-a_0)}{3m_\text{a}}\,,$ with $m_\text{a}$ being the atomic mass and $a_F$ the $s$-wave scattering length in the total spin angular momentum channel $F=f_1+f_2$ of the two atoms. Atomic interactions naturally give $c_2<0$ for $\Rb87$ atoms and $c_2>0$ for $\Na23$ atoms, corresponding to ferromagnetic and anti-ferromagnetic spin-dependent interactions, respectively. \par The collective spin operators for this spin-1 boson system are defined by $\hat F_+ = \sqrt{2}\,(\hat a_1^\dag \hat a_0+\hat a_0^\dag\hat a_{-1}),\, \hat F_-= \hat F_+^\dag,\, \hat F_z =\hat a_1^\dag \hat a_1-\hat a^\dag_{-1}\hat a_{-1}, $ where $\,\hat F_\pm \equiv \hat F_x \pm i\hat F_y\,$ are the raising and lowering operators, and $[\hat F_z, \hat H]=0$, making the longitudinal magnetization $F_z$ a good quantum number. Hereafter we restrict ourselves to the $F_z = 0$ subspace, which means the linear Zeeman shift can effectively be set to $p=0$. {\it Phase diagram.}---In the following discussions, we shall focus on the QPT physics in the ferromagnetic condensate with $c_2<0$ and nonnegative (effective) quadratic Zeeman energy $q \geq 0$. As we can see from Eq.\,(\ref{eq:Hamil0}), in the limit of $q/|c_2|\rightarrow +\infty$, all atoms stay in the single-particle state $|1,0\rangle$, whereas in the limit of $q/|c_2|\rightarrow 0$, the ferromagnetic interaction term dominates. There must therefore exist a critical point where these two terms are comparable. The competition between the ferromagnetic interaction and the quadratic Zeeman energy manifests itself in two phases with different symmetries, revealed by their collective spin magnetization. They are the polar phase for $q/|c_2| > 2$ and the broken-axisymmetry (BA) phase for $0\leq q/|c_2|\leq 2$ (see Fig.\,\ref{fig:phasediag} for the phase diagram).
In order to clarify the QCP explicitly, we assume a homogeneous density profile $\phi(\br)=\frac{1}{\sqrt{V}}$ for the condensate, which is a good approximation if the atoms are loaded into a flat trap \cite{Gaunt2013,Chomaz2015,Beugnon2016,Mukherjee2017,Hueck2018}. Therefore $c_2 \propto N\int |\phi(\br)|^4 d\br\propto\frac{N}{V}$. Strictly speaking, phase transitions occur only in the thermodynamic limit $ \lim\limits_{N,V\rightarrow\infty}\frac{N}{V} = \text{const.}\,,\,$ so $c_2$ is intensive and fixed when we take the TDL. From now on we take $|c_2|=1$ as the energy unit in the following discussions. If the system is inhomogeneous in space, such as in a 3D harmonic trap \cite{Anquez2016,Luo2017}, under the Thomas-Fermi approximation one must take $c_2(N)\propto N^{2/5}$ into consideration to keep the interaction energy per atom fixed when the TDL is taken \cite{Anquez2016}. \par For a continuous transition associated with spontaneously broken symmetry, order parameters can be defined to identify the QPT. The following two order parameters \cite{Damski2007,Lamacraft2007,Anquez2016} $$ \mathcal{N} = \frac{\langle \hat N_1 +\hat N_{-1}\rangle}{N}, \quad \mathcal{M} =\frac{\sqrt{\langle\hat F_x^2\rangle+\langle\hat F_y^2\rangle}}{N} \,,$$ are adopted, wherein $\mathcal{N}$ denotes the fractional atomic population in the magnetic states $|1,1\rangle$ and $|1,-1\rangle\,$, and $\mathcal{M}$ is the magnitude of the transverse magnetization of the collective spin. $E_n(q)$ denotes the $n$-th ($n\in\mathbb{N}$) eigenvalue of $\hat H(q)$, and $e_n(q)\equiv E_n(q)/N$ the energy per particle. By using the Hellmann-Feynman theorem, the fractional population satisfies $\mathcal{N}(q) \equiv \frac{1}{N}\left\langle\frac{\partial \hat H(q)}{\partial q}\right\rangle= \frac{\partial e_0(q)}{\partial q}$ with $e_0$ the ground state energy per particle.
From Fig.\,\ref{fig:phasediag}, it is clear that the QCP at $q=2\,$ marks a second-order transition, since the derivative of $e_0$ with respect to $q\,$, namely $\mathcal{N}(q)\,$, is continuous but the higher order derivatives are discontinuous. \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{Fig2.pdf}\\ \caption{{\bf The precursor to the QPT in a finite system.} The second derivative $\frac{\partial^2 e_0}{\partial q^2}$ approaches a discontinuous step with increasing $N$, which implies a second order (continuous) QPT according to Ehrenfest's classification. Inset: the pseudo-critical point $q_c(N)$ (the location of the minimal $e_1-e_0$) for different finite sizes $N$. In the log-log plot, the difference $q_c-q_c(N)$ is seen to vanish as $N\rightarrow\infty$ according to a power law, wherein $q_c=2$ is the mean-field critical point. This indicates that the mean-field critical point is exact. }\label{fig:quantumQCP} \end{figure} \par Besides the mean-field results, in Fig.\,\ref{fig:quantumQCP} we also show numerical results for $\frac{\partial^2 e_0}{\partial q^2}$ obtained from exact diagonalization of the Hamiltonian of Eq.\,(\ref{eq:Hamil0}) for different total atom numbers $N$. The increasingly sharp jump from zero to a negative value of $\frac{\partial^2 e_0}{\partial q^2}$ with increasing $N$ serves as a precursor to the QPT in a finite system. The inset of Fig.\,\ref{fig:quantumQCP} shows the locations of the minimal $e_1-e_0$ for different $N$, {\it i.e.,} the pseudo-critical points [$q_c(N)$] for a finite system. It is clear that $q_c(N)$ converges to $q_c=2$ in the TDL, consistent with the mean-field critical point.
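The exact diagonalization just described can be sketched compactly: in the $F_z=0$ sector the Fock basis $|k\rangle$ (with $N_{\pm1}=k$ and $N_0=N-2k$) renders $\hat H$ tridiagonal. The matrix elements below are our own transcription of Eq.\,(\ref{eq:Hamil0}) with $c_2=-1$ and $p=0$ (small bookkeeping slips in the $1/N$ factors would shift nonuniversal numbers but not the qualitative behavior), and `pseudo_qc` is a hypothetical helper locating the minimal gap:

```python
import numpy as np

def hamiltonian(N, q, c2=-1.0):
    """Spin-1 SMA Hamiltonian restricted to F_z = 0, in the Fock basis
    |k> with N_1 = N_{-1} = k and N_0 = N - 2k (k = 0 .. N//2)."""
    ks = np.arange(N // 2 + 1)
    # (c2/2N)(2 N_0 - 1)(N_1 + N_{-1}) + q (N_1 + N_{-1})
    diag = (c2 / (2 * N)) * (2 * (N - 2 * ks) - 1) * (2 * ks) + 2 * q * ks
    k = ks[:-1]
    # (c2/N) <k+1| a_1^+ a_{-1}^+ a_0 a_0 |k>  (spin-mixing term + h.c.)
    off = (c2 / N) * (k + 1) * np.sqrt((N - 2 * k) * (N - 2 * k - 1.0))
    return np.diag(diag) + np.diag(off, 1) + np.diag(off, -1)

def gap(N, q):
    """Excitation gap Delta = E_1 - E_0 at quadratic Zeeman shift q."""
    E = np.linalg.eigvalsh(hamiltonian(N, q))
    return E[1] - E[0]

def pseudo_qc(N, qs=np.linspace(1.0, 2.5, 301)):
    """Pseudo-critical point: location of the minimal gap on a q grid."""
    return qs[np.argmin([gap(N, q) for q in qs])]

for N in (100, 400):
    print(N, pseudo_qc(N))
print(gap(100, 2.0), gap(1600, 2.0))
```

With these conventions the pseudo-critical point drifts toward $q_c=2$ from below as $N$ grows, and the gap at $q=2$ shrinks with $N$, consistent with the finite-size precursor shown in Fig.\,\ref{fig:quantumQCP}.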
\begin{figure*}[!hbt] \centering \includegraphics[width = 1.75\columnwidth]{Fig3.pdf}\\ \caption{{\bf Finite-size scaling at equilibrium.} (a)-(c) In the vicinity of the QCP, exact diagonalization of the Hamiltonian of Eq.\,(\ref{eq:Hamil0}) gives the gap $\Delta(q)\,$, the fractional population $\mathcal{N}(q)$ and the transverse magnetization $\mathcal{M}(q)$ for the ground state. (d)-(f) show the corresponding data rescaled according to Eqs.\,(\ref{eq:Gapfss})-(\ref{eq:OPfss}) by using the critical exponents in Table \ref{tab:exponents}. Finite-size scaling is clearly verified. Different system sizes $N = 500, 1000 \text{ and } 5000$ are used in the calculations.}\label{fig:rescale_eq} \end{figure*} \subsection{Static critical properties}\label{subsec:exponents} The Bogoliubov analysis in Ref.\,\cite{Murata2007} for our model system shows that there exist three excitation modes in the long-wavelength limit in the BA phase. One is gapful and the other two are gapless Goldstone modes associated with the breaking of the U(1) and SO(2) symmetries. The gapful mode, denoted $E_\alpha$ in Ref.\,\cite{Murata2007}, is directly relevant for the following discussions, \begin{eqnarray*} E_\alpha^2 &=& \Delta^2 + 4|c_2|\epsilon_{\mathbf k} + O(\epsilon_\mathbf{k}^2)\;,\\ \Delta^2 &=& \left(q_c-q\right) \left(q_c+q\right)\;, \end{eqnarray*} where $\epsilon_\mathbf{k} = \frac{\hbar^2{\mathbf k}^2}{2m}$ and $\Delta$ are the free-particle dispersion and the excitation gap, respectively. \par Therefore, the excitation is gapless with a spectrum $E_\alpha\sim\epsilon^{1/2}_\mathbf{k}\sim k^z$ at the QCP $q=q_c$, so we must have the dynamical critical exponent $z=1$. Furthermore, the behavior of the gap approaching the QCP from the BA phase, $\Delta({q\rightarrow q_c^{-}})\sim |q-q_c|^{\nu z}$, yields \,$\nu z = 1/2$, and thus the correlation length critical exponent $\nu=1/2$.
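For completeness, the two exponent identifications amount to the following short computation, read directly off the Bogoliubov dispersion above:

```latex
% z from the critical dispersion; \nu z from the gap closing:
\[
E_\alpha(q_c) = 2\sqrt{|c_2|\,\epsilon_{\mathbf k}} \;\propto\; |\mathbf k|
\quad\Longrightarrow\quad z = 1,
\]
\[
\Delta(q\to q_c^-) = \sqrt{(q_c-q)(q_c+q)}
\;\simeq\; \sqrt{2q_c}\,|q-q_c|^{1/2}
\quad\Longrightarrow\quad \nu z = \tfrac12,\quad \nu = \tfrac12.
\]
```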
\par The mean-field results for the order parameters $\mathcal{N}$ and $\mathcal{M}$ near the QCP in the BA phase are respectively given by\,\cite{Murata2007,Hoang2016a}, \begin{eqnarray*} \mathcal{N}^\text{(BA)} &\propto&{q_c-q}\,,\qquad \mathcal{M}^\text{(BA)} \propto \sqrt{q_c-q}\;, \end{eqnarray*} as shown in Fig.\,\ref{fig:phasediag}, and both are zero in the polar phase. We thus obtain the order parameter exponents $\beta_\mathcal{N} = 1$ and $\beta_{\mathcal{M}} = 1/2\,$ from the behavior $\mathcal{O}\sim |q-q_c|^{\beta_\mathcal{O}}$ (where $\mathcal{O} = \mathcal{N}, \mathcal{M}$) in the vicinity of the QCP. \par The Hamiltonian in Eq.\,(\ref{eq:Hamil0}) actually describes $N$ spin-1 bosons interacting equally with all other spins. For such a system, mean-field theory gives exact results for the QPT. Because of the infinitely long-range nature of the interaction, the concepts of ``dimensionality'' or ``length'' are not well-defined \cite{Botet1982,Botet1983}. The correlation length of a general short-range model must be replaced by an effective quantity $N_\xi$. By following the arguments of Botet and Jullien \cite{Botet1982,Botet1983}, we can define a length scale $\xi$ which is linked to the upper critical dimensionality $d_c$ of the corresponding finite-range model according to $N_\xi\sim \xi^{d_c}$. The finite-range spin model has an upper critical dimension $d_c = 4\,$ for a classical phase transition, and since a QPT in $d$ dimensions has the same critical behaviors as the classical transition in $(d+z)$ dimensions, the upper critical dimensionality is $d = 4-z=3$ for the QPT we discuss. This dimensionality is consistent with what we have in the approximated Hamiltonian\,(\ref{eq:Hamil0}) under the SMA.
If the coherence number $N_\xi$ is used as an effective correlation length, we find critical exponents $\nu^\ast z^\ast=1/2$ but with $\nu^* = \nu d = 3/2, \, z^*= z/d=1/3$, which implies that the information concerning dimensionality is encapsulated in the critical exponents. We list the critical exponents in Table \ref{tab:exponents} for later use. \begin{table}[!htbp] \tabcolsep 2pt \caption{The critical exponents and dimensionality.} \vspace*{-12pt} \begin{center} {\rule{0.3\textwidth}{1pt}} \begin{tabular*}{0.3\textwidth}{@{\extracolsep{\fill}}ccccc} $\;\nu$ & $\beta_\mathcal{N}$ &$\beta_{\mathcal{M}}$ & $z$ & $d$ \, \\ \hline $\;1/2$ & 1 & $1/2$ & 1 & 3 \, \label{tab:exponents} \end{tabular*} {\rule{0.3\textwidth}{1pt}} \end{center} \end{table} \subsection{Finite-size scaling in the equilibrium state}\label{subsec:FSSeq} In the vicinity of the QCP with $N \rightarrow \infty$, one has \begin{eqnarray*} \xi &\sim& |q-q_c|^{-\nu}, \quad N_\xi \sim |q-q_c|^{-\nu d}\,,\label{eq:Len_Ninf}\\ \Delta^{-1}&\sim& \xi^{z}\sim N_\xi^{z/d}\sim |q-q_c|^{-\nu z}\,, \end{eqnarray*} which shows the power-law divergence of the characteristic length and time at the critical point. At any finite $N$, the singularity at the QCP is rounded: the characteristic length $\xi$ remains finite and a nonvanishing gap persists at the critical field. The ``rounding off'' can be introduced through a regular scaling function $g_\Delta(x)$, such that for the inverse gap \begin{eqnarray} \Delta^{-1}(q,N)& \sim & \Delta^{-1}(q,N=\infty)\cdot g_\Delta\left({N}/{N_\xi}\right)\,,\label{eq:gapFSS} \end{eqnarray} with $g_\Delta(x)\rightarrow\text{const.}$ for $x\gg 1$, which recovers the nominal TDL, and $g_\Delta(x)\rightarrow x^{\omega_\Delta}$ for $x\ll 1$. The exponent $\omega_\Delta = z/d$ is obtained by requiring that $\Delta^{-1}$ be regular at $q_c$ for any finite $N$.
By using $z=1$ and $d=3$ obtained in the last section, we find $\Delta\sim N^{-z/d}\sim N^{-1/3}$ at the pseudo-critical point, because the finite $N$ takes over the role of $N_\xi$ as a length scale cutoff. Such a scaling was already revealed from fitting numerically calculated values in Refs.\,\cite{Zhang2013,Hoang2016a}. This is the same finite-size behavior at the QCP as in the Dicke model \cite{jvidalDickeModel} and the Lipkin-Meshkov-Glick model \cite{Dusuel2004,Leyvraz2005}. \par Based on the above discussions, the finite-size scaling hypotheses for the gap and order parameters can be generally chosen as \begin{eqnarray} \Delta(\epsilon, N) &\sim& N^{-z/d}g_1(\epsilon N^{1/\nu d})\,,\label{eq:Gapfss}\\ \mathcal{O}(\epsilon,N) &\sim& N^{-\beta_\mathcal{O}/{\nu d}}g_\mathcal{O}(\epsilon N^{1/\nu d})\,,\label{eq:OPfss} \end{eqnarray} where $\epsilon = (q-q_c)/{q_c}$ is the reduced control parameter which measures the distance to the QCP. The exponent $\beta_\mathcal{O}$ is the corresponding scaling dimension for the observable $\mathcal{O}\,(\mathcal{O}=\mathcal{N},\mathcal{M})$, and $g_{1},\, g_\mathcal{O}$ are the scaling functions. \par We numerically diagonalize the Hamiltonian in the $F_z = 0$ subspace for different sizes $N$ to obtain the gap $\Delta(q)=E_1(q)-E_0(q)$, the ground state fractional population $\mathcal{N}(q)$ and the transverse magnetization $\mathcal{M}(q)$. In Fig.\,\ref{fig:rescale_eq}, we show the data collapse obtained by using the mean-field critical exponents in Table \ref{tab:exponents}. The scaling hypotheses in Eqs.\,(\ref{eq:Gapfss})-(\ref{eq:OPfss}) are thus well verified near the QCP for the spin mixing model we discuss. \begin{figure*}[!hbt] \centering \includegraphics[width=0.62\columnwidth]{Fig4a.pdf} \quad \includegraphics[width=0.94\columnwidth]{Fig4b.pdf} \caption{{\bf Driven dynamics.} (a) and (b) show the general structures of the excitation probability $\mathcal{P}(q)$ and the heat density $\mathcal{Q}(q)$ at different driving rates, for $N = 1000$ as an example.
(c)-(d) The excitation probability $\mathcal{P}(\tau)$ and the heat density $\mathcal{Q}(\tau)$ at the end of the driving for different system sizes $N$. The driving parameters are taken as $q_i =0 $ and $ q_f = 6$. Three distinct dynamical regions are revealed according to the behaviors of $\mathcal{P}(\tau)$ and $\mathcal{Q}(\tau)$. The black dashed and dash-dotted lines indicate the $\tau^{-2}$ and $\tau^{-1}$ power laws, respectively. In the inset of (c), we rescale the $\tau$-axis by $N$ to show that the crossover between the adiabatic and non-adiabatic regions occurs at $\tau_c\propto N$ (see main text). }\label{fig:PexQ} \end{figure*} \section{Dynamic behaviors across the QCP}\label{sec:dynamic} The equilibrium criticality established above allows us to study the universal behaviors in the driven dynamics across the QCP. In this section, we discuss such behaviors for the driven dynamics in our model. \par We consider the case of a linear driving protocol with the quadratic Zeeman shift in Eq.\,(\ref{eq:Hamil0}) taking the form \begin{equation} q(t) = q_i + (q_f - q_i)\cdot t/\tau,\quad\text{for}\quad t\in[0,\tau] \,,\label{eq:protocol} \end{equation} where $q_i\equiv q(0)$ and $q_f\equiv q(\tau)$ are the initial and final shifts, respectively, and $\tau$ is the total driving duration, so that the driving speed is $v = \frac{q_f - q_i}{\tau}\propto\tau^{-1}$. If $\tau\rightarrow 0$, such a driving protocol reduces to a sudden quench, while it corresponds to the adiabatic limit when $\tau\rightarrow \infty$. The initial state $|\Psi(t=0)\rangle$ is always taken to be the ground state of the Hamiltonian $\hat H(q_i)$. The dynamical state $|\Psi(t)\rangle$ is obtained numerically by evolving the Schr\"odinger equation $i\partial_t|\Psi(t)\rangle = \hat H(t)|\Psi(t)\rangle\,$, with the driving protocol $\hat H(t)\equiv\hat H[q(t)]$ of Eq.\,(\ref{eq:protocol}).
Since only two of the three parameters $(t,\, q,\, \tau)$ are independent, we can use either $(t,\tau)$ or $(q, \tau)$ to denote the same driving process in the following discussion, {\it i.e.}, $\mathcal{O}(q)\equiv \mathcal{O}[q(t)]$ for any time-dependent observable $\mathcal{O}$. \par One can always expand the state $|\Psi(q)\rangle$ as $ |\Psi(q)\rangle = \sum_{n=0}^{\mathcal{D}-1} a_n(q) e^{-i\Theta_n(q)} |\psi_n(q)\rangle \,, $ in the instantaneous eigenstates $ |\psi_n(q)\rangle\,(n=0,\ldots,\mathcal{D}-1)$ of $\hat H(q)$ satisfying $\hat H(q)|\psi_n(q)\rangle=E_n(q)|\psi_n(q)\rangle$. Here $\{a_n\}$ are the superposition coefficients and $\mathcal{D}$ is the dimension of the Hilbert space. The time-dependent Schr\"odinger equation then reduces to \begin{eqnarray*} \partial_t a_n(t)& = &- \sum_{m=0}^{\mathcal{D}-1} a_m(t) e^{i\left[\Theta_n(t)-\Theta_m(t)\right]} \langle\psi_n(t)|\partial_t|\psi_m(t)\rangle\,, \end{eqnarray*} where the dynamical phase takes the explicit form $\Theta_n(q)=\int_{q_i}^q \frac{E_n(q^\prime)}{\dot{q}}dq^\prime=v^{-1}\int_{q_i}^q {E_n(q^\prime)}dq^\prime$. \par We characterize the loss of adiabaticity by employing the following two quantities: the excitation probability $\mathcal{P}(t)=1-|\langle\Psi(t)|\psi_0(t)\rangle|^2$, which measures the infidelity of the dynamical state $|\Psi(t)\rangle$ with respect to the adiabatically connected ground state $|\psi_0(t)\rangle$, and the excess heat density $\mathcal{Q}(t)=[\langle\Psi(t)|\hat H(t)|\Psi(t)\rangle-E_0(t)]/N$, which measures the overall net energy gain over $E_0(t)\equiv\langle\psi_0(t)|\hat H(t)|\psi_0(t)\rangle$. Starting from the ground state, with $\mathcal{P}(q_i)=0$ and $\mathcal{Q}(q_i)=0$, we expect $1\geq \mathcal{P}(t)\geq 0$ and $ \mathcal{Q}(t)\geq 0$. \par This study is focused on driving the system from the BA phase ($q_i = 0$) to deep in the polar phase ($q_f=6$).
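The two indicators can be illustrated on a toy two-level avoided crossing in place of the full spin-mixing Hamiltonian; the matrix below, the minimal gap $2g$, the sweep range, and the step count are illustrative assumptions only:

```python
import numpy as np

# Toy two-level stand-in for H(q(t)); NOT the spin-mixing Hamiltonian of the text.
g = 0.1
def H(q):
    return np.array([[q / 2.0, g], [g, -q / 2.0]], dtype=complex)

def drive(q_i, q_f, tau, steps=20000):
    """Evolve i d|psi>/dt = H(q(t))|psi> along q(t) = q_i + (q_f - q_i) t/tau
    with a 4th-order Runge-Kutta step, starting from the instantaneous ground
    state, and return the indicators P(tau) and Q(tau) defined in the text."""
    rhs = lambda t, y: -1j * (H(q_i + (q_f - q_i) * t / tau) @ y)
    _, V0 = np.linalg.eigh(H(q_i))     # eigh sorts ascending: column 0 = ground state
    psi, dt = V0[:, 0].astype(complex), tau / steps
    for n in range(steps):
        t = n * dt
        k1 = rhs(t, psi)
        k2 = rhs(t + dt / 2, psi + dt / 2 * k1)
        k3 = rhs(t + dt / 2, psi + dt / 2 * k2)
        k4 = rhs(t + dt, psi + dt * k3)
        psi = psi + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    E, V = np.linalg.eigh(H(q_f))
    P = 1.0 - abs(np.vdot(V[:, 0], psi)) ** 2   # excitation probability
    Q = np.vdot(psi, H(q_f) @ psi).real - E[0]  # excess heat (here N = 1)
    return P, Q

for tau in (5.0, 50.0, 500.0):
    P, Q = drive(-4.0, 4.0, tau)
    print(f"tau = {tau:5.0f}: P = {P:.4f}, Q = {Q:.4f}")
```

Slower sweeps (larger $\tau$) suppress both indicators, in line with the Landau-Zener mechanism discussed in the text.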
When the system is driven across the QCP, non-adiabatic effects become unavoidable even as the driving velocity $v\rightarrow 0$, owing to the vanishing gap at the critical field. For a finite-size system, however, the gap remains finite, and the dynamics show quite different behaviors in the limit $v\rightarrow 0$. This constitutes an important topic to be addressed in the following. \par Based on numerical simulations, we find three distinct regions according to the driving rate, which we call the adiabatic, non-adiabatic, and far-from-equilibrium regions, corresponding respectively to long, intermediate, and short $\tau$. Their non-adiabatic indicators show quite different scaling behaviors, which are essentially determined by the dominant time or length scales and the corresponding low-energy excitations in the driven processes. {\it The adiabatic region for large $\tau$.}---For a large but finite $N$, a finite gap exists. Adiabatically passing through the pseudo-critical point is possible in the adiabatic perturbation limit $v\rightarrow 0$, when the system can only be excited by the so-called Landau-Zener mechanism. The adiabatic perturbation theory \cite{DeGrandi2010} gives \begin{widetext} \begin{eqnarray} |a_n(q)|^2 &\approx& v^2 \left\{ \left[\frac{|\langle\psi_n|\partial_{q_i}|\psi_0\rangle|^2}{(E_n(q_i)-E_0(q_i))^2} +\frac{|\langle\psi_n|\partial_{q}|\psi_0\rangle|^2}{(E_n(q)-E_0(q))^2}\right] -2 \frac{\langle\psi_n|\partial_{q_i}|\psi_0\rangle}{E_n(q_i)-E_0(q_i)} \frac{\langle\psi_n|\partial_{q}|\psi_0\rangle}{E_n(q)-E_0(q)}\cos[\delta\Theta_{n0}]\right\} \,, \label{eq:apt} \end{eqnarray} \end{widetext} where the accumulated phase difference between the $n$-th excited state and the ground state is defined as $ \delta\Theta_{n0}=\Theta_n(q)-\Theta_0(q)=v^{-1}\int_{q_i}^q [E_n(q^\prime)-E_0(q^\prime)]d q^\prime$.
Provided that only the dominant excitation into the first excited state is considered, we find $\delta\Theta_{10}= v^{-1}\int_{q_i}^q\Delta(q^\prime) dq^\prime$, see Fig.\,\ref{fig:rescale_eq}\,(a). The integration of the gap ensures that $\delta\Theta_{10}(q)$ is a continuous, monotonically increasing function of $q$ which depends linearly on $v^{-1}$. Therefore, the two terms in Eq.\,(\ref{eq:apt}) describe well the amplitude and oscillation behaviors of $\mathcal{P}(q)\approx|a_1(q)|^2$ shown in Fig.\,\ref{fig:PexQ}\,(a), respectively. For a specific large $\tau$, $\mathcal{P}(q)$ shows slow oscillations with a large envelope around the QCP and fast oscillations with a small envelope away from the QCP. This is due to the gap closing near the QCP, which leads to a slower growth of $\delta\Theta_{10}$. The linear dependence of $\delta\Theta_{10}$ on $v^{-1}$ is revealed by the nested oscillation structure between protocols with different $v$, reminiscent of a Russian doll collection, shown in Figs.\,\ref{fig:PexQ}\,(a) and (b). \par In this adiabatic region, diabatic effects induced by the external driving enter only as a perturbation near the QCP. The final excitation probability $\mathcal{P} (\tau)$ and excess heat density $\mathcal{Q}(\tau)$ both show the $\sim v^2\propto\tau^{-2}$ scaling of a generic gapped system \cite{Polkovnikov2008a}, as predicted by Eq.\,(\ref{eq:apt}) and visibly confirmed in the large-$\tau$ region of Figs.\,\ref{fig:PexQ}\,(c)-(d). The finite energy gap $\Delta_\text{min}$ at the QCP is the dominant energy scale during the dynamics; equivalently, the finite size $N$ is the smallest and dominant length scale. One can thus define a size-dependent KZ rate $v_{\rm KZ}(N)\sim N^{-{(1+\nu z)}/{\nu d}}$, or equivalently a time scale $\tau_{\rm KZ}(N)\sim N^{{(1+\nu z)}/{\nu d}}$: at this driving rate or time, the correlation length $N_\xi$ at the frozen moment is of the order of the system size $N$.
When $v$ is smaller than $v_{\rm KZ}(N)$, the system always remains adiabatic \cite{Huang2014}. {\it The non-adiabatic universal region.}---In this intermediate region, $v> v_{\rm KZ}(N)$ but $v$ remains much smaller than the scale set by the initial gap. The non-adiabatic indicators $\mathcal{P}(\tau)$ and $\mathcal{Q}(\tau)$ exhibit behaviors distinct from the adiabatic region. This is due to the existence of another external time\,(length) scale $t_{\rm KZ}\,(\xi_{\rm KZ})$ which dominates near the QCP. This so-called KZ time $t_{\rm KZ}\sim v^{-\nu z/(1+\nu z)}$, or KZ length scale $\xi_{\rm KZ}\sim v^{-\nu/(1+\nu z)}$, is determined by the external driving, and acts as the smallest time or length scale in the universal dynamics near the QCP. The crossover between the two regions occurs when $v\simeq v_{\rm KZ}$, which predicts that the crossover happens at $\tau_c\propto N$ for different system sizes $N$, as shown in the inset of Fig.\,\ref{fig:PexQ}\,(c). Analogously, we can define a maximal defect-free size $N_{\rm KZ}\sim \xi_{\rm KZ}^{d}\sim v^{-d\nu/(1+\nu z)}$, an effective length scale given by the driving; the defect density from the KZ mechanism is proportional to $1/N_{\rm KZ}$. Therefore we find $\mathcal{P}(\tau)\sim {1}/{N_{\rm KZ}}\sim v^{d\nu/(1+\nu z)}$ and $\mathcal{Q}(\tau)\sim\mathcal{P}(\tau)\sim v^{d\nu/(1+\nu z)}$ \cite{Kolodrubetz2012b,Kolodrubetz2015,dutta2015quantum}. This KZ scaling is expected to hold in the limit $v\rightarrow 0\; (\tau\rightarrow\infty)$ in the TDL [black dash-dotted line in Figs.\,\ref{fig:PexQ}\,(c)-(d)]. The asymptotic behavior for $N\rightarrow\infty$ implies that adiabatic processes are excluded in the TDL. We recall that the limits $v\rightarrow 0$ ({\it i.e.}, $\tau\rightarrow\infty$) and $N\rightarrow\infty$ do not commute \cite{Polkovnikov2008a}.
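For the mean-field exponents of Table \ref{tab:exponents}, these KZ scales reduce to simple powers; a quick exact-arithmetic consistency check:

```python
from fractions import Fraction

# Mean-field exponents (nu, z) from the table and effective dimension d.
nu, z, d = Fraction(1, 2), Fraction(1), Fraction(3)

kz_defect_exponent = nu * d / (1 + nu * z)   # P, Q ~ v^{nu d/(1+nu z)}
kz_rate_exponent = (1 + nu * z) / (nu * d)   # v_KZ(N) ~ N^{-(1+nu z)/(nu d)}

# Both equal 1: P ~ Q ~ v ~ tau^{-1} in the KZ region,
# and v_KZ ~ N^{-1}, i.e. the crossover time tau_c ~ N.
print(kz_defect_exponent, kz_rate_exponent)
```

This reproduces the $\tau^{-1}$ power law indicated in Fig.\,\ref{fig:PexQ} and the crossover $\tau_c\propto N$ seen in its inset.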
The above two regions respectively correspond to the adiabatic finite-size scaling (FSS) regime and the impulse finite-time scaling (FTS) regime of a finite-size system considered earlier in Ref.\,\cite{Huang2014}. In the FSS regime, $N<N_{\xi}$ and $N<N_{\rm KZ}$; for example, considering only the excitation at the QCP ($\epsilon=0$), one has $\mathcal{P} = N^{-1}f_1(vN^{\frac{1+\nu z}{\nu d}})$. The argument $x=vN^{\frac{1+\nu z}{\nu d}}=vN$ is small, and the scaling function $f_1(x)$ can be expanded perturbatively in $x$ \cite{Huang2014,Liu2014}. Therefore $\mathcal{P}\simeq N^{-1}[f_1(0)+ f_1^\prime(0)\cdot x+\frac{1}{2}f_1^{\prime\prime}(0)\cdot x^2]$, where the first term $f_1(0)$ is the equilibrium excitation and vanishes for a finite system, while the second and third terms arise from the perturbation of the driving. The linear term in $v$ is absent because the excitation or excess heat is insensitive to the sign of $v$ \cite{Polkovnikov2008a}; therefore $\mathcal{P}\simeq N^{-1}\cdot\frac{1}{2}f_1^{\prime\prime}(0)\cdot x^2\sim \tau^{-2}$. \par In the generic scenario of a KZ ramp, the tuning parameter is swept from deep in the disordered phase (polar) to the ordered phase (BA). Due to the gap closing from $q=0$ to $q < 0$ and the appearance of a second QCP at $q=-2$, we choose instead to drive from the BA to the polar phase in order to obtain a steady value of $\mathcal{P}$ for long ramp times.
To make contact with experiments, we note that, following Refs.\,\cite{Gong2010,DeGrandi2011}, the order parameters easily measurable in experiments satisfy the dynamical KZ scaling form \begin{eqnarray} \mathcal{O}(\epsilon,v) &=& v^{\frac{\beta_\mathcal{O}}{1+\nu z}}f_ \mathcal{O}(\epsilon v^{-\frac{1}{1+\nu z}}, Nv^{\frac{\nu d}{1+\nu z}})\,,\label{eq:KZscaling} \end{eqnarray} where $\mathcal{O} = \langle\mathcal{\hat O}\rangle$ can be either $\mathcal{N}$ or $\mathcal{M}$, and $\beta_\mathcal{O}$ is the corresponding critical exponent given in Table \ref{tab:exponents}. $f_ \mathcal{O}(x,y)$ is a scaling function of the arguments $(x,y)$, taking the FTS form of Ref.\,\cite{Huang2014} with finite-size effects included. In actual experiments, one can easily prepare the initial state in the polar phase with all atoms in the $|1,m_f=0\rangle$ state (the $F_z = 0$ subspace) and tune the quadratic Zeeman shift $q$ in Eq.\,(\ref{eq:Hamil0}) linearly as in Eq.\,(\ref{eq:protocol}) with different driving times $\tau$. During the tuning process, the dynamical values of the fractional population $\mathcal{N}$ and the transverse magnetization $\mathcal{M}$ can be measured in successive realizations. One can also vary the system size $N$ to take the finite-size scaling into consideration. The scaling hypothesis in Eq.\,(\ref{eq:KZscaling}) can be checked by performing data collapse in the two scaling directions on the experimental results. \par We numerically check the full dynamical KZ scaling form by fixing $Nv^{\nu d/(1+\nu z)}=\text{const.}$; Figs.\,\ref{fig:dynamics}\,(a) and (c) show the numerically computed $\mathcal{M}$ and $\mathcal{N}$ for selected experimentally feasible system sizes $N$. These curves are indeed seen to collapse onto each other after rescaling according to Eq.\,(\ref{eq:KZscaling}), see Figs.\,\ref{fig:dynamics}\,(b) and (d).
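The rescaling procedure itself can be sketched with synthetic data obeying Eq.\,(\ref{eq:KZscaling}) at fixed $Nv^{\nu d/(1+\nu z)}$; the toy scaling function below is an illustrative assumption, since the true $f_\mathcal{O}$ is only accessible numerically or experimentally:

```python
import numpy as np

# Dynamical KZ form O(eps, v) = v^{beta/(1+nu z)} f_O(eps v^{-1/(1+nu z)}, const)
# with a toy scaling function (tanh shape and offset are assumptions).
nu, z = 0.5, 1.0
beta = 0.5                       # beta_M of the transverse magnetization
a = 1.0 + nu * z

def f_toy(x):
    return np.tanh(x) + 1.2

def order_parameter(eps, v):
    return v ** (beta / a) * f_toy(eps * v ** (-1.0 / a))

# Rescaled curves O * v^{-beta/(1+nu z)} versus x = eps * v^{-1/(1+nu z)}
# must coincide for every driving rate v -- the data collapse.
x = np.linspace(-3.0, 3.0, 61)
curves = [order_parameter(x * v ** (1.0 / a), v) * v ** (-beta / a)
          for v in (1e-2, 1e-3, 1e-4)]
spread = max(np.max(np.abs(c - curves[0])) for c in curves)
print(spread)   # vanishes up to round-off
```

The same two-axis rescaling applied to measured $\mathcal{N}$ and $\mathcal{M}$ curves is what produces the collapse in Fig.\,\ref{fig:dynamics}.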
We note that for a small system size $N$, the scaling collapse region shrinks, which indicates that universality is lost in the very small $\tau$ (large $v$) region. \begin{figure} \centering \includegraphics[width=1.0\columnwidth]{Fig5.pdf} \caption{{\bf Finite-size Kibble-Zurek scaling.} For fixed $N\cdot v^{d\nu/(1+\nu z)}=N\cdot v = 180$ in Eq.\,(\ref{eq:KZscaling}), starting from the polar phase ($q_i=4.0$) and sweeping to the BA phase ($q_f = 0$). (a) The dynamical value of the fractional population $\mathcal{N}(q)$. (b) The rescaled numerical data for $\mathcal{N}$. (c) The transverse magnetization $\mathcal{M}(q)$. (d) The rescaled numerical data for $\mathcal{M}$. The system sizes $N = 1\times 10^3, 5\times 10^3, 1\times 10^4 \text{ and } 2\times10^4$ are all within experimentally feasible atom numbers. It is clear that in (b) and (d) the KZ scaling hypotheses are verified near the QCP, but for the smaller system size (gray line with square marker) the collapsed region shrinks. This indicates the loss of universality when $v$ is too large. }\label{fig:dynamics} \end{figure} {\it The far-from-equilibrium region for fast driving.}--- When the driving rate $v$ is so fast that the driving-determined length scale $N_{\rm KZ}$ is dominant not only near the QCP but throughout the whole dynamics, as $N_{\rm KZ}< N_{\xi}(q_i)$, the system state remains frozen during the entire driving process. The excitation probability $\mathcal{P}(q)$ saturates rapidly in the initial ramp and loses its feature as an indicator, as shown in Figs.\,\ref{fig:PexQ}\,(a) and (c). The heat density $\mathcal{Q}(\tau)$ shows almost no size dependence, since finite-size effects are unimportant at the initial gap $\Delta(q_i)$ [Fig.\,\ref{fig:rescale_eq}\,(a)], and $\mathcal{Q}(\tau)$ tends to nearly a constant for $\tau\rightarrow0$, as shown in Fig.\,\ref{fig:PexQ}\,(d). This far-from-equilibrium region induced by fast driving is non-universal.
\section{Discussions and Conclusions}\label{sec:conclusion} In this paper, we have studied the equilibrium and dynamical properties of a ferromagnetic spinor atomic Bose-Einstein condensate. At equilibrium, we extract the mean-field critical exponents and verify the finite-size scaling hypothesis. Because of the infinitely long-range nature of the interaction (within the SMA), mean-field theory gives exact results for the critical phenomena at equilibrium. The dynamical process is realized by linearly tuning the quadratic Zeeman shift across a continuous QCP. In the vicinity of the QCP, universal behaviors are also observed in the dynamics. Three distinct dynamical regions are identified, corresponding to different total driving times $\tau$\,(or equivalently driving rates $v\propto\tau^{-1}$), characterized by two adiabaticity indicators: the excitation probability $\mathcal{P}$ and the excess heat density $\mathcal{Q}\,$. We show that the adiabatic region with $\,\mathcal{P}\sim\mathcal{Q}\sim\tau^{-2}\,$ exists in any finite system for $v<v_{\rm KZ}(N)$, in which the external driving enters the dynamics only as a perturbation; in this region, adiabatic perturbation theory gives a good description of the dynamics. The non-adiabatic universal region with $\,\mathcal{P}\sim\mathcal{Q}\sim\tau^{-\nu d/(1+\nu z)}\,$ corresponds to intermediate driving rates $v>v_{\rm KZ}(N)$ and, in the thermodynamic limit, is well described by the Kibble-Zurek mechanism. The dynamical Kibble-Zurek scaling is found to apply to finite-size systems in this universal region, and the scaling hypotheses for the fractional population $\mathcal{N}$ and transverse magnetization $\mathcal{M}$ are presented, which can be checked directly in ongoing experiments. Finally, the region of the fastest driving rates is found to be non-universal and far from equilibrium, with $\mathcal{P}$ and $\mathcal{Q}$ essentially being constants independent of $\tau$.
The distinct behaviors of the dynamics originate from the competition among different length scales: the scale given by the external driving, $N_{\rm KZ}$; the intrinsic correlation length scale of the system, $N_{\xi}$; and the finite size $N$. The smallest one always dominates the dynamic behavior. We also note that the above three regions (adiabatic, non-adiabatic, and far-from-equilibrium) may respectively correspond to the analytical, non-adiabatic, and non-analytical processes of Ref.\,\cite{Polkovnikov2008a}. As pointed out by the authors of Ref.\,\cite{Polkovnikov2008a}, in the analytical and non-analytical regimes there exist no highly populated low-energy modes, and finite-size or relaxation effects are unimportant. \par Finally, we emphasize that the simplicity and rich magnetic phases of spinor condensates offer a promising platform for studying critical phenomena theoretically and experimentally, both in and out of equilibrium. {\it Note added.}---A related work addressing a similar topic, but in the Lipkin-Meshkov-Glick model, appeared on the arXiv very recently \cite{Defenu2018}. \section*{Acknowledgement} This work is supported by the National Basic Research Program of China (973 program) (No. 2013CB922004), NSFC (No. 11574100, No. 91636213 and No. 11747605). S.Y. is supported in part by China Postdoctoral Science Foundation (Grant No. 2017M620035).
{ "timestamp": "2018-05-10T02:05:12", "yymm": "1805", "arxiv_id": "1805.02174", "language": "en", "url": "https://arxiv.org/abs/1805.02174" }
\section{Introduction} \label{sect:Einleitung} Many imaging and data analysis problems in the applied sciences lead to the numerical task of parameter identification in exponential sums $\sum_{j=1}^M c_j \textnormal{e}^{-2\pi i\langle t_j,\cdot\rangle}$. For sparse exponential sums, i.e., for small $M$, Prony's method enables the identification of the parameters $\{t_j\}_{j=1}^M\subset \ensuremath{\mathbb{R}}^d$ and contributions $\{c_j\}_{j=1}^M\subset\ensuremath{\mathbb{C}}$ from relatively few sampling values, see e.g.~\cite{Potts:2010ko,Potts:2013vb} and references therein. The most feasible implementations for $d=1$ are based on the eigenvalue analysis of the associated Prony matrix, see e.g.~\cite{Beinert:2017gy,PP13}. The principles of the multivariate setting have been examined in \cite{KPRO16,Kunis:by,AnCa17,Mo18}, for instance, but associated numerical schemes have not yet been studied extensively. The works \cite{Sa18,DiIs17,PoTa13} describe multivariate Prony methods that are based on finding zeros of several univariate or multivariate polynomials. We shall completely circumvent this algebraic geometry problem by developing a numerical scheme based on a randomized multivariate matrix pencil method. We construct matrices $S_1,\ldots,S_d$ from the sampling values, so that their simultaneous diagonalization yields the parameters $\{t_j\}_{j=1}^M$. Since $S_1,\ldots,S_d$ are not normal, standard numerical algorithms for simultaneous diagonalization are not available, cf.~\cite{Bunse-Gerstner:1993jy,Cardoso:1996ck,Golub:1996fk,Kressner:2005sp}. To circumvent this problem, we derive the joint eigenbasis from the eigendecomposition of a single matrix that is a random linear combination of $S_1,\ldots,S_d$.
While \cite{AnCa17} diagonalizes $S_1$ and hopes for simple eigenvalues, the recent paper \cite[Alg.~3.1]{Mo18} and the algorithm introduced in \cite{SaUsCo17} also use the above random linear combination and argue that generically the eigenvalues are simple. While the authors of \cite{SaUsCo17} focus on analyzing the influence of perturbations on their multivariate ESPRIT method, here, for the new multivariate matrix pencil method, we describe the use of a random linear combination of $S_1, \dots, S_d$ in more detail and quantify the influence of the minimal separation of $\{t_j\}_{j=1}^M$ on the eigendecomposition of the random matrix. To check on its feasibility, our methodology is applied to analyze fluorescence microscopy images. We cast the problem of locating protein markers as a parameter identification in exponential sums. Due to its analytic roots, Prony's method enables the identification of locations at the subpixel scale, sometimes referred to as superresolution fluorescence microscopy, cf.~\cite{Studer:2012oq}. The results on experimental fluorescence images show that our scheme is numerically feasible. The outline is as follows: In Section \ref{sect:pre} we develop our numerical scheme. The approach of simultaneous diagonalization to identify $\{t_j\}_{j=1}^M$ is presented in Section \ref{sec:sim}. The problem of simultaneous diagonalization is reduced to the diagonalization of a single random matrix in Section \ref{sec:single}, where we examine the influence of the minimal separation of the parameters $\{t_j\}_{j=1}^M$. Our new scheme is applied to synthetic and to experimental fluorescence microscopy data in Section \ref{sec:appl}.
\section{Reconstruction of sparse exponential sums from samples}\label{sect:pre} Let $\{t_j\}_{j=1}^M\subset [0,1)^d$ always denote $M$ pairwise different $d$-dimensional parameters and consider the exponential sum \begin{equation}\label{eq:fund prob samp} f(k) = \sum_{j=1}^M c_j \textnormal{e}^{-2\pi \mathrm i\langle t_j,k\rangle},\quad k\in\ensuremath{\mathbb{Z}}^d, \end{equation} with nonzero coefficients $\{c_j\}_{j=1}^M\subset\ensuremath{\mathbb{C}}\backslash \{0\}$. Our aim is to identify the parameters $\{t_j\}_{j=1}^M$ and coefficients $\{c_j\}_{j=1}^M$ from sampling values $\{f(k)\}_{k\in I}$ with suitable $I\subset \ensuremath{\mathbb{Z}}^d$. \subsection{Reconstruction by simultaneous diagonalization}\label{sec:sim} For $n\in\ensuremath{\mathbb{N}}$, let $I_n:=\{0,\dots,n\}^d$ and select a fixed ordering of the elements in $I_n$. Knowledge of the sampling values of $f$ on the difference set $I:=I_{n+1}-I_n$ enables us to build the matrices \begin{equation*} T:= \left(f(k-l)\right)_{k,l\in I_n},\qquad T_{\ell}:=(f(k-l+e_{\ell}))_{k,l\in I_n}, \quad\ell=1,\ldots,d. \end{equation*} If $T$ has rank $M$, then we compute the reduced singular value decomposition \begin{equation*} T=U \Sigma V^*, \end{equation*} where $\Sigma\in\ensuremath{\mathbb{R}}^{M\times M}$ is positive definite and $U,V\in\ensuremath{\mathbb{C}}^{N\times M}$ satisfy $U^*U=V^*V=\id\in\ensuremath{\mathbb{R}}^{M\times M}$ with $N:=\# I_n =(n+1)^d$. Therefore, we can define the set of $M\times M$ matrices \begin{equation}\label{eq:S def} S_{\ell}:=U^* T_{\ell} V \Sigma^{-1},\quad \ell=1,\ldots,d. \end{equation} These matrices turn out to be simultaneously diagonalizable, cf.~Theorem \ref{th:22}, which shall enable us to identify the vectors $\{t_j\}_{j=1}^M$. In the following theorem, $K_d$ denotes an absolute constant that only depends on $d$ and is further specified in \cite{Kunis:by,KPRO16}.
We also make use of \begin{equation*} z_j:=\textnormal{e}^{-2\pi i t_j}:=(\textnormal{e}^{-2\pi i t_{j,1}},\ldots,\textnormal{e}^{-2\pi i t_{j,d}}), \quad j=1,\ldots,M, \end{equation*} so that it is sufficient to reconstruct $\{z_j\}_{j=1}^M$ in order to identify $\{t_j\}_{j=1}^M$. \begin{thm}\label{th:22} If $n\geq \frac{K_d}{\min_{i\neq j} \|z_i - z_j\|}$, then $T$ has rank $M$ and $S_1,\ldots,S_d$ are simultaneously diagonalizable. Furthermore, any regular matrix $W$ that simultaneously diagonalizes $S_1,\ldots,S_d$ yields a permutation $\tau$ on $\{1,\ldots,M\}$ such that \begin{equation*} W^{-1} S_\ell W = \diag(\langle z_{\tau(1)},e_\ell\rangle,\ldots,\langle z_{\tau(M)},e_\ell\rangle),\quad \ell=1,\ldots,d. \end{equation*} \end{thm} \begin{proof} According to \cite{KPRO16}, $T$ always admits the factorization \begin{equation}\label{eq:factorT} T= A^* D A, \end{equation} where $A$ is the $M\times N$ multivariate complex Vandermonde matrix \begin{equation*} A=\big(z_j^k\big)_{\substack{j=1,\dots,M\\k\in I_n}}, \end{equation*} and $D=\diag(c_1,\ldots,c_M)$. The condition on $n$ implies that $A$ has full rank $M$, cf.~\cite{Kunis:by,KPRO16}. Hence, $T$ has indeed rank $M$ since all $c_1,\ldots,c_M$ are nonzero. We also deduce the factorization \begin{equation*} T_\ell = A^* D_\ell A,\quad \ell=1,\ldots,d, \end{equation*} where the diagonal matrix $D_\ell$ is given by \begin{equation*} D_\ell:=\diag(c_1\langle z_1,e_\ell\rangle,\ldots,c_M\langle z_M,e_\ell\rangle), \quad \ell=1,\ldots,d. \end{equation*} We shall now check that the specific matrix $W_0:=(AU)^*$ (which is not accessible to us) simultaneously diagonalizes $S_1,\ldots,S_d$. Indeed, by inserting the definitions, we obtain \begin{equation*} W_0^{-1} S_\ell W_0 = (AU)^{-*} U^* A^* D_\ell A V \Sigma^{-1} (AU)^*. \end{equation*} Note that the reduced singular value decomposition implies that both matrices, $AU$ and $AV$, are regular. 
Since $\Sigma = U^*TV = U^*A^*DAV$, we deduce $\Sigma^{-1}=(AV)^{-1} D^{-1}(AU)^{-*}$, which implies \begin{equation*} W_0^{-1} S_\ell W_0 =D_\ell D^{-1} = \diag(\langle z_1,e_\ell\rangle,\ldots,\langle z_{M},e_\ell\rangle),\quad \ell=1,\ldots,d, \end{equation*} so that $W_0$ simultaneously diagonalizes $S_1,\ldots,S_d$. Note that $W_0$ also diagonalizes any complex linear combination \begin{equation}\label{eq:Cmu} C_\mu:=\sum_{\ell=1}^d \overline{\mu}_\ell S_\ell,\quad \mu\in\ensuremath{\mathbb{C}}^d. \end{equation} Because of \begin{equation*} W_0^{-1} C_\mu W_0 = \diag\left(\sum_{\ell = 1}^d \bar \mu_\ell\langle z_1,e_\ell \rangle, \ldots,\sum_{\ell = 1}^d \bar \mu_\ell\langle z_M,e_\ell \rangle \right), \end{equation*} the eigenvalues $\lambda_1(\mu),\ldots,\lambda_M(\mu)$ of $C_\mu$ are \begin{equation*} \lambda_j(\mu)=\langle z_j,\mu\rangle \end{equation*} with the ordering induced by $W_0$. Since $\{t_j\}_{j=1}^M$ are pairwise different, so are $\{z_j\}_{j=1}^M$, and, hence, there is $\tilde{\mu}\in \S_\ensuremath{\mathbb{C}}^{d-1}=\{x\in\mathbb{C}^d:\|x\|=1\}$ such that $\langle z_i - z_j, \tilde \mu \rangle \neq 0$ for all $i\neq j$ and thus $\{\lambda_j(\tilde{\mu})\}_{j=1}^M$ are pairwise different. In other words, all eigenspaces of $C_{\tilde{\mu}}$ are $1$-dimensional. Any matrix $W=(w_1,\ldots,w_M)$ that simultaneously diagonalizes $S_1,\ldots,S_d$ also diagonalizes $C_{\tilde{\mu}}$. Thus, there is a permutation $\tau$ such that $w_{\tau(i)}$ spans the same space as the $i$-th column of $W_0$, which concludes the proof. \end{proof} According to Theorem \ref{th:22}, the diagonalization of $S_\ell$ encodes the $\ell$-th entry of a permutation of the vectors $\{z_j\}_{j=1}^M$. We require simultaneous diagonalization to ensure that these entries are associated with the same permutation across all $\ell=1,\ldots,d$. In general, the matrices $S_1,\ldots,S_d$ are not normal.
Therefore, the numerical task of simultaneous diagonalization is difficult and many simultaneous diagonalization algorithms in the literature are not suitable, cf.~\cite{Bunse-Gerstner:1993jy,Cardoso:1996ck,Golub:1996fk,Kressner:2005sp}. We attempt to circumvent such issues by using $C_\mu$ from \eqref{eq:Cmu}, which shall enable us to restrict our diagonalization efforts to a single matrix: \begin{corollary}\label{th:single} If $\mu\in\ensuremath{\mathbb{C}}^d$ is such that $\lambda_1(\mu),\ldots,\lambda_M(\mu)$ are pairwise different, then any matrix $W$ that diagonalizes $C_\mu$ also simultaneously diagonalizes $S_1,\ldots,S_d$. \end{corollary} \begin{proof} The matrices $C_\mu, S_1,\ldots,S_d$ are simultaneously diagonalizable. The same arguments as in the proof of Theorem \ref{th:22} imply the assertion. \end{proof} According to Corollary \ref{th:single}, we aim to find $\mu\in\ensuremath{\mathbb{C}}^d$ such that $\lambda_1(\mu),\ldots,\lambda_M(\mu)$ are pairwise different. For a nonzero vector $z\in\ensuremath{\mathbb{C}}^d$, let $z^\perp$ denote the $(d-1)$-dimensional linear subspace of $\ensuremath{\mathbb{C}}^d$ orthogonal to $z$. The proof of Theorem \ref{th:22} reveals that \begin{equation}\label{eq:set} \big\{\mu \in\ensuremath{\mathbb{C}}^d : \lambda_1(\mu),\ldots,\lambda_M(\mu) \text{ are pairwise different} \big\} = \ensuremath{\mathbb{C}}^d\setminus \bigcup_{i\neq j}(z_i-z_j)^\perp. \end{equation} Hence, this set is the entire $\ensuremath{\mathbb{C}}^d$ except for at most $\binom{M}{2}$ many $(d-1)$-dimensional subspaces. \begin{example} Let $d=2$, $M=5$, and choose $t_1, \dots, t_5 \in [0,1)^2$ randomly. We construct $S_1, S_2 \in\mathbb C^{5\times 5}$ by \eqref{eq:S def}. Then we choose $\mu = (\mu_1, \mu_2)^\top \in\mathbb{S}_\ensuremath{\mathbb{C}}^1$ and construct $C_\mu = \mu_1S_1 + \mu_2 S_2$.
According to \eqref{eq:set}, we expect $\binom{5}{2} = 10$ great circles on $\mathbb S_\ensuremath{\mathbb{C}}^1$ with the property that choosing a $\mu$ from one of those great circles results in a $C_\mu$ that has at least one eigenspace of dimension larger than one. For $\xi \in\mathbb C$ with $|\xi|=1$ we get $C_{\mu \xi} = \xi\left(\mu_1S_1 + \mu_2 S_2\right)$. This shows that the multiplication of $C_\mu$ by a global phase $\xi$ does not change the absolute pairwise differences of the eigenvalues of $C_\mu$; therefore, for visualization, we can use the Hopf fibration to identify each great circle on $\mathbb S_\ensuremath{\mathbb{C}}^1$ with a single point on $\mathbb S^2$. Indeed, we observe that the minimal distance of any two eigenvalues of $C_\mu$ is nonzero on $\mathbb{S}^2$ except for $10$ points, see Figure \ref{fig:MuAbhaengigkeit}(a). Note that only $8$ of those $10$ points are visible in Figure \ref{fig:MuAbhaengigkeit}(a); the other $2$ lie on the back side of the sphere. For a visual illustration of the expected great circles, we now switch to the real case, choose $d=3$, $M=5$, and restrict $\mu$ to the real sphere $\mathbb S^2$. In Figure \ref{fig:MuAbhaengigkeit}(b) we see $10$ great circles on $\mathbb S^2$ for which $C_\mu$ has eigenspaces of dimension larger than one. Observe that away from those great circles, the minimal distance of any two eigenvalues of $C_\mu$ rapidly increases.
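The whole pipeline of this example can be sketched numerically; the random seed, the choice $n=4$, and the error tolerance are illustrative assumptions (for randomly drawn, well-separated parameters, $T$ generically has rank $M$):

```python
import numpy as np

rng = np.random.default_rng(0)
d, M, n = 2, 5, 4                        # N = (n+1)^2 = 25 sampling indices
t = rng.random((M, d))                   # parameters t_j in [0,1)^d
c = rng.random(M) + 0.5                  # nonzero coefficients c_j
z = np.exp(-2j * np.pi * t)              # z_j = e^{-2 pi i t_j}, entrywise

def f(k):
    # f(k) = sum_j c_j e^{-2 pi i <t_j, k>} = sum_j c_j prod_l z_{j,l}^{k_l}
    return np.sum(c * np.prod(z ** k, axis=1))

# The matrices T and T_l on I_n = {0,...,n}^d with a fixed ordering.
I_n = [np.array((a, b)) for a in range(n + 1) for b in range(n + 1)]
T = np.array([[f(k - l) for l in I_n] for k in I_n])
Ts = [np.array([[f(k - l + e) for l in I_n] for k in I_n])
      for e in np.eye(d, dtype=int)]

# Reduced SVD of T and the matrices S_l of (eq:S def).
U, sig, Vh = np.linalg.svd(T)
U, V, Sig = U[:, :M], Vh.conj().T[:, :M], np.diag(sig[:M])
S = [U.conj().T @ Tl @ V @ np.linalg.inv(Sig) for Tl in Ts]

# Random linear combination C_mu and its eigenbasis W.
mu = rng.normal(size=d) + 1j * rng.normal(size=d)
mu /= np.linalg.norm(mu)
C = sum(np.conj(m) * Sl for m, Sl in zip(mu, S))
_, W = np.linalg.eig(C)
Winv = np.linalg.inv(W)

# diag(W^{-1} S_l W) recovers the l-th entries of the permuted z_j.
z_rec = np.array([np.diag(Winv @ Sl @ W) for Sl in S]).T
err = max(min(np.linalg.norm(zr - zt) for zt in z) for zr in z_rec)
print(err)   # small: the z_j are recovered up to the permutation tau
```

A single eigendecomposition of $C_\mu$ thus yields a basis $W$ that diagonalizes both $S_1$ and $S_2$, in line with Corollary \ref{th:single}.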
\begin{figure} \subfigure[$S_1, S_2 \in\mathbb C^{5 \times 5}$, $\mu \in \mathbb S_\ensuremath{\mathbb{C}}^1$ ]{ \includegraphics[width=0.45\textwidth]{CvsR.pdf} } \subfigure[$d=3$, $M=5$, and $\mu \in \mathbb S^2$ ]{ \includegraphics[width=0.45\textwidth]{Sphaere8EigBar.pdf} } \caption{Visualization of the smallest distance of any two eigenvalues of $C_\mu$.} \label{fig:MuAbhaengigkeit} \end{figure} \end{example} \begin{remark} Our approach to simultaneous diagonalization of $S_1,\ldots,S_d$ suggested in Corollary \ref{th:single} requires our present setting, in which $\{z_j\}_{j=1}^M$ are pairwise different. It does not apply to the problem of simultaneous diagonalization in general. \end{remark} \subsection{Simultaneous diagonalization by random linear combinations}\label{sec:single} The present section is dedicated to quantifying the difference $\lambda_i(\mu)-\lambda_j(\mu)$ in relation to the difference $z_i-z_j$. If $\mu\in\S_\ensuremath{\mathbb{C}}^{d-1}$ is a random vector, distributed according to the unitarily invariant probability measure on $\S_\ensuremath{\mathbb{C}}^{d-1}$, then \begin{equation*} \mathbb{E}\,|\lambda_i(\mu)-\lambda_j(\mu)|^2 = \frac{1}{d}\|z_i-z_j\|^2. \end{equation*} The following result provides a more quantitative analysis: \begin{thm}\label{th:stab oder so} Let $i\neq j$ be fixed and suppose $\epsilon\in[0,1]$. If $\mu\in\S_\ensuremath{\mathbb{C}}^{d-1}$ is a random vector, distributed according to the unitarily invariant probability measure on $\S_\ensuremath{\mathbb{C}}^{d-1}$, then the probability that \begin{equation}\label{eq:mu satifsies} |\lambda_i(\mu)-\lambda_j(\mu)| < \epsilon \|z_i-z_j\| \end{equation} holds is at most $2\sqrt{\frac{d}{\pi}}\epsilon$.
\end{thm} Theorem \ref{th:stab oder so} immediately implies that the probability that any of the inequalities \begin{equation}\label{eq:mu satisfies all} |\lambda_i(\mu)-\lambda_j(\mu)|\geq \epsilon \|z_i-z_j\|, \quad \forall i\neq j, \end{equation} is violated is at most $\binom{M}{2} 2\sqrt{\frac{d}{\pi}}\epsilon$. In other words, since there are fewer than $M^2$ pairs $(i,j)$, the probability that \eqref{eq:mu satisfies all} fails is at most of the order $M^2\epsilon$. \begin{proof}[Proof of Theorem \ref{th:stab oder so}] The complex sphere $\S_\ensuremath{\mathbb{C}}^{d-1}$ admits the standard identification with the real sphere $\S^{2d-1}$ by $x\mapsto \Big(\begin{smallmatrix}\Real(x)\\ \Imag(x) \end{smallmatrix}\Big) $, and $\Big(\begin{smallmatrix}\Real(\mu)\\ \Imag(\mu) \end{smallmatrix}\Big)$ is distributed according to the orthogonally invariant probability measure on $\S^{2d-1}$, the latter being the standard normalized surface measure. Let $y:=\frac{z_i-z_j}{\|z_i-z_j\|}\in \mathbb{S}_\ensuremath{\mathbb{C}}^{d-1}$, so that $|\lambda_i(\mu)-\lambda_j(\mu)|/\|z_i-z_j\|=|\langle y,\mu\rangle|$. Since \begin{equation}\label{eq:Real Im} \Big| \left\langle \big(\begin{smallmatrix} \Real(y)\\ \Imag(y) \end{smallmatrix}\big), \big(\begin{smallmatrix} \Real(\mu)\\ \Imag(\mu) \end{smallmatrix}\big)\right\rangle\Big| = |\Real\big(\langle y,\mu\rangle\big)| \leq |\langle y,\mu\rangle|, \end{equation} the probability of \eqref{eq:mu satifsies} is bounded above by the probability of the event \begin{equation}\label{eq:dfrt} \Big| \left\langle \big(\begin{smallmatrix} \Real(y)\\ \Imag(y) \end{smallmatrix}\big), \big(\begin{smallmatrix} \Real(\mu)\\ \Imag(\mu) \end{smallmatrix}\big)\right\rangle\Big| \leq \epsilon.
\end{equation} Due to the orthogonal invariance of the surface measure on $\S^{2d-1}$, the distribution of the left-hand side in \eqref{eq:dfrt} does not depend on the particular choice of $y\in\mathbb{S}_\ensuremath{\mathbb{C}}^{d-1}$, so that we can simply assume that $\Big(\begin{smallmatrix}\Real(y)\\ \Imag(y) \end{smallmatrix}\Big)$ is the north pole. The inequality \eqref{eq:dfrt} then reduces to $-\epsilon\leq \Real(\mu_1)\leq \epsilon$ and hence describes the complement of two opposing spherical caps in $\S^{2d-1}$. This ``equatorial band'' has measure \begin{equation*} 1-\mathcal{I}_{[1-\epsilon^2]}(d-\frac{1}{2},\frac{1}{2}) = \mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2}), \end{equation*} see, for instance, \cite{Li:2011id}, where $\mathcal{I}_{[x]}(a,b)$ is the cumulative distribution function of the Beta distribution, i.e., \begin{equation*} \mathcal{I}_{[x]}(a,b) = \frac{\int_0^x t^{a-1}(1-t)^{b-1}dt }{\Beta(a,b)},\qquad \Beta(a,b) = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}. \end{equation*} For $d=1$, we observe \begin{equation*} \mathcal{I}_{[\epsilon^2]}(1/2,1/2) = \frac{2\arcsin(\epsilon)}{\pi} \leq \frac{2}{\sqrt{\pi}}\epsilon. \end{equation*} Suppose now $d\geq 2$ and define \begin{equation*} f(x):=2\sqrt{x} - \mathcal{I}_{[x]}(1/2,d-1/2) \Beta(1/2,d-1/2). \end{equation*} A short calculation yields that its derivative satisfies \begin{equation*} f'(x)=\frac{1-(1-x)^{d-3/2}}{\sqrt{x}}\geq 0,\quad x\in[0,1]. \end{equation*} Since $f(0)=0$, we obtain \begin{equation} \mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2}) \leq \frac{2\epsilon}{\Beta(1/2,d-1/2)},\quad\epsilon\in[0,1]. \end{equation} The observation $1/\Beta(1/2,d-1/2)\leq \sqrt{d/\pi}$ concludes the proof. \end{proof} \begin{remark} A short calculation leads to \begin{equation*} \mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2}) = \frac{2}{\pi} \Big[ \arcsin(\epsilon) + \epsilon \sum_{k=2}^{d} \frac{4^{k-2}(k-2)!^2 }{(2k-3)(2k-4)!} (1-\epsilon^2)^{k-3/2} \Big].
\end{equation*} One then deduces directly that, for fixed $d$ and small $\epsilon$, the term $\mathcal{I}_{[\epsilon^2]}(\frac{1}{2},d-\frac{1}{2})$ is of the order $\epsilon$. \end{remark} Theorem \ref{th:22}, Corollary \ref{th:single}, and Theorem \ref{th:stab oder so} enable us to determine $z_{\tau(1)},\ldots,z_{\tau(M)}$. The actual parameters $t_{\tau(j)}$ are computed as the principal values of $\log(z_{\tau(j)})$. The coefficients $c_{\tau(1)},\ldots,c_{\tau(M)}$ can be determined by solving the linear system $T=A^*DA$ for $D=\diag(c_{\tau(1)},\ldots,c_{\tau(M)})$ by the least squares method. We have summarized these steps in Algorithm \ref{alg_1}. \begin{algorithm} \caption{Prony's method using the multivariate matrix pencil approach}\label{alg_1} \begin{algorithmic}[1] \State \textbf{input} $f(k)$, $k\in I$. \State Compute the reduced singular value decomposition of $T$. \State Build the matrices $S_1,\ldots,S_d$. \State Choose random $\mu\in\S_\ensuremath{\mathbb{C}}^{d-1}$ and compute a matrix $W$ that diagonalizes $C_\mu$. \State Use $W$ to simultaneously diagonalize $S_1,\ldots,S_d$ and reconstruct $z_{\tau(1)},\ldots,z_{\tau(M)}$. \State Compute $t_{\tau(j)}$ as the principal value of $\log(z_{\tau(j)})$, $j=1,\ldots,M$. \State Solve $\mathrm{argmin}_{c}\,\,\|A^* c- f\|_2$ to recover $c_{\tau(1)},\ldots,c_{\tau(M)}$. \State \textbf{return} $t_{\tau(1)},\ldots,t_{\tau(M)}$ and $c_{\tau(1)},\ldots,c_{\tau(M)}$. \end{algorithmic} \end{algorithm} \section{Application in superresolution microscopy}\label{sec:appl} \subsection{Mathematical model} In fluorescence microscopy, proteins are labelled with a fluorescent marker and stimulated with a laser.
Since proteins are smaller than the resolution limit of the fluorescence microscope, they are modeled as point sources, cf.~\cite{Studer:2012oq}, so that the probe is considered a tempered distribution \begin{equation}\label{eq:dira} G = \sum_{j=1}^M c_j \delta_{t_j}, \end{equation} on $\ensuremath{\mathbb{R}}^d$, where $\{t_j\}_{j=1}^M\subset [0,1)^d$ is associated with the protein locations and $\delta_{t_j}$ denotes the Dirac distribution centered at $t_j$. Let $\mathcal{F}$ denote the Fourier transform on the space of tempered distributions on $\ensuremath{\mathbb{R}}^d$. Then $\mathcal{F}(G)$ is an exponential sum \begin{equation}\label{eq:ft sum} \mathcal{F}(G)=\sum_{j=1}^M c_j \textnormal{e}^{-2\pi i\langle t_j,\cdot\rangle}. \end{equation} The actual measurements $g$ are the convolution of $G$ with some smooth and sufficiently fast decaying function $\varphi$, \begin{equation*} g=G*\varphi = \sum_{j=1}^M c_j \varphi(\cdot-t_j). \end{equation*} Usually, $\varphi$ is modeled as a Gaussian with known parameters determined by the camera system. In order to determine the locations $\{t_j\}_{j=1}^M$ and the contributions $\{c_j\}_{j=1}^M$, suppose we have access to the Fourier transform of the measurements, \begin{equation*} \mathcal{F}(g) = \mathcal{F}(G) \mathcal{F}(\varphi). \end{equation*} Since $\varphi$ is known, let us also assume that we have access to $\mathcal{F}(\varphi)$. If $\varphi$ is a Gaussian, for instance, we know $\mathcal{F}(\varphi)$ analytically. We now look for a sampling set $I\subset \ensuremath{\mathbb{Z}}^d$ on which $\mathcal{F}(\varphi)$ does not vanish, so that we can determine the right-hand side of \begin{equation}\label{eq:form ert} \mathcal{F}(G)(k)=\mathcal{F}(g)(k) / \mathcal{F}(\varphi)(k), \quad k\in I.
\end{equation} Combining \eqref{eq:ft sum} with \eqref{eq:form ert} leads to the sampling problem \eqref{eq:fund prob samp} discussed in the previous sections, i.e., \begin{equation}\label{eq:eq finale} \sum_{j=1}^M c_j \textnormal{e}^{-2\pi i \langle t_j,k\rangle } = f(k),\qquad k\in I, \end{equation} with $f(k):=\mathcal{F}(g)(k) / \mathcal{F}(\varphi)(k)$. The parameters $\{t_j\}_{j=1}^M$ and $\{c_j\}_{j=1}^M$ can now, in principle, be determined by Algorithm \ref{alg_1}. Note that the above derivations in this section have also been used in \cite{PePoTa11} in combination with the univariate Prony's method. In practice, though, we are not able to numerically compute the Fourier transform of $g$ directly, so that the right-hand side of \eqref{eq:eq finale} is not readily available. Aiming at the application of the discrete Fourier transform (DFT), we recognize that sufficient decay of $\varphi$ implies $g\in L^1(\ensuremath{\mathbb{R}}^d)$, so that its periodization \begin{equation*} g_{\per} : = \sum_{l\in\ensuremath{\mathbb{Z}}^d} g(\cdot+l) \end{equation*} converges pointwise almost everywhere to a function $g_{\per}\in L^1(\mathbb{T}^d)$, where $\mathbb{T}^d\simeq [0,1)^d$ is the $d$-dimensional torus. Let $\hat{g}_{\per}(k)$ denote the $k$-th Fourier coefficient of $g_{\per}$. The Poisson summation formula yields \begin{equation*} \mathcal{F}(g)(k) = \hat{g}_{\per}(k),\quad k\in I. \end{equation*} Thus, \eqref{eq:eq finale} can be evaluated by first computing the periodization $g_{\per}$, so that its Fourier coefficients yield \begin{equation}\label{eq:rhs final} \sum_{j=1}^M c_j \textnormal{e}^{-2\pi \mathrm i \langle t_j,k\rangle } = \hat{g}_{\per}(k) / \mathcal{F}(\varphi)(k) ,\qquad k\in I. \end{equation} Numerically, the DFT enables the approximation of the Fourier coefficients $\hat{g}_{\per}(k)$, $k\in I$, from samples of $g_{\per}$. It should be mentioned that all numerical experiments were realized in Python on an Intel~i7 (3\,GHz, 8\,GB RAM) under macOS 10.12.
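As a minimal numerical illustration of \eqref{eq:rhs final} (a 1D sketch; the locations, coefficients, and grid size are made up), the Fourier coefficients $\hat g_{\per}(k)$ can be approximated by the DFT of grid samples and the resulting ratio compared with the exact exponential sum; for $\varphi(x)=\mathrm{e}^{-bx^2}$ we have $\mathcal F(\varphi)(k)=\sqrt{\pi/b}\,\mathrm e^{-\pi^2 k^2/b}$:

```python
import numpy as np

b = 150.0
t = np.array([0.40, 0.60])        # hypothetical locations in [0, 1)
c = np.array([1.0, 0.8])          # hypothetical coefficients
N = 64                            # samples per period (pixel resolution)
x = np.arange(N) / N

# samples of the periodization g_per: wrap the Gaussian over neighbouring periods
g_per = sum(cj * np.exp(-b * (x[None, :] - tj + np.arange(-2, 3)[:, None]) ** 2).sum(0)
            for cj, tj in zip(c, t))

ghat = np.fft.fft(g_per) / N      # DFT approximation of the Fourier coefficients
k = np.arange(-4, 5)
phihat = np.sqrt(np.pi / b) * np.exp(-(np.pi * k) ** 2 / b)  # F(phi)(k), analytic
f_dft = ghat[k % N] / phihat      # approximate right-hand side of the sampling problem

f_exact = (c[None, :] * np.exp(-2j * np.pi * np.outer(k, t))).sum(1)
print(np.abs(f_dft - f_exact).max())   # small: the aliasing error is negligible here
```

Because $\hat g_{\per}$ decays like a Gaussian, the aliasing error of the DFT approximation is far below the noise level of any realistic measurement.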
\subsection{Numerical results on synthetic data} In our numerical experiments, we apply the DFT to samples of $g_{\per}$ in order to approximate its Fourier coefficients. The sampling rate of $g$, and hence of $g_{\per}$, is determined by the pixel resolution. For both synthetic and experimental fluorescence microscopy data, we choose $\varphi(\cdot) = \mathrm e^{-b \|\cdot\|^2}$ with the parameter $b$ adjusted to the camera system. Since $\varphi$ is a Gaussian, the values $\mathcal{F}(\varphi)$ are available in analytic form. We first test our analysis on synthetic data in Figure \ref{fig:1} with \begin{align*} t_1 &= \left(\tfrac{2}{5}, \tfrac{2}{5}\right), &c_1 &= 1,& b&=150,\\ t_2 &= \left(\tfrac{2}{5}, \tfrac{3}{5}\right), & c_2 &= 1, \\ t_3 &= \left(\tfrac{3}{5}, \tfrac{2}{5}\right), & c_3& = 1. \end{align*} The measurements $g$ are exact in a first experiment and corrupted by additive Gaussian noise with a signal-to-noise ratio of $\mathrm{SNR} = 2.554$ in a second one, cf.~Figure \ref{fig:1}. For our computations we choose, if not stated otherwise, $n=4$, so that $I=\{-4,\ldots,5\}^2$ and $T$ is an $N\times N$ Toeplitz matrix with $N = 25$. These moderate matrix dimensions keep our methodology numerically cheap. By examining the significant drop in the singular values of $T$, we determine $M=3$ for the synthetic data. The reconstructed locations $\tilde{t}_1,\tilde{t}_2,\tilde{t}_3$ satisfy $\|t_j- \tilde{t}_j\|\leq 1.88\cdot 10^{-3}$, for $j=1,2,3$, in the noisy regime, and coincide with the correct locations up to machine precision in the noise-free regime, see Figure \ref{fig:1}. It is important to note that our approach does not require the parameters $\{t_j\}_{j=1}^M$ to lie on the pixel grid. The pixel grid is only used to approximate $\hat{g}_{\per}(k)$, $k\in I$, by the DFT to determine the right-hand side in \eqref{eq:rhs final}.
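The model-order selection step can be sketched as follows. As a stand-in for the paper's construction of $T$, we build the $25\times 25$ Toeplitz matrix $T_{p,q}=f(p-q)$ on the grid $\{0,\ldots,4\}^2$, which has rank $M$, using the synthetic locations and coefficients from above:

```python
import numpy as np

# synthetic locations and coefficients of Figure 1
t = np.array([[0.4, 0.4], [0.4, 0.6], [0.6, 0.4]])
c = np.array([1.0, 1.0, 1.0])

def f(k):
    # samples of the exponential sum at integer frequencies k (last axis has length 2)
    return (c * np.exp(-2j * np.pi * (k @ t.T))).sum(-1)

grid = np.array([(p1, p2) for p1 in range(5) for p2 in range(5)])
T = f(grid[:, None, :] - grid[None, :, :])     # 25 x 25 Toeplitz-structured, rank M
s = np.linalg.svd(T, compute_uv=False)
M_est = int((s > 1e-8 * s[0]).sum())           # hand-picked relative threshold
print(M_est)                                   # 3
```

The first three singular values are of the order of the matrix size, while the remaining ones are numerically zero, so the drop is unambiguous in the noise-free case.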
\begin{figure} \subfigure[Blue stars indicate the three identified locations within noiseless synthetic data.]{ \includegraphics[width=.45\textwidth]{ThreeBumps.pdf} \label{fig:EchteDaten}}\hfill \subfigure[Good location identification within synthetic data corrupted by additive Gaussian noise with $\mathrm{SNR}=2.554$.]{ \includegraphics[width=.45\textwidth]{ThreeNoisyBumps.pdf} \label{fig:ModellAusDaten} } \caption{In noiseless synthetic data and in the presence of additive Gaussian noise in the spatial domain, our proposed algorithm manages to find the locations $t_1, t_2, t_3$ with reasonable accuracy.} \label{fig:1} \end{figure} Indeed, the locations that we compute do not lie on the pixel grid, so we are identifying locations on the subpixel level. This is an important advantage gained by performing our computations in the Fourier domain. For illustration purposes we consider a one-dimensional scenario in Figure \ref{fig:SubpixelNeed}: it shows the true locations $t_1 = 0.44$, $t_2 = 0.56$ of two one-dimensional Gaussians compared with the local maxima of their sum. Even though this effect is negligible when $\|t_1 - t_2\|_2 \gg 0$, it entails miscalculations when the positions $t_1, t_2$ of two proteins are close to each other. Consider a movie in which each frame is a picture as in Figure \ref{fig:ModellAusDaten} and the found locations $t_j$ are used to compute the movement speed of each protein. If this effect is not taken into account, one would falsely infer an accelerated attraction and a longer contact phase of two approaching proteins. \begin{figure} \includegraphics[width=.75\textwidth]{NeedForSubpixelLocationCrop.pdf} \caption{The red crosses show the true locations $t_1= 0.44$, $t_2 = 0.56$ of two one-dimensional Gaussians, each depicted as a dotted line.
The red bars, however, mark the local maxima of the sum of these Gaussians, which is shown as a solid line.} \label{fig:SubpixelNeed} \end{figure} To illustrate potential numerical issues when the measurements are corrupted by noise, i.e., when $\tilde{g}:=g+\varepsilon$ is measured in place of $g$, we show in Figure \ref{fig:NoiseVsNoNoise} the real parts of $\hat{g}_{\per}(k)$ and $\hat{\tilde{g}}_{\per}(k)$, approximated by the DFT, and of $\mathcal{F}(\varphi)(k)=\hat{\varphi}_{\per}(k)$, as well as the respective ratios on the line $k_1=0$, $k_2=-15,\ldots,15$. Even though we are dealing with images of size $31\times 31$ pixels, the frequency data of the noisy ratio $\hat{\tilde{g}}_{\per}(k)/\hat{\varphi}_{\per}(k)$ seem reliable only close to the center. While $\hat{\varphi}_{\per}(k)$ decays with growing $k$, the noise keeps $\hat{\tilde{g}}_{\per}(k)$ from decaying, so that the ratio becomes unreasonably large. Therefore, we must restrict $n$ depending on the noise level, and $n=4$ seems to work in our synthetic data with fixed $\mathrm{SNR}$ as well as in our fluorescence microscopy data. Figure \ref{fig:WeitTeilen} shows the ratios $\hat{g}_{\per}(k)/\hat{\varphi}_{\per}(k)$ for $k\in \{-4,\ldots,5\}^2$. \begin{figure} \centering \includegraphics[width=0.6\textwidth]{NoiseVsNoNoise.pdf} \caption{The horizontal axis corresponds to $k_1=0$ and $k_2=-15,\ldots,15$.
The decay of the Fourier coefficients $\hat{\tilde{g}}_{\per}$ stagnates in the presence of noise, so that the ratio $\hat{\tilde{g}}_{\per}(k)/\hat{\varphi}_{\per}(k)$ grows unreasonably large away from the center.} \label{fig:NoiseVsNoNoise} \end{figure} \begin{figure} \subfigure[Real part]{ \includegraphics[width=0.45\textwidth]{GDurchPhiReal.pdf} } \subfigure[Imaginary part]{ \includegraphics[width=0.45\textwidth]{GDurchPhiImaginaer.pdf} } \caption{$\hat{g}_{\per}(k)/\hat{\varphi}_{\per}(k)$ on $k\in \{-4,\ldots,5\}^2$.} \label{fig:WeitTeilen} \end{figure} Theorem \ref{th:22} requires $n$ to be larger if the minimal separation distance \begin{equation*} q:=\min_{j\neq i}\|z_j-z_i\| \end{equation*} becomes smaller. In Figure \ref{fig:EinflussVonQ} we illustrate this relation by two examples with noisy synthetic data, one for $q_1 = 0.283$ and the other for $q_2 = 0.057$. For $q_1$, the locations are recovered reasonably well for both $n=1$ and $n=4$. For $q_2$, the choice $n=1$ fails to recover the locations that are close to each other, but $n=4$ is successful. \begin{figure} \subfigure[$\min_{i\neq j}\|z_i-z_j\|=0.283$: locations are recovered with error margins $\leq 7.1\cdot 10^{-3}$ and $\leq 2.8\cdot 10^{-3}$ for $n=1$ and $n=4$, respectively.]{ \includegraphics[width=0.45\textwidth]{QGross.pdf} } \hfill \subfigure[$\min_{i\neq j}\|z_i-z_j\|= 0.057$: $n=1$ fails. Locations are correctly recovered for $n=4$ with error $\leq 1\cdot 10^{-2}$.]{ \includegraphics[width=0.45\textwidth]{QKlein.pdf} } \caption{Noisy synthetic data with $\mathrm{SNR}=2.554$. The light blue circles show the true locations $t_1, t_2, t_3$. The blue stars show the reconstruction with $n=1$, the magenta crosses show the reconstruction with $n=4$. In accordance with the ``spirit'' of the requirements on $n$ in Theorem \ref{th:22}, well-separated true locations allow for small $n$.
If locations are not well-separated, then $n=1$ fails but the choice $n=4$ enables reconstruction.} \label{fig:EinflussVonQ} \end{figure} \subsection{Numerical results on fluorescence microscopy data} The cell-surface receptor IFNAR2 (type I interferon beta-subunit) of living cells was labelled with biofunctionalized quantum dots (QD605, Cat. No. Q21501MP, Invitrogen \cite{YoWiRiBeLiPi13}). These nanoparticles are small in size (hydrodynamic radius of 15--21 nm) but show an extraordinarily high fluorescence signal. Single-molecule imaging was performed on an inverted TIRF (total internal reflection fluorescence) microscope (Olympus IX71) with a scientific grade digital camera (Hamamatsu ORCA Flash 4.0). After optical magnification (150x TIRF objective UAPO; NA 1.45; Olympus) and pixel-binning, the final pixel size in the image plane was calculated to be 87 nm. To achieve a high signal-to-noise ratio, the signal integration time was set to 32 ms. The decay of the singular values of $T$ with $n=4$ for the experimental fluorescence microscopy data in Figure \ref{fig:real 1}(a) suggests $M=8$. This yields $C_\mu,S_1,S_2 \in\mathbb C^{8\times 8}$, and our algorithm finds the parameters $t_j, c_j$, $j=1,\ldots,8$, in less than a millisecond. Note in Figure \ref{fig:real 1}(b) that our algorithm, somewhat surprisingly, successfully identifies proteins at the boundary of the image, even though one would expect artifacts due to periodization issues. However, those identified locations close to the boundary are not very reliable and will need a post- or pre-processing step in a more elaborate analysis in practice.
\begin{figure} \subfigure[]{ \includegraphics[width=0.45\textwidth]{Frame162.pdf} } \hfill \subfigure[]{ \includegraphics[width=0.45\textwidth]{Frame92.pdf} } \caption{Experimental data with blue stars marking identified locations.} \label{fig:real 1} \end{figure} \section*{Conclusion} We proposed an algorithm that finds multivariate frequencies from structured samples of a finite sum of multivariate exponentials. It is a multivariate generalization of the matrix pencil method and is based on the simultaneous diagonalization of a pencil of non-normal matrices. We also studied a method to simultaneously diagonalize the occurring non-normal matrices via random linear combinations, and quantified the influence of the randomness in relation to the minimal separation of the exponential parameters. We successfully tested our algorithm on experimental data from fluorescence microscopy. \section*{Acknowledgements} The authors have been partially funded by WWTF through project VRG12-009, by DAAD through P.R.I.M.E.~57338904, by FWF project P30148, and by DFG-SFB944.
\section{Introduction} The second Gaia data release (Gaia DR2) contains astrometric data for 1.693 billion sources from magnitude 3 to 21, based on the observations of the European Space Agency Gaia satellite during the 22-month period between 25 July 2014 and 23 May 2016 \citep{Lindegren2018Gaia}, hereafter cited as the Gaia DR2 astrometry paper. Among all the sources with a full 5-parameter astrometric solution, DR2 provides more than 550\,000 quasars, obtained from a positional cross-match with the ICRF3-prototype and AllWISE AGN catalogues. These quasars define a kinematically non-rotating reference frame in the optical domain (the celestial reference frame of Gaia, or Gaia-CRF2) \citep{Mignard2018Gaia}, hereafter denoted as the Gaia-CRF2 paper. Quasars (or QSOs) are extremely distant and small in apparent size. They are essential for absolute astrometry in the sense that they exhibit no significant parallax or proper motion, which makes them ideal objects for investigating the properties of an astrometric solution. Besides the AllWISE AGN catalogue \citep{Secrest2015Identification}, other catalogues can enlarge the sample of quasars in Gaia DR2, such as the Large Quasar Astrometric Catalogue (LQAC) \citep{Souchay2015The}, the spectroscopically confirmed quasars in the SDSS-DR14 Quasar Catalog \citep{P2017The} and the spectroscopically confirmed quasars in LAMOST DR5 \citep{Cui2012LAmost}; all of these have been cross-matched with DR2 sources and collected in a comprehensive catalogue named the ``Known Quasars Catalog for Gaia'' (KQCG) \citep{Liao2018KQCG}. The aim of this paper is to make an independent assessment of the astrometry of quasars in DR2. After describing the quasar selection process in section 2, we address the global parallax and proper motion bias in section 3.
In section 4, we discuss the analysis of the proper motion field; the scalar spherical harmonics analysis of parallaxes is presented in section 5, and section 6 is devoted to the comparison between ICRF2 sources and their counterparts in Gaia DR2. The last section reports our conclusions. \section{Data Used} Gaia DR2 includes 555934 quasars matched to the AllWISE AGN catalogue, plus 2820 sources matched to the ICRF3-prototype \citep{Lindegren2018Gaia,Mignard2018Gaia}. The union of these two sets makes a total of 556869 sources, denoted as GCRF2. Among these, 485985 sources matched to the AllWISE AGN catalogue are used to \emph{define} a kinematically non-rotating reference frame, and are identified in the Gaia Archive by the field $frame\_rotator\_object\_type=3$ (Type3); the 2820 sources matched to the ICRF3-prototype and used to align the GCRF2 axes with the radio ICRF are indicated by $frame\_rotator\_object\_type=2$ (Type2). To maximize the size of our quasar sample, we cross-matched Gaia DR2 with the compilation of SDSS-DR14Q, LQAC3 and LAMOST DR5, which are known to contain a large number of reliable QSOs/AGNs. For the final selection, we adopt the joint conditions in Equation (14) of the Gaia DR2 astrometry paper in order to reduce the risk of stellar contamination, as reported below: \begin{itemize} \item[(i)] astrometric$\_$matched$\_$observations $\ge$ 8, \item[(ii)] astrometric$\_$params$\_$solved=31, \item[(iii)] $\left|(\omega+0.029\,\mathrm{mas})/\sigma_{\omega}\right|<5$, \item[(iv)] $(\mu_{\alpha^{\ast}}/\sigma_{\mu\alpha^{\ast}})^2+(\mu_{\delta}/\sigma_{\mu\delta})^2<25$, \item[(v)] $\left|\sin b\right|>0.1$, \item[(vi)] $\rho<(2~\mathrm{arcsec})\times\left|\sin b\right|$, \end{itemize} where $\rho$ is the radius used for the positional matching and $b$ is the Galactic latitude. With these criteria, we found 208743 new quasars (KQCG) in Gaia DR2. About 87$\%$ of these quasars are located in the northern hemisphere.
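The cuts (i)--(vi) translate directly into boolean masks. A minimal sketch with a hypothetical catalogue (all column names are made up; parallax in mas, $b$ in degrees, $\rho$ in arcsec):

```python
import numpy as np

def select_quasars(cat):
    """Apply the selection (i)-(vi); `cat` is a dict of numpy arrays."""
    sin_b = np.abs(np.sin(np.radians(cat["b"])))          # |sin b|, b in degrees
    return (
        (cat["matched_obs"] >= 8)                                            # (i)
        & (cat["params_solved"] == 31)                                       # (ii)
        & (np.abs((cat["parallax"] + 0.029) / cat["parallax_error"]) < 5)    # (iii)
        & ((cat["pmra"] / cat["pmra_error"]) ** 2
           + (cat["pmdec"] / cat["pmdec_error"]) ** 2 < 25)                  # (iv)
        & (sin_b > 0.1)                                                      # (v)
        & (cat["rho"] < 2.0 * sin_b)                                         # (vi)
    )

# tiny synthetic catalogue: first row passes all cuts, second fails (ii)
cat = {
    "matched_obs": np.array([10, 10]),
    "params_solved": np.array([31, 3]),
    "parallax": np.array([-0.03, -0.03]),
    "parallax_error": np.array([0.5, 0.5]),
    "pmra": np.array([1.0, 1.0]), "pmra_error": np.array([1.0, 1.0]),
    "pmdec": np.array([1.0, 1.0]), "pmdec_error": np.array([1.0, 1.0]),
    "b": np.array([30.0, 30.0]),
    "rho": np.array([0.5, 0.5]),
}
print(select_quasars(cat))
```

The same masks apply unchanged to an astropy Table or pandas DataFrame holding the cross-matched catalogue.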
The sky density distribution of the KQCG catalogue is depicted in Figure \ref{Figkqcg-non-define}; Figure \ref{FigGmag_kqcg} shows the histograms of the G-magnitude distribution for the GCRF2 and KQCG samples, indicating that our additional quasars populate the faint end; most of them are fainter than $G=19$. \begin{figure} \centering \includegraphics[width=6cm]{Fig1.jpg} \caption{The sky distribution of KQCG. The map shows the sky density with cells of approximately 0.84 $\mathrm{deg}^{2}$, using the Hammer-Aitoff projection in Galactic coordinates with zero longitude at the centre and increasing longitude from right to left.} \label{Figkqcg-non-define} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig2.jpg} \caption{G magnitude distribution of the Gaia-CRF2 sources and the KQCG sources.} \label{FigGmag_kqcg} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig3.jpg} \caption{Parallax distribution for the KQCG quasars. The outer (red) curve is the whole KQCG sample; the inner (blue) curve is the subsample of 143806 sources with $\sigma_{\omega}<1$ mas.} \label{FigParallax_kqcg} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig4.jpg} \caption{Proper-motion distributions for KQCG. The red curve shows the proper motions in right ascension $\mu_{\alpha\ast}$ and the blue curve the proper motions in declination $\mu_{\delta}$.} \label{FigPM_kqcg} \end{figure} \begin{figure} \centering \includegraphics[width=.30\textwidth]{Fig5-1.jpg} \includegraphics[width=.30\textwidth]{Fig5-2.jpg} \caption{Parallaxes of the KQCG quasars plotted against Gaia G magnitude (top) and colour (bottom).
The yellow dots are the parallax data points, the blue lines are the parallax medians $\omega_{med}$ of each running bin.} \label{FigParallax_bisa_kqcg} \end{figure} \section{Global bias} \subsection{Parallax zero point} Figure \ref{FigParallax_kqcg} shows the distribution of parallaxes for the complete KQCG (red curve) and the high-precision subset (blue curve). The mean and median parallax of the whole sample are $-0.0330$ mas and $-0.0278$ mas, respectively; the corresponding values for the high-precision subset are $-0.0270$ mas and $-0.0264$ mas. Table \ref{parallax_bias} gives the different averages calculated for each data sample. The weighted mean parallax is consistent between the different subsets, settling at about $-0.029$ mas. However, the mean parallax for Type2 is noticeably smaller, offset by about $0.02$ mas from the other samples. Plots of parallax versus magnitude and effective wavenumber, the latter closely related to the source colour, are shown in Figure \ref{FigParallax_bisa_kqcg}, which reveals the presence of trends in the systematic parallax error, with an excursion of $\sim$0.020 mas over the range covered by the data. \begin{table} \centering \caption{The mean and median parallax (in mas) of different quasar subsets. The formal parallax error is used as the weight in calculating the weighted average.} \label{parallax_bias} \begin{tabular}{ccccc} \hline \hline &&&&\\ Subset&N&Mean&Weighted Mean &Median\\ \hline &&&&\\ KQCG & 208743 & -0.0330 & -0.0291 & -0.0278 \\ GCRF2 & 556869 & -0.0308 & -0.0292 & -0.0287 \\ Type2 & 2843 & -0.0511 & -0.0382 & -0.0351 \\ Type3 & 485985 & -0.0284 & -0.0283 & -0.0281 \\ \hline \end{tabular} \end{table} \subsection{Proper motion bias} \label{pmbias} Besides parallaxes, the proper motions of quasars are also nominally zero (the Galactic acceleration effect is neglected here).
Figure \ref{FigPM_kqcg} shows the distribution of the proper motions for the KQCG sample; Table \ref{men_med_pm} gives the mean and median proper motion of the different subsets. For the GCRF2 sample we obtain a mean and median of $+1.8$ $\mu as/yr$ and $-1.5$ $\mu as/yr$ in $\mu_{\alpha\ast}$, which are near zero; however, the mean and median in $\mu_{\delta}$ rise to $+12.2$ $\mu as/yr$ and $+11.7$ $\mu as/yr$. For the KQCG sample, the corresponding values are $-8.7$ $\mu as/yr$ and $-7.5$ $\mu as/yr$ in $\mu_{\alpha\ast}$, and $+8.3$ $\mu as/yr$ and $+11.4$ $\mu as/yr$ in $\mu_{\delta}$, respectively. Looking at the Type2 sample, the medians are roughly $+10$ $\mu as/yr$ in both components. If we take weighted averages based on the formal errors, only the KQCG sample has a significant bias of about $-9.1$ $\mu as/yr$ in $\mu_{\alpha\ast}$, while there is a common bias of about $+10$ $\mu as/yr$ in declination for all subsets. The distributions of proper motion versus magnitude and effective wavenumber for KQCG and GCRF2 are plotted in Figures \ref{kqcgpm} and \ref{gcrfpm}. In the second panel of Figure \ref{gcrfpm}, the median proper motion $\mu_{\alpha\ast}$ changes from positive to negative around the effective wavenumber $\nu_{eff}\sim$ 1.58 $\mu m^{-1}$; this result seems in agreement with the findings of the Gaia DR2 astrometry paper (see their Figure 3). Interestingly, the KQCG sample does not clearly follow the same trend as a function of the effective wavenumber, suggesting either a different correlation between magnitude and colour for these quasars, or a more complex colour dependence of the astrometric calibration for fainter objects. \begin{table*} \centering \caption{The mean and median proper motion of different quasar subsets.
The proper motion error is used as weight.} \label{men_med_pm} \begin{tabular}{cccccccc} \hline \hline \multirow{2}{*}{Subset} & \multirow{2}{*}{N} & \multicolumn{3}{c}{$\mu_{\alpha\ast}$$(\mu as/yr)$} & \multicolumn{3}{c}{$\mu_{\delta}$$(\mu as/yr)$} \\ & & Mean &Weighted Average & Median & Mean &Weighted Average & Median \\ \hline &&&&&\\ KQCG & 208743 & -8.7 &-9.1 & -7.5 & +8.3 &+11.1 & +11.4 \\ GCRF2 & 556869 & +1.8 &-0.7 & -1.5 &+12.2 & +12.3 & +11.7 \\ Type2 & 2843 & +16.1 &+2.9 & +10.5 & +19.3 &+14.7 & +8.1 \\ Type3 & 485985 & +0.3 &-1.3 & -1.4 & +11.9 &+11.8 & +11.7 \\ \hline \end{tabular} \end{table*} \begin{figure*} \centering \includegraphics[width=4.15cm]{Fig6-1.jpg} \includegraphics[width=4.10cm]{Fig6-2.jpg} \includegraphics[width=4.15cm]{Fig6-3.jpg} \includegraphics[width=4.10cm]{Fig6-4.jpg} \caption{Proper motions of the KQCG plotted against the Gaia G magnitude and colour (the first and second panel from the left are for $\mu_{\alpha\ast}$, and the third and fourth are for $\mu_{\delta}$). The yellow dots are the proper motion data. The green line is the mean proper motion, while the red lines are the proper motion medians of each running-bin.} \label{kqcgpm} \end{figure*} \begin{figure*} \centering \includegraphics[width=4.15cm]{Fig7-1.jpg} \includegraphics[width=4.2cm]{Fig7-2.jpg} \includegraphics[width=4.2cm]{Fig7-3.jpg} \includegraphics[width=4.3cm]{Fig7-4.jpg} \caption{Proper motions of the GCRF2 plotted against the Gaia G magnitude and colour (the first and second panel from the left are for $\mu_{\alpha\ast}$, and the third and fourth are for $\mu_{\delta}$). The yellow dots are the proper motion data. The green line is the mean proper motion, while the red lines are the proper motion medians of each running-bin.} \label{gcrfpm} \end{figure*} \section{Analysis of the proper motion field}\label{pmvsh} In this section, we perform the vector spherical harmonics (VSH) analysis of different quasar samples. 
The results of the VSH analysis are listed in Table \ref{vshresults}. After adding the KQCG sample to GCRF2 (KQCG plus GCRF2, denoted as KG), the rotation and glide do not change very much between the fits with maximum degree $l_{max}=1$ and $l_{max}=10$, and agree well with the results for the GCRF2 sample. Since the quasars in KQCG are mostly fainter than $G=19$ and are not uniformly distributed, we also compare two subsamples ($19\leq G<20$ and $G\geq 20$) of KG and GCRF2. The results agree with each other, which indicates the consistency of the astrometric solutions. As pointed out in Section \ref{pmbias}, the median proper motion $\mu_{\alpha\ast}$ changes from positive to negative around the effective wavenumber $\nu_{eff}\sim 1.58$ $\mu m^{-1}$ for the GCRF2 sample. The VSH analysis shows that the two quasar subsets ($\nu_{eff}\geq 1.58$ and $\nu_{eff}<1.58$) have a similar glide but a very different rotation (mainly in the $x$ and $y$ components). The glide estimates agree among the different subsets, with a typical value of $(-9, +5, +12)\pm1$ $\mu as/yr$. If we subtract the global proper motion bias in both components before performing the VSH analysis, the typical glide becomes $(-9, +5, -2)\pm1$ $\mu as/yr$; see the rows marked with $\ast$ in Table \ref{vshresults}. \begin{table*} \centering \caption{VSH analysis of the proper motion field of different quasar subsets in Gaia DR2. In the rows marked with $\ast$, the mean proper motion is subtracted before the VSH analysis is performed. All solutions are weighted.
"-" means no estimation.} \label{vshresults} \begin{tabular}{ccccccccc} \hline \hline & & & & && & & \\ \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Source\\ Subset\end{tabular}} & \multirow{2}{*}{$l_{max}$} & \multirow{2}{*}{N} & \multicolumn{3}{c}{Rotation[$\mu$as/yr]} & \multicolumn{3}{c}{Glide[$\mu$as/yr]} \\ & && x & y & z & x & y & z \\ \hline & & & & & & & & \\ GCRF2& 5 &556869& -5.5$\pm$1.1 & -7.4$\pm$0.9 & 5.6$\pm$1.2 & -9.2$\pm$1.2 & 4.7$\pm$1.0 & 11.6$\pm$1.0 \\ $\ast$&5 &556869& -5.5$\pm$1.1 & -7.4$\pm$0.9 & 3.5$\pm$1.2 & -9.1$\pm$1.2 & 4.8$\pm$1.0 & -2.9$\pm$1.0 \\ 19$\leq$G<20&5&257446&10.9$\pm$2.3 & 3.5$\pm$1.8 & 6.7$\pm$2.5 &-7.8$\pm$2.5 &1.9$\pm$2.0 &16.8$\pm$2.0 \\ G$\geq$20&5&148910&3.3$\pm$6.0 & 27.5$\pm$5.4 & 5.4$\pm$7.2 &-15.3$\pm$6.7 &12.6$\pm$5.7 &7.6$\pm$5.7 \\ & & & & & & & & \\ $\nu_{eff}$ $\geq$1.58&5&416380&-7.3$\pm$1.3 & -10.0$\pm$1.0 & 5.9$\pm$1.4 &-8.6$\pm$1.4 &4.0$\pm$1.1 &12.0$\pm$1.1 \\ $\nu_{eff}$<1.58&5&140489&6.9$\pm$2.5 & 10.6$\pm$2.5 & 5.5$\pm$3.0 &-15.1$\pm$2.8 &7.2$\pm$2.8 &13.4$\pm$2.5 \\ \hline & & & & && & & \\ KQCG & 1 &208743&9.6$\pm$2.6 &7.6$\pm$1.9 &-16.3$\pm$2.6 &- &- &- \\ $\nu_{eff}$ $\geq$1.58& 1 & 185360&7.8$\pm$2.7 &6.7$\pm$2.0 &-16.7$\pm$2.7 &- &- &- \\ $\nu_{eff}$<1.58&1&22526 &25.5$\pm$8.1 &18.3$\pm$6.4&-10.7$\pm$8.3 &- & - &-\\ \hline & & & & && & & \\ \multirow{3}{*}{KQCG+GCRF2} & 1 &765612 &-2.2$\pm$0.8 & -1.2$\pm$0.7 & -2.0$\pm$0.8 & -6.3$\pm$0.8 & 4.7$\pm$0.7 & 11.8$\pm$0.7 \\ & 5 &765612& -4.5$\pm$1.1 &-6.8$\pm$0.9 & 5.2$\pm$1.2 & -9.1$\pm$1.2 & 4.5$\pm$1.0 & 11.7$\pm$1.0 \\ & 10 &765612& -4.6$\pm$1.5 &-7.6$\pm$1.2 & 6.7$\pm$1.5 & -11.7$\pm$1.6 & 5.0$\pm$1.2 & 13.2$\pm$1.3 \\ $\ast$&5&765612&-4.5$\pm$1.1&-6.8$\pm$0.9&6.5$\pm$1.2&-9.0$\pm$1.2&4.6$\pm$1.0&-1.5$\pm$0.9\\ & & & && & & & \\ \multirow{1}{*}{19$\leq$G<20} & 5 & 329900 &11.2$\pm$2.2 &1.4$\pm$1.7 &6.6$\pm$2.4 &-8.4$\pm$2.4 &1.8$\pm$1.9& 15.7$\pm$1.9\\ \multirow{1}{*}{G$\geq$20} & 5 &273836 &5.1$\pm$5.6 &23.3$\pm$4.6 &9.0$\pm$6.1 &-15.6$\pm$6.2 
&5.4$\pm$4.8 &6.7$\pm$4.9 \\ \hline & & & & && & & \\ \multirow{2}{*}{Type3} & 1 &485985&-3.8$\pm$0.8 &-3.2$\pm$0.7 &-0.9$\pm$0.9 &-6.9$\pm$0.8 &4.6$\pm$0.8 &11.5$\pm$0.8 \\ & 5 & 485985&-5.0$\pm$1.1 &-8.4$\pm$0.9 &5.6$\pm$1.2 &-10.0$\pm$1.2 &4.9$\pm$1.0 &10.8$\pm$1.0 \\ $\ast$&5&485985 &-5.0$\pm$1.1 &-8.4$\pm$0.9&5.2$\pm$1.2 &-9.9$\pm$1.2 & 5.0$\pm$1.0 & -3.3$\pm$1.0\\ && & && & & & \\ \multirow{2}{*}{Type2} & 1 &2843& -25.0$\pm$6.2 &-1.5$\pm$5.8 & 2.0$\pm$6.6 & -8.8$\pm$6.3 & -1.1$\pm$6.2 & 24.7$\pm$5.5 \\ & 5 &2843& -28.1$\pm$7.8 & -2.8$\pm$7.1 & 5.0$\pm$8.6 & -9.1$\pm$8.7 & 8.0$\pm$7.6 & 20.0$\pm$7.0 \\ $\ast$&5&2843 &-28.0$\pm$7.8 &-2.9$\pm$7.1&-14.1$\pm$8.6 &-9.0$\pm$8.7 & 7.8$\pm$7.6 & -2.8$\pm$7.0\\ \hline \end{tabular} \end{table*} \begin{table*} \centering \caption{Global rotation of different quasar subsets in proper motions. All solutions are weighted.} \label{rotation_result} \begin{tabular}{ccccc} \hline \hline Subset & N& \begin{tabular}[c]{@{}c@{}}$w_{X}$ \\ ($\mu$as/yr)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$w_{Y}$\\ ($\mu$as/yr)\end{tabular} & \begin{tabular}[c]{@{}c@{}}$w_{Z}$\\ ($\mu$as/yr)\end{tabular} \\ \hline &&&\\ KQCG+GCRF2 &765612& -2.1$\pm$0.8&-0.8$\pm$0.7& -2.4$\pm$0.8 \\ North & 465093 &-3.4$\pm$1.0 & -2.2$\pm$0.9 & -7.3$\pm$1.2 \\ South & 300519 & 0.0$\pm$1.1 & 0.9$\pm$1.0 & 3.0$ \pm$1.2 \\ \hline &&&&\\ GCRF2 &556869& -3.1$\pm$0.8&-1.9$\pm$0.7& -1.0$\pm$0.9 \\ North & 285806 &-5.6$\pm$1.1 & -4.5$\pm$1.0 & -5.2$\pm$1.3 \\ South & 271063 & -0.3$\pm$1.2 & 0.8$\pm$1.0 & 3.0$ \pm$1.2 \\ \hline &&&&\\ Type3 &485985 &-3.3$\pm$0.8& -2.8$\pm$0.7 & -0.9$\pm$0.9 \\ North &247999& -5.7$\pm$1.1& -5.5$\pm$1.0 & -5.1$\pm$1.3 \\ South &237986& -0.6$\pm$1.2 & -0.1$\pm$1.0 & 2.9$ \pm$1.3 \\ \hline &&&&\\ Type2 &2843& -23.1$\pm$5.8 & 2.3$\pm$5.4 & 2.7$\pm$5.6 \\ North & 1635&-25.1$\pm$7.1 & -6.9$\pm$6.5 & 2.6$\pm$8.5 \\ South & 1208& -17.8$\pm$10.3& 9.0$\pm$9.7 & 1.5$ \pm$10.7 \\ \hline \end{tabular} \end{table*} We also tried to fit a pure 
rotation to the proper motions, using the following equations \citep{Mignard2012}: \begin{equation} \begin{array}{l} \mu_{\alpha\ast} = -w_{X}\cos\alpha\sin\delta-w_{Y}\sin\alpha\sin\delta+w_{Z}\cos\delta\\ \mu_{\delta} = +w_{X}\sin\alpha-w_{Y}\cos\alpha \end{array} \label{rotation} \end{equation} where $w_{X}$, $w_{Y}$, and $w_{Z}$ are the three spin rates of the proper motion field. We apply this fit to further investigate the spin rate of different quasar subsets in the northern and southern hemispheres. The results are shown in Table \ref{rotation_result}. For the Type2 quasars, no significant spin difference between the two hemispheres is found. However, for the other quasar subsets, the spin rate is clearly above the statistical noise in the northern hemisphere, but negligible in the southern one; this feature could be explained by a north/south dichotomy in the magnitude and color distribution of the fitted quasars, or by a global positional rotation between the northern and southern subsets inducing a rotation in the proper motion field. \section{The scalar spherical harmonics expansion of parallaxes} The parallaxes of quasars can be treated as parallax residuals, and can be seen as the radial part of spatial position differences on the celestial sphere. Therefore, they represent a scalar field on the surface of the sphere that can be expanded in terms of scalar spherical harmonics (SSH) as follows \citep{bucciarelli2011}: \begin{equation} \Delta\pi=V_{\pi}(\alpha,\delta)=\sum_{l}\sum_{m=-l}^{l}c_{lm}Y_{lm}(\alpha,\delta) \label{ssh_1} \end{equation} where $Y_{lm}$ are the standard spherical harmonic functions defined here by the following sign convention: \begin{equation} Y_{lm}=(-1)^m\sqrt{\frac{2l+1}{4\pi}\frac{(l-m)!}{(l+m)!}}P_{lm}(\sin\delta)e^{im\alpha} \label{ylm} \end{equation} for $m\geq0$, and we have $Y_{l,-m}(\alpha,\delta)=(-1)^mY_{lm}^{\ast}(\alpha,\delta)$ for $m <0$. The ${\ast}$ denotes complex conjugation, and $P_{lm}(x)$ are the associated Legendre polynomials. 
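As an implementation aside, the pure-rotation model of Eq. (\ref{rotation}) reduces to a weighted linear least-squares problem in $(w_X, w_Y, w_Z)$. The following numpy sketch is our own illustration (function and variable names are hypothetical, not pipeline code):

```python
import numpy as np

def fit_rotation(alpha, delta, pm_ra, pm_dec, sig_ra, sig_dec):
    """Weighted least-squares fit of a pure rotation (w_X, w_Y, w_Z)
    to a proper-motion field; all angles in radians."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    cd, sd = np.cos(delta), np.sin(delta)
    # one pair of design-matrix rows (mu_alpha*, mu_delta) per source
    A = np.vstack([
        np.column_stack([-ca * sd, -sa * sd, cd]),
        np.column_stack([sa, -ca, np.zeros_like(alpha)]),
    ])
    y = np.concatenate([pm_ra, pm_dec])
    w = 1.0 / np.concatenate([sig_ra, sig_dec]) ** 2
    # weighted normal equations: (A^T W A) x = A^T W y
    return np.linalg.solve(A.T @ (A * w[:, None]), A.T @ (w * y))
```

With exact synthetic data the fit recovers the input spin rates; with real proper motions the formal errors follow from the inverse of the normal matrix.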
Equation \ref{ssh_1} can be rewritten as: \begin{equation} \Delta \pi(\alpha,\delta)=\sum^{l_{max}}_{l=1}\left[c^R_{l0}Y^R_{l0}+2\sum^{l}_{m=1}\left(c^R_{lm}Y^R_{lm}-c^I_{lm}Y^I_{lm}\right)\right] \label{ssh_2} \end{equation} where $R$ and $I$ denote the real and imaginary parts of the function. Starting from the definition of $power$ as the integral of the squared function divided by the domain area, by virtue of Parseval's Theorem we can express the total $power$ per degree $l$ of $\Delta\pi(\alpha,\delta)$ in terms of the expansion coefficients as \begin{equation} P_l=(c^R_{l0})^2+2\sum^l_{m=1}\left[(c^R_{lm})^2+(c^I_{lm})^2\right] \label{pow} \end{equation} Normalizing each coefficient of the above sum by its formal error, assuming white Gaussian noise, we obtain a $\chi^2$-distributed variable with $2l+1$ degrees of freedom, which can be used to test the statistical significance of the corresponding degree. A more robust form of test variable, still $\chi^2$-distributed, is given by equation (87) of \citet*{Mignard2012}; the derived quantity $Z_{\chi^2}$, which follows a standard normal distribution (see Eq. (85) of \cite{Mignard2012}), is the one we use in the present analysis. The results of the SSH analysis, having subtracted beforehand the bias from each parallax, are summarized in Table \ref{ssh}. Note that a value of $Z_{\chi^2}> 2.33$ corresponds to a confidence level of $99\%$, or $2.33\sigma$ of a normal distribution. The parameter $(P_l/4\pi)^{1/2}$ represents the RMS value of the scalar field for the corresponding degree $l$. The expansion of the Type2 subset does not present particular signatures, while the other subsets show significant power at degrees $l=1$ and $l=4$. The total RMS value for $l\leq 10$ (angular scales $180/l\geq18$ degrees) of each subset is about 13 $\mu as$ (apart from Type2). 
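The per-degree power of Eq. (\ref{pow}) is simple to evaluate once the coefficients are in hand. A sketch of ours, assuming the coefficients are stored in a dict keyed by $(l,m)$ for $m\geq0$ (a storage convention we chose for illustration):

```python
import numpy as np

def power_per_degree(c, lmax):
    """Total power P_l per degree l of a real scalar field:
    P_l = (c^R_l0)^2 + 2 * sum_m [ (c^R_lm)^2 + (c^I_lm)^2 ]."""
    P = {}
    for l in range(1, lmax + 1):
        p = c.get((l, 0), 0j).real ** 2
        for m in range(1, l + 1):
            clm = c.get((l, m), 0j)
            p += 2.0 * (clm.real ** 2 + clm.imag ** 2)
        P[l] = p
    return P

def rms_amplitude(P_l):
    """RMS value of the field contributed by one degree: (P_l / 4pi)^(1/2)."""
    return np.sqrt(P_l / (4.0 * np.pi))
```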
Using a different spatial correlation technique, the Gaia-DR2 Astrometry paper \citep{Lindegren2018Gaia} reports an angular scale of 14 degrees with an RMS amplitude of 17 $\mu as$, which is in good agreement with our results. \begin{table*} \centering \caption{The spherical harmonics expansion of the parallaxes of different quasar subsets. The parallax bias is subtracted before the expansion. All solutions are weighted.} \label{ssh} \begin{tabular}{ccccccccccc} \hline \hline &&&&\\ & \multicolumn{2}{c}{KQCG+GCRF2} & \multicolumn{2}{c}{Type2+Type3} & \multicolumn{2}{c}{Type3} & \multicolumn{2}{c}{Type2} & \multicolumn{2}{c}{GCRF2} \\ \hline &&&&&&&&\\ $l$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ & $(P_l/4\pi)^{1/2}(\mu as)$ & $Z_{\chi^2}$ \\ \hline &&&&&&&&\\ 1 & 5.1 & 5.9 & 5.9 & 7.2 &5.8&7.0&12.3&1.2&5.2&5.9 \\ 2 & 3.1 & 2.6 & 2.8 & 2.4 &3.1&2.7&18.7&2.1&2.8&2.0\\ 3 & 4.2 & 4.2 & 4.9 & 5.5 &4.8&5.3&16.7&1.1&4.3&4.2\\ 4 & 5.5 & 5.6 & 6.8 & 7.8 &7.0&7.9&16.8&0.6 &5.9&6.0 \\ 5 & 4.9 & 5.0 & 4.6 & 4.7 &4.6&4.6&21.2&1.5&4.8&4.7\\ 6 & 3.3 & 1.8 & 4.1 & 3.5 &4.4&3.9&24.7&2.1&3.3&1.7\\ 7 & 4.2 & 3.9 & 4.2 & 4.0 &4.3&4.0&21.7&1.1&4.0&3.4\\ 8 & 3.3 & 1.8 & 3.2 & 1.6 &3.3&1.8&18.5&-0.2&3.2&1.5\\ 9 & 3.3 & 2.7 & 3.5 & 3.1 &3.5&2.9&24.5&1.8 &3.5&2.9\\ 10 & 3.9 & 3.8 & 3.4 & 2.6 &3.4&2.5&27.0&2.6&3.8&3.3 \\ \hline \end{tabular} \end{table*} \section{ICRF2 sources in Gaia DR2} In this section, we compare the VLBI positions of ICRF2 sources \citep{Fey2015The} with their optical counterparts in Gaia DR2. After cross-matching, 2146 ICRF2 sources are found in Gaia DR2, with sky distribution given in Figure \ref{FigICRFDR2skydensity}. 
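For reference, the angular separations between matched positions in two catalogues can be computed with the Vincenty formula, which remains numerically stable at small separations. This is a helper of our own for illustration, not Gaia pipeline code:

```python
import numpy as np

def angular_separation(ra1, dec1, ra2, dec2):
    """Great-circle separation (radians) between two positions,
    via the Vincenty formula (stable for both tiny and large angles)."""
    dra = ra2 - ra1
    num = np.hypot(
        np.cos(dec2) * np.sin(dra),
        np.cos(dec1) * np.sin(dec2) - np.sin(dec1) * np.cos(dec2) * np.cos(dra),
    )
    den = np.sin(dec1) * np.sin(dec2) + np.cos(dec1) * np.cos(dec2) * np.cos(dra)
    return np.arctan2(num, den)
```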
Most angular differences $\rho$ between matched sources are smaller than 1 mas, and just a few sources have $\rho>10$ mas; see Figure \ref{PD_H} for a color-coded scatter plot of the position differences $\rho$ in right ascension and declination. \begin{figure} \centering \includegraphics[width=6cm]{Fig8.jpg} \caption{Sky distribution of ICRF2 sources found in Gaia DR2, Hammer-Aitoff projection in equatorial coordinates. Blue dots are defining sources (D), green dots are VLBA Calibrator Survey sources (VCS V), and black dots are non-VCS sources (N).} \label{FigICRFDR2skydensity} \end{figure} \begin{figure} \centering \includegraphics[width=5.5cm]{Fig9.jpg} \caption{Scatter plot of position differences in right ascension and declination (Gaia DR2 minus ICRF2). } \label{PD_H} \end{figure} \begin{figure} \centering \includegraphics[width=6cm]{Fig10.jpg} \caption{The formal position uncertainties $\sigma_{pos,max}$ of the Gaia DR2 sources (abscissa) with respect to the ICRF2 sources (ordinate). The color bar on the right shows the position differences $\rho$ (in mas) between Gaia DR2 and the ICRF2 sources. Both axes are logarithmic.} \label{rhoVsig} \end{figure} Figure \ref{rhoVsig} shows a plot of color-coded angular separations between matched sources in the plane of the formal positional uncertainties $\sigma_{DR2}$, $\sigma_{ICRF2}$. Most of the sources in Gaia DR2 have position uncertainties under 1 mas, while the uncertainties of the sources in ICRF2 range from 0.04 mas to 10 mas, with a few even up to tens of mas. Some sources with small position uncertainties show large angular differences, which may be caused by an offset between the centers of emission at optical and radio wavelengths. 
The alignment of the optical positions in Gaia DR2 with respect to the ICRF2 can be modelled by an infinitesimal solid rotation with the following equations \citep{Mignard2012}: \begin{equation} \begin{array}{l} \Delta\alpha_{\ast}=-\epsilon_{X}\cos\alpha\sin\delta-\epsilon_{Y}\sin\alpha\sin\delta+\epsilon_{Z}\cos\delta\\ \Delta\delta=+\epsilon_{X}\sin\alpha-\epsilon_{Y}\cos\alpha \end{array} \label{glodiff} \end{equation} where $\Delta\alpha_{\ast}=\Delta\alpha\cos\delta$, and $\epsilon_{X}$, $\epsilon_{Y}$ and $\epsilon_{Z}$ are the three rotation angles between the two reference frames. \begin{table*} \centering \caption{Global difference between the Gaia-CRF2 positions of ICRF2 sources and their positions in ICRF2. } \label{globaldiff} \begin{tabular}{ccccc} \hline \hline & & & & \\ Subset & N & $\epsilon_{X}$ ($\mu$as) & $\epsilon_{Y}$ ($\mu$as) & $\epsilon_{Z}$ ($\mu$as)\\ \hline & & & & \\ All&2146&-3.6$\pm$ 27.5&27.2$\pm$26.9 &3.8$\pm$25.7\\ Defining&257&-19.1 $\pm$ 36.2&30.4$\pm$ 35.0&-32.9$\pm$ 37.0\\ Non-defining&1889&12.1 $\pm$ 37.7&25.2 $\pm$ 37.1&26.6$ \pm$ 32.9\\ \hline \end{tabular} \end{table*} The weighted least-squares estimates of the orientation parameters between Gaia-CRF2 and ICRF2 are listed in Table \ref{globaldiff}. No significant rotation is found at the level of 0.03 $mas$ in position. This indicates that the axes of Gaia-CRF2 and the ICRF2 are aligned with each other to within 30 $\mu as$. \section{Conclusions} We cross-matched the quasars from the compilation of the SDSS-DR14, LQAC3 and LAMOST DR5 with Gaia DR2, and found 208743 extra quasars in Gaia DR2, which is about $37\%$ of the Gaia-CRF2 sample. We used this independent sample and the already known quasars in DR2 to investigate the properties of the quasar astrometric solution, also by comparing the astrometric residuals of various quasar subsets in DR2. 
In general, we obtained consistent results between the samples; some signatures that vary between subsets and are clearly above the statistical noise remain compatible with systematic errors, depending on source position, magnitude and color, that were not completely removed in the second release of the Gaia data, as discussed in the Gaia-DR2 astrometry paper. The results of our analysis are summarized below: \begin{enumerate} \item The parallaxes of our KQCG sample have a mean bias of $-0.0330$ $mas$ and a median of $-0.0278$ $mas$, which agree well with the results of the GCRF2 sample; we note, however, that the mean parallax of the Type2 subset in GCRF2 is $0.02$ $mas$ smaller. \item There is a $-9.1$ $\mu as/yr$ bias in $\mu_{\alpha\ast}$ of the KQCG sample, and a bias of about $+10$ $\mu as/yr$ in $\mu_{\delta}$ for all quasar subsets. The mean systematic error in $\mu_{\alpha\ast}$ trends from positive to negative at the effective wavenumber $\nu_{eff}\sim$ 1.58 $\mu m^{-1}$ for the GCRF2 sample. \item The VSH method is applied to the proper motion vector field of different quasar subsets. The results for the different subsets agree with each other. For Type2, no significant rotation difference between the northern and southern hemispheres is found. However, GCRF2 and the other subsets show different rotations between the two hemispheres. \item The spherical harmonics expansion of the parallaxes shows an angular scale of 18 degrees with an RMS amplitude of 13 $\mu as$. \item The comparison of the VLBI-based positions of ICRF2 sources and their Gaia DR2 counterparts shows that the axes of Gaia-CRF2 and the ICRF2 are aligned with each other to within 30 $\mu as$. \end{enumerate} \section*{Acknowledgements} This work has made use of data from the ESA space mission Gaia, processed by the Gaia Data Processing and Analysis Consortium (DPAC). We are grateful to the developers of the TOPCAT \citep{Taylor2005TOPCAT} software. 
This work has been supported by the National Natural Science Foundation of China (NSFC) through grants 11703065, 11573054 and 11503042. \bibliographystyle{mnras}
\section{Introduction} Recommendation is ubiquitous in today's cyber-world --- almost every one of your Web activities can be viewed as a recommendation, such as news or music feeds, car or restaurant booking, and online shopping. Therefore, an accurate recommender system is essential not only for the quality of service, but also for the profit of the service provider. Such a system should exploit the rich side information beyond user-item interactions, such as content-based (\textit{e.g.}, user attributes~\cite{Silkroad} and product image features~\cite{Yu:2018:ACR}), context-based (\textit{e.g.}, where and when a purchase is made~\cite{rendle2011fast,NFM}), and session-based (\textit{e.g.}, the recent browsing history of users~\cite{Li:2017:NAS:3132847.3132926,iCD}) information. However, existing collaborative filtering (CF) based systems merely rely on user and item features (\textit{e.g.}, matrix factorization based~\cite{fastMF} and the recently proposed neural collaborative filtering methods~\cite{NCF,bai2017neural}), which are far from sufficient to capture the complex decision psychology behind a user's behavior, such as its setting and mood~\cite{ACF}. Factorization Machine (FM)~\cite{Rendle2011Factorization} is one of the most prevalent feature-based recommendation models that leverage rich features of users and items for accurate recommendation. FM can incorporate any side features by concatenating them into a high-dimensional and sparse feature vector. Its key advantage is to learn $k$-dimensional latent vectors, \textit{i.e.}, the embedding parameters $\mathbf{V}\in\mathbb{R}^{k\times n}$, for all the $n$ feature dimensions, which are then used to model pairwise interactions between features in the embedding space. However, since $n$ is large (\textit{e.g.}, practical recommender systems typically need to deal with millions of items and other features, where $n$ is at least $10^7$~\cite{Wang:2018:PFD}), storing $\mathbf{V}$ on-device is infeasible. 
Moreover, computing a user-item score requires large-scale multiplications for the feature interactions $\mathbf{v}^T_i\mathbf{v}_j$; even linear time complexity is prohibitively slow when it involves floating-point operations. Therefore, the existing FM framework is not suitable for fast recommendation, especially for mobile users. In this paper, we propose a novel feature-based recommendation framework, named \textit{Discrete Factorization Machine} (DFM), for fast recommendation. In a nutshell, DFM replaces the real-valued FM parameters $\mathbf{V}$ by binary-valued $\mathbf{B}\in\{\pm 1\}^{k\times n}$. In this way, we can easily store a bit matrix and perform XOR bit operations instead of float multiplications, making fast recommendation on-the-fly possible. However, it is well-known that the binarization of real-valued parameters leads to a significant performance drop due to the quantization loss~\cite{Zhang2016Discrete}. To this end, we propose to directly optimize the binary parameters in an end-to-end fashion, which is fundamentally different from the widely adopted two-stage approach that first learns real-valued parameters and then applies round-off binarization~\cite{Zhang2014Preference}. Our algorithm jointly optimizes two challenging objectives: 1) tailoring the binary codes $\mathbf{B}$ to fit the original loss function of FM, and 2) imposing balanced and de-correlated constraints on the binary codes so that they encode compact information. In particular, we develop an alternating optimization algorithm to iteratively solve the resulting mixed-integer programming problems. We evaluate DFM on two real-world datasets, Yelp and Amazon. The results demonstrate that 1) DFM consistently outperforms state-of-the-art binarized recommendation models, and 2) DFM shows very competitive performance compared to its real-valued counterpart (FM), demonstrating minimal quantization loss. 
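To make the storage argument concrete, here is a back-of-the-envelope comparison under illustrative sizes ($n = 10^7$ features, $k = 64$ dimensions; these numbers are for illustration only, not measurements from our experiments):

```python
# Embedding table size: float32 V versus bit-packed B.
n, k = 10_000_000, 64

real_bytes = n * k * 4      # float32 parameters V: ~2.4 GiB
binary_bytes = n * k // 8   # one bit per entry of B: ~76 MiB

print(real_bytes // binary_bytes)  # 32x smaller
```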
Our contributions are summarized as follows: \begin{itemize}[leftmargin=*] \item We propose to binarize FM, a dominant feature-based recommender model, to enable fast recommendation. To our knowledge, this is the first generic solution for fast recommendation that learns a binary embedding for each feature. \item We develop an efficient algorithm to address the challenging optimization problem of DFM that involves discrete, balanced, and de-correlated constraints. \item Through extensive experiments on two real-world datasets, we demonstrate that DFM outperforms state-of-the-art hash-based recommendation algorithms. \end{itemize} \section{Related Work} We first review efficient recommendation algorithms using latent factor models, and then discuss recent advances in discrete hashing techniques. \subsection{Efficient Recommendation} As pioneering work, \cite{Das2007Google} used Locality-Sensitive Hashing (LSH) \cite{Gionis1999Similarity} to generate hash codes for Google News users based on their item-sharing history similarity. Following this work, \cite{Karatzoglou2010Collaborative} applied random projection to map learned user-item latent factors from traditional CF into the Hamming space, acquiring hash codes for users and items. Similar to the idea of projection, \cite{Zhou2012Learning} generated binary codes from rotated continuous user-item latent factors by running ITQ \cite{Gong2011Iterative}. In order to derive more compact binary codes, \cite{Liu2014Collaborative} imposed a de-correlation constraint on the continuous user-item latent factors and then rounded them to produce binary codes. 
However, \cite{Zhang2014Preference} argued that hashing only preserves the similarity between user and item rather than inner-product-based preference, so subsequent hashing may harm the accuracy of preference prediction; thus they imposed a Constant Feature Norm (CFN) constraint on the continuous user-item latent factors, and then quantized the similarities by respectively thresholding their magnitudes and phases. The aforementioned work can be summarized as two independent stages: relaxed user-item latent factor learning with some specific constraints, followed by binary quantization. However, such a two-stage relaxation is well known to suffer from a large quantization loss~\cite{Zhang2016Discrete}. \subsection{Binary Codes Learning} Direct binary code learning by discrete optimization has recently become popular as a way to decrease the aforementioned quantization loss. Supervised hashing methods~\cite{Luo:2018} jointly optimize the quantization loss and the intrinsic objective function, achieving significant performance gains over the above two-stage approaches. In the recommendation area, \cite{Zhang2016Discrete} is the first work that proposes to learn binary codes for users and items by directly optimizing the recommendation task. The proposed method, \textit{Discrete Collaborative Filtering} (DCF), demonstrates superior performance over the aforementioned two-stage efficient recommendation methods. To learn informative and compact codes, DCF enforces balanced and de-correlated constraints in the discrete optimization. Despite its effectiveness, DCF models only user-item interactions and cannot be trivially extended to incorporate side features. As such, it suffers from the cold-start problem and cannot be used as a generic recommendation solution, \textit{e.g.}, for context-aware~\cite{Rendle2011Factorization} and session-based recommendation~\cite{iCD}. 
Analogous to the relationship between FM and MF, our DFM method can be seen as a generalization of DCF for generic feature-based recommendation. Specifically, feeding only the ID features of users and items to DFM recovers the DCF method. In addition, DFM learns binary codes for each feature, allowing it to be used in resource-limited recommendation scenarios, such as context-aware recommendation on mobile devices. This binary representation learning approach for feature-based recommendation has, to the best of our knowledge, never been developed before. The work most relevant to this paper is \cite{Lian2017Discrete}, which develops a discrete optimization algorithm named \textit{Discrete Content-aware Matrix Factorization} (DCMF) to learn binary codes for users and items in the presence of their respective content information. It is worth noting that DCMF can only learn binary codes for each user ID and item ID, rather than for their content features. Since its prediction model is still MF (\textit{i.e.}, the dot product of user codes and item codes only), it is rather limited in leveraging side features for accurate recommendation. As such, DCMF only demonstrates minor improvements over DCF for feature-based collaborative recommendation~(\textit{cf.} Figure 2(a) of their original paper). Going beyond learning user codes and item codes, our DFM can learn codes for any side feature and model the pairwise interactions between feature codes. As such, our method has much stronger representation ability than DCMF, demonstrating significant improvements over DCMF in feature-based collaborative recommendation. \section{Preliminaries} We use bold uppercase and lowercase letters to denote matrices and vectors, respectively. In particular, we use $\mathbf{a}_i$ to denote the $i$-th column vector of matrix $\mathbf{A}$. We denote by ${\|\cdot\|}_F$ the Frobenius norm of a matrix and by $\text{tr}(\cdot)$ the matrix trace. 
We denote $\text{sgn}(\cdot):\mathbb{R}\rightarrow \{\pm 1\}$ as the round-off function, \textit{i.e.}, $\text{sgn}(x) = +1$ if $x\geq 0$ and $\text{sgn}(x) = -1$ otherwise. Factorization Machine (FM) is essentially a score prediction function for the feature vector $\mathbf{x}$ of a (user, item) pair: \begin{equation} \label{eq:fm} \small \text{FM}(\mathbf{x}):= w_{0}+\sum\limits_{i=1}^{n} w_i x_i+ \sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{v}_i,\mathbf{v}_j\rangle x_i x_j, \end{equation} where $\mathbf{x}\in\mathbb{R}^n$ is a high-dimensional feature representation of the rich side information, formed by concatenating the one-hot user ID and item ID, user and item content features, location features, \textit{etc}. Here $w_0$ is the global bias and $\mathbf{w}\in\mathbb{R}^n$ holds the per-feature biases $w_i$. $\mathbf{V}\in\mathbb{R}^{k\times n}$ is the matrix of latent feature vectors, and each $\langle \mathbf{v}_i,\mathbf{v}_j\rangle$ models the interaction between the $i$-th and $j$-th feature dimensions. Therefore, $\mathbf{V}$ is the key reason why FM is an effective feature-based recommendation model, as it captures the rich side-information interactions. However, storing $\mathbf{V}$ and computing $\langle \mathbf{v}_i,\mathbf{v}_j\rangle$ on-the-fly are prohibitively expensive when $n$ is large. For example, a practical recommender system for Yelp\footnote{\href{https://www.yelp.ca/dataset}{https://www.yelp.ca/dataset}} needs to provide recommendations for over $1,300,000$ users and about $174,000$ businesses, which have more than $1,200,000$ attributes (here, $n=1,300,000+174,000+1,200,000=2,674,000$). 
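Computing the pairwise term of Eq. (\ref{eq:fm}) does not require the $O(n^2)$ double loop: Rendle's well-known reformulation, $\sum_{i<j}\langle \mathbf{v}_i,\mathbf{v}_j\rangle x_i x_j = \frac{1}{2}\sum_{f}\big[(\sum_i V_{fi} x_i)^2 - \sum_i V_{fi}^2 x_i^2\big]$, evaluates it in $O(kn)$ time. A minimal numpy sketch of ours, using a dense $\mathbf{x}$ for clarity (in practice $\mathbf{x}$ is sparse):

```python
import numpy as np

def fm_predict(x, w0, w, V):
    """FM score with the O(kn) pairwise-term reformulation.
    x: (n,) feature vector; w0: global bias; w: (n,) feature biases;
    V: (k, n) latent feature matrix."""
    s = V @ x  # per-dimension weighted sums, shape (k,)
    pairwise = 0.5 * (np.sum(s ** 2) - np.sum((V ** 2) @ (x ** 2)))
    return w0 + w @ x + pairwise
```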
To this end, we want to use binary codes $\mathbf{B}\in\{\pm 1\}^{k\times n}$ instead of $\mathbf{V}$ to formulate our proposed framework, Discrete Factorization Machines (DFM): \begin{equation}\label{eq:dfm}\small \text{DFM}(\mathbf{x}):= w_{0}+\sum\limits_{i=1}^{n} w_i x_i+ \sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j. \end{equation} However, directly obtaining $\mathbf{B} = \textrm{sgn}(\mathbf{V})$ will lead to a large quantization loss and thus degrade the recommendation accuracy significantly~\cite{Zhang2016Discrete}. In the next section, we introduce our proposed DFM learning model and the discrete optimization that tackles the quantization loss. \section{Discrete Factorization Machines} We first present the learning objective of DFM and then elaborate on the optimization process, which is the key technical difficulty of the paper. Finally, we shed some light on model initialization, which is known to have a large impact on a discrete model. \subsection{Model Formulation} Given a training pair $(\mathbf{x},y)\in\mathcal{V}$, where $y$ is the ground-truth score of feature vector $\textbf{x}$ and $\mathcal{V}$ denotes the set of all training instances, the problem of DFM is formulated as: \begin{align}\small &\mathop{\arg\min}\limits_{w_0,\mathbf{w},\mathbf{B}} \sum\limits_{(\mathbf{x},y)\in \mathcal{V}} (y-w_{0}-\sum\limits_{i=1}^{n} w_i x_i -\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j)^2 \notag \\ &+ \alpha\sum\limits_{i=1}^{n} w_i^2, \text{s.t.}\ \mathbf{B} \in\{\pm1\}^{k\times n},\ \underbrace{\mathbf{B}\mathbf{1} = \mathbf{0}}_{\text{Balance}},\ \underbrace{ \mathbf{B}\mathbf{B}^T = n\mathbf{I} }_{\text{De-correlation}} \label{eq:obj} \end{align} Due to the discrete constraint in DFM, the regularization ${\|\mathbf{B}\|}_F^2$ becomes a constant and hence is removed. 
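Before turning to the constraints, note the computational payoff of Eq. (\ref{eq:dfm}): once each code $\mathbf{b}_i$ is packed into a $k$-bit integer, $\langle \mathbf{b}_i,\mathbf{b}_j\rangle = k - 2\,\mathrm{popcount}(c_i \oplus c_j)$, replacing $k$ float multiplications with one XOR and one popcount. A small illustration of ours (the packing convention is our own choice):

```python
def pack(bits):
    """Pack a +/-1 list into an integer; bit t is set iff bits[t] == +1."""
    code = 0
    for t, b in enumerate(bits):
        if b == +1:
            code |= 1 << t
    return code

def binary_inner_product(ci, cj, k):
    """<b_i, b_j> for two {+1,-1}^k codes stored as k-bit integers:
    matching bits contribute +1, differing bits -1."""
    return k - 2 * bin(ci ^ cj).count("1")
```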
Additionally, DFM imposes balanced and de-correlated constraints on the binary codes in order to maximize the information each bit carries and to make the binary codes compact \cite{Zhou2012Learning}. However, optimizing the objective function in Eq.(\ref{eq:obj}) is highly challenging, since it is generally NP-hard; finding the global optimum requires a combinatorial search over $\mathcal{O}(2^{kn})$ candidate binary codes~\cite{Stad2001Some}. Next, we introduce a new learning objective that allows DFM to be solved in a computationally tractable way. The basic idea is to soften the balanced and de-correlated constraints. To achieve this, let us first introduce a delegate continuous variable $\mathbf{D}\in\mathcal{D}$, where $\mathcal{D}=\{\mathbf{D}\in\mathbb{R}^{k\times n}|\mathbf{D}\mathbf{1} = \mathbf{0},\mathbf{D}\mathbf{D}^T = n\mathbf{I}\}$. Then the balanced and de-correlated constraints can be softened by $\min_{\mathbf{D}\in\mathcal{D}}\|\mathbf{B}-\mathbf{D}\|_F$. As such, we obtain the softened learning objective for DFM: \begin{align}\small\label{eq:softobj} \mathop{\arg\min}\limits_{w_0,\mathbf{w},\mathbf{B}} \sum\limits_{(\mathbf{x},y)\in \mathcal{V}} &(y-w_{0}-\sum\limits_{i=1}^{n} w_i x_i-\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j)^2 \notag \\ &+ \alpha\sum\limits_{i=1}^{n} w_i^2 - 2\beta tr(\mathbf{B}^T\mathbf{D}), \\ \notag \text{s.t.}\ &\mathbf{D}\mathbf{1} = \mathbf{0},\mathbf{D}\mathbf{D}^T = n\mathbf{I},\mathbf{B}\in\{\pm 1\}^{k\times n}, \end{align} where we maximize $2tr(\mathbf{B}^T\mathbf{D})$ instead of minimizing $\|\mathbf{B}-\mathbf{D}\|_F$ for ease of optimization (the two are equivalent since $\|\mathbf{B}\|_F^2$ and $\|\mathbf{D}\|_F^2$ are constant). $\beta$ is a tunable hyperparameter controlling the strength of the softened balance and de-correlation constraints. 
As Eq.(\ref{eq:softobj}) allows a certain discrepancy between $\mathbf{B}$ and $\mathbf{D}$, it makes the binarized optimization problem computationally tractable. Note that if there are feasible solutions to Eq.(\ref{eq:obj}), we can impose a very large $\beta$ to enforce $\mathbf{B}$ to be close to $\mathbf{D}$. Eq.(\ref{eq:softobj}) above is the objective function to be optimized for DFM. It is worth noting that we do not discard the discrete constraint, and we still perform direct optimization on the discrete $\mathbf{B}$. Furthermore, through the joint optimization of the binary codes and the delegate real variables, we obtain nearly balanced and uncorrelated binary codes. Next, we introduce an efficient solution to the mixed-integer optimization problem in Eq.(\ref{eq:softobj}). \subsection{Optimization} We employ an alternating optimization strategy~\cite{liu2017pami} to solve the problem. Specifically, we alternately solve three subproblems for the DFM model in Eq.(\ref{eq:softobj}), updating each of $\mathbf{B}$, $\mathbf{D}$ and $\mathbf{w}$ in turn with the others fixed. Next we elaborate on how to solve each subproblem. \noindent $\mathbf{B}$\textbf{-subproblem}.\quad In this subproblem, we aim to optimize $\mathbf{B}$ with $\mathbf{D}$ and $\mathbf{w}$ fixed. 
To achieve this, we can update $\mathbf{B}$ by updating each vector $\mathbf{b}_r$ according to \begin{equation*} \begin{aligned}\small &\mathop{\arg\min}\limits_{\mathbf{b}_r\in\{\pm 1\}^k} \mathbf{b}_r^T\mathbf{U} (\sum\limits_{\mathcal{V}_r}x_r^2 \hat{\mathbf{x}} \hat{\mathbf{x}} ^T) \mathbf{U}^T\mathbf{b}_r -2(\sum\limits_{\mathcal{V}_r}x_r\psi \hat{\mathbf{x}} ^T )\mathbf{U}^T \mathbf{b}_r \\ &-2\beta \mathbf{d}_r^T\mathbf{b}_r,\ \text{where}\ \psi = y-w_0 - \textbf{w}^T \textbf{x} - \sum\limits_{i=1}^{n-1}\sum\limits_{j=i+1}^{n-1}\langle \mathbf{u}_i,\mathbf{u}_j\rangle \hat{x}_i \hat{x}_j \end{aligned} \end{equation*} where $\mathcal{V}_r=\{(\mathbf{x},y)\in \mathcal{V}|x_r\neq 0\}$ is the set of training instances in which feature $r$ is active, the vector $\hat{\mathbf{x}}$ is equal to $\mathbf{x}$ excluding the element $x_r$, $\mathbf{U}$ is the matrix $\mathbf{B}$ excluding the column $\mathbf{b}_r$, and $\mathbf{u}_i$ is a column of $\mathbf{U}$. Due to the discrete constraints, this optimization is generally NP-hard. We therefore use Discrete Coordinate Descent (DCD)~\cite{Zhang2016Discrete} to update each bit of the binary code $\mathbf{b}_r$ in turn. Denoting by $b_{rt}$ the $t$-th bit of $\mathbf{b}_r$ and by $\mathbf{b}_{r\bar{t}}$ the remaining bits excluding $b_{rt}$, DCD updates $b_{rt}$ with $\mathbf{b}_{r\bar{t}}$ fixed, based on the following rule: \begin{equation}\small \begin{split} &b_{rt}\leftarrow\text{sgn}\big( K(\hat{b}_{rt},b_{rt})\big),\\ \hat{b}_{rt}=\sum_{\mathcal{V}_r} &(x_r\psi-x_r^2\hat{\mathbf{x}} ^T\mathbf{Z}_{\bar{t}}\mathbf{b}_{r\bar{t}}) \hat{\mathbf{x}}^T\mathbf{z}_t +\beta d_{rt} \end{split} \end{equation} where $\mathbf{Z}=\mathbf{U}^T$, $\mathbf{z}_t$ is the $t$-th column of the matrix $\mathbf{Z}$ while $\mathbf{Z}_{\bar{t}}$ excludes the $t$-th column from $\mathbf{Z}$, and $K(x,y)$ is the function with $K(x,y)=x$ if $x\neq 0$ and $K(x,y)=y$ otherwise. 
In this way, $b_{rt}$ is left unchanged whenever $\hat{b}_{rt}=0$. \vspace{+5pt} \noindent $\mathbf{D}$\textbf{-subproblem}.\quad When $\mathbf{B}$ and $\mathbf{w}$ are fixed in Eq.(\ref{eq:softobj}), the optimization subproblem for $\mathbf{D}$ is: \begin{equation}\label{eq:dsubp}\small \mathop{\arg\max}\limits_{\mathbf{D}}tr(\mathbf{B}^T\mathbf{D}), s.t.\ \mathbf{D}\mathbf{1}=\mathbf{0}, \mathbf{D}\mathbf{D}^T=n\mathbf{I}. \end{equation} It can be solved with the aid of a centering matrix $\mathbf{J}=\mathbf{I}-\frac{1}{n}\mathbf{1}\mathbf{1}^T$. Specifically, by Singular Value Decomposition (SVD), we have $\mathbf{B}\mathbf{J}=\overline{\mathbf{B}}=\mathbf{P}\mathbf{\Sigma}\mathbf{Q}^T$, where $\mathbf{P}\in\mathbb{R}^{k\times k'}$ and $\mathbf{Q}\in\mathbb{R}^{n\times k'}$ are the left and right singular vectors corresponding to the $k' (\leq k)$ positive singular values in the diagonal matrix $\mathbf{\Sigma}$. We first apply an eigendecomposition to the small $k\times k$ matrix $\overline{\mathbf{B}}\ \overline{\mathbf{B}}^T= \begin{bmatrix} \mathbf{P}&\widehat{\mathbf{P}} \end{bmatrix} \begin{bmatrix} \mathbf{\Sigma}^2&\mathbf{0}\\ \mathbf{0}&\mathbf{0} \end{bmatrix} \begin{bmatrix} \mathbf{P}&\widehat{\mathbf{P}} \end{bmatrix}^T$, where $\widehat{\mathbf{P}}$ contains the eigenvectors of the zero eigenvalues. Therefore, by the definition of the SVD, we have $\mathbf{Q}=\overline{\mathbf{B}}^T\mathbf{P}\mathbf{\Sigma}^{-1}$. In order to satisfy the constraint $\mathbf{D}\mathbf{1}=\mathbf{0}$, we further obtain an additional $\widehat{\mathbf{Q}}\in\mathbb{R}^{n\times(k-k')}$ by Gram-Schmidt orthogonalization based on $\begin{bmatrix} \mathbf{Q}&\mathbf{1} \end{bmatrix}$. As such, we have $\widehat{\mathbf{Q}}^T\mathbf{1}=\mathbf{0}$. 
Then the closed-form update rule for the $\mathbf{D}$-subproblem in Eq.(\ref{eq:dsubp}) is: \begin{equation}\small \mathbf{D}\leftarrow\sqrt{n} \begin{bmatrix} \mathbf{P}&\widehat{\mathbf{P}} \end{bmatrix} \begin{bmatrix} \mathbf{Q}&\widehat{\mathbf{Q}} \end{bmatrix}^T \end{equation} \noindent $\mathbf{w}$\textbf{-subproblem}.\quad When $\mathbf{B}$ and $\mathbf{D}$ are fixed in Eq.(\ref{eq:softobj}), the subproblem for optimizing $\mathbf{w}$ is: \begin{equation}\small \begin{split} \mathop{\arg\min}\limits_{w_0,\mathbf{w}} &\sum\limits_{(\mathbf{x},y)\in\mathcal{V}} (\phi -w_{0}-\sum\limits_{i=1}^{n} w_i x_i)^2 +\alpha\sum\limits_{i=1}^{n} w_i^2 ,\\ &\phi = y-\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{b}_i,\mathbf{b}_j\rangle x_i x_j. \end{split} \end{equation} Since $\mathbf{w}$ is real-valued, this is a standard multivariate linear regression problem, so we can use the coordinate descent algorithm of the original FM~\cite{Rendle2011Factorization} to find the optimal $\mathbf{w}$ and the global bias $w_0$. \subsection{Initialization} Since DFM involves mixed-integer non-convex optimization, the initialization of the model parameters plays an important role in achieving faster convergence and finding a better local optimum. Here we suggest an efficient initialization strategy inspired by DCF~\cite{Zhang2016Discrete}.
It first solves a relaxation of Eq.(\ref{eq:obj}) obtained by discarding the discrete constraints: \begin{equation} \begin{small} \begin{aligned} \label{eq:init} \small &\mathop{\arg\min}\limits_{w_0,\mathbf{w},\mathbf{V}} \sum\limits_{(\mathbf{x},y)\in \mathcal{V}} (y-w_{0}-\sum\limits_{i=1}^{n} w_i x_i-\sum\limits_{i=1}^{n}\sum\limits_{j=i+1}^{n}\langle \mathbf{v}_i,\mathbf{v}_j\rangle x_i x_j)^2\notag \\ + &\alpha \sum\limits_{i=1}^{n} w_i^2 + \beta\|\mathbf{V}\|_F^2 - 2\beta tr(\mathbf{V}^T\mathbf{D}), \text{s.t.}\ \mathbf{D}\mathbf{1} = \mathbf{0},\mathbf{D}\mathbf{D}^T = n\mathbf{I}\notag \end{aligned} \end{small} \end{equation} To solve this problem, we initialize the real-valued $\mathbf{V}$ and $\mathbf{w}$ randomly and obtain a feasible initialization for $\mathbf{D}$ by solving the $\mathbf{D}$-subproblem. The optimization then alternates between updating $\mathbf{V}$ as in traditional FM, updating $\mathbf{D}$ via the $\mathbf{D}$-subproblem, and updating $\mathbf{w}$ by gradient descent. Denoting the solution by ($\mathbf{V}^\ast,\mathbf{D}^\ast,\mathbf{w}^\ast,w_0^\ast$), we then initialize the parameters in Eq.(\ref{eq:softobj}) as: \begin{equation} \mathbf{B}\leftarrow\text{sgn}(\mathbf{V}^\ast), \mathbf{D}\leftarrow\mathbf{D}^\ast, \mathbf{w}\leftarrow\mathbf{w}^\ast, w_0\leftarrow w_0^\ast \end{equation} \section{Experiments} As the key contribution of this work is the design of DFM for fast feature-based recommendation, we conduct experiments to answer the following research questions: ~\\ \noindent \textbf{RQ1}.\quad How does DFM perform compared to existing hash-based recommendation methods? ~\\ \noindent \textbf{RQ2}.\quad How does the key hyper-parameter of DFM impact its recommendation performance? ~\\ \noindent \textbf{RQ3}.\quad How efficient is DFM compared to the real-valued version of FM?
\begin{figure*}[!tbh] \centering \includegraphics[scale = 1.5]{leg.pdf}\\ \large\textbf{Yelp}\\ \vspace{-0.2cm} \includegraphics[width=0.265\textwidth]{y8.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{y16.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{y32.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{y64.pdf} \\ \vspace{-0.2cm} \large\textbf{Amazon}\\ \includegraphics[width=0.265\textwidth]{a8.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{a16.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{a32.pdf} \hspace{-0.24in} \includegraphics[width=0.265\textwidth]{a64.pdf} \vspace{-0.2cm} \caption{\textbf{Performance on NDCG@K ($K$ from 1 to 10) \textit{w.r.t.} code lengths from 8 to 64 on the two datasets.}} \label{fig:performance} \vspace{-0.3cm} \end{figure*} \subsection{Experimental Settings} \textbf{Datasets}. We experiment on two publicly available datasets with explicit feedback from different real-world websites: \textit{Yelp} and \textit{Amazon}. Note that we assume each user has at most one rating per item and average the scores when an item has multiple ratings from the same user. \textbf{a) Yelp.} This dataset \cite{Lian2017Discrete} originally contains 409,117 users, 85,539 items (points of interest on Yelp such as restaurants and hotels), and 2,685,066 ratings with integer scores ranging from 1 to 5. In addition, each item has a set of textual reviews posted by users. \textbf{b) Amazon.} This book rating dataset \cite{mcauley2015inferring} originally includes 12,886,488 ratings of 929,264 items (books on Amazon) from 2,588,991 users. In this dataset, each item also has a set of integer rating scores in $[1, 5]$ and a set of textual reviews. Considering the extreme sparsity of the original Yelp and Amazon datasets, we remove users with fewer than $20$ ratings and items rated by fewer than $20$ users.
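The density filtering just described can be sketched as a simple counting pass (hypothetical Python; the single-pass order, users first and then items, is an assumption, and repeating the pass until the counts stabilize is a common variant):

```python
from collections import Counter

def filter_sparse(ratings, min_count=20):
    """Drop users with fewer than min_count ratings, then items rated
    by fewer than min_count of the remaining users.

    `ratings` is a list of (user, item, score) triples."""
    user_counts = Counter(u for u, _, _ in ratings)
    ratings = [r for r in ratings if user_counts[r[0]] >= min_count]
    item_counts = Counter(i for _, i, _ in ratings)
    return [r for r in ratings if item_counts[r[1]] >= min_count]
```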
After the filtering, 13,679 users, 12,922 items, and 640,143 ratings remain in the Yelp dataset; for the Amazon dataset, we retain 35,151 users, 33,195 items, and 1,732,060 ratings. For a fair comparison with DCMF, we leave out the side information from the user field and represent an item with the bag-of-words encoding of its textual content, obtained by aggregating all review contents of the item. Note that we remove \textit{stop words} and truncate the vocabulary to the top 8,000 words by \textit{Term Frequency--Inverse Document Frequency}. By concatenating the bag-of-words encoding (side information of the item) with the one-hot encodings of the user and item IDs, we obtain feature vectors of dimensionality 34,601 and 76,346 per rating (user-item pair) for Yelp and Amazon, respectively. \vspace{+5pt} \noindent\textbf{Baselines}. We implement our proposed DFM method in Matlab\footnote{Codes are available: \href{https://github.com/hanliu95/DFM}{https://github.com/hanliu95/DFM}} and compare it with its real-valued version and state-of-the-art binarized methods for collaborative filtering: \begin{itemize}[leftmargin=*] \item \textbf{libFM}. The original implementation\footnote{\href{http://www.libfm.org/}{http://www.libfm.org/}} of FM, which has achieved strong performance for feature-based recommendation with explicit feedback. We adopt $l_2$ regularization on the parameters to prevent overfitting and use the SGD learner for optimization. \item \textbf{DCF}. The first binarized CF method that directly optimizes the binary codes~\cite{Zhang2016Discrete}. \item \textbf{DCMF}. The state-of-the-art binarized method for CF with side information~\cite{Lian2017Discrete}. It extends \textbf{DCF} by encoding the side features as constraints on the user codes and item codes. \item \textbf{BCCF}. A two-stage binarized CF method~\cite{Zhou2012Learning} with a relaxation stage and a quantization stage.
In these two stages, it first solves MF with balanced code regularization and then applies an orthogonal rotation to obtain the user codes and item codes. \end{itemize} Note that for \textbf{DCF} and \textbf{DCMF}, we use the original implementations released by the authors. Since no official implementation of \textbf{BCCF} is available, we re-implement it. \vspace{+5pt} \noindent\textbf{Evaluation Protocols}. We first randomly split the ratings of each user into training ($50\%$) and testing ($50\%$) sets. As practical recommender systems typically recommend a list of items to a user, we rank the testing items of each user and evaluate the ranked list with \textit{Normalized Discounted Cumulative Gain} (NDCG), which is widely used for evaluating ranking tasks such as recommendation~\cite{NCF}. To evaluate the efficiency of \textbf{DFM} and the real-valued FM, we use the \textit{Testing Time Cost} (TTC) \cite{Zhang2016Discrete}, where a lower cost indicates better efficiency. \vspace{+5pt} \noindent\textbf{Parameter Settings}. As we exactly follow the experimental settings of \cite{Lian2017Discrete}, we adopt their optimal hyper-parameter settings for \textbf{DCMF}, \textbf{DCF}, and \textbf{BCCF}. For \textbf{libFM}, we test the $l_2$ regularization on the feature embeddings $\mathbf{V}$ over $\{10^{-i} | i = -4, -3, -2, -1, 0, 1, 2\}$. Over the same range, we test the de-correlation constraint (\textit{i.e.,} $\beta$ in Eq.(\ref{eq:obj})) of \textbf{DFM}. Besides, we test code lengths in $\{8, 16, 32, 64\}$. All experiments are conducted on a computer equipped with an Intel(R) Core(TM) i7-7700K 4-core CPU at 4.20GHz, 32GB RAM, and the 64-bit Windows 7 operating system. \subsection{Performance Comparison (RQ1)} Figure \ref{fig:performance} shows the recommendation performance (NDCG@1 to NDCG@10) of \textbf{DFM} and the baseline methods on the two datasets, with the code length varying from 8 to 64.
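For reference, the NDCG metric used throughout can be computed per user as follows (a sketch; the exponential-gain and log-discount convention is an assumption, as the exact variant is not specified here):

```python
import numpy as np

def ndcg_at_k(ranked_ratings, k=10):
    """NDCG@K for one user.

    ranked_ratings: true ratings of the user's test items, listed in
    the order the model ranked them (best-predicted first)."""
    full = np.asarray(ranked_ratings, dtype=float)
    ranked = full[:k]
    discounts = 1.0 / np.log2(np.arange(2, ranked.size + 2))
    dcg = np.sum((2.0 ** ranked - 1.0) * discounts)
    ideal = np.sort(full)[::-1][:k]          # best possible ordering
    idcg = np.sum((2.0 ** ideal - 1.0) * discounts[:ideal.size])
    return dcg / idcg if idcg > 0 else 0.0
```

A perfect ranking scores 1.0; any misordering of unequal ratings scores strictly less.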
From the figure, we have the following observations: \begin{itemize}[leftmargin=*] \item \textbf{DFM} demonstrates consistent improvements over the state-of-the-art binarized recommendation methods across code lengths (the average improvement is 7.95\% on Yelp and 2.38\% on Amazon). These improvements are attributed to the benefits of learning binary codes for features and modeling their interactions. \item \textbf{DFM} also shows very competitive performance compared to \textbf{libFM}, its real-valued version, with an average performance drop of only 3.24\% and 2.40\% on the two datasets. As the code length increases, the performance gap shrinks from 5.68\% and 4.76\% to 1.46\% and 1.19\% on Yelp and Amazon, respectively. One possible reason is that \textbf{libFM} suffers from overfitting as its representational capacity grows (\textit{i.e.,} with larger code length)~\cite{NFM}, whereas binarizing the parameters alleviates the overfitting problem. This finding again verifies the effectiveness of the proposed \textbf{DFM}. \item Among the baseline methods, \textbf{DCF} consistently outperforms \textbf{BCCF}, while slightly underperforming \textbf{DCMF}, with an average performance decrease of 1.58\% and 0.76\% on the two datasets, respectively. This is consistent with the findings in \cite{Liu2014Collaborative} that direct discrete optimization is stronger than two-stage approaches, and that side information makes the user codes and item codes more representative, which can boost recommendation performance. However, the rather small gap between \textbf{DCF} and \textbf{DCMF} indicates that \textbf{DCMF} fails to make full use of the side information. The main reason is that \textbf{DCMF} makes predictions based only on user codes and item codes (as \textbf{DCF} does), which inevitably limits its representation ability.
\end{itemize} \begin{figure} \centering \includegraphics[width=0.252\textwidth]{p1.pdf} \hspace{-0.24in} \includegraphics[width=0.252\textwidth]{p2.pdf} \vspace{-15pt} \caption{\textbf{Recommendation performance of libFM and DFM (code length=64) on NDCG@10 \textit{w.r.t.} $l_2$ regularization (libFM) and de-correlation constraint (DFM).}} \label{fig:hyperparameter} \vspace{-0.3cm} \end{figure} \subsection{Impact of Hyper-parameter (RQ2)} Figure \ref{fig:hyperparameter} shows the recommendation performance of \textbf{libFM} and \textbf{DFM} on NDCG@10 with respect to the $l_2$ regularization of \textbf{libFM} and the de-correlation constraint of \textbf{DFM}, respectively. We omit the results for values of $K$ other than $10$ and code lengths other than $64$, since they show the same trend. First, the performance of \textbf{libFM} continuously drops as we decrease the $l_2$ regularization. One reason is that \textbf{libFM} easily suffers from overfitting \cite{xiao2017attentional}. Second, \textbf{DFM} performs only slightly worse as the de-correlation constraint decreases. When the de-correlation constraint and the $l_2$ regularization are set to zero, both \textbf{DFM} and \textbf{libFM} exhibit a noticeable performance decrease in NDCG@10: \textbf{DFM} drops by 1.91\% and 2.05\% on Yelp and Amazon, respectively, while \textbf{libFM} drops by 10.44\% and 6.56\%. These findings again demonstrate the overfitting problem of \textbf{libFM}, which makes it very sensitive to the $l_2$ regularization hyper-parameter, while the proposed \textbf{DFM} is relatively insensitive to its de-correlation constraint hyper-parameter. \subsection{Efficiency Study (RQ3)} As \textbf{libFM} is implemented in C++, we re-implement the testing algorithm of \textbf{DFM} in C++ and compile it with the same C++ compiler (gcc-4.9.3) for a fair comparison.
Table \ref{tab:efficiency} shows the efficiency comparison between \textbf{DFM} and \textbf{libFM} in terms of TTC on the two datasets. We have the following observations: \begin{itemize}[leftmargin=*] \item \textbf{DFM} achieves significant speedups over \textbf{libFM} on both datasets (the average acceleration ratio is 15.99 on Yelp and 16.04 on Amazon). This demonstrates the great advantage of binarizing the real-valued parameters of FM. \item The acceleration ratio of \textbf{DFM} over \textbf{libFM} remains stable at around 16$\times$ on both datasets as the code length increases from 8 to 64. \end{itemize} Together with the comparable recommendation performance of \textbf{DFM} and \textbf{libFM}, these findings indicate that \textbf{DFM} is a practical solution for large-scale Web services, such as Facebook, Instagram, and YouTube, to substantially reduce the computation cost of their recommendation systems.
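The source of this speedup is that, once the codes are binary, each inner product $\langle \mathbf{b}_i, \mathbf{b}_j\rangle$ reduces to an XOR followed by a popcount on bit-packed words, instead of $k$ floating-point multiply-adds. A sketch (hypothetical Python/NumPy illustration; the actual DFM and libFM test code is compiled C++):

```python
import numpy as np

def pack_codes(B):
    """Bit-pack a matrix of {-1, +1} codes row-wise into uint8 words
    (+1 -> bit 1, -1 -> bit 0; trailing bits are zero-padded)."""
    return np.packbits((B > 0).astype(np.uint8), axis=1)

def binary_inner(packed_i, packed_j, k):
    """<b_i, b_j> = k - 2 * HammingDistance(b_i, b_j), via XOR + popcount.
    Padding bits cancel in the XOR, so they never affect the distance."""
    hamming = np.unpackbits(np.bitwise_xor(packed_i, packed_j)).sum()
    return k - 2 * int(hamming)
```

For $k=64$, one such inner product costs a single 64-bit XOR plus a popcount, which is broadly consistent with the roughly 16-fold speedups reported in Table \ref{tab:efficiency}.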
\begin{table}[t] \centering \caption{\textbf{Efficiency comparison between DFM (C++ implementation) and libFM \textit{w.r.t.} TTC (minutes), where the code length ranges from 8 to 64 on the two datasets.}} \vspace{-0.3cm} \textbf{Yelp}\\ \vspace{+1pt} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c||c|c|c|c|} \hline \textbf{Code Length} & \textbf{8} & \textbf{16} & \textbf{32} & \textbf{64}\\ \hline \textbf{libFM} (TTC) &$27.18$ & $56.77$ & $114.10$ & $217.64$ \\ \hline \textbf{DFM} (TTC) &$2.06$ & $3.56$ & $6.60$ & $12.43$\\ \hline Acceleration Ratio &$13.19$ & $15.95$ & $17.29$ & $17.51$\\ \hline \end{tabular} } ~\\ \vspace{+1pt} \textbf{Amazon}\\ \vspace{+2pt} \resizebox{0.48\textwidth}{!}{ \begin{tabular}{|c||c|c|c|c|} \hline \textbf{Code Length} & \textbf{8} & \textbf{16} & \textbf{32} & \textbf{64}\\ \hline \textbf{libFM} (TTC) & $177.03$ & $357.46$ & $716.83$ & $1,414.67$ \\ \hline \textbf{DFM} (TTC) &$12.67$ & $22.50$ & $42.56$ & $81.04$\\ \hline Acceleration Ratio &$13.97$ & $15.89$ & $16.84$ & $17.46$\\ \hline \end{tabular} } \label{tab:efficiency} \vspace{-0.3cm} \end{table} \section{Conclusions} In this paper, we presented DFM, the first binary representation learning method for generic feature-based recommendation. In contrast to existing hash-based recommendation methods, which can only learn binary codes for users and items, DFM is capable of learning a vector of binary codes for each feature. As a benefit of such a compact binarized model, DFM makes predictions efficiently in the binary space. Through extensive experiments on two real-world datasets, we demonstrated that DFM outperforms state-of-the-art hash-based recommender systems by a large margin and achieves recommendation accuracy close to that of the original real-valued FM. This work takes a first step toward developing efficient and compact recommender models, which are particularly useful in large-scale and resource-limited scenarios.
In the future, we will explore the potential of DFM for context-aware recommendation on mobile devices, a typical application scenario that requires fast and compact models. Moreover, we will develop a pairwise learning method for DFM, which might be more suitable for the personalized ranking task. Given the rapid recent development of neural recommendation methods~\cite{NFM}, we will also develop binarized neural recommender models to further boost the performance of hash-based recommendation. Besides, we are interested in deploying DFM in online recommendation scenarios and exploring how to integrate bandit-based and reinforcement learning strategies into DFM. Lastly, we will explore the potential of DFM in other tasks such as popularity prediction of online content~\cite{feng2018learning}. \vspace{+5pt} \noindent\textbf{Acknowledgment} This work is supported by the National Basic Research Program of China (973 Program), No.: 2015CB352502; the National Natural Science Foundation of China, No.: 61772310, No.: 61702300, and No.: 61702302; and the Project of Thousand Youth Talents 2016. This work is also part of the NExT research programme, supported by the National Research Foundation, Prime Minister's Office, Singapore under its IRC@SG Funding Initiative.
{ "timestamp": "2018-09-20T02:07:22", "yymm": "1805", "arxiv_id": "1805.02232", "language": "en", "url": "https://arxiv.org/abs/1805.02232" }